10.0 - 16.0 years
40 - 60 Lacs
Chennai
Hybrid
What you will do: Successful candidates will demonstrate excellent skill and maturity, be self-motivated as well as team-oriented, and be able to support the development and implementation of end-to-end ML-enabled software solutions that meet the needs of internal stakeholders. Those who excel in this role listen with an ear to the overarching goal, not just the immediate concern that started the query, and can show that their recommendations are grounded in an understanding of the practical problem, the data, and the theory, as well as in what product and software solutions are feasible and desirable.
The core responsibilities of this role are:
- Lead a team of machine learning engineers to build, automate, deploy, and scale ML models and pipelines in production.
- Enact ML and software development best practices for the team to follow.
- Monitor performance and microservice health status of models in production.
- Partner with external stakeholders to prioritize, scope, and deliver ML solutions.
- Perform additional duties as assigned.
What you will need:
- Graduate education in a computationally intensive domain.
- 7+ years of relevant work or lab experience in ML projects/research.
- 2+ years of experience as a tech lead or in team management.
- Advanced proficiency with Python and SQL (BigQuery/MySQL).
- Experience building scalable REST APIs (Flask, FastAPI).
- Experience building data pipelines (Airflow, Kubeflow, sklearn pipelines, etc.).
- Extensive knowledge of ML frameworks and libraries such as Feast, XGBoost, and PyTorch.
- Familiarity with cloud platforms and tools for scalable ML operations (e.g., AWS, GCP, Kubernetes).
Preferred Qualifications:
- Experience with ML model development lifecycles.
- Prior contributions to data architecture or software architecture projects.
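For illustration, a minimal sketch of the kind of model-serving API this role involves, assuming a pre-trained scikit-learn/XGBoost pipeline saved with joblib (the file name and feature schema are hypothetical):

```
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # assumed pre-trained pipeline artifact


class Features(BaseModel):
    values: List[float]  # one flat feature vector per request


@app.post("/predict")
def predict(features: Features):
    # predict() expects a 2-D array: one row per sample
    prediction = model.predict([features.values])
    return {"prediction": prediction.tolist()}
```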
Posted 1 month ago
15.0 - 24.0 years
25 - 40 Lacs
Mumbai
Work from Office
Job Scope: The Senior DevOps Engineer will strategically oversee the DevOps practices, infrastructure management, and operational processes for two major projects: the AI No-Code Platform and the AI Inferencing Platform. The role ensures robust, secure, scalable, and high-performance infrastructure aligned with organizational goals and industry best practices.
Job Responsibilities:
1. Design, deploy, and manage scalable infrastructure across cloud and hybrid environments (AWS, Azure, GCP, Yotta).
2. Implement and maintain continuous integration and delivery (CI/CD) pipelines.
3. Manage containerization and orchestration using Docker and Kubernetes.
4. Ensure security, compliance, and vulnerability management (SOC 2, GDPR, ISO 27001).
5. Oversee system monitoring, logging, alerting, and incident response.
6. Optimize infrastructure performance and cost-efficiency.
7. Implement disaster recovery, backup strategies, and business continuity planning.
8. Manage cloud cost, budgeting, and forecasting.
9. Mentor and provide leadership to DevOps and infrastructure team members.
10. Collaborate cross-functionally with developers, QA, product management, and senior stakeholders.
11. Document infrastructure architecture, configurations, and processes, and maintain operational reporting.
Good to have skills:
- Certifications in AWS, Azure, GCP, or Kubernetes (CKA/CKAD)
- Experience with GPU-intensive workloads and AI/ML infrastructure
- Proficiency in Infrastructure as Code (IaC) tools such as Terraform and CloudFormation
- Familiarity with MLOps practices and AI model deployment
Behavioral Attributes:
- Action Orientation & Accountability: Demonstrates proactivity, ownership, and responsibility for outcomes. Reliably delivers results despite challenges.
- Art of Skillful Conversation: Communicates clearly, effectively, and persuasively. Engages stakeholders to facilitate productive dialogue and decision-making.
- Creativity & Problem Solving: Innovates and provides out-of-the-box solutions. Quickly identifies issues and develops efficient and effective resolutions.
- Business Acumen: Understands strategic objectives and aligns technical solutions with business goals. Makes decisions considering organizational impact.
- Dealing with Ambiguity: Handles uncertain or unclear situations with composure. Adapts and thrives in dynamic, rapidly changing environments.
- Learning on the Fly: Rapidly assimilates new information, technologies, and methodologies. Continuously seeks opportunities for personal growth.
- Building Trust: Establishes credibility and trustworthiness through transparency and consistent performance. Fosters strong interpersonal relationships.
- Customer Focus: Consistently prioritizes customer needs and ensures solutions meet user expectations. Demonstrates empathy and responsiveness.
- Intellectual Horsepower: Quickly grasps complex concepts and identifies essential insights. Capable of analytical thinking and clear strategic vision.
- Prioritizing, Planning & Organizing: Efficiently organizes tasks and allocates resources to meet objectives. Balances competing priorities effectively.
- Process Quality Excellence: Consistently adheres to and improves upon established processes. Ensures high-quality standards and continuous improvement.
- Listening, Sensing, Observing: Attentively gathers information and feedback from diverse sources. Understands nuanced messages and responds appropriately.
- Building Collaborative Relationships: Effectively collaborates across teams to achieve shared goals. Encourages teamwork, cooperation, and mutual support.
Qualification and Experience:
- Bachelor's/Master's degree in Computer Science, IT, or related fields
- 15+ years of experience in infrastructure management and DevOps
- Extensive experience with cloud platforms, CI/CD tools, and Kubernetes
- Proven experience in infrastructure security and compliance frameworks
- Strong scripting and automation skills (Bash, Python, Ansible)
Posted 1 month ago
5.0 - 10.0 years
22 - 30 Lacs
Pune
Hybrid
We are looking for a Machine Learning Engineer with expertise in MLOps (Machine Learning Operations) or LLMOps (Large Language Model Operations) to design, deploy, and maintain scalable AI/ML systems. You will work on automating ML workflows, optimizing model deployment, and managing large-scale AI applications, including LLMs (Large Language Models), ensuring they run efficiently in production.
Key Responsibilities:
- Design and implement end-to-end MLOps pipelines for training, validation, deployment, monitoring, and retraining of ML models.
- Optimize and fine-tune large language models (LLMs) for various applications, ensuring performance and efficiency.
- Develop CI/CD pipelines for ML models to automate deployment and monitoring in production.
- Monitor model performance, detect drift, and implement automated retraining mechanisms.
- Work with cloud platforms (AWS, GCP, Azure) and containerization technologies (Docker, Kubernetes) for scalable deployments.
- Implement best practices in data engineering, feature stores, and model versioning.
- Collaborate with data scientists, engineers, and product teams to integrate ML models into production applications.
- Ensure compliance with security, privacy, and ethical AI standards in ML deployments.
- Optimize inference performance and cost of LLMs using quantization, pruning, and distillation techniques.
- Deploy LLM-based APIs and services, integrating them with real-time and batch processing pipelines.
Key Requirements:
Technical Skills:
- Strong programming skills in Python, with experience in ML frameworks (TensorFlow, PyTorch, Hugging Face, JAX).
- Experience with MLOps tools (MLflow, Kubeflow, Vertex AI, SageMaker, Airflow).
- Deep understanding of LLM architectures, prompt engineering, and fine-tuning.
- Hands-on experience with containerization (Docker, Kubernetes) and orchestration tools.
- Proficiency in cloud services (AWS/GCP/Azure) for ML model training and deployment.
- Experience with monitoring ML models (Prometheus, Grafana, Evidently AI).
- Knowledge of feature stores (Feast, Tecton) and data pipelines (Kafka, Apache Beam).
- Strong background in distributed computing (Spark, Ray, Dask).
Soft Skills:
- Strong problem-solving and debugging skills.
- Ability to work in cross-functional teams and communicate complex ML concepts to stakeholders.
- Passion for staying updated with the latest ML and LLM research and technologies.
Preferred Qualifications:
- Experience with LLM fine-tuning, Reinforcement Learning from Human Feedback (RLHF), or LoRA/PEFT techniques.
- Knowledge of vector databases (FAISS, Pinecone, Weaviate) for retrieval-augmented generation (RAG).
- Familiarity with LangChain, LlamaIndex, and other LLMOps-specific frameworks.
- Experience deploying LLMs in production (ChatGPT, LLaMA, Falcon, Mistral, Claude, etc.).
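As a small illustration of the experiment-tracking side of such a pipeline, here is a minimal MLflow sketch; the dataset, model choice, and hyperparameters are arbitrary stand-ins:

```
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data in place of a real training set
X, y = make_classification(n_samples=500, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    # Log parameters, metrics, and the model artifact for later versioning/registration
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")
```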
Posted 1 month ago
7.0 - 12.0 years
25 - 32 Lacs
Gurugram, Jaipur, Jodhpur
Hybrid
Role & responsibilities • Lead the technical architecture and design of end-to-end solutions across cloud and hybrid environments. • Translate complex business requirements into scalable, performant, and secure technical designs. • Collaborate closely with engineering, product, and business teams to align architectural goals with delivery objectives. • Provide technical leadership and mentorship to development teams across technologies including backend, frontend, DevOps, and data layers. • Define and enforce coding standards, design principles, architectural best practices, and governance processes. • Drive cloud-native design on Azure, incorporating services such as Azure App Services, Azure Functions, etc. • Actively contribute to POCs, reference architectures, reusable templates, and solution accelerators. • Stay updated with emerging tools, technologies, and frameworks to recommend improvements and modernize existing systems. Preferred candidate profile Minimum Qualifications • BE/B Tech/MCA or equivalent degree in technical discipline. • Excellent written and verbal communication skills, with the ability to articulate complex technical concepts clearly to varied audiences. Preferred Qualifications/ Skills • Hands-on experience in software development. • Strong expertise in designing and implementing solutions using Python. • Experience in AI/ML model deployment, MLOps, or GenAI integration. • Deep understanding of Azure cloud ecosystem. • Understanding of microservices architecture, API and integration patterns. • Familiarity with databases (SQL/NoSQL), messaging systems, caching, and distributed systems. • Strong problem-solving ability and experience with troubleshooting and performance optimization. • Knowledge/experience of FnA domain (R2R, S2P, O2C) is a strong plus.
Posted 1 month ago
8.0 - 10.0 years
12 - 18 Lacs
Pune
Work from Office
Role Overview As a Technical Project Manager, you will be responsible for planning, coordinating, and delivering complex AI/ML and cloud-native solutions across multiple clients. You will work closely with engineering, product, and client teams to ensure outcomes are delivered on time, within scope, and with high quality. This is a hands-on leadership role for someone who can design delivery blueprints, steer execution under ambiguity, and manage crises with composure. Key Responsibilities Project Ownership: Manage end-to-end project delivery from scoping to deployment and post-launch support across AI/ML, DevOps, or software engineering engagements. Solution Design: Translate business problems into solution blueprints. Work with technical leads to ensure detailed technical architecture aligns with design goals. Execution Leadership: Ensure the team follows delivery best practices including Agile or hybrid models. Monitor sprint performance, mitigate risks, and maintain delivery cadence. Stakeholder Communication: Act as the primary communication bridge between clients and internal teams. Present updates, handle escalations, and ensure client satisfaction. Technology Awareness: Stay updated with emerging GenAI, ML Ops, automation, and DevOps trends. Actively bring in new practices and ideas to improve delivery. Crisis Management: Step in during escalations or delivery slippages. Replan, reassign, and realign teams to bring execution back on track quickly and calmly. Mandatory Qualities Hard Working: Willing to go the extra mile when needed, especially in client-facing high-stakes environments. Inquisitive: Passionate about learning new technologies, especially in AI/GenAI, and applying them to real-world problems. Execution-Oriented: Strong at following through on delivery commitments and aligning teams to the design plan. Excellent Communicator: Able to clearly explain complex ideas and maintain clarity with multiple stakeholders. Crisis-Ready Leader: Proven ability to handle ambiguity, firefight during challenges, and bring stability to teams under pressure. Preferred Qualifications Background in AI/ML or DevOps project environments Prior experience in IT services or consulting roles Familiarity with Agile, Kanban, and DevOps practices Engineering background (B.Tech/MCA) preferred Why Join Us Work with cutting-edge technologies like GenAI, MLOps, and AI agents Drive real transformation across sectors like BFSI, healthcare, and manufacturing Collaborative and entrepreneurial team culture Opportunity to grow into leadership roles across delivery and strategy
Posted 1 month ago
5.0 - 9.0 years
25 - 30 Lacs
Chennai
Work from Office
Key Responsibilities:
- Design, develop, and maintain high-performance ETL and real-time data pipelines using Apache Kafka and Apache Flink.
- Build scalable and automated MLOps pipelines for model training, validation, and deployment using AWS SageMaker and related services.
- Implement and manage Infrastructure as Code (IaC) using Terraform for AWS provisioning and maintenance.
- Collaborate with ML, Data Science, and DevOps teams to ensure reliable and efficient model deployment workflows.
- Optimize data storage and retrieval strategies for both structured and unstructured large-scale datasets.
- Integrate and transform data from multiple sources into data lakes and data warehouses.
- Monitor, troubleshoot, and improve performance of cloud-native data systems in a fast-paced production setup.
- Ensure compliance with data governance, privacy, and security standards across all data operations.
- Document data engineering workflows and architectural decisions for transparency and maintainability.
Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer or in a similar role.
- Proven experience in building data pipelines and streaming applications using Apache Kafka and Apache Flink.
- Strong ETL development skills, with a deep understanding of data modeling and data architecture in large-scale environments.
- Hands-on experience with AWS services, including SageMaker, S3, Glue, Lambda, and CloudFormation or Terraform.
- Proficiency in Python and SQL; knowledge of Java is a plus, especially for streaming use cases.
- Strong grasp of MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines.
- Deep knowledge of IaC tools, particularly Terraform, for automating cloud infrastructure.
- Excellent analytical and problem-solving abilities, especially with regard to data processing and deployment issues.
- Agile mindset with experience working in fast-paced, iterative development environments.
- Strong communication and team collaboration skills.
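As a rough sketch of the streaming side of such a pipeline, the snippet below produces and consumes JSON events with the kafka-python client; the topic name, broker address, and event fields are made up for illustration:

```
import json

from kafka import KafkaConsumer, KafkaProducer

# Producer side: push events onto a topic (broker address is an assumption)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("clickstream", {"user_id": 42, "event": "page_view"})
producer.flush()

# Consumer side: read and transform events before loading them downstream
consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    record = message.value
    # ...apply transformations here, then write to the lake/warehouse
```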
Posted 1 month ago
8.0 - 10.0 years
4 - 8 Lacs
Noida
Work from Office
Mandatory Skills & Experience:
- Expertise in designing and optimizing machine-learning operations, with a preference for LLMOps.
- Proficient in Data Science, Machine Learning, Python, SQL, and Linux/Unix shell scripting.
- Experience with Large Language Models and Natural Language Processing (NLP), including researching, training, and fine-tuning LLMs; contribute towards fine-tuning Transformer models for optimal performance in NLP tasks, if required.
- Implement and maintain automated testing and deployment processes for machine learning models with respect to LLMOps.
- Implement version control, CI/CD pipelines, and containerization techniques to streamline ML and LLM workflows.
- Develop and maintain robust monitoring and alerting systems for generative AI models, ensuring proactive identification and resolution of issues.
- Research or engineering experience in deep learning with one or more of the following: generative models, segmentation, object detection, classification, model optimisations.
- Experience implementing RAG frameworks as part of available ready products.
- Experience in setting up infrastructure for the latest technologies such as Kubernetes, serverless, containers, microservices, etc.
- Experience in scripting/programming to automate deployments and testing, working on tools like Terraform and Ansible; scripting languages like Python, Bash, YAML, etc.
- Experience with open-source and enterprise CI/CD tool sets such as Argo CD and Jenkins (others like Jenkins X, Circle CI, Tekton, Travis, Concourse an advantage).
- Experience with the GitHub/DevOps lifecycle.
- Experience in observability solutions (Prometheus, EFK stacks, ELK stacks, Grafana, Dynatrace, AppDynamics).
- Experience in at least one of the clouds, for example Azure/AWS/GCP.
- Significant experience with microservices-based, container-based, or similar modern approaches to applications and workloads.
- Exemplary verbal and written communication skills (English); able to interact and influence at the highest level, you will be a confident presenter and speaker, able to command the respect of your audience.
Desired Skills & Experience:
- Bachelor-level technical degree or equivalent experience; Computer Science, Data Science, or Engineering background preferred; Master's degree desired.
- Experience in LLMOps or related areas, such as DevOps, data engineering, or ML infrastructure.
- Hands-on experience in deploying and managing machine learning and large language model pipelines in cloud platforms (e.g., AWS, Azure) for ML workloads.
- Familiar with data science, machine learning, deep learning, and natural language processing concepts, tools, and libraries such as Python, TensorFlow, PyTorch, NLTK, etc.
- Experience using retrieval-augmented generation and prompt engineering techniques to improve model quality and diversity and to improve operational efficiency.
- Proven experience in developing and fine-tuning Large Language Models (LLMs).
- Stay up to date with the latest advancements in Generative AI, conduct research, and explore innovative techniques to improve model quality and efficiency.
- The perfect candidate will already be working within a System Integrator, Consulting, or Enterprise organisation, with 8+ years of experience in a technical role within the Cloud domain.
- Deep understanding of core practices including SRE, Agile, Scrum, XP, and Domain-Driven Design.
- Familiarity with the CNCF open-source community.
- Enjoy working in a fast-paced and dynamic environment using the latest technologies.
Posted 1 month ago
6.0 - 11.0 years
12 - 22 Lacs
Noida, Chennai, Bengaluru
Work from Office
Role Summary
We are seeking a skilled and hands-on Senior Artificial Intelligence & Machine Learning (AI/ML) Engineer to build and productionize AI solutions, including fine-tuning large language models (LLMs), implementing Retrieval-Augmented Generation (RAG) workflows, multi-agent applications, and MLOps pipelines. This role focuses on individual technical contribution and requires close collaboration with solution architects, AI/ML leads, and fellow engineers to translate business use cases into scalable, secure, cloud-native AI services. The ideal candidate will bring deep technical expertise across the AI/ML lifecycle, from prototyping to deployment, while contributing to a culture of engineering excellence through peer reviews, documentation, and platform innovation. They will play a critical role in delivering robust, high-performance AI systems in partnership with the broader AI/ML team.
Key Responsibilities
Model Development & Optimization
- Fine-tune foundation models (e.g., GPT-4, Llama 3).
- Implement prompt engineering and basic parameter-efficient tuning (e.g., LoRA).
- Conduct model evaluation for quality, bias, and hallucination; analyze results and suggest improvements.
RAG & Agentic Systems (Exposure, Not Ownership)
- Assist in building RAG pipelines: participate in integrating embedding generation, vector stores (e.g., FAISS, pgvector), and retrieval/ranking components.
- Work with multi-agent frameworks (e.g., LangChain, Crew AI).
Production Engineering / MLOps
- Contribute to CI/CD pipelines for model training and deployment (e.g., GitHub Actions, SageMaker Pipelines).
- Help automate monitoring for latency, drift, and cost; assist in lineage tracking (e.g., MLflow).
- Containerize services with Docker and assist in orchestration (e.g., Kubernetes/EKS/GKE).
Data & Feature Engineering
- Build and maintain data pipelines for collection, cleansing, and feature generation (e.g., Airflow, Spark).
- Implement basic data versioning and assist with synthetic data generation as needed.
Code Quality & Collaboration
- Participate in design and code reviews.
- Contribute to testing (unit, integration, guardrail/hallucination tests) and documentation.
- Share knowledge through sample notebooks and internal sessions.
Security, Compliance, Performance
- Follow secure coding and Responsible AI guidelines.
- Assist in optimizing inference throughput and cost (e.g., quantization, batching) under guidance.
- Ensure SLAs are met and contribute to system auditability.
Technology Stack
Programming Languages & Frameworks
- Python (expert); JavaScript/Go/TypeScript (nice-to-have).
- Strong knowledge of libraries such as scikit-learn, Pandas, NumPy, XGBoost, LightGBM, TensorFlow, PyTorch.
- PyTorch, TensorFlow/Keras, Hugging Face Transformers/PEFT, LangChain/LlamaIndex, Ray/PyTorch Lightning, FastAPI/Flask.
- Experience working with RESTful APIs, authentication (OAuth, API keys), and pagination.
Cloud & DevOps
- Expertise in one or more cloud vendors such as AWS, GCP, Azure.
- Containers (Docker), orchestration (Kubernetes, EKS/GKE/AKS), MLOps.
Databases
- Relational: PostgreSQL, MySQL
- NoSQL: MongoDB / DynamoDB
- Vector stores: FAISS / pgvector / Pinecone / OpenSearch / Milvus / Weaviate
RAG Components
- Document loaders/parsers, text splitters (recursive/semantic), embeddings (OpenAI, Cohere, Vertex AI), hybrid/BM25 retrievers, rerankers (Cross-Encoder).
Multi-Agent Frameworks
- Crew AI / AutoGen / LangGraph / MetaGPT / Haystack Agents; planning and tool-use patterns.
Testing & Quality
- Unit/integration testing (pytest), guardrails.
Qualifications
- 7-10 years total software/ML engineering experience, including 3+ years delivering ML models or GenAI systems to production.
- Proven track record building and optimising RAG or LLM-powered applications at scale.
- Proficiency in Python and ML frameworks (PyTorch, TensorFlow) and in cloud-native deployment (AWS/GCP/Azure).
- Hands-on experience with vector databases, embeddings, and prompt engineering.
- Experience in regulated industries (Fintech, Healthcare, eCommerce) is a plus.
- Experience with multi-agent frameworks (CrewAI, AutoGen, LangGraph).
- Certifications such as AWS Certified Machine Learning Specialty, Azure AI Engineer, or Google Professional Machine Learning Engineer.
- Bachelor's degree in Computer Science, Data Science, Engineering, or a related discipline (Master's preferred).
Soft Skills & Leadership Attributes
- Ownership mindset: drives features from design through deployment and monitoring.
- Clear communicator: explains technical trade-offs to stakeholders, writes concise docs, and updates project artefacts.
- Collaboration & mentorship: pairs with junior engineers, shares knowledge in brown-bag sessions, gives constructive PR feedback.
- Continuous learning: tracks the latest GenAI research, evaluates new tooling, and proposes incremental improvements.
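To make the RAG retrieval component concrete, a minimal FAISS sketch is shown below; the embedding dimensionality and random vectors are placeholders for real document embeddings:

```
import faiss
import numpy as np

dim = 384  # assumed embedding dimensionality
doc_vectors = np.random.rand(1000, dim).astype("float32")  # stand-in for real embeddings

index = faiss.IndexFlatL2(dim)  # exact L2 search; IVF/HNSW indexes scale further
index.add(doc_vectors)

query = np.random.rand(1, dim).astype("float32")  # stand-in for an embedded user query
distances, ids = index.search(query, 5)  # top-5 nearest documents

# The retrieved chunk IDs would be looked up and passed to the LLM as context
print(ids[0])
```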
Posted 1 month ago
10.0 - 18.0 years
30 - 45 Lacs
Pune, Gurugram, Bengaluru
Work from Office
Role overview: We're building a next-gen LLMOps team at Fractal to industrialize GenAI implementation and shape the future of GenAI engineering. This is a hands-on technical leadership role for AI engineers with strong ML and DevOps skills, ideal for those who love building scalable systems from the ground up. You will be designing, deploying, and scaling GenAI and Agentic AI applications with robust lifecycle automation and observability.
Required Qualifications:
- 10-14 years of experience working on ML projects, including a product-building mindset, strong hands-on skills, technical leadership, and leading development teams.
- Model development, training, deployment at scale, and monitoring performance for production use cases.
- Strong knowledge of Python, Data Engineering, FastAPI, NLP.
- Knowledge of LangChain, LlamaIndex, Langtrace, Langfuse, LLM evaluation, MLflow, BentoML.
- Should have worked on proprietary and open-source LLMs.
- Experience with LLM fine-tuning, including PEFT/CPT.
- Experience creating Agentic AI workflows using frameworks like CrewAI, LangGraph, AutoGen, Semantic Kernel.
- Experience in performance optimization, RAG, guardrails, AI governance, prompt engineering, evaluation, and observability.
- Experience in GenAI application deployment on cloud and on-premises at scale for production using DevOps practices.
- Experience in DevOps and MLOps.
- Good working knowledge of Kubernetes and Terraform.
- Experience in at least one cloud (AWS / GCP / Azure) to deploy AI services.
- Team player with excellent communication and presentation skills.
Must have skills:
- Product thinking: ideate, prototype, and scale internal accelerators and reusable components for LLMOps.
- Architect and build scalable LLMOps platforms for enterprise-grade GenAI systems.
- Design and manage end-to-end LLM pipelines, from data ingestion and embedding to evaluation and inference.
- Drive LLM-specific infrastructure: memory management, token control, prompt chaining, and context optimization.
- Lead scalable deployment frameworks for LLMs using Kubernetes and GPU-aware scaling.
- Build agentic AI operations capabilities including agent evaluation, observability, orchestration, and reflection loops.
- Guardrails & observability: implement output filtering, context-aware routing, evaluation harnesses, metrics logging, and incident response.
- Platform automation for LLMOps: drive end-to-end automation with Docker, Kubernetes, GitOps, DevOps, Terraform, etc.
- GenAI engineering: productionize LLM-powered applications with modular, reusable, and secure patterns.
- Pipeline architecture: create evaluation pipelines, including prompt orchestration, feedback loops, and fine-tuning workflows.
- Prompt & model management: design systems for versioning, AI governance, automated testing, and prompt quality scoring.
- Scalable deployment: architect cloud-native and hybrid deployment strategies for large-scale inference.
Must-Have Technical Skills
- LLMOps frameworks: LangChain, MLflow, BentoML, Ray, Truss, FastAPI
- Prompt evaluation and scoring systems: OpenAI Evals, Ragas, Rebuff, Outlines
- Cloud-native deployment: Kubernetes, Helm, Terraform, Docker, GitOps
- ML pipelines: Airflow, Prefect, Feast, feature stores
- Data stack: Spark/Flink, Parquet/Delta, Lakehouse patterns
- Cloud: Azure ML, GCP Vertex AI, AWS Bedrock/SageMaker
- Languages: Python (must), Bash, YAML, Terraform HCL (preferred)
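Since the role calls out output filtering and guardrails, here is a minimal, framework-free Python sketch of the idea; the blocked patterns, retry count, and the generate_fn callable are all illustrative assumptions:

```
import re

# Hypothetical patterns for data that must never reach the user (e.g. card numbers, SSNs)
BLOCKED_PATTERNS = [r"\b\d{16}\b", r"(?i)ssn[:\s]*\d{3}-\d{2}-\d{4}"]


def passes_guardrails(llm_output: str, max_chars: int = 4000) -> bool:
    """Return True only if the model output clears basic safety and length checks."""
    if len(llm_output) > max_chars:
        return False
    return not any(re.search(p, llm_output) for p in BLOCKED_PATTERNS)


def guarded_generate(generate_fn, prompt: str, retries: int = 2) -> str:
    # generate_fn is any callable wrapping an LLM API (assumed injected by the caller)
    for _ in range(retries + 1):
        output = generate_fn(prompt)
        if passes_guardrails(output):
            return output
    return "Sorry, I can't provide that response."  # fallback after repeated failures
```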
Posted 1 month ago
4.0 - 8.0 years
9 - 15 Lacs
Ahmedabad
Work from Office
Data Scientist, Generative AI: Design, fine-tune, and deploy LLMs, GANs, and diffusion models. Build RAG pipelines with FAISS/Pinecone; use Python, PyTorch/TensorFlow, and Hugging Face. Create ML pipelines, ensure model ethics, and deploy via cloud.
Posted 1 month ago
5.0 - 10.0 years
0 - 1 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Contractual (Project-Based). Notice Period: Immediate to 15 days.
Fill this form: https://forms.office.com/Pages/ResponsePage.aspx?id=hLjynUM4c0C8vhY4bzh6ZJ5WkWrYFoFOu2ZF3Vr0DXVUQlpCTURUVlJNS0c1VUlPNEI3UVlZUFZMMC4u
Resume: shweta.soni@panthsoftech.com
Posted 1 month ago
4.0 - 5.0 years
25 - 30 Lacs
Indore, Surat, Mumbai (All Areas)
Work from Office
Job Title: Data Scientist
Location: Mumbai/Indore/Surat
Job Type: Full-time
About Us: Everestek is a forward-thinking organization specializing in designing and deploying cutting-edge AI solutions. From powerful recommendation engines and intuitive chatbots to state-of-the-art generative AI and deep learning applications, we empower businesses to harness the full potential of their data. Our mission is to create transformative, data-driven products that enable our clients to innovate faster, personalize their offerings, and stay ahead in an ever-evolving tech landscape. At Everestek, we foster a culture of collaboration, creativity, and continuous learning. Our dynamic team of data scientists, engineers, and innovators is dedicated to pushing the boundaries of what's possible in artificial intelligence. If you're passionate about solving complex problems and want to be part of an organization that values experimentation and impact, Everestek is the place for you.
Key Responsibilities:
- Design, build, and optimize machine/deep learning models, including predictive models, recommendation systems, and Gen-AI-based solutions.
- Develop and implement advanced AI agents capable of performing autonomous tasks, decision-making, and executing requirement-specific workflows.
- Prompt-engineer to develop new and enhance existing Gen-AI applications (chatbots, RAG).
- Perform advanced data manipulation, cleansing, and analysis to extract actionable insights from structured and unstructured data.
- Create scalable and efficient recommendation systems that enhance user personalization and engagement.
- Design and deploy AI-driven chatbots and virtual assistants, focusing on natural language understanding and contextual relevance.
- Implement and optimize machine and deep learning models for NLP tasks, including text classification, sentiment analysis, and language generation.
- Explore, develop, and deploy state-of-the-art technologies for AI agents, integrating them with broader enterprise systems.
- Collaborate with cross-functional teams to gather business requirements and deliver AI-driven solutions tailored to specific use cases.
- Automate workflows using advanced AI tools and frameworks to increase efficiency and reduce manual interventions.
- Stay informed about cutting-edge advancements in AI, machine learning, NLP, and Gen AI applications, and assess their relevance to the organization.
- Effectively communicate technical solutions and findings to both technical and non-technical stakeholders.
Qualifications and Skills:
Required:
- At least 3 years of experience working in data science.
- Python proficiency and hands-on experience with libraries such as Pandas, NumPy, Matplotlib, NLTK, scikit-learn, and TensorFlow.
- Proven experience in designing and implementing AI agents and autonomous systems.
- Strong expertise in machine learning, including predictive modeling and recommendation systems.
- Hands-on experience with deep learning frameworks like TensorFlow or PyTorch, focusing on NLP and AI-driven solutions.
- Advanced understanding of natural language processing (NLP) techniques and tools, including transformers like BERT, GPT, or similar models, including open-source LLMs.
- Experience in prompt engineering for AI models to enhance functionality and adaptability.
- Strong knowledge of cloud platforms (AWS) for deploying and scaling AI models.
- Familiarity with AI agent frameworks like LangChain, OpenAI APIs, or other agent-building tools.
- Advanced skills in relational databases (Postgres), vector databases, querying, analytics, semantic search, and data manipulation.
- Strong problem-solving and critical-thinking skills, with the ability to handle complex technical challenges.
- Hands-on experience working with API frameworks like Flask, FastAPI, etc.
- Git and GitHub proficiency.
- Excellent communication and documentation skills to articulate complex ideas to diverse audiences.
Preferred:
- Hands-on experience building and deploying conversational AI, chatbots, and virtual assistants.
- Familiarity with MLOps pipelines and CI/CD for AI/ML workflows.
- Experience with reinforcement learning or multi-agent systems.
- BE (or Master's) in Computer Science, Data Science, or Artificial Intelligence.
- Agentic AI systems using frameworks like LangChain or similar.
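As a toy illustration of the NLP modeling this role covers, here is a minimal scikit-learn text-classification pipeline; the example texts and labels are invented:

```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Tiny invented dataset: 1 = positive sentiment, 0 = negative sentiment
texts = ["great product, fast delivery", "terrible support, very slow",
         "loved it", "awful experience"]
labels = [1, 0, 1, 0]

clf = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2))),  # word and bigram features
    ("model", LogisticRegression(max_iter=1000)),
])
clf.fit(texts, labels)
print(clf.predict(["slow delivery but great product"]))
```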
Posted 1 month ago
2.0 - 7.0 years
40 - 45 Lacs
Chandigarh
Remote
We are seeking a highly skilled and motivated Data Science Engineer with strong experience in AI/ML, Data Engineering, and cloud infrastructure (AWS). You will play a critical role in shaping intelligent, scalable, and data-driven solutions that deliver meaningful impact for our clients. As part of a cross-functional team, you will design and build end-to-end data pipelines, develop predictive models, and deploy production-ready data products across a variety of industries. Key Responsibilities: Collaborate with engineering teams, data scientists, and clients to build and deploy impactful data products. Design, develop, and maintain scalable and cost-efficient data pipelines on AWS. Build and integrate AI/ML models to enhance product intelligence and automation. Develop backend and data components using Python, SQL, and PySpark. Leverage AI frameworks and tools to deploy models in production environments. Work directly with stakeholders to understand and translate their data needs into technical solutions. Implement and manage cloud infrastructure including PostgreSQL, Redshift, Airflow, and MongoDB. Follow best practices in data architecture, model training, testing, and performance optimization. Required Qualifications: Experience: Minimum 3 years in Data Engineering, Software Development, or related roles. At least 2 years of hands-on experience applying AI/ML algorithms for real-world use cases. Proven experience in building and managing AWS-based cloud infrastructure. Strong background in data analysis, mining, and model interpretability. Technical Skills: Programming Languages: Python, SQL, PySpark Frameworks/Tools: Airflow, Django (optional), Scikit-learn, TensorFlow or PyTorch Databases: PostgreSQL, Redshift, MongoDB Experience with real-time and batch data workflows Soft Skills: Strong problem-solving and logical reasoning abilities Excellent communication skills for client interactions and internal collaboration Ability to work in a fast-paced, dynamic environment Nice to Have: Experience with MLOps and CI/CD for ML pipelines Exposure to BI tools like Tableau, Power BI, or Metabase Knowledge of data security and governance on AWS
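For the pipeline-orchestration side of this role, a minimal Airflow DAG sketch follows; the DAG name, schedule, and task bodies are placeholders, and the schedule argument uses the Airflow 2.4+ style:

```
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # pull raw data from the source system


def transform():
    ...  # cleanse and reshape with pandas/PySpark


def load():
    ...  # write to Redshift/PostgreSQL


with DAG(
    dag_id="daily_batch_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # simple linear dependency chain
```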
Posted 1 month ago
2.0 - 5.0 years
5 - 12 Lacs
Hyderabad
Work from Office
Job Title: AI/ML Engineer GenAI & MLOps Experience: 2 to 5 Years Location: Hyderabad (Work From Office) Employment Type: Full-Time About the Role: We are looking for a passionate and skilled AI/ML Engineer with experience in Generative AI, Machine Learning Operations (MLOps), and core AI/ML development. You will play a key role in designing, developing, and deploying intelligent systems and scalable ML pipelines in a production environment. Key Responsibilities: Design and implement machine learning models, especially in NLP, computer vision, and generative AI use cases (LLMs, diffusion models, etc.) Fine-tune and deploy transformer-based models (e.g., BERT, GPT, LLaMA) using open-source and commercial frameworks. Build and automate ML pipelines using MLOps tools such as MLflow, Kubeflow, or SageMaker. Work with cross-functional teams to deploy models to production with CI/CD and monitoring. Manage datasets, labeling strategies, and data versioning using tools like DVC or Weights & Biases. Conduct experiments, model evaluation, and performance tuning. Collaborate with backend engineers to integrate AI models into applications or APIs. Required Skills & Qualifications: 2–5 years of hands-on experience in AI/ML model development and deployment Strong knowledge of Python and ML libraries like TensorFlow, PyTorch, scikit-learn Experience with GenAI frameworks (e.g., Hugging Face Transformers, LangChain, OpenAI API) Exposure to MLOps practices and tools: Docker, MLflow, Airflow, FastAPI, Kubernetes, etc. Familiarity with cloud platforms (AWS, GCP, or Azure) Understanding of LLM fine-tuning, embeddings, vector stores, and prompt engineering Bachelor's or Master’s degree in Computer Science, AI, Data Science, or related field Good to Have: Knowledge of RAG (Retrieval-Augmented Generation) Experience with secure model deployment (RBAC, endpoint auth) Contributions to open-source AI/ML projects
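To illustrate the transformer fine-tuning mentioned above, here is a minimal parameter-efficient (LoRA) setup with Hugging Face PEFT; the base checkpoint and target modules are assumptions that vary by model:

```
from peft import LoraConfig, TaskType, get_peft_model
from transformers import AutoModelForCausalLM, AutoTokenizer

base = "facebook/opt-350m"  # assumed small open checkpoint; any causal LM works
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

lora_config = LoraConfig(
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],   # attention projections (model-dependent)
    lora_dropout=0.05,
    task_type=TaskType.CAUSAL_LM,
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```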
Posted 1 month ago
7.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
Hiring a Senior GenAI Engineer with 7-12 years of experience in Python, Machine Learning, and Large Language Models (LLMs) for a 6-month engagement based in Gurugram. This hands-on role involves building intelligent systems using LangChain and RAG, developing agent workflows, and defining technical roadmaps. The ideal candidate will be proficient in LLM architecture, prompt engineering, vector databases, and cloud platforms (AWS, Azure, GCP). The position demands strong collaboration skills, a system design mindset, and a focus on production-grade AI/ML solutions.
Posted 1 month ago
6.0 - 11.0 years
20 - 30 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
IMMEDIATE JOINERS ONLY
Job Title: Senior MLOps Engineer
Location: NCR (work from office)
Note: DevOps knowledge is fine
Experience Range: 6-12 years
Primary Key Skills: MLOps
Key Responsibilities:
- Design, develop, and maintain end-to-end MLOps pipelines for model deployment, monitoring, maintenance, and scalability.
- Automate the retraining, testing, and validation processes for ML models.
- Collaborate with cross-functional teams, including data science, software engineering, and DevOps, to integrate ML models into production systems.
- Monitor model performance, diagnose issues, and implement improvements.
- Ensure scalability, reliability, and compliance of ML systems in production.
- Optimize infrastructure costs while maintaining high system performance.
- Stay up to date with the latest developments in MLOps, machine learning, and AI, and apply new techniques to improve existing models and processes.
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field.
Experience: Solid experience as an MLOps Engineer, Machine Learning Engineer, DevOps Engineer, or similar role. Experience in the retail industry or e-commerce is highly desirable.
Technical Skills:
- Strong experience with Infrastructure as Code frameworks and languages (Terraform, Bicep, or ARM).
- Strong programming skills in Python and experience with ML frameworks (TensorFlow, PyTorch, etc.).
- Hands-on experience with containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in CI/CD tools and cloud platforms (AWS, Azure, or Google Cloud).
- Knowledge of model monitoring and evaluation metrics.
- Familiarity with version control systems, such as Git, and model versioning tools like MLflow or DVC.
- Experience with Generative AI product deployment is desirable.
- Experience with some of the big data stack (Databricks, Snowflake, Apache Spark, or Hadoop) is desirable.
- System-level architecture understanding, including scaling, MLOps, model/data monitoring, and ensuring a deterministic pipeline.
Soft Skills:
- Strong problem-solving skills with the ability to work independently and collaboratively in a fast-paced environment.
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- A proactive attitude and a passion for continuous learning and innovation.
Posted 1 month ago
9.0 - 12.0 years
40 - 45 Lacs
Noida, Bengaluru, Delhi / NCR
Work from Office
Mandatory skill set: Python, any cloud, MLE, Deep Learning, NLP, MLOps, Gen AI (LLM, LangChain, RAG, OpenAI)
Posted 1 month ago
10.0 - 17.0 years
9 - 15 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Dear Candidate, Please find below job description Role :- MLOps + ML Engineer Job Description: Role Overview: We are looking for a highly experienced MLOps and ML Engineer to lead the design, deployment, and optimization of machine learning systems at scale. This role requires deep expertise in MLOps practices, CI/CD automation, and AWS SageMaker, with a strong foundation in machine learning engineering and cloud-native development. Key Responsibilities: Architect and implement robust MLOps pipelines for model development, deployment, monitoring, and governance. Lead the operationalization of ML models using AWS SageMaker and other AWS services. Build and maintain CI/CD pipelines for ML workflows using tools like GitHub Actions, Jenkins, or AWS CodePipeline. Automate model lifecycle management including retraining, versioning, and rollback. Collaborate with data scientists, ML engineers, and DevOps teams to ensure seamless integration and scalability. Monitor production models for performance, drift, and reliability. Establish best practices for reproducibility, security, and compliance in ML systems. Required Skills: 10+ years of experience in ML Engineering, MLOps, or related fields. Deep hands-on experience with AWS SageMaker, Lambda, S3, CloudWatch, and related AWS services. Strong programming skills in Python and experience with Docker, Kubernetes, and Terraform. Expertise in CI/CD tools and infrastructure-as-code. Familiarity with model monitoring tools (e.g., Evidently, Prometheus, Grafana). Solid understanding of ML algorithms, data pipelines, and production-grade systems. Preferred Qualifications: AWS Certified Machine Learning Specialty or DevOps Engineer certification. Experience with feature stores, model registries, and real-time inference systems. Leadership experience in cross-functional ML/AI teams. Primary Skills: MLOps, ML Engineering, AWS related services (SageMaker/S3/CloudWatch) Regards Divya Grover +91 8448403677
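As a small illustration of the SageMaker operational work described, the snippet below calls a deployed real-time endpoint with boto3; the region, endpoint name, and payload shape are hypothetical:

```
import json

import boto3

# Assumes a model has already been deployed to the real-time endpoint named below
runtime = boto3.client("sagemaker-runtime", region_name="ap-south-1")

payload = {"features": [5.1, 3.5, 1.4, 0.2]}  # shape depends on the deployed model
response = runtime.invoke_endpoint(
    EndpointName="churn-model-prod",           # hypothetical endpoint name
    ContentType="application/json",
    Body=json.dumps(payload),
)
prediction = json.loads(response["Body"].read())
print(prediction)
```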
Posted 1 month ago
6.0 - 11.0 years
20 - 35 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Location: Bangalore/Noida/Pune/Gurgaon
Education: B.E. / B.Tech / M.E. / M.Tech / MCA
Job Responsibilities:
Model Deployment and Management:
- Drive ML prototypes into production, ensuring seamless deployment and management on cloud at scale.
- Monitor real-time performance of deployed models, analyze data, and proactively address performance issues.
- Troubleshoot and resolve production issues related to ML model deployment, performance, and scalability.
Collaboration and Integration:
- Collaborate with DevOps engineers to manage cloud compute resources for ML model deployment and performance optimization.
- Work closely with ML scientists, software engineers, data engineers, and other stakeholders to implement best practices for MLOps, including CI/CD pipelines, version control, model versioning, and automated deployment.
Innovation and Continuous Improvement:
- Stay updated with the latest advancements in MLOps technologies and recommend new tools and techniques.
- Contribute to the continuous improvement of team processes and workflows.
- Share knowledge and expertise to promote a collaborative learning environment.
Development and Documentation:
- Build software to run and support machine-learning models.
- Develop and maintain documentation, standard operating procedures, and guidelines related to MLOps processes.
- Participate in fast iteration cycles and adapt to evolving project requirements.
Business Solutions and Strategy:
- Propose solutions and strategies to business challenges.
- Collaborate with the Data Science team, front-end developers, DBAs, and DevOps teams to shape architecture and detailed designs.
Mentorship:
- Conduct code reviews and mentor junior team members.
- Foster strong interpersonal, communication, and collaboration skills within the team.
Mandatory Skills:
- Programming languages: proficiency in Python (3.x) and SQL.
- ML frameworks and libraries: extensive knowledge of ML frameworks, libraries, data structures, data modeling, and software architecture.
- Databases: proficiency in SQL and NoSQL databases.
- Mathematics and algorithms: in-depth knowledge of mathematics, statistics, and algorithms.
- ML modules and REST APIs: proficient with ML modules and REST APIs.
- Version control: hands-on experience with version control applications (Git).
- Model deployment and monitoring: experience with model deployment and monitoring.
- Data processing: ability to turn unstructured data into useful information (e.g., auto-tagging images, text-to-speech conversions).
- Problem-solving: analytically agile with strong problem-solving capabilities.
- Learning agility: quick to learn new concepts and eager to explore and build new features.
Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- Experience: minimum of 6 years of hands-on experience in MLOps, deploying and managing machine learning models in production environments, preferably in cloud-based environments.
Posted 1 month ago
5.0 - 10.0 years
13 - 23 Lacs
Kolkata, Hyderabad, Bengaluru
Hybrid
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal MLOps Engineer! In this role, you will lead the design, scale, and governance of our AI/GenAI delivery platform across the organization. This role owns the vision, architecture, and execution of production-grade MLOps systems supporting multiple domains, including classical ML, NLP, computer vision, and GenAI use cases. This leader will drive enterprise-wide standardization of CI/CD, IaC-based cloud infrastructure, model governance, monitoring, and risk control, enabling AI at scale, securely and responsibly.
Responsibilities
• Define and evolve the enterprise MLOps reference architecture (build, test, deploy, monitor, retrain).
• Architect a multi-tenant AI platform with native GenAI support (LLMs, RAG, LangChain/Bedrock/OpenAI integrations).
• Own the infrastructure-as-code strategy, with modular, reusable Terraform stacks for multi-cloud/hybrid deployments.
• Lead CI/CD modernization across teams (GitHub Actions, CodePipeline, Argo Workflows).
• Establish centralized model governance, access control, and explainability standards (integrating tools like SHAP, LIME, CloudWatch, SageMaker Model Monitor).
• Champion infrastructure observability and compliance logging for regulated environments (banking, insurance, healthcare).
• Represent MLOps in enterprise-wide architecture reviews and cloud optimization boards.
Qualifications we seek in you!
Minimum Qualifications
• 8-10 years in DevOps, Cloud, or ML Infrastructure roles.
• 5+ years leading MLOps initiatives at scale in production.
• Degree/qualification in Computer Science or a related field, or equivalent work experience.
• Deep AWS expertise (SageMaker, Lambda, VPC, CloudWatch, IAM).
• Strong Python and Terraform skills, with a proven CI/CD implementation track record.
• Experience deploying GenAI models (OpenAI, Bedrock, Hugging Face) and managing inference at scale.
• Engaging in the design, development, and maintenance of data pipelines for various AI use cases.
• Active contribution to key deliverables as part of an agile development team.
• Experience designing model governance frameworks and CI/CD pipelines.
• Familiarity with LangChain, Bedrock, and OpenAI API integrations.
• Collaborating with others to source, analyse, test, and deploy data processes.
Preferred Qualifications/Skills
• Leadership experience in BFSI, healthcare, or regulated industries.
• Advanced understanding of platform security, cost optimization, and ML observability.
• Must have experience developing, testing, and deploying data pipelines.
• Influence over tooling selection, hiring, architecture reviews, and platform roadmap.
• Exposure to BFSI or regulated environments.
• Clear and effective communication skills to interact with team members, stakeholders, and end users.
• Knowledge of governance and compliance policies, standards, and procedures.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 month ago
8.0 - 13.0 years
18 - 33 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Key Responsibilities:
- Design, develop, and maintain end-to-end MLOps pipelines for model deployment, monitoring, maintenance, and scalability.
- Automate the retraining, testing, and validation processes for ML models.
- Collaborate with cross-functional teams, including data science, software engineering, and DevOps, to integrate ML models into production systems.
- Monitor model performance, diagnose issues, and implement improvements.
- Ensure scalability, reliability, and compliance of ML systems in production.
- Optimize infrastructure costs while maintaining high system performance.
- Stay up to date with the latest developments in MLOps, machine learning, and AI, and apply new techniques to improve existing models and processes.
Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Applied Mathematics, or a related field.
Experience: Solid experience as an MLOps Engineer, Machine Learning Engineer, DevOps Engineer, or similar role. Experience in the retail industry or e-commerce is highly desirable.
Technical Skills:
- Strong experience with Infrastructure as Code frameworks and languages (Terraform, Bicep, or ARM).
- Strong programming skills in Python and experience with ML frameworks (TensorFlow, PyTorch, etc.).
- Hands-on experience with containerization (Docker) and orchestration tools (Kubernetes).
- Proficiency in CI/CD tools and cloud platforms (AWS, Azure, or Google Cloud).
- Knowledge of model monitoring and evaluation metrics.
- Familiarity with version control systems, such as Git, and model versioning tools like MLflow or DVC.
- Experience with Generative AI product deployment is desirable.
- Experience with some of the big data stack (Databricks, Snowflake, Apache Spark, or Hadoop) is desirable.
- System-level architecture understanding, including scaling, MLOps, model/data monitoring, and ensuring a deterministic pipeline.
Soft Skills:
- Strong problem-solving skills with the ability to work independently and collaboratively in a fast-paced environment.
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- A proactive attitude and a passion for continuous learning and innovation.
Posted 1 month ago
9.0 - 14.0 years
25 - 40 Lacs
Pune
Work from Office
You will be a key member of the Data + AI Pipeline team, leading the integration of Kubeflow, Kubernetes, Docker, KEDA, and Python technologies. Your role will involve developing and maintaining AI pipelines that support various projects, ensuring seamless and efficient data processing, model training, and deployment. As part of a dynamic and interdisciplinary team, you will collaborate with experts in data engineering, AI, and software development to create robust and scalable solutions.
Job Description
• We are seeking a motivated and experienced Data + AI Pipeline Engineer to lead the development and maintenance of our KION Machine Vision AI pipeline infrastructure.
• As a Data + AI Pipeline Lead, you will provide technical leadership and strategic direction to the team and be responsible for designing and implementing scalable and efficient AI pipelines using Kubeflow, Kubernetes, Docker, KEDA, and Python.
• You will collaborate with cross-functional teams to understand project requirements, define data processing workflows, and ensure the successful deployment of machine learning models.
• Your role will involve integrating and optimizing data processing and machine learning components within our pipeline architecture.
• You will provide technical leadership and mentorship to junior team members, guiding them in the design and implementation of AI pipelines and fostering their professional growth.
• Conduct code reviews and provide constructive feedback to ensure the quality, readability, and maintainability of the codebase across the team.
• Collaborate with the software engineering team to ensure the seamless integration of AI pipelines with other software applications.
• Implement and maintain CI/CD pipelines to automate the deployment of AI models and ensure continuous integration and delivery.
• Work closely with external partners and vendors to leverage the latest advancements in AI and data processing technologies.
Qualifications:
• A university degree with a technical focus, preferably in computer science, data science, or a related field.
• 10+ years of experience in building and maintaining large-scale AI pipeline projects, with at least 2 years in a leadership or managerial role.
• Hands-on experience with cloud platforms (e.g., AWS, Azure, Google Cloud) and services for data processing, storage, and deployment.
• Strong programming skills in Python and proficiency with libraries and frameworks for data manipulation, analysis, and visualization (e.g., pandas, NumPy, matplotlib).
• Knowledge of containerization technologies (e.g., Docker, Kubernetes) and orchestration tools for deploying and managing machine learning pipelines at scale.
• Expertise in designing and optimizing data processing workflows and machine learning pipelines.
• Strong communication skills to collaborate effectively with cross-functional teams and present complex technical concepts in a clear manner.
• Experience with CI/CD, automation, and a strong understanding of software engineering best practices.
• Ability to oversee complex software architectures and contribute to future-oriented developments in the field of AI and data processing.
• Excellent problem-solving skills and the ability to work in a dynamic and fast-paced environment.
• Leading, guiding, and providing technical mentoring to the team.
• Very good English skills, both written and verbal, to facilitate effective communication within the global team.
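For a concrete flavor of Kubeflow Pipelines work, a minimal KFP v2 sketch is shown below; the component logic and pipeline name are placeholders, and the compiled YAML would be uploaded to a Kubeflow instance:

```
from kfp import compiler, dsl


@dsl.component
def preprocess(msg: str) -> str:
    # Placeholder preprocessing step
    return msg.upper()


@dsl.component
def train(data: str) -> str:
    # Placeholder training step consuming the previous component's output
    return f"model trained on: {data}"


@dsl.pipeline(name="demo-training-pipeline")
def pipeline(msg: str = "raw-data"):
    step1 = preprocess(msg=msg)
    step2 = train(data=step1.output)


# Produces a pipeline spec that can be run on a Kubeflow Pipelines deployment
compiler.Compiler().compile(pipeline, "pipeline.yaml")
```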
Posted 1 month ago
4.0 - 5.0 years
2 - 6 Lacs
Ahmedabad
Work from Office
Key Responsibilities : - Conduct feature engineering, data analysis, and data exploration to extract valuable insights. - Develop and optimize Machine Learning models to achieve high accuracy and performance. - Design and implement Deep Learning models, including Artificial Neural Networks (ANN), Convolutional Neural Networks (CNN), and Reinforcement Learning techniques. - Handle real-time imbalanced datasets and apply appropriate techniques to improve model fairness and robustness. - Deploy models in production environments and ensure continuous monitoring, improvement, and updates based on feedback. - Collaborate with cross-functional teams to align ML solutions with business goals. - Utilize fundamental statistical knowledge and mathematical principles to ensure the reliability of models. - Bring in the latest advancements in ML and AI to drive innovation. Requirements : - 4-5 years of hands-on experience in Machine Learning and Deep Learning. - Strong expertise in feature engineering, data exploration, and data preprocessing. - Experience with imbalanced datasets and techniques to improve model generalization. - Proficiency in Python, TensorFlow, Scikit-learn, and other ML frameworks. - Strong mathematical and statistical knowledge with problem-solving skills. - Ability to optimize models for high accuracy and performance in real-world scenarios. Preferred Qualifications : - Experience with Big Data technologies (Hadoop, Spark, etc.) - Familiarity with containerization and orchestration tools (Docker, Kubernetes). - Experience in automating ML pipelines with MLOps practices. - Experience in model deployment using cloud platforms (AWS, GCP, Azure) or MLOps tools.
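As a small illustration of handling imbalanced data as mentioned above, here is a minimal scikit-learn sketch using class weighting on a synthetic skewed dataset; SMOTE or resampling are common alternatives:

```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic 95/5 imbalanced dataset for illustration
X, y = make_classification(n_samples=5000, weights=[0.95, 0.05], random_state=7)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=7)

# class_weight="balanced" re-weights the minority class inversely to its frequency
model = LogisticRegression(class_weight="balanced", max_iter=1000).fit(X_tr, y_tr)
print(classification_report(y_te, model.predict(X_te)))
```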
Posted 1 month ago
3.0 - 8.0 years
8 - 14 Lacs
Hyderabad
Work from Office
About PureCode AI :
PureCode is a front-end developer tool where engineers can use text to describe and generate or customize software user interfaces (and soon, entire projects). Our goal is to build a must-use developer tool for front-end engineers to build web software 100x faster! We are headquartered in Austin, TX, USA with engineering offices in Hyderabad, India. This position is exclusively for in-office work at our Q City office in Hyderabad.
Responsibilities :
- Design and implement advanced, state-of-the-art AI models, specializing in LLMs, VLMs, and MLLMs.
- Develop and fine-tune ML models and work with stakeholders to finalise champion models. Perform scientific evaluation of NLP/LLM models and devise new techniques for model validation, evaluation, trust, and safety.
- Understand how customer business needs map to AI/ML problems and solutions involving algorithms and models.
- Support the process of translating business problems into ML problems and create ML solutions that produce the desired customer business outcomes.
- Develop MLOps (Machine Learning Operations) workflows for data preparation, deployment, monitoring, and retraining. Create and own cloud-native APIs to deploy ML models (a serving sketch follows this listing).
- Craft the data warehousing strategy and instrument ETL pipelines to maintain quality data for model training.
- Design and implement A/B experiments: user segmentation, user classification, and the tooling to support them.
Qualifications :
- 3+ years of working experience developing, deploying, tracking, and orchestrating scalable ML/AI solutions.
- Familiarity with cloud platforms and services such as AWS, Azure, or GCP for deploying and scaling AI solutions.
- Knowledge of the basic ML stack: PyTorch, TensorFlow, scikit-learn, NumPy, pandas, etc.
- Familiarity with experiment tracking tools like Weights & Biases, TensorBoard, or ClearML.
- Hands-on experience with MLOps tools like MLflow, CometML, Docker, etc.
- Hands-on experience with workflow orchestration tools like Airflow, Prefect, or Databricks.
- Knowledge of API frameworks such as Django, Flask, etc.
- Knowledge of web development tools is a plus.
- Knowledge of LLMOps and the LangChain framework is a plus.
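The "cloud-native API to deploy ML models" responsibility is sketched below with FastAPI. The model artifact path, feature schema, and endpoint name are assumptions; a production service would pull the model from a registry such as MLflow and add validation, batching, and monitoring.

```python
# Minimal FastAPI model-serving sketch; "model.joblib" is an assumed artifact path.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving")


class Features(BaseModel):
    values: list[float]  # flat feature vector; a real schema would be richer


# Load once at startup; in practice this would come from a model registry.
model = joblib.load("model.joblib")


@app.post("/predict")
def predict(features: Features) -> dict:
    # Single-row inference; batching, auth, and input validation are omitted here.
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}
```

Saved as, say, serve.py, this runs locally with `uvicorn serve:app --reload` and containerizes naturally with Docker for deployment to AWS, Azure, or GCP.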
Posted 1 month ago
3.0 - 8.0 years
8 - 14 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Role : AI Research Engineer - AI Pair Programming
About Us :
We are developing a cutting-edge AI pair programming extension for VS Code, aimed at revolutionising the way developers write code. Our mission is to enhance developer productivity and code quality through advanced AI assistance. We're seeking an experienced AI Research Engineer to join our innovative team at PureCode AI and help push the boundaries of AI in software development.
Position Overview :
We are looking for an AI Research Engineer with a strong background in Large Language Models (LLMs) and their applications in code generation and understanding. The ideal candidate will contribute to our research efforts in contextual retrievals, rerankers, and vector databases, with a focus on optimising developer workflows.
Key Responsibilities :
- Conduct advanced research in AI and machine learning, specifically focusing on LLMs for code generation and comprehension
- Implement and optimise state-of-the-art AI models to enhance our AI pair programming capabilities
- Develop and refine algorithms for contextual code retrieval, reranking, and efficient use of vector databases (see the retrieve-then-rerank sketch after this listing)
- Collaborate with the development team to integrate research findings into our VS Code extension
- Contribute to the improvement of developer workflow optimisation through AI-assisted coding
- Evaluate and fine-tune models to ensure high-quality code suggestions and analysis
- Stay abreast of the latest advancements in AI research and propose innovative applications for our product
Qualifications :
- Bachelor's or Master's degree in Computer Science, Artificial Intelligence, or a related field
- Minimum of 3 years of experience in AI engineering and research, with a focus on NLP and LLMs
- Strong background in machine learning, deep learning, and natural language processing
- Expertise in implementing and deploying AI models for code analysis and generation
- Experience with contextual retrieval systems, rerankers, and vector databases
- Proficiency in Python and familiarity with machine learning frameworks such as PyTorch or TensorFlow
- Solid understanding of API development practices and tools, particularly in the context of Generative AI
- Familiarity with VS Code extension development is a plus
Required Skills :
- Strong problem-solving skills and ability to translate research into practical applications
- Excellent programming skills and ability to write clean, efficient code
- Good understanding of software engineering principles and best practices
- Ability to work effectively in a collaborative team environment
- Strong communication skills to explain complex AI concepts to both technical and non-technical stakeholders
- Self-motivated with the ability to work independently on research projects
Preferred Experience :
- Previous work on AI-assisted coding tools or IDE plugins
- Contributions to open-source AI or developer tool projects
- Experience with large-scale machine learning model deployment and optimisation
- Familiarity with MLOps practices and tools
What We Offer :
- Opportunity to work on cutting-edge AI technology that directly impacts developers worldwide
- Collaborative and innovative work environment
- Competitive salary and benefits package
- Professional development opportunities and support for attending relevant conferences
- Chance to publish and present research findings in academic and industry forums
Location : Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
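Since the role involves contextual retrieval, rerankers, and vector databases, the sketch below shows a basic retrieve-then-rerank flow over code snippets. The public sentence-transformers checkpoints, the toy snippets, and the in-memory cosine-similarity search standing in for a vector database are all assumptions, not the team's actual stack.

```python
# Retrieve-then-rerank sketch: bi-encoder for recall, cross-encoder for precision.
import numpy as np
from sentence_transformers import CrossEncoder, SentenceTransformer

snippets = [
    "def read_json(path): ...",
    "class Button(React.Component): ...",
    "SELECT * FROM users WHERE id = %s",
]
query = "how to parse a json file in python"

# Stage 1: embed with a bi-encoder; the dot product over normalized vectors
# plays the role a vector database index would play at scale.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = encoder.encode(snippets, normalize_embeddings=True)
query_vec = encoder.encode([query], normalize_embeddings=True)[0]
candidate_ids = np.argsort(doc_vecs @ query_vec)[::-1][:2]

# Stage 2: a cross-encoder reranks the shortlisted candidates for final ordering.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
scores = reranker.predict([(query, snippets[i]) for i in candidate_ids])
best = candidate_ids[int(np.argmax(scores))]
print("top snippet:", snippets[best])
```

In an IDE setting, the retrieval stage would draw on the user's workspace and the reranker would weigh the current cursor context, but the two-stage shape stays the same.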
Posted 1 month ago