Jobs
Interviews

1795 Mlflow Jobs - Page 6

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: About Us

At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services

Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview*

The Data Analytics Strategy platform and decision tool team is responsible for the data strategy for the entire CSWT and for developing the platforms that support it.
Data Science Platform, Graph Data Platform, and Enterprise Events Hub are key platforms of the Data Platform initiative.

Job Description*

We're looking for a highly skilled Container Platform Engineer to architect, implement, and manage our cloud-agnostic Data Science and Analytical Platform. Leveraging OpenShift (or other Kubernetes distributions) as the core container orchestration layer, you'll build a scalable and secure infrastructure vital for ML workloads and shared services. This role is key to establishing a robust hybrid architecture, paving the way for seamless future migration to AWS, Azure, or GCP. This individual will work closely with data scientists, MLOps engineers, and platform teams to enable efficient model development, versioning, deployment, and monitoring within a multi-tenant environment.

Responsibilities*

Responsible for developing risk solutions to meet enterprise-wide regulatory requirements. Monitors and manages large systems/platforms efficiently. Contributes to story refinement and definition of requirements. Participates in estimating work necessary to realize a story/requirement through the delivery lifecycle. Mentors team members, advocates best practices, and promotes a culture of continuous improvement and innovation in engineering processes. Develops efficient utilities, automation frameworks, and data science platforms that can be utilized across multiple Data Science teams. Proposes/builds a variety of efficient data pipelines to support ML model building and deployment. Proposes/builds automated deployment pipelines to enable a self-help continuous deployment process for the Data Science teams. Analyzes, understands, executes, and resolves issues in user scripts/models/code. Performs release and upgrade activities as required. Well versed in open-source technology and aware of emerging third-party technology and tools in the AI/ML space. Ability to firefight, propose fixes, and guide the team through day-to-day issues in production.
Ability to train partner Data Science teams on frameworks and the platform. Flexible with time and shift to support project requirements; this does not include any night shift. This position does not include any L1 or L2 (first line of support) responsibility.

Requirements*

Education*: Graduation / Post Graduation: BE/B.Tech/MCA/M.Tech. Certifications, if any: Azure, AWS, GCP, Databricks. Experience Range*: 9+ years.

Foundational Skills*

Platform Design & Deployment: Design and deploy a comprehensive data science tech stack on OpenShift (or other Kubernetes distributions), including support for Jupyter notebooks, model training pipelines, inference services, and internal APIs. Cloud-Agnostic Architecture: Proven ability to build a cloud-agnostic container platform capable of seamless migration from on-prem OpenShift to cloud-native Kubernetes on AWS, Azure, or GCP. Container Platform Management: Expertise in configuring and managing multi-tenant namespaces, RBAC, network policies, and resource quotas within Kubernetes/OpenShift environments. API Gateway & Security: Hands-on experience with API gateway technologies like Apache APISIX (or similar tools) for managing and securing API traffic, including JWT/OAuth2-based authentication. MLOps Toolchain Support: Experience deploying and maintaining critical MLOps toolchains such as MLflow, Kubeflow, model registries, and feature stores. CI/CD & GitOps: Strong integration experience with GitOps and CI/CD tools (e.g., ArgoCD, Jenkins, GitHub Actions) for automating ML model and infrastructure deployment workflows. Microservices Deployment: Ability to deploy and maintain containerized microservices using Python frameworks (FastAPI, Flask) or Node.js to serve ML APIs. Observability: Ensure comprehensive observability across platform components using industry-standard tools like Prometheus, Grafana, and EFK/ELK stacks.
Infrastructure as Code (IaC): Proficiency in automating platform provisioning and configuration using Infrastructure as Code tools (Terraform, Ansible, or Helm). Policy & Governance: Expertise with Open Policy Agent (OPA) or similar policy-as-code frameworks for implementing and enforcing robust governance policies. Desired Skills* Lead the design, development, and implementation of scalable, high-performance applications using Python/Java/Scala. Apply expertise in Machine Learning (ML) to build predictive models, enhance decision-making capabilities, and drive business insights. Collaborate with cross-functional teams to design, implement, and optimize cloud-based architectures on AWS and Azure. Work with large-scale distributed technologies like Apache Kafka, Apache Spark, and Apache Storm to ensure seamless data processing and messaging at scale. Provide expertise in Java multi-threading, concurrency, and other advanced Java concepts to ensure the development of high-performance, thread-safe, and optimized applications. Architect and build data lakes and data pipelines for large-scale data ingestion, processing, and analytics. Ensure integration of complex systems and applications across various platforms while adhering to best practices in coding, testing, and deployment. Collaborate closely with stakeholders to understand business requirements and translate them into technical specifications. Manage technical risk and work on performance tuning, scalability, and optimization of systems. Provide leadership to junior team members, offering guidance and mentorship to help develop their technical skills. Effective communication, Strong stakeholder engagement skills, Proven ability in leading and mentoring a team of software engineers in a dynamic environment. Security Architecture: Understanding of zero-trust security architecture and secure API design patterns. 
Model Serving Frameworks: Knowledge of specialized model serving frameworks like Triton Inference Server. Vector Databases: Familiarity with Vector databases (e.g., Redis, Qdrant) and embedding stores. Data Lineage & Metadata: Exposure to data lineage and metadata management using tools like DataHub or OpenMetadata. Work Timings* 11:30 AM to 8:30 PM IST Job Location* Chennai
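The multi-tenant namespace and resource-quota requirement described above can be illustrated with a small sketch. This is plain Python for illustration only; in a real cluster, Kubernetes/OpenShift enforces this declaratively via ResourceQuota objects, and the tenant names and limits below are invented.

```python
# Toy model of per-tenant resource quotas, mirroring what a Kubernetes
# ResourceQuota does for a namespace (illustrative, not production code).

from dataclasses import dataclass

@dataclass
class NamespaceQuota:
    """Per-tenant limits for a shared cluster."""
    cpu_limit: float          # total CPU cores allowed in the namespace
    memory_limit_gi: float    # total memory (GiB) allowed
    cpu_used: float = 0.0
    memory_used_gi: float = 0.0

    def try_schedule(self, cpu: float, memory_gi: float) -> bool:
        """Admit the workload only if it fits the remaining quota."""
        if (self.cpu_used + cpu > self.cpu_limit or
                self.memory_used_gi + memory_gi > self.memory_limit_gi):
            return False
        self.cpu_used += cpu
        self.memory_used_gi += memory_gi
        return True

# Two hypothetical data science teams sharing a cluster.
tenants = {
    "ds-team-a": NamespaceQuota(cpu_limit=8, memory_limit_gi=32),
    "ds-team-b": NamespaceQuota(cpu_limit=4, memory_limit_gi=16),
}

print(tenants["ds-team-a"].try_schedule(cpu=6, memory_gi=24))   # True: fits
print(tenants["ds-team-a"].try_schedule(cpu=4, memory_gi=4))    # False: CPU over quota
```

The same admit-or-reject decision is what keeps one tenant's training job from starving another's inference service in the multi-tenant setup the role describes.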

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Python Developer – AI/ML Document Automation Location: Hyderabad Work Mode: Hybrid Experience: 5+ Years Job Summary: We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation. The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms. Roles and Responsibilities: Design and develop modular Python applications for document parsing and intelligent automation. Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction. Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats. Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker. Collaborate with cross-functional teams to define automation goals and data strategies. Conduct code reviews, mentor junior developers, and uphold best coding practices. Monitor model performance and implement feedback mechanisms for continuous improvement. Maintain thorough documentation of workflows, metrics, and architectural decisions. Mandatory Skills: Expert in Python (OOP, asynchronous programming, modular design). Strong foundation in machine learning algorithms and natural language processing techniques. Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers. Proficient in developing REST APIs using FastAPI or Flask. Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools. Skilled in regex-based extraction and rule-based NER. Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure). Exposure to MLOps tools such as MLflow, Airflow, or LangChain.
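The rule-based extraction work this role describes can be sketched with plain regular expressions pulling key fields out of messy invoice text. The field names and patterns below are hypothetical examples, not a production schema; a real pipeline would combine this with spaCy NER and PyMuPDF/Tesseract text extraction.

```python
# Minimal regex-based field extraction sketch for invoice-like documents.

import re

INVOICE_PATTERNS = {
    "invoice_number": re.compile(r"invoice\s*(?:no\.?|number|#)\s*[:\-]?\s*(\S+)", re.I),
    "total": re.compile(r"total\s*(?:due|amount)?\s*[:\-]?\s*\$?([\d,]+\.\d{2})", re.I),
    "date": re.compile(r"date\s*[:\-]?\s*(\d{4}-\d{2}-\d{2})", re.I),
}

def extract_invoice_fields(text: str) -> dict:
    """Return whichever fields the patterns can find; missing fields stay absent."""
    out = {}
    for name, pattern in INVOICE_PATTERNS.items():
        match = pattern.search(text)
        if match:
            out[name] = match.group(1)
    return out

sample = "ACME Corp\nInvoice #: INV-2024-0042\nDate: 2024-03-15\nTotal due: $1,299.00"
print(extract_invoice_fields(sample))
```

Because each pattern is independent, the extractor degrades gracefully on documents with varying layouts: a missing label simply yields a missing key rather than a failure, which is the behavior a feedback-driven monitoring loop needs.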

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Hyderabad
Work Mode: Hybrid (2–3 days on-site/week)
Experience Required: Minimum 3+ Years
Salary Range: ₹6 – ₹10 LPA (based on experience and skill set)

Disclaimer – Please Read Before You Apply
We are not accepting applications from freshers or freelance-only profiles. This role requires prior corporate experience (minimum 2 years) with a clear understanding of how to automate contact extraction workflows using Python and NLP. This role goes beyond writing Python scripts—it’s about applying NLP (Natural Language Processing) in real-world scenarios where data isn’t clean or structured. We are looking for someone who can extract contact information like names, emails, phone numbers, and company details from documents such as PDFs, resumes, scanned images, and emails, even when the formats vary or the layout is messy.

You should be able to:
Understand the structure and flow of unstructured text
Apply NLP concepts to locate and extract relevant contact details
Build logic to automate this extraction across different document types
Think critically and creatively to handle inconsistent data inputs

You don’t need to rely on pre-built solutions—we value your ability to reason through the problem and implement your own approach. If you're passionate about NLP and love solving messy data problems, we encourage you to apply. To ensure you've read this section thoroughly, we’ve included a small check in the application process. The keyword is the number 7 — you’ll be asked for it at the end of the form.

About the Role
A leading AI-focused organization is looking for a Junior Python Developer with strong experience in machine learning and natural language processing. This is a great opportunity to work closely with senior engineers on cutting-edge AI initiatives that involve building, training, and deploying intelligent models at scale.

Key Responsibilities
Develop and optimize Python scripts for data preprocessing, training, and evaluating NLP/ML models.
Contribute to ML pipeline development using Scikit-learn, TensorFlow, or PyTorch. Deploy ML models via REST APIs using Flask or FastAPI. Handle data in various formats such as JSON, CSV, and PDF using spaCy, PyMuPDF, or regex-based logic. Participate in testing, debugging, and validation of machine learning workflows. Maintain documentation on code, model performance, and technical decisions. Stay updated with tools like Hugging Face, LangChain, and MLflow. Mandatory Skills Minimum 3 years of hands-on Python development experience. Proficient in Python libraries such as NumPy, Pandas, and Matplotlib. Experience with NLP tasks using tools like spaCy or regex for rule-based NER. Understanding of core ML algorithms: classification, regression, clustering. Familiar with ML frameworks: Scikit-learn, TensorFlow, or PyTorch. Experience developing and consuming REST APIs. Proficient with Git and collaborative version control. Nice to Have Experience with Hugging Face Transformers or LangChain. Familiarity with MLflow or similar model lifecycle tools. Exposure to OCR or intelligent document processing projects. How to Apply Please send your updated resume to komal.bansal@zetamicron.com Shortlisted candidates will be contacted for further discussions.
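The contact-extraction task this listing centers on can be sketched with the standard library alone. Real documents would first need text extraction (PyMuPDF for PDFs, Tesseract for scans), and the patterns and sample resume below are deliberately simple, invented illustrations rather than production-grade rules.

```python
# Rule-based contact extraction sketch: emails and phone numbers from
# unstructured resume-style text.

import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
# Optional +country code, then 10 digits possibly split by a space or hyphen.
PHONE_RE = re.compile(r"(?:\+\d{1,3}[\s-]?)?\d{5}[\s-]?\d{5}")

def extract_contacts(text: str) -> dict:
    return {
        "emails": EMAIL_RE.findall(text),
        # Normalize phone formatting so downstream systems get one canonical form.
        "phones": [re.sub(r"[\s-]", "", p) for p in PHONE_RE.findall(text)],
    }

resume_text = (
    "Priya Sharma | Data Analyst\n"
    "Email: priya.sharma@example.com  Phone: +91 98765-43210\n"
    "Alt: priya_s@work.example.org"
)
print(extract_contacts(resume_text))
```

Regex handles the well-structured fields; names and company details are the harder part, which is where NER (spaCy or rule-based) takes over, exactly as the listing suggests.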

Posted 4 days ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Data Science Specialist Experience Level: 6+ Years (Overall), 3+ Years (Relevant) Job Summary: We are seeking a highly skilled and experienced Data Science Specialist to join our team. The ideal candidate will have a strong foundation in data science, with a proven ability to design, develop, and deploy intelligent systems. This role requires expertise in advanced analytics, statistical modeling, and machine learning, with a focus on delivering scalable solutions. Exposure to Microsoft AI tools and agentic AI paradigms will be a strong advantage. Key Responsibilities: Design and implement advanced data science models to solve business problems. Work with structured and unstructured data to derive insights using statistical and machine learning techniques. Collaborate with stakeholders to identify use cases and develop data-driven strategies. Develop and deploy AI solutions on cloud platforms, preferably Microsoft Azure. Support and implement MLOps practices to ensure seamless deployment and monitoring of models. Explore and experiment with Agentic AI paradigms including autonomous agents, multi-agent systems, and orchestration frameworks. Document models, processes, and workflows and ensure compliance with best practices. Mandatory Skills: Strong experience in Data Science, including ML algorithms, statistical modeling, and data wrangling. Hands-on experience with Python/R and associated data science libraries (e.g., pandas, scikit-learn, TensorFlow, PyTorch). Solid understanding of data processing and visualization tools. Good-to-Have Skills: Experience with Microsoft AI tools and services (e.g., Azure Machine Learning, Cognitive Services, Azure OpenAI). Familiarity with Agentic AI paradigms – including autonomous agents, multi-agent systems, and orchestration frameworks. Strong exposure to cloud platforms, with preference for Microsoft Azure. Experience with MLOps tools and practices, such as MLflow, Kubeflow, or Azure ML pipelines.
Qualifications: Bachelor’s or Master’s degree in Computer Science, Data Science, AI, or a related field. 6+ years of overall industry experience with at least 3 years in core data science roles. Excellent problem-solving skills and ability to work in a collaborative, team-oriented environment.

SigniTeq is a Global Technology Services Company focused on building New-Age Technology Solutions and Augmenting Technology Teams for many of the World’s Leading Brands and Large Global Enterprises. We bring together some of the Brightest Minds in Open-Source and Emerging Technologies, operating from our Offices in India, UAE, USA, Mexico, and Australia, and engage a strong presence of Technology Resources across 100+ Countries.

Our Credentials Include:
· Wonderful Workplaces To Shape Your Career - 2021 by The CEOStory
· Top 20 Most Promising Blockchain Companies - 2020 by CIOReview
· Winner Of Innovative Startup Solution Challenge For Combating Covid-19, Govt Of India
· Winner Of WhatsApp Challenge by Facebook Corporation
· ISO 9001:2015 Quality Certified Company
· ISO 27001:2013 ISMS Certified Company
· An AWS Partner

We Offer:
· 5 Days Working
· Medical Coverage
· World Class Client Brands
· Prime Office Location
· Great Employee Engagements
· Emerging Technology Practices
· Learning Experience From Leaders

For more information please visit: www.SigniTeq.com
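The classification workflow a role like this involves can be shown with a tiny, standard-library-only sketch. A real project would use scikit-learn (which the listing names); the nearest-centroid classifier and the two synthetic clusters below are made up purely for illustration.

```python
# Toy nearest-centroid classifier: fit one centroid per class, then label a
# point by the closest centroid (a minimal stand-in for sklearn's
# NearestCentroid on invented 2-D data).

import math
from statistics import mean

def centroid(points):
    """Coordinate-wise mean of a list of points."""
    return [mean(coord) for coord in zip(*points)]

def nearest_centroid_fit(X, y):
    """Compute one centroid per class label."""
    by_label = {}
    for x, label in zip(X, y):
        by_label.setdefault(label, []).append(x)
    return {label: centroid(pts) for label, pts in by_label.items()}

def predict(model, x):
    return min(model, key=lambda label: math.dist(model[label], x))

# Two synthetic clusters standing in for feature vectors.
X = [[1.0, 1.2], [0.9, 0.8], [5.0, 5.1], [5.2, 4.9]]
y = ["low", "low", "high", "high"]
model = nearest_centroid_fit(X, y)
print(predict(model, [1.1, 1.0]))   # low
print(predict(model, [4.8, 5.0]))   # high
```

The same fit/predict split is the shape every ML algorithm in the listing follows; swapping the centroid logic for a learned model changes the internals but not the workflow.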

Posted 4 days ago

Apply

0.0 - 1.0 years

8 - 14 Lacs

Hyderabad, Telangana

On-site

Job Title: Senior Python Developer – AI/ML Document Automation Location: Hyderabad Work Mode: Hybrid Experience: 5+ Years Job Summary: We are looking for a highly skilled Senior Python Developer with deep expertise in AI/ML and document automation. The ideal candidate will lead the design and development of intelligent systems for extracting and processing structured and unstructured data from documents such as invoices, receipts, contracts, and PDFs. This role involves both hands-on coding and architectural contributions to scalable automation platforms. Roles and Responsibilities: Design and develop modular Python applications for document parsing and intelligent automation. Build and optimize ML/NLP pipelines for tasks like Named Entity Recognition (NER), classification, and layout-aware data extraction. Integrate rule-based and AI-driven techniques (e.g., regex, spaCy, PyMuPDF, Tesseract) to handle diverse document formats. Develop and deploy models via REST APIs using FastAPI or Flask, and containerize with Docker. Collaborate with cross-functional teams to define automation goals and data strategies. Conduct code reviews, mentor junior developers, and uphold best coding practices. Monitor model performance and implement feedback mechanisms for continuous improvement. Maintain thorough documentation of workflows, metrics, and architectural decisions. Mandatory Skills: Expert in Python (OOP, asynchronous programming, modular design). Strong foundation in machine learning algorithms and natural language processing techniques. Hands-on experience with Scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers. Proficient in developing REST APIs using FastAPI or Flask. Experience in PDF/text extraction using PyMuPDF, Tesseract, or similar tools. Skilled in regex-based extraction and rule-based NER. Familiar with Git, Docker, and any major cloud platform (AWS, GCP, or Azure). Exposure to MLOps tools such as MLflow, Airflow, or LangChain.
Job Type: Full-time Pay: ₹800,000.00 - ₹1,400,000.00 per year Benefits: Provident Fund Schedule: Day shift, Monday to Friday Application Question(s): Are you an immediate joiner? Experience: Python: 2 years (Required); AI/ML: 2 years (Required); NLP: 1 year (Required) Location: Hyderabad, Telangana (Required) Work Location: In person

Posted 4 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Description – AI Developer (Agentic AI Frameworks, Computer Vision & LLMs)
Location: Hybrid - Bangalore

About the Role
We’re seeking an AI Developer who specializes in agentic AI frameworks—LangChain, LangGraph, CrewAI, or equivalents—and who can take both vision and language models from prototype to production. You will lead the design of multi-agent systems that coordinate perception (image classification & extraction), reasoning, and action, while owning the end-to-end deep-learning life-cycle (training, scaling, deployment, and monitoring).

Key Responsibilities
Agentic AI Frameworks (Primary Focus): Architect and implement multi-agent workflows using LangChain, LangGraph, CrewAI, or similar. Design role hierarchies, state graphs, and tool integrations that enable autonomous data processing, decision-making, and orchestration. Benchmark and optimize agent performance (cost, latency, reliability).
Image Classification & Extraction: Build and fine-tune CNN/ViT models for classification, detection, OCR, and structured data extraction. Create scalable data-ingestion, labeling, and augmentation pipelines.
LLM Fine-Tuning & Retrieval-Augmented Generation (RAG): Fine-tune open-weight LLMs with LoRA/QLoRA, PEFT; perform SFT, DPO, or RLHF as needed. Implement RAG pipelines using vector databases (FAISS, Weaviate, pgvector) and domain-specific adapters.
Deep Learning at Scale: Develop reproducible training workflows in PyTorch/TensorFlow with experiment tracking (MLflow, W&B). Serve models via TorchServe/Triton/KServe on Kubernetes, SageMaker, or GCP Vertex AI.
MLOps & Production Excellence: Build robust APIs/micro-services (FastAPI, gRPC). Establish CI/CD, monitoring (Prometheus, Grafana), and automated retraining triggers. Optimize inference on CPU/GPU/Edge with ONNX/TensorRT, quantization, and pruning.
Collaboration & Mentorship: Translate product requirements into scalable AI services.
Mentor junior engineers, conduct code and experiment reviews, and evangelize best practices.

Minimum Qualifications
B.S./M.S. in Computer Science, Electrical Engineering, Applied Math, or related discipline. 5+ years building production ML/DL systems with strong Python & Git. Demonstrable expertise in at least one agentic AI framework (LangChain, LangGraph, CrewAI, or comparable). Proven delivery of computer-vision models for image classification/extraction. Hands-on experience fine-tuning LLMs and deploying RAG solutions. Solid understanding of containerization (Docker) and cloud AI stacks (AWS/Azure). Knowledge of distributed training, GPU acceleration, and performance optimization.

Job Type: Full-time
Pay: Up to ₹1,200,000.00 per year
Experience: AI, LLM, RAG: 4 years (Preferred); Vector database, Image classification: 4 years (Preferred); containerization (Docker): 3 years (Preferred); ML/DL systems with strong Python & Git: 3 years (Preferred); LangChain, LangGraph, CrewAI: 3 years (Preferred)
Location: Bangalore, Karnataka (Preferred)
Work Location: In person
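The RAG retrieval step named above can be sketched with a toy in-memory vector store using cosine similarity, standing in for FAISS/Weaviate/pgvector. The "embeddings" here are hand-made bag-of-words vectors purely for illustration; a real pipeline would use a learned embedding model.

```python
# Toy vector store + cosine-similarity retrieval, the core of a RAG pipeline.

import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: bag-of-words term counts (illustrative only)."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyVectorStore:
    def __init__(self):
        self.docs = []

    def add(self, text: str):
        self.docs.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 1):
        """Return the k documents most similar to the query."""
        qv = embed(query)
        ranked = sorted(self.docs, key=lambda d: cosine(qv, d[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = ToyVectorStore()
store.add("LoRA fine-tunes a small number of adapter weights")
store.add("Kubernetes schedules containers across a cluster")
print(store.retrieve("how does LoRA adapter fine-tuning work"))
```

In a production RAG system the retrieved passages would then be stuffed into the LLM prompt as grounding context; only the embedding and index internals change, not this retrieve-then-generate shape.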

Posted 5 days ago

Apply

7.0 - 9.0 years

8 - 15 Lacs

Bengaluru, Karnataka, India

On-site

Job description
Bachelor’s in Computer Science or equivalent experience. 7+ years of experience in building and managing big-data platforms and programming experience in Java. Working experience with search and information retrieval (Lucene, Solr, Elasticsearch, Milvus, Vespa). Experience scaling distributed ML and AI systems to handle millions of concurrent requests is desirable. Experience with ML applications in cross-domain contexts is a plus. Experience with public cloud (AWS/GCP). Good understanding of the AI/ML stack (GPUs, MLflow, LLM models) is a plus. Understanding of data modeling, data warehousing, and ETL concepts is a plus. Have created frameworks to deploy platforms in AWS/Azure/GCP. Ability to lead and mentor junior team members, provide technical guidance, and collaborate effectively with multi-functional teams to deliver complex projects. Commitment to staying updated with the latest advancements in machine learning and data science, and willingness to learn new tools and technologies as needed.

Posted 5 days ago

Apply

5.0 - 10.0 years

6 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Job description
At Apple, code and functional quality is always at the forefront and one of the key measures of success. As part of the Applied Machine Learning team, you will be responsible for conceptualising, designing, implementing and running cutting-edge solutions, leveraging the best-suited ML, AI and NLP techniques and contributing to building reusable platforms. Our group has a broad impact, so you can expect to be cross-functional and see new, wondrous and challenging use cases where modern-day technologies can help solve real-world problems. We are looking for someone who has a proven track record in crafting and developing high-quality enterprise software solutions. This position requires a hands-on person who is passionate about understanding the details of a problem, can think about different solutions, and can advise a team by example when the time to implement comes. We're looking for a self-starting, upbeat individual with excellent written and verbal communication skills: a hardworking person who is a great teammate, but not afraid to question assumptions and take initiative. If you feel this is you, we'd love to hear from you.

Minimum Qualifications
Bachelor's in Computer Science or equivalent experience. 7+ years of experience in building and managing big-data platforms and programming experience in Java. Working experience with search and information retrieval (Lucene, Solr, Elasticsearch, Milvus, Vespa).

Preferred Qualifications
Experience scaling distributed ML and AI systems to handle millions of concurrent requests is desirable. Experience with ML applications in cross-domain contexts is a plus. Experience with public cloud (AWS/GCP). Good understanding of the AI/ML stack (GPUs, MLflow, LLM models) is a plus. Understanding of data modeling, data warehousing, and ETL concepts is a plus. Have created frameworks to deploy platforms in AWS/Azure/GCP.
Ability to lead and mentor junior team members, provide technical guidance, and collaborate effectively with multi-functional teams to deliver complex projects. Commitment to staying updated with the latest advancements in machine learning and data science, and willingness to learn new tools and technologies as needed.

Posted 5 days ago

Apply

2.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary: We are seeking a highly motivated and skilled LLM Engineer with 2 to 7 years of professional experience to join our growing AI team. The ideal candidate will have a strong background in natural language processing, machine learning, and hands-on experience in developing, deploying, and optimizing solutions built upon Large Language Models. You will play a crucial role in designing, implementing, and maintaining robust and scalable LLM-powered applications, contributing to the full lifecycle of our AI products. Key Responsibilities: LLM Application Development: Design, develop, and deploy innovative applications leveraging state-of-the-art Large Language Models (e.g., GPT, Llama, Gemini, Claude, etc.). This includes working with various LLM APIs and open-source models. Prompt Engineering & Optimization: Develop and refine advanced prompt engineering techniques to maximize LLM performance, accuracy, and desired output for specific use cases. Fine-tuning & Adaptation: Experiment with and implement strategies for fine-tuning pre-trained LLMs on custom datasets to improve performance for domain-specific tasks. Data Preparation & Curation: Work with diverse datasets for training, fine-tuning, and evaluating LLMs, ensuring data quality, relevance, and ethical considerations. Model Evaluation & Benchmarking: Design and execute robust evaluation methodologies to assess LLM performance, identify biases, and ensure alignment with business objectives. Implement A/B testing and other experimentation frameworks. Integration & Deployment: Integrate LLM-powered solutions into existing systems and deploy them to production environments, ensuring scalability, reliability, and low latency. Experience with MLOps practices is highly desirable. Performance Optimization: Identify and implement strategies for optimizing LLM inference, resource utilization, and cost efficiency. 
Research & Innovation: Stay abreast of the latest advancements in LLMs, NLP, and machine learning research. Proactively explore and propose new technologies and approaches to enhance our AI capabilities. Collaboration: Work closely with cross-functional teams including data scientists, software engineers, product managers, and researchers to deliver impactful AI solutions. Documentation: Create clear and concise documentation for models, code, and deployment procedures. Required Qualifications: Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, Computational Linguistics, or a related quantitative field. 2-7 years of professional experience in a role focused on Machine Learning, Natural Language Processing, or AI development. Strong proficiency in Python and relevant ML/DL frameworks (e.g., TensorFlow, PyTorch, Hugging Face Transformers). Hands-on experience with Large Language Models (LLMs), including familiarity with their architectures (e.g., Transformers) and practical application. Experience with prompt engineering techniques and strategies. Solid understanding of NLP concepts, including text pre-processing, embeddings, semantic search, and information retrieval. Familiarity with cloud platforms (AWS, GCP, Azure) and their AI/ML services. Experience with version control systems (e.g., Git). Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication and interpersonal skills to articulate complex technical concepts to both technical and non-technical audiences. Preferred Qualifications (Bonus Points for): Experience with fine-tuning LLMs on custom datasets. Familiarity with MLOps tools and practices (e.g., MLflow, Kubeflow, Docker, Kubernetes). Experience with vector databases (e.g., Pinecone, Weaviate, Milvus) for RAG applications. Knowledge of various retrieval techniques for Retrieval Augmented Generation (RAG) systems.
Understanding of ethical AI principles, bias detection, and fairness in LLMs. Contributions to open-source projects or relevant publications. Experience with distributed computing frameworks.
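The model evaluation and A/B testing responsibilities this listing describes can be sketched as a tiny harness that scores two prompt variants against a labelled set with exact-match accuracy. `fake_llm` is a hypothetical stand-in for a real model call, and the dataset and prompts are invented for illustration.

```python
# Minimal prompt A/B evaluation harness with a stubbed model call.

def fake_llm(prompt: str) -> str:
    # Stand-in for a model API call; "answers correctly" only when the
    # stronger prompt shape is used and the text is positive.
    return "positive" if "Sentiment:" in prompt and "love" in prompt else "negative"

def exact_match_accuracy(prompt_template: str, dataset: list) -> float:
    """Fraction of examples where the model output equals the gold label."""
    hits = sum(
        fake_llm(prompt_template.format(text=text)) == label
        for text, label in dataset
    )
    return hits / len(dataset)

dataset = [
    ("I love this product", "positive"),
    ("Terrible experience", "negative"),
]

prompt_a = "Classify: {text}"              # weaker variant
prompt_b = "Sentiment: {text}\nAnswer:"    # stronger variant

print(exact_match_accuracy(prompt_a, dataset))   # 0.5
print(exact_match_accuracy(prompt_b, dataset))   # 1.0
```

Swapping `fake_llm` for a real API client turns this into the skeleton of the A/B experimentation framework the listing asks for; richer metrics (BLEU, rubric scoring, LLM-as-judge) slot into the same comparison loop.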

Posted 5 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Backend & MLOps Engineer – Integration, API, and Infrastructure Expert

1. Role Objective: Responsible for building robust backend infrastructure, managing ML operations, and creating scalable APIs for AI applications. Must excel at deploying and maintaining AI products in production environments with high availability and security standards. The engineer will be expected to build secure, scalable backend systems that integrate AI models into services (REST, gRPC), manage data pipelines, enable model versioning, and deploy containerized applications in secure (air-gapped) Naval infrastructure.

2. Key Responsibilities:
2.1. Create RESTful and/or gRPC APIs for model services.
2.2. Containerize AI applications and maintain Kubernetes-compatible Docker images.
2.3. Develop CI/CD pipelines for model training and deployment.
2.4. Integrate models as microservices using TorchServe, Triton, or FastAPI.
2.5. Implement observability (metrics, logs, alerts) for deployed AI pipelines.
2.6. Build secured data ingestion and processing workflows (ETL/ELT).
2.7. Optimize deployments for CPU/GPU performance, power efficiency, and memory usage.

3. Educational Qualifications
Essential Requirements:
3.1. B.Tech/M.Tech in Computer Science, Information Technology, or Software Engineering.
3.2. Strong foundation in distributed systems, databases, and cloud computing.
3.3. Minimum 70% marks or 7.5 CGPA in relevant disciplines.
Professional Certifications:
3.4. AWS Solutions Architect/DevOps Engineer Professional.
3.5. Google Cloud Professional ML Engineer or DevOps Engineer.
3.6. Azure AI Engineer or DevOps Engineer Expert.
3.7. Kubernetes Administrator (CKA) or Developer (CKAD).
3.8. Docker Certified Associate.

Core Skills & Tools
4. Backend Development:
4.1. Languages: Python, Go, Java, Node.js, Rust (for performance-critical components).
4.2. Web Frameworks: FastAPI, Django, Flask, Spring Boot, Express.js.
4.3. API Development: RESTful APIs, GraphQL, gRPC, WebSocket connections.
4.4. Authentication & Security: OAuth 2.0, JWT, API rate limiting, encryption protocols.

5. MLOps & Model Management:
5.1. ML Platforms: MLflow, Kubeflow, Apache Airflow, Prefect.
5.2. Model Serving: TensorFlow Serving, TorchServe, ONNX Runtime, NVIDIA Triton, BentoML.
5.3. Experiment Tracking: Weights & Biases, Neptune, ClearML.
5.4. Feature Stores: Feast, Tecton, Amazon SageMaker Feature Store.
5.5. Model Monitoring: Evidently AI, Arize, Fiddler, custom monitoring solutions.

6. Infrastructure & DevOps:
6.1. Containerization: Docker, Podman, container optimization.
6.2. Orchestration: Kubernetes, Docker Swarm, OpenShift.
6.3. Cloud Platforms: AWS, Google Cloud, Azure (multi-cloud expertise preferred).
6.4. Infrastructure as Code: Terraform, CloudFormation, Pulumi, Ansible.
6.5. CI/CD: Jenkins, GitLab CI, GitHub Actions, ArgoCD.
6.6. DevOps & Infra: Docker, Kubernetes, NGINX, GitHub Actions, Jenkins.

7. Database & Storage:
7.1. Relational: PostgreSQL, MySQL, Oracle (for enterprise applications).
7.2. NoSQL: MongoDB, Cassandra, Redis, Elasticsearch.
7.3. Vector Databases: Pinecone, Weaviate, Chroma, Milvus.
7.4. Data Lakes: Apache Spark, Hadoop, Delta Lake, Apache Iceberg.
7.5. Object Storage: AWS S3, Google Cloud Storage, MinIO.

8. Secure Deployment:
8.1. Military-grade security protocols and compliance.
8.2. Air-gapped deployment capabilities.
8.3. Encrypted data transmission and storage.
8.4. Role-based access control (RBAC) and IDAM integration.
8.5. Audit logging and compliance reporting.

9. Edge Computing:
9.1. Deployment on naval vessels with air-gapped connectivity.
9.2. Optimization of applications for resource-constrained environments.

10. High Availability Systems:
10.1. Mission-critical system design with 99.9% uptime.
10.2. Disaster recovery and backup strategies.
10.3. Load balancing and auto-scaling.
10.4. Failover mechanisms for critical operations.

11. Cross-Compatibility Requirements:
11.1. Define and expose APIs in a documented, frontend-consumable format (Swagger/OpenAPI).
11.2. Develop model loaders for the AI Engineer's ONNX/serialized models.
11.3. Provide UI developers with test environments, mock data, and endpoints.
11.4. Support frontend debugging, edge deployment bundling, and user role enforcement.

12. Experience Requirements
12.1. Production experience with cloud platforms and containerization.
12.2. Experience building and maintaining APIs serving millions of requests.
12.3. Knowledge of database optimization and performance tuning.
12.4. Experience with monitoring and alerting systems.
12.5. Architected and deployed large-scale distributed systems.
12.6. Led infrastructure migration or modernization projects.
12.7. Experience with multi-region deployments and disaster recovery.
12.8. Track record of optimizing system performance and cost.
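The security requirements above (item 4.4) mention API rate limiting. As an illustration of that concept only, here is a minimal token-bucket limiter in plain Python; the `TokenBucket` class and `allow` method are hypothetical names for this sketch, not part of any framework listed in the posting.

```python
import time

class TokenBucket:
    """Minimal token-bucket rate limiter: allows bursts up to `capacity`
    requests, then refills at `rate` tokens per second."""

    def __init__(self, capacity: int, rate: float):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False

bucket = TokenBucket(capacity=5, rate=1.0)
results = [bucket.allow() for _ in range(7)]
print(results)  # in quick succession: first 5 allowed, the rest rejected
```

In production this logic typically lives in an API gateway or middleware (e.g., per-client buckets keyed by API token), rather than in application code.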

Posted 5 days ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Gruve
Gruve is an innovative software services startup dedicated to transforming enterprises into AI powerhouses. We specialize in cybersecurity, customer experience, cloud infrastructure, and advanced technologies such as Large Language Models (LLMs). Our mission is to assist our customers in their business strategies, utilizing their data to make more intelligent decisions. As a well-funded early-stage startup, Gruve offers a dynamic environment with strong customer and partner networks.

Position Summary
We are looking for a dynamic Engineering Manager with a strong technical foundation in both AI/ML and networking technologies. This role demands a leader who can manage and mentor a team of engineers, drive execution on technically complex projects, and interface directly with clients and internal stakeholders to translate requirements into impactful solutions.

Key Responsibilities
- Lead a team of engineers working on AI-driven solutions in the networking and network security domain.
- Act as a hands-on technical leader guiding design, architecture, and implementation.
- Work closely with customers, solution architects, and delivery teams to understand requirements and propose solutions.
- Collaborate with clients' product managers, architects, and engineering stakeholders to align on technical direction, scope, and roadmap.
- Drive execution of AI-based automation, analytics, and optimization use cases in networking projects.
- Ensure timely delivery of high-quality software aligned with client expectations.
- Stay abreast of industry trends in AI and networking to identify innovation opportunities.
- Foster a collaborative and performance-driven culture within the team.
- Lead internal product accelerators and contribute to reusable solution assets and frameworks.
- Drive engineering best practices: Agile/Scrum, CI/CD, code quality, design reviews, architecture decision records.
- Oversee hiring and onboarding of engineers and work closely with recruitment to build high-calibre teams.
- Identify and manage risks proactively, whether technical, resourcing, or delivery-related.
- Coach, mentor, and grow engineering leads and senior developers; foster a culture of learning, collaboration, and ownership.

Basic Qualifications
- 8+ years of experience in engineering roles, with at least 2 years in a team leadership or engineering management capacity.
- 12+ years of experience, with at least 4 years in a team leadership or engineering management capacity.
- Proven experience in projects at the intersection of AI/ML and networking technologies.
- Deep understanding of network communication protocols (e.g., TCP/IP, SNMP, BGP, OSPF).
- Practical knowledge of AI/ML tools and techniques (e.g., Python, TensorFlow/PyTorch, Scikit-learn).
- Strong problem-solving and analytical skills with a hands-on mindset.
- Proven experience managing multiple product engineering teams in a client-facing services environment.
- Excellent communication and stakeholder engagement skills.
- Knowledge of cloud-native platforms (AWS, Azure, GCP), containerization, and microservices.

Preferred Qualifications
- Experience working with OEMs, MSPs, or service providers in the networking domain.
- Understanding of SDN, NFV, or cloud networking architectures (AWS/GCP/Azure).
- Exposure to MLOps tools (e.g., MLflow, Kubeflow, SageMaker).
- Ability to manage cross-functional teams and lead delivery across multiple projects.

Why Gruve
At Gruve, we foster a culture of innovation, collaboration, and continuous learning. We are committed to building a diverse and inclusive workplace where everyone can thrive and contribute their best work. If you're passionate about technology and eager to make an impact, we'd love to hear from you. Gruve is an equal opportunity employer.
We welcome applicants from all backgrounds and thank all who apply; however, only those selected for an interview will be contacted.

Posted 5 days ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Title: Technical Project Manager – Full Stack
Location: Bengaluru, India
Experience: 8+ Years in Full Stack Development, 2+ Years in Architecture/Project Management
Employment Type: Full-time

Company Overview:
IAI Solution Pvt Ltd (www.iaisolution.com) operates at the edge of applied AI, where foundational research meets real-world deployment. We craft intelligent systems that think in teams, adapt with context, and deliver actionable insight across domains. We are seeking a Technical Project Manager who thrives in high-velocity environments, enjoys technical problem-solving, and is passionate about building scalable and impactful systems.

Position Summary:
We are hiring a Technical Project Manager (TPM) who began their career as a Full Stack Developer (JavaScript, Java, Python, Spring Boot), progressed to Technical Lead, and has since grown into a Project and Solution Delivery Leader. This person must have strong technical grounding, cloud architecture expertise, and demonstrated success in managing software projects end to end. Experience in a startup environment is preferred, where agility, ownership, and cross-functional collaboration are key.

Key Responsibilities
- Lead software projects from planning through execution and final delivery.
- Translate business and product goals into technical implementation roadmaps.
- Coordinate delivery across frontend and backend teams working in JavaScript, Java, Python, Spring Boot, and related stacks.
- Architect and oversee deployments using Azure/AWS/GCP.
- Handle CI/CD pipelines, infrastructure automation, and cloud-native development using Docker, Kubernetes, Terraform, and GitHub Actions.
- Manage project timelines, resource planning, and risk mitigation.
- Work closely with stakeholders to ensure delivery meets expectations.
- Maintain focus on security, scalability, and operational excellence.

Must-Have Qualifications
- 8+ years of total experience in software engineering.
- Experience as a Full Stack Developer with JavaScript, Java, Python, and Spring Boot.
- 2+ years in a Technical Project Manager or Technical Lead role.
- Exposure to cloud and solution architecture (Azure preferred).
- Proficiency in managing technical teams and cross-functional delivery.
- Strong communication and collaboration skills.
- Familiarity with Agile project management using Jira.
- Startup experience preferred: ability to manage ambiguity, rapid iterations, and hands-on leadership.

Technical Stack
- Frontend: React.js, Next.js
- Backend: Python, FastAPI, Django, Spring Boot, Node.js
- DevOps & Cloud: Azure (preferred), AWS, Docker, Kubernetes, Terraform
- CI/CD: GitHub Actions, Azure DevOps
- Databases: PostgreSQL, MongoDB, Redis
- Messaging: Kafka, RabbitMQ, Azure Service Bus
- Monitoring: Prometheus, Grafana, ELK Stack

Good-to-Have Skills & Certifications
- Exposure to AI/ML projects and MLOps tools like MLflow or Kubeflow
- Experience with microservices, performance tuning, and frontend optimization
- Certifications: PMP, CSM, CSPO, SAP Activate, PRINCE2, AgilePM, ITIL

Perks & Benefits
- Competitive compensation with performance incentives
- High-impact role in a product-driven, fast-moving environment
- Opportunity to lead mission-critical software and AI initiatives
- Flexible work culture, learning support, and health benefits

Job Type: Full-time
Pay: Up to ₹3,200,000.00 per year
Benefits: Health insurance, paid sick time, Provident Fund
Schedule: Day shift, fixed shift, Monday to Friday, morning shift
Supplemental Pay: Performance bonus, quarterly bonus, yearly bonus
Ability to commute/relocate: Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Current CTC (in Lakhs)? Expected CTC (in Lakhs)? Notice Period (in Days)? Current Location:
Experience: Software development: 4 years (Required)
Location: Bengaluru, Karnataka (Required)
Work Location: In person
Speak with the employer: +91 9003562294

Posted 5 days ago

Apply

6.0 years

0 Lacs

India

On-site

About the Role
We are seeking a visionary and technically astute Lead AI Architect to lead the architecture and design of scalable AI systems and next-generation intelligent platforms. As a core member of the leadership team, you will be responsible for driving end-to-end architectural strategy, model optimization, and AI infrastructure that powers mission-critical solutions across our product lines. This is a foundational role for someone passionate about architecting solutions involving RAG, SLMs/LLMs, multi-agent systems, and scalable model pipelines across cloud-native environments.

Salary: 30–45 LPA with additional benefits.

Key Responsibilities
- Define and own the AI/ML architectural roadmap, aligning with product vision and technical goals.
- Architect and oversee implementation of RAG-based solutions, LLM/SLM fine-tuning pipelines, and multi-agent orchestration.
- Lead design of model training and inference pipelines, ensuring scalability, modularity, and observability.
- Evaluate and select open-source and proprietary foundation models for fine-tuning, instruction tuning, and domain adaptation.
- Guide integration of vector databases, semantic search, and prompt orchestration frameworks (LangChain, LlamaIndex, etc.).
- Ensure best practices in model deployment, versioning, monitoring, and performance optimization (GPU utilization, memory efficiency, etc.).
- Collaborate with Engineering, DevOps, Product, and Data Science teams to bring AI features to production.
- Mentor mid-level engineers and interns; contribute to technical leadership and code quality.
- Maintain awareness of the latest research, model capabilities, and trends in AI.

Required Skills & Qualifications
- 6+ years of hands-on experience in AI/ML architecture and model deployment.
- Expert-level knowledge of Python and libraries such as PyTorch, Hugging Face Transformers, scikit-learn, and FastAPI.
- Deep understanding of LLMs/SLMs, embedding models, tokenization strategies, fine-tuning, quantization, and LoRA/QLoRA.
- Proven experience with Retrieval-Augmented Generation (RAG) pipelines and vector DBs like FAISS, Pinecone, or Weaviate.
- Strong grasp of system design, distributed training, MLOps, and scalable cloud-based infrastructure (AWS/GCP/Azure).
- Experience with containerization (Docker), orchestration (Kubernetes), and experiment tracking (MLflow, W&B).
- Experience in building secure and performant REST APIs, and deploying and monitoring AI services in production.

Nice to Have
- Exposure to multi-agent frameworks, task planners, or LangGraph.
- Experience leading AI platform teams or architecting enterprise-scale ML platforms.
- Familiarity with data governance, Responsible AI, and model compliance requirements.
- Published papers, open-source contributions, or patents in the AI/ML domain.

Why Join Us
- Be at the forefront of innovation in AI and language intelligence.
- Influence strategic technical decisions and drive company-wide AI architecture.
- Lead a growing AI team in a high-impact, fast-paced environment.
- Competitive compensation, equity options, and leadership opportunity.
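The RAG pipelines named in this posting pair an embedding model with a vector store's similarity search. Purely as a toy illustration (bag-of-words counts standing in for learned embeddings, a plain list standing in for FAISS/Pinecone/Weaviate, and hypothetical helper names `embed`, `cosine`, `retrieve`), the retrieval step can be sketched as:

```python
import math
from collections import Counter

def embed(text):
    # Toy "embedding": bag-of-words term counts (a stand-in for a real model).
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query; return the top k as context.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "fine-tuning LLMs with LoRA adapters",
    "vector databases power semantic search",
    "kubernetes deployment of inference services",
]
print(retrieve("semantic search with a vector database", docs, k=1))
```

In a real pipeline the retrieved passages would then be injected into the LLM prompt; frameworks like LangChain or LlamaIndex wrap exactly this retrieve-then-generate flow.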

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

Remote

About the Role
We are seeking a hands-on AI/ML Engineer with deep expertise in Retrieval-Augmented Generation (RAG) agents, Small Language Model (SLM) fine-tuning, and custom dataset workflows. You'll work closely with our AI research and product teams to build production-grade models, deploy APIs, and enable next-gen AI-powered experiences.

Key Responsibilities
- Design and build RAG-based solutions using vector databases and semantic search.
- Fine-tune open-source SLMs (e.g., Mistral, LLaMA, Phi) on custom datasets.
- Develop robust training and evaluation pipelines with reproducibility.
- Create and expose REST APIs for model inference using FastAPI.
- Build lightweight frontends or internal demos with Streamlit for rapid validation.
- Analyze model performance and iterate quickly on experiments.
- Document processes and contribute to knowledge-sharing within the team.

Must-Have Skills
- 3–5 years of experience in applied ML/AI engineering roles.
- Expert in Python and common AI frameworks (Transformers, PyTorch/TensorFlow).
- Deep understanding of RAG architecture and vector stores (FAISS, Pinecone, Weaviate).
- Experience with fine-tuning transformer models and instruction-tuned SLMs.
- Proficient with FastAPI for backend API deployment and Streamlit for prototyping.
- Knowledge of tokenization, embeddings, training loops, and evaluation metrics.

Nice to Have
- Familiarity with LangChain, the Hugging Face ecosystem, and OpenAI APIs.
- Experience with Docker, GitHub Actions, and cloud model deployment (AWS/GCP/Azure).
- Exposure to experiment tracking tools like MLflow and Weights & Biases.

What We Offer
- Build core tech for next-gen AI products with real-world impact.
- Autonomy and ownership in shaping AI components from research to production.
- Competitive salary, flexible remote work policy, and a growth-driven environment.
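The must-have skills above include training loops and evaluation metrics. As a deliberately tiny illustration of the shape of such a loop (no framework, a toy linear model, and a hypothetical `train` helper), plain gradient descent looks like:

```python
def train(data, lr=0.1, epochs=50):
    """Fit a one-parameter linear model y = w*x by gradient descent
    on mean squared error over a tiny in-memory dataset."""
    w = 0.0
    for _ in range(epochs):
        # Gradient of mean((w*x - y)^2) with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

# Toy data sampled from y = 2x; the loop should converge near w = 2.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]
print(round(train(data), 3))
```

Real fine-tuning loops in PyTorch or Transformers follow the same pattern (forward pass, loss, gradient, parameter update) with batching, optimizers, and evaluation checkpoints layered on top.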

Posted 5 days ago

Apply

0 years

0 Lacs

India

Remote

Location: Remote | CTC: ₹5,00,000–₹7,00,000 per annum + ESOPs

About Us:
We're on a mission to revolutionize sports analytics by integrating cutting-edge AI into real-time and post-match video analysis. Our platform delivers powerful insights to enhance athlete performance, optimize coaching strategies, and engage fans like never before. Join us at the intersection of technology and athletic excellence.

Role Overview:
As an AI Sports Analyst / Computer Vision Engineer, you'll play a pivotal role in our technology team, developing intelligent systems that analyze sports video footage to uncover performance metrics, tactical patterns, and game-changing insights. Your innovations will help athletes train smarter, enable coaches to make data-driven decisions, and transform how fans connect with the game, making a real impact on the future of sports.

Job Responsibilities
As our AI Engineer, you will:
✅ Build and optimize computer vision models to detect and track players, the ball, referees, and other key entities in sports footage
✅ Develop event recognition models
✅ Implement pose estimation and action recognition algorithms for performance analytics
✅ Work with multi-camera setups to perform 3D player reconstruction and spatial analysis
✅ Deploy models in real-time pipelines and optimize for low-latency inference
✅ Integrate AI pipelines with front-end visualization dashboards
✅ Collaborate with frontend/backend devs, data annotators, and sports analysts
✅ Stay up to date with state-of-the-art research in vision-based sports analytics

Preferred Skills:
⏱️ Real-time inference pipeline design
⚽ Prior work on sports
🖥️ Familiarity with multi-view 3D reconstruction
📍 Pose estimation or skeletal keypoint tracking
🧪 Background in reinforcement learning for player movement prediction (optional)
💡 Research publications or GitHub repos in sports AI
📦 Experience with MLOps tools: Weights & Biases, MLflow, DVC
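Detection-and-tracking work of the kind described above is conventionally evaluated with intersection-over-union (IoU) between predicted and ground-truth bounding boxes. A minimal sketch, with a hypothetical `iou` helper and boxes given as (x1, y1, x2, y2) corners:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle; clamp to zero when the boxes do not intersect.
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A detection shifted halfway across the tracked player's box:
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 = 0.333...
```

Tracker association (e.g., matching detections to existing player tracks frame to frame) typically thresholds this same score.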

Posted 5 days ago

Apply

40.0 years

6 - 8 Lacs

Hyderābād

On-site

ABOUT AMGEN
Amgen harnesses the best of biology and technology to fight the world's toughest diseases and make people's lives easier, fuller, and longer. We discover, develop, manufacture, and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what is known today.

ABOUT THE ROLE
Amgen is seeking a Sr. Associate IS Business Systems Analyst with strong data science and analytics expertise to join the Digital Workplace Experience (DWX) Automation & Analytics product team. In this role, you will develop, maintain, and optimize machine learning models, forecasting tools, and operational dashboards that support strategic and day-to-day decisions for global digital workplace services. This role is ideal for candidates with hands-on experience building predictive models and working with large operational datasets to uncover insights and deliver automation solutions. You will work alongside product owners, engineers, and service leads to deliver measurable business value using data-driven tools and techniques.

Roles and Responsibilities
- Design, develop, and maintain predictive models, decision support tools, and dashboards using Python, R, SQL, Power BI, or similar platforms.
- Partner with delivery teams to embed data science outputs into business operations, focusing on improving efficiency, reliability, and end-user experience in Digital Workplace services.
- Build and automate data pipelines for data ingestion, cleansing, transformation, and model training using structured and unstructured datasets.
- Monitor, maintain, and tune models to ensure accuracy, interpretability, and sustained business impact.
- Support efforts to operationalize ML models by working with data engineers and platform teams on integration and automation.
- Conduct data exploration, hypothesis testing, and statistical analysis to identify optimization opportunities across services like endpoint health, service desk operations, mobile technology, and collaboration platforms.
- Provide ad hoc and recurring data-driven recommendations to improve automation performance, service delivery, and capacity forecasting.
- Develop reusable components, templates, and frameworks that support analytics and automation scalability across DWX.
- Collaborate with other data scientists, analysts, and developers to implement best practices in model development and lifecycle management.

What we expect of you
We are all different, yet we each use our unique contributions to serve patients. The professional we seek is a candidate with these qualifications.

Basic Qualifications:
- Master's degree / Bachelor's degree and 5 to 9 years of experience in Data Science, Computer Science, IT, or a related field.

Must-Have Skills
- Experience working with large-scale datasets in enterprise environments and with data visualization tools such as Power BI, Tableau, or equivalent.
- Strong experience developing models in Python or R for regression, classification, clustering, forecasting, or anomaly detection.
- Proficiency in SQL and working with relational and non-relational data sources.

Nice-to-Have Skills
- Familiarity with ML pipelines, version control (e.g., Git), and model lifecycle tools (MLflow, SageMaker, etc.).
- Understanding of statistics, data quality, and evaluation metrics for applied machine learning.
- Ability to translate operational questions into structured analysis and model design.
- Experience with cloud platforms (Azure, AWS, GCP) and tools like Databricks, Snowflake, or BigQuery.
- Familiarity with automation tools or scripting (e.g., PowerShell, Bash, Airflow).
- Working knowledge of Agile/SAFe environments.
- Exposure to ITIL practices or ITSM platforms such as ServiceNow.

Soft Skills
- Analytical mindset with attention to detail and data integrity.
- Strong problem-solving and critical thinking skills.
- Ability to work independently and drive tasks to completion.
- Strong collaboration and teamwork skills.
- Adaptability in a fast-paced, evolving environment.
- Clear and concise documentation habits.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
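Anomaly detection over operational metrics, one of the modeling areas this role calls for, often starts from a simple z-score baseline before moving to learned models. A minimal sketch in plain Python (hypothetical `zscore_anomalies` helper, illustrative data):

```python
import statistics

def zscore_anomalies(values, threshold=3.0):
    """Flag points more than `threshold` standard deviations from the mean."""
    mean = statistics.fmean(values)
    sd = statistics.pstdev(values)
    if sd == 0:
        return []  # constant series: nothing stands out
    return [v for v in values if abs(v - mean) / sd > threshold]

# Example: daily service-desk ticket volumes with one spike.
volumes = [100, 98, 103, 101, 99, 102, 100, 250]
print(zscore_anomalies(volumes, threshold=2.0))  # flags the 250 spike
```

For skewed or seasonal operational data, robust variants (median/MAD, or residuals from a forecasting model) usually replace the raw mean and standard deviation.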

Posted 5 days ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

About SAIGroup
SAIGroup is a private investment firm that has committed $1 billion to incubate and scale revolutionary AI-powered enterprise software application companies. Our portfolio, a testament to our success, comprises rapidly growing AI companies that collectively cater to over 2,000 major global customers, approach $800 million in annual revenue, and employ a global workforce of over 4,000 individuals. SAIGroup invests in new ventures based on breakthrough AI-based products that have the potential to disrupt existing enterprise software markets. SAIGroup's latest investment, JazzX AI, is a pioneering technology company on a mission to shape the future of work through an AGI platform purpose-built for the enterprise. JazzX AI is not just building another AI tool; it is reimagining business processes from the ground up, enabling seamless collaboration between humans and intelligent systems. The result is a dramatic leap in productivity, efficiency, and decision velocity, empowering enterprises to become pacesetters who lead their industries and set new benchmarks for innovation and excellence.

Job Title: AGI Solutions Engineer (Junior) – GTM Solution Delivery
(Full-time, remote-first, with periodic travel to client sites and JazzX hubs)

Role Overview
As an Artificial General Intelligence Engineer, you are the hands-on technical force that turns JazzX's AGI platform into working, measurable solutions for customers. You will:
- Build and integrate LLM-driven features, vector search pipelines, and tool-calling agents into client environments.
- Collaborate with solution architects, product, and customer-success teams from discovery through production rollout.
- Contribute field learnings back to the core platform, accelerating time-to-value across all deployments.
You are as comfortable writing production-quality Python as you are debugging Helm charts, and you enjoy explaining your design decisions to both peers and client engineers.

Key Responsibilities
Solution Implementation
- Develop and extend JazzX AGI services (LLM orchestration, retrieval-augmented generation, agents) within customer stacks.
- Integrate data sources, APIs, and auth controls; ensure solutions meet security and compliance requirements.
- Pair with Solution Architects on design reviews; own component-level decisions.
Delivery Lifecycle
- Drive proofs-of-concept, pilots, and production rollouts with an agile, test-driven mindset.
- Create reusable deployment scripts (Terraform, Helm, CI/CD) and operational runbooks.
- Instrument services for observability (tracing, logging, metrics) and participate in on-call rotations.
Collaboration & Support
- Work closely with product and research teams to validate new LLM techniques in real-world workloads.
- Troubleshoot customer issues, triage bugs, and deliver patches or performance optimisations.
- Share best practices through code reviews, internal demos, and technical workshops.
Innovation & Continuous Learning
- Evaluate emerging frameworks (e.g., LlamaIndex, AutoGen, WASM inferencing) and pilot promising tools.
- Contribute to internal knowledge bases and GitHub templates that speed future projects.

Qualifications
Must-Have
- 2+ years of professional software engineering experience; 1+ years working with ML or data-intensive systems.
- Proficiency in Python (or Java/Go) with strong software-engineering fundamentals (testing, code reviews, CI/CD).
- Hands-on experience deploying containerised services on AWS, GCP, or Azure using Kubernetes and Helm.
- Practical knowledge of LLM/GenAI frameworks (LangChain, LlamaIndex, PyTorch, or TensorFlow) and vector databases.
- Familiarity integrating REST/GraphQL APIs, streaming platforms (Kafka), and SQL/NoSQL stores.
- Clear written and verbal communication skills; ability to collaborate with distributed teams.
- Willingness to travel 10–20% for key customer engagements.
Nice-to-Have
- Experience delivering RAG or agent-based AI solutions in regulated domains (finance, healthcare, telecom).
- Cloud or Kubernetes certifications (AWS SA-Assoc/Pro, CKA, CKAD).
- Exposure to MLOps stacks (Kubeflow, MLflow, Vertex AI) or data-engineering tooling (Airflow, dbt).

Attributes
- Empathy & Ownership: You listen carefully to user needs and take full ownership of delivering great experiences.
- Startup Mentality: You move fast, learn quickly, and are comfortable wearing many hats.
- Detail-Oriented Builder: You care about the little things.
- Mission-Driven: You want to solve important, high-impact problems that matter to real people.
- Team-Oriented: Low ego, collaborative, and excited to build alongside highly capable engineers, designers, and domain experts.

Travel
This position requires the ability to travel to client sites as needed for on-site deployments and collaboration. Travel is estimated at approximately 20–30% of the time (varying by project), and flexibility is expected to accommodate key client engagement activities.

Why Join Us
At JazzX AI, you have the opportunity to join the foundational team that is pushing the boundaries of what's possible to create an autonomous-intelligence-driven future. We encourage our team to pursue bold ideas, foster continuous learning, and embrace the challenges and rewards that come with building something truly innovative. Your work will directly contribute to pioneering solutions that have the potential to transform industries and redefine how we interact with technology. As an early member of our team, your voice will be pivotal in steering the direction of our projects and culture, offering an unparalleled chance to leave your mark on the future of AI. We offer a competitive salary, equity options, and an attractive benefits package, including health, dental, and vision insurance, flexible working arrangements, and more.

Posted 5 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Applied Machine Learning Scientist – Voice AI, NLP & GenAI Applications Location : Sector 63, Gurugram, Haryana – 100% In-Office Working Days : Monday to Friday, with 2nd and 4th Saturdays off Working Hours : 10:30 AM – 8:00 PM Experience : 3–7 years in applied ML, with at least 2 years focused on voice, NLP, or GenAI deployments Function : AI/ML Research & Engineering | Conversational Intelligence | Real-time Model Deployment Apply : careers@darwix.ai Subject Line : “Application – Applied ML Scientist – [Your Name]” About Darwix AI Darwix AI is a GenAI-powered platform transforming how enterprise sales, support, and credit teams engage with customers. Our proprietary AI stack ingests data across calls, chat, email, and CCTV streams to generate: Real-time nudges for agents and reps Conversational analytics and scoring to drive performance CCTV-based behavior insights to boost in-store conversion We’re live across leading enterprises in India and MENA, including IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar , and others. We’re backed by top-tier operators and venture investors and scaling rapidly across multiple verticals and geographies. Role Overview We are looking for a hands-on, impact-driven Applied Machine Learning Scientist to build, optimize, and productionize AI models across ASR, NLP, and LLM-driven intelligence layers . This is a core role in our AI/ML team where you’ll be responsible for building the foundational ML capabilities that drive our real-time sales intelligence platform. You will work on large-scale multilingual voice-to-text pipelines, transformer-based intent detection, and retrieval-augmented generation systems used in live enterprise deployments. 
Key Responsibilities

Voice-to-Text (ASR) Engineering
- Deploy and fine-tune ASR models such as WhisperX, wav2vec 2.0, or DeepSpeech for Indian and GCC languages
- Integrate diarization and punctuation recovery pipelines
- Benchmark and improve transcription accuracy across noisy call environments
- Optimize ASR latency for real-time and batch processing modes

NLP & Conversational Intelligence
- Train and deploy NLP models for sentence classification, intent tagging, sentiment, emotion, and behavioral scoring
- Build call scoring logic aligned to domain-specific taxonomies (sales pitch, empathy, CTA, etc.)
- Fine-tune transformers (BERT, RoBERTa, etc.) for multilingual performance
- Contribute to real-time inference APIs for NLP outputs in live dashboards

GenAI & LLM Systems
- Design and test GenAI prompts for summarization, coaching, and feedback generation
- Integrate retrieval-augmented generation (RAG) using OpenAI, HuggingFace, or open-source LLMs
- Collaborate with product and engineering teams to deliver LLM-based features with measurable accuracy and latency metrics
- Implement prompt tuning, caching, and fallback strategies to ensure system reliability

Experimentation & Deployment
- Own the model lifecycle: data preparation, training, evaluation, deployment, monitoring
- Build reproducible training pipelines using MLflow, DVC, or similar tools
- Write efficient, well-structured, production-ready code for inference APIs
- Document experiments and share insights with cross-functional teams

Required Qualifications
- Bachelor’s or Master’s degree in Computer Science, AI, Data Science, or related fields
- 3–7 years of experience applying ML in production, including NLP and/or speech
- Experience with transformer-based architectures for text or audio (e.g., BERT, Wav2Vec, Whisper)
- Strong Python skills with experience in PyTorch or TensorFlow
- Experience with REST APIs, model packaging (FastAPI, Flask, etc.), and containerization (Docker)
- Familiarity with audio pre-processing, signal enhancement, or feature extraction (MFCC, spectrograms)
- Knowledge of MLOps tools for experiment tracking, monitoring, and reproducibility
- Ability to work collaboratively in a fast-paced startup environment

Preferred Skills
- Prior experience working with multilingual datasets (Hindi, Arabic, Tamil, etc.)
- Knowledge of diarization and speaker separation algorithms
- Experience with LLM APIs (OpenAI, Cohere, Mistral, LLaMA) and RAG pipelines
- Familiarity with inference optimization techniques (quantization, ONNX, TorchScript)
- Contributions to open-source ASR or NLP projects
- Working knowledge of AWS/GCP/Azure cloud platforms

What Success Looks Like
- Transcription accuracy of ≥ 85% across core languages
- NLP pipelines used in ≥ 80% of Darwix AI’s daily analyzed calls
- 3–5 LLM-driven product features delivered in the first year
- Inference latency reduced by 30–50% through model and infra optimization
- AI features embedded across all Tier 1 customer accounts within 12 months

Life at Darwix AI

You will be working in a high-velocity product organization where AI is core to our value proposition. You’ll collaborate directly with the founding team and cross-functional leads, have access to enterprise datasets, and work on ML systems that impact large-scale, real-time operations. We value rigor, ownership, and speed: model ideas become experiments in days, and successful experiments become deployed product features in weeks.
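The transcription-accuracy benchmarking this role calls for is conventionally measured as word error rate (WER): the word-level edit distance between a reference transcript and the ASR hypothesis, divided by the reference length. A minimal, library-free sketch (an illustration, not Darwix AI's actual tooling):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words, one row at a time.
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1] / max(len(ref), 1)

# One dropped word out of six reference words -> WER of 1/6.
print(wer("switch the plan to annual billing",
          "switch plan to annual billing"))
```

Noisy-call benchmarking then reduces to averaging WER over a held-out set of human-verified transcripts, per language.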
Compensation & Perks
- Competitive fixed salary based on experience
- Quarterly/annual performance-linked bonuses
- ESOP eligibility after 12 months
- Compute credits and a model experimentation environment
- Health insurance and a mental wellness stipend
- Premium tools and GPU access for model development
- Learning wallet for certifications, courses, and AI research access

Career Path
- Year 1: Deliver production-grade ASR/NLP/LLM systems for high-usage product modules
- Year 2: Transition into a Senior Applied Scientist or Tech Lead role for conversation intelligence
- Year 3: Grow into Head of Applied AI or architect-level roles across vertical product lines

How to Apply

Email the following to careers@darwix.ai:
- Updated resume (PDF)
- A short write-up (200 words max): “How would you design and optimize a multilingual voice-to-text and NLP pipeline for noisy call center data in Hindi and English?”
- Optional: GitHub or portfolio links demonstrating your work

Subject Line: “Application – Applied Machine Learning Scientist – [Your Name]”
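The "prompt caching and fallback strategies" responsibility mentioned in this listing follows a common reliability pattern: serve repeated identical prompts from a cache and route to a secondary model when the primary call fails. A stdlib-only sketch with stand-in functions (`call_primary` and `call_fallback` are hypothetical names, not a real LLM API):

```python
from functools import lru_cache

def call_primary(prompt: str) -> str:
    # Stand-in for the primary hosted-LLM call; raises to simulate an outage.
    raise TimeoutError("primary model unavailable")

def call_fallback(prompt: str) -> str:
    # Stand-in for a smaller, self-hosted fallback model.
    return f"[fallback] summary of: {prompt}"

@lru_cache(maxsize=1024)  # identical prompts are answered from cache
def generate(prompt: str) -> str:
    try:
        return call_primary(prompt)
    except Exception:
        return call_fallback(prompt)

print(generate("Summarize the last call"))  # primary fails, fallback answers
print(generate("Summarize the last call"))  # cache hit: no model call at all
```

In production the cache would key on a normalized prompt plus model version, and the fallback would degrade gracefully (shorter output, cheaper model) rather than error out.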

Posted 6 days ago

Apply

15.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Head of AI & ML Platforms

Focus: Voice AI, NLP, Conversation Intelligence for Omnichannel Enterprise Sales
Location: Sector 63, Gurugram, Haryana — Full-time, 100% in-office
Work Hours: 10:30 AM – 8:00 PM, Monday to Friday (2nd and 4th Saturdays off)
Experience Required: 8–15 years in AI/ML, with 3+ years leading teams in voice, NLP, or conversation platforms
Apply: careers@darwix.ai
Subject Line: “Application – Head of AI & ML Platforms – [Your Name]”

About Darwix AI

Darwix AI is a GenAI-powered platform for enterprise revenue teams across sales, support, credit, and retail. Our proprietary AI stack ingests multimodal inputs—voice calls, chat logs, emails, and CCTV streams—and delivers contextual nudges, conversation scoring, and performance analytics in real time. Our suite of products includes:

- Transform+: Real-time conversational intelligence for contact centers and field sales
- Sherpa.ai: A multilingual GenAI assistant that provides in-the-moment coaching, summaries, and objection handling support
- Store Intel: A computer vision solution that transforms CCTV feeds into actionable insights for physical retail spaces

Darwix AI is trusted by large enterprises such as IndiaMart, Wakefit, Emaar, GIVA, Bank Dofar, and Sobha Realty, and is backed by leading institutional and operator investors. We are expanding rapidly across India, the Middle East, and Southeast Asia.

Role Overview

We are seeking a highly experienced and technically strong Head of AI & ML Platforms to architect and lead the end-to-end AI systems powering our voice intelligence, NLP, and GenAI solutions. This is a leadership role that blends research depth with applied engineering execution. The ideal candidate will have deep experience in building and deploying voice-to-text pipelines, multilingual NLP systems, and production-grade inference workflows.
The individual will be responsible for model design, accuracy benchmarking, latency optimization, infrastructure orchestration, and integration across our product suite. This is a critical leadership role with direct influence over product velocity, enterprise client outcomes, and future platform scalability.

Key Responsibilities

Voice-to-Text (ASR) Architecture
- Lead the design and optimization of large-scale automatic speech recognition (ASR) pipelines using open-source and commercial frameworks (e.g., WhisperX, Deepgram, AWS Transcribe)
- Enhance speaker diarization, custom vocabulary accuracy, and latency performance for real-time streaming scenarios
- Build fallback ASR workflows for offline and batch-mode processing
- Implement multilingual and domain-specific tuning, especially for Indian and GCC languages

Natural Language Processing and Conversation Analysis
- Build NLP models for conversation segmentation, intent detection, tone/sentiment analysis, and call scoring
- Implement multilingual support (Hindi, Arabic, Tamil, etc.) with fallback strategies for mixed-language and dialectal inputs
- Develop robust algorithms for real-time classification of sales behaviors (e.g., probing, pitching, objection handling)
- Train and fine-tune transformer-based models (e.g., BERT, RoBERTa, DeBERTa) and sentence embedding models for text analytics

GenAI and LLM Integration
- Design modular GenAI pipelines for nudging, summarization, and response generation using tools like LangChain, LlamaIndex, and OpenAI APIs
- Implement retrieval-augmented generation (RAG) architectures for contextual, accurate, and hallucination-resistant outputs
- Build prompt orchestration frameworks that support real-time sales coaching across channels
- Ensure safety, reliability, and performance of LLM-driven outputs across use cases

Infrastructure and Deployment
- Lead the development of scalable, secure, and low-latency AI services deployed via FastAPI, TorchServe, or similar frameworks
- Oversee model versioning, monitoring, and retraining workflows using MLflow, DVC, or other MLOps tools
- Build hybrid inference systems for batch, real-time, and edge scenarios depending on product usage
- Optimize inference pipelines for GPU/CPU balance, resource scheduling, and runtime efficiency

Team Leadership and Cross-functional Collaboration
- Recruit, manage, and mentor a team of machine learning engineers and research scientists
- Collaborate closely with Product, Engineering, and Customer Success to translate product requirements into AI features
- Own AI roadmap planning, sprint delivery, and KPI measurement
- Serve as the subject-matter expert for AI-related client discussions, sales demos, and enterprise implementation roadmaps

Required Qualifications
- 8+ years of experience in AI/ML, with a minimum of 3 years in voice AI, NLP, or conversational platforms
- Proven experience delivering production-grade ASR or NLP systems at scale
- Deep familiarity with Python, PyTorch, HuggingFace, FastAPI, and containerized environments (Docker/Kubernetes)
- Expertise in fine-tuning LLMs and building multi-language, multi-modal intelligence stacks
- Demonstrated experience with tools such as WhisperX, Deepgram, Azure Speech, LangChain, MLflow, or Triton Inference Server
- Experience deploying real-time or near-real-time inference models at enterprise scale
- Strong architectural thinking with the ability to design modular, reusable, and scalable ML services
- Track record of building and leading high-performing ML teams

Preferred Skills
- Background in telecom, contact center AI, conversational analytics, or field sales optimization
- Familiarity with GPU deployment, model quantization, and inference optimization
- Experience with low-resource languages and multilingual data augmentation
- Understanding of sales enablement workflows and domain-specific ontology development
- Experience integrating AI models into customer-facing SaaS dashboards and APIs

Success Metrics
- Transcription accuracy improved by ≥15% across core languages within 6 months
- End-to-end voice-to-nudge latency reduced below 5 seconds
- GenAI assistant adoption across 70%+ of eligible conversations
- AI-driven call scoring rolled out across 100% of Tier 1 clients within 9 months
- Model deployment time (dev to prod) reduced by ≥40% through tooling and process improvements

Culture at Darwix AI

At Darwix AI, we operate at the intersection of engineering velocity and product clarity. We move fast, prioritize outcomes over optics, and expect leaders to drive hands-on impact. You will work directly with the founding team and senior leaders across engineering, product, and GTM functions. Expect ownership, direct communication, and a culture that values builders who scale systems, people, and strategy.
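The retrieval step at the heart of the RAG architectures this role owns can be illustrated without any ML libraries: embed documents (here, naive bag-of-words vectors standing in for real dense embeddings), rank by cosine similarity against the query, and prepend the best match to the prompt. A sketch under those toy assumptions, not Darwix AI's implementation:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; production RAG uses dense model embeddings.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "refund policy: refunds are processed within 7 days",
    "shipping policy: orders ship within 2 business days",
]

def retrieve(query: str) -> str:
    # Return the document most similar to the query.
    return max(docs, key=lambda d: cosine(embed(query), embed(d)))

query = "how long do refunds take"
context = retrieve(query)
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(context)
```

Grounding the generator in retrieved context, rather than letting it answer from parametric memory alone, is what makes RAG outputs "hallucination-resistant" in the sense used above.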
Compensation and Benefits
- Competitive fixed compensation
- Performance-based bonuses and growth-linked incentives
- ESOP eligibility for leadership candidates
- Access to GPU/compute credits and model experimentation infrastructure
- Comprehensive medical insurance and wellness programs
- Dedicated learning and development budget for technical and leadership upskilling
- MacBook Pro, premium workstation, and access to industry tooling licenses

Career Progression
- 12-month roadmap: Build and stabilize the AI platform across all product lines
- 18–24-month horizon: Elevate to VP of AI or Chief AI Officer as platform scale increases globally
- Future leadership role in enabling new verticals (e.g., healthcare, finance, logistics) with domain-specific GenAI solutions

How to Apply

Send the following to careers@darwix.ai:
- Updated CV (PDF format)
- A short statement (200 words max) on: “How would you design a multilingual voice-to-text pipeline optimized for low-resource Indic languages, with real-time nudge delivery?”
- Links to any relevant GitHub repos, publications, or deployed projects (optional)

Subject Line: “Application – Head of AI & ML Platforms – [Your Name]”

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Phonologies, a leading provider of speech technology and voice bots in India, is seeking individuals to join the team and revolutionize the delivery of conversational AI and voice bots over the phone. Our solutions integrate seamlessly with top contact center providers, telephony systems, and CRM systems. We are looking for dynamic, skilled specialists to contribute to the development of our cutting-edge customer interaction solutions.

As part of our team, you will develop and implement machine learning models for real-world applications. Ideal candidates have at least 5 years of experience and demonstrate proficiency in Python and scikit-learn, as well as familiarity with tools such as MLflow, Airflow, and Docker. You will collaborate with engineering and product teams to create and manage ML pipelines, monitor model performance, and uphold ethical and explainable AI practices. Essential skills for this role include strong feature engineering, ownership of the model lifecycle, and effective communication. Please note that recent graduates will not be considered for this position.

The position is based in Pune, in a welcoming and professional work environment that offers both challenges and opportunities for growth. If you believe your qualifications align with our requirements, submit your resume (in .pdf format) to Careers@Phonologies.com, and include a brief introduction about yourself in the email. We look forward to learning more about you and potentially welcoming you to our team!
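Monitoring model performance across runs, as this role requires, revolves around one pattern: log each run's parameters and metrics so results stay comparable. MLflow's tracking API provides exactly this; the following dependency-free sketch mimics the pattern with hypothetical class and method names (it is not MLflow's API):

```python
import json
import time

class RunTracker:
    """Minimal stand-in for MLflow-style experiment tracking."""

    def __init__(self):
        self.runs = []

    def start_run(self, name: str) -> dict:
        run = {"name": name, "start": time.time(), "params": {}, "metrics": {}}
        self.runs.append(run)
        return run

    def log_param(self, run: dict, key: str, value) -> None:
        run["params"][key] = value

    def log_metric(self, run: dict, key: str, value: float) -> None:
        run["metrics"][key] = value

    def best_run(self, metric: str) -> dict:
        # Pick the run with the highest value of the given metric.
        return max(self.runs, key=lambda r: r["metrics"].get(metric, float("-inf")))

tracker = RunTracker()
for c in (0.1, 1.0, 10.0):               # sweep a hyperparameter
    run = tracker.start_run(f"svm-C={c}")
    tracker.log_param(run, "C", c)
    tracker.log_metric(run, "f1", 0.80 if c == 1.0 else 0.72)  # stand-in scores

print(json.dumps(tracker.best_run("f1")["params"]))  # → {"C": 1.0}
```

With real MLflow the same flow is `mlflow.start_run()`, `mlflow.log_param()`, and `mlflow.log_metric()`, with the UI handling run comparison.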

Posted 6 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Role Overview

We are looking for an experienced Solution Architect (AI/ML & Data Engineering) to lead the design and delivery of advanced data and AI/ML solutions for our clients. The ideal candidate will have a strong background in end-to-end data architecture, AI lifecycle management, cloud technologies, and emerging Generative AI.

Responsibilities
- Collaborate with clients to understand business requirements and design robust data solutions.
- Lead the development of end-to-end data pipelines, including ingestion, storage, processing, and visualization.
- Architect scalable, secure, and compliant data systems following industry best practices.
- Guide data engineers, analysts, and cross-functional teams to ensure timely delivery of solutions.
- Participate in pre-sales efforts: solution design, proposal creation, and client presentations.
- Act as a technical liaison between clients and internal teams throughout the project lifecycle.
- Stay current with emerging technologies in AI/ML, data platforms, and cloud services.
- Foster long-term client relationships and identify opportunities for business expansion.
- Understand and architect across the full AI lifecycle, from ingestion to inference and operations.
- Provide hands-on guidance for containerization and deployment using Kubernetes.
- Ensure proper implementation of data governance, modeling, and warehousing.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 10+ years of experience as a Data Solution Architect or in a similar role.
- Deep technical expertise in data architecture, engineering, and AI/ML systems.
- Strong experience with Hadoop-based platforms, ideally Cloudera Data Platform or Data Fabric.
- Proven pre-sales experience: technical presentations, solutioning, and RFP support.
- Proficiency in cloud platforms (Azure preferred; also AWS or GCP) and cloud-native data tools.
- Exposure to Generative AI frameworks and LLMs such as OpenAI and Hugging Face.
- Experience deploying and managing applications on Kubernetes (AKS, EKS, GKE).
- Familiarity with data governance, data modeling, and large-scale data warehousing.
- Excellent problem-solving, communication, and client-facing skills.

Skills & Technology

Architecture & Engineering:
- Hadoop Ecosystem: Cloudera Data Platform, Data Fabric, HDFS, Hive, Spark, HBase, Oozie
- ETL & Integration: Apache NiFi, Talend, Informatica, Azure Data Factory, AWS Glue
- Warehousing: Azure Synapse, Redshift, BigQuery, Snowflake, Teradata, Vertica
- Streaming: Apache Kafka, Azure Event Hubs, AWS
- Cloud Platforms: Azure (preferred), AWS, GCP
- Data Lakes: ADLS, AWS S3, Google Cloud
- Platforms: Data Fabric, AI Essentials, Unified Analytics, MLDM, MLDE

AI/ML & GenAI:
- Lifecycle Tools: MLflow, Kubeflow, Azure ML, SageMaker, Ray
- Inference: TensorFlow Serving, KServe, Seldon
- Generative AI: Hugging Face, LangChain, OpenAI API (GPT-4, etc.)

DevOps & Deployment:
- Kubernetes: AKS, EKS, GKE, open-source K8s, Helm
- CI/CD: Jenkins, GitHub Actions, GitLab CI, Azure DevOps

(ref:hirist.tech)

Posted 6 days ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an AI/ML Engineer, your primary responsibility will be to collaborate effectively with cross-functional teams, including data scientists and product managers. Together, you will work on acquiring, processing, and managing data for the integration and optimization of AI/ML models. Your role will involve designing and implementing robust, scalable data pipelines to support cutting-edge AI/ML models. Additionally, you will be responsible for debugging, optimizing, and enhancing machine learning models to ensure quality assurance and performance improvements. Operating container orchestration platforms like Kubernetes with advanced configurations and service mesh implementations for scalable ML workload deployments will be a key part of your job. You will also design and build scalable LLM inference architectures, employing GPU memory optimization techniques and model quantization for efficient deployment. Engaging in advanced prompt engineering and fine-tuning of large language models (LLMs) will be crucial, with a focus on semantic retrieval and chatbot development. Documentation will be an essential aspect of your work, involving the recording of model architectures, hyperparameter optimization experiments, and validation results using version control and experiment tracking tools like MLflow or DVC. Researching and implementing cutting-edge LLM optimization techniques such as quantization and knowledge distillation will be part of your ongoing tasks to ensure efficient model performance and reduced computational costs. Collaborating closely with stakeholders to develop innovative natural language processing solutions, with a specialization in text classification, sentiment analysis, and topic modeling, will be another significant aspect of your role. Staying up-to-date with industry trends and advancements in AI technologies and integrating new methodologies and frameworks to continually enhance the AI engineering function will also be expected of you. 
In terms of qualifications, a Bachelor's degree in any Engineering stream is required, along with at least 4 years of relevant experience in AI. Proficiency in Python with expertise in data science libraries (NumPy, Pandas, scikit-learn) and deep learning frameworks (PyTorch, TensorFlow) is essential, as is experience with LLM frameworks, big data processing using Spark, version control and experiment tracking, software engineering and DevOps practices, cloud infrastructure and services, and prior LLM project work. Your expertise should include a strong mathematical foundation in statistics, probability, linear algebra, and optimization, along with a deep understanding of the ML and LLM development lifecycle. You should also be experienced in feature engineering, embedding optimization, dimensionality reduction, A/B testing, experimental design, statistical hypothesis testing, RAG systems, vector databases, semantic search implementation, and LLM optimization techniques. Strong analytical thinking, excellent communication skills, experience translating business requirements into data science solutions, project management and collaboration abilities, dedication to staying current with the latest ML research, and the ability to mentor and share knowledge with team members round out the competencies essential for this role.
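The quantization technique this listing names (shrinking model weights from float32 to int8 to cut memory and inference cost) can be shown in miniature with plain Python; real workloads use framework tooling such as PyTorch's quantization APIs or ONNX Runtime. The helper names below are illustrative:

```python
def quantize_int8(weights):
    """Affine (asymmetric) quantization of floats to int8 with a scale/zero-point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0         # map the float range onto 256 int8 levels
    zero_point = round(-128 - lo / scale)  # integer offset so lo maps near -128
    q = [max(-128, min(127, round(w / scale + zero_point))) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Recover approximate float weights from the int8 representation."""
    return [(v - zero_point) * scale for v in q]

w = [-0.52, 0.13, 0.0, 0.91, -0.27]
q, s, zp = quantize_int8(w)
restored = dequantize(q, s, zp)
max_err = max(abs(a - b) for a, b in zip(w, restored))
print(max_err <= s)  # reconstruction error stays within one quantization step
```

Each weight now occupies 1 byte instead of 4, at the cost of a bounded rounding error; per-channel scales and calibration data tighten that error in practice.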

Posted 6 days ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

As a Senior Specialist in Software Development (Artificial Intelligence) at Accelya, you will lead the design, development, and implementation of AI and machine learning solutions to tackle complex business challenges. Your expertise in AI algorithms, model development, and software engineering best practices will be crucial in working with cross-functional teams to deliver intelligent systems that optimize business operations and decision-making. Your responsibilities will include designing and developing AI-driven applications and platforms using machine learning, deep learning, and NLP techniques. You will lead the implementation of advanced algorithms for supervised and unsupervised learning, reinforcement learning, and computer vision. Additionally, you will develop scalable AI models, integrate them into software applications, and build APIs and microservices for deployment in cloud environments or on-premise systems. Collaboration with data scientists and data engineers will be essential in gathering, preprocessing, and analyzing large datasets. You will also implement feature engineering techniques to enhance the accuracy and performance of machine learning models. Regular evaluation of AI models using performance metrics and fine-tuning them for optimal accuracy will be part of your role. Furthermore, you will collaborate with business stakeholders to identify AI adoption opportunities, provide technical leadership and mentorship to junior team members, and stay updated with the latest AI trends and research to introduce innovative techniques to the team. Ensuring ethical compliance, security, and continuous improvement of AI systems will also be key aspects of your role. You should hold a Bachelor's degree in Computer Science, Data Science, Artificial Intelligence, or a related field, along with at least 5 years of experience in software development focusing on AI and machine learning. 
Proficiency in AI frameworks and libraries, programming languages such as Python, R, or Java, and cloud platforms for deploying AI models is required, as is familiarity with Agile methodologies, data structures, and databases. Preferred qualifications include a Master's or PhD in Artificial Intelligence or Machine Learning, experience with NLP techniques and computer vision technologies, and certifications in AI/ML or cloud platforms. Accelya is looking for individuals who are passionate about shaping the future of the air transport industry through innovative AI solutions. If you are ready to contribute your expertise and drive continuous improvement in AI systems, this role offers you the opportunity to make a significant impact in the industry.

Posted 6 days ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As an Applied AI/GenAI ML Director within the Asset and Wealth Management Technology Team at JPMorgan Chase, you will provide deep engineering expertise and work across agile teams to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. You will leverage your deep expertise to consistently challenge the status quo, innovate for business impact, lead the strategic development behind new and existing products and technology portfolios, and remain at the forefront of industry trends, best practices, and technological advances. This role will focus on establishing and nurturing common capabilities, best practices, and reusable frameworks, creating a foundation for AI excellence that accelerates innovation and consistency across business functions. Your responsibilities will include establishing and promoting a library of common ML assets, including reusable ML models, feature stores, data pipelines, and standardized templates. You will lead efforts to create shared tools and platforms that streamline the end-to-end ML lifecycle across the organization. Additionally, you will create curated solutions using GenAI workflows through advanced proficiency in large language models (LLMs) and related techniques, and gain experience with creating a Generative AI evaluation and feedback loop for GenAI/ML pipelines. You will advise on the strategy and development of multiple products, applications, and technologies, serving as a lead advisor on the technical feasibility and business need for AI/ML use cases. Furthermore, you will liaise with firm-wide AI/ML stakeholders, translating highly complex technical issues, trends, and approaches to leadership to drive the firm's innovation and enable leaders to make strategic, well-informed decisions about technology advancements.
You will also influence business, product, and technology teams and successfully manage senior stakeholder relationships, championing the firm's culture of diversity, opportunity, inclusion, and respect. To be successful in this role, you must have formal training or certification in Machine Learning concepts and at least 10 years of applied experience, along with 5+ years of experience leading technologists to manage, anticipate, and solve complex technical problems within your domain of expertise. An MS and/or PhD in Computer Science, Machine Learning, or a related field is required, as well as at least 10 years of experience in a programming language such as Python, Java, or C/C++, with intermediate Python skills a must. You should have a solid understanding of ML techniques, especially Natural Language Processing (NLP) and Large Language Models (LLMs), hands-on experience with machine learning and deep learning methods, and the ability to take system designs from ideation through completion with limited supervision. Practical cloud-native experience, such as with AWS, is necessary, along with good communication skills, a passion for detail and follow-through, and the ability to work effectively with engineers, product managers, and other ML practitioners. Preferred qualifications for this role include experience with Ray, MLflow, and/or other distributed training frameworks; an in-depth understanding of embedding-based search/ranking, recommender systems, graph techniques, and other advanced methodologies; advanced knowledge of Reinforcement Learning or Meta Learning; and a deep understanding of Large Language Model (LLM) techniques, including Agents, Planning, Reasoning, and related methods. Experience building and deploying ML models on cloud platforms such as AWS, with tools like SageMaker and EKS, is also desirable.

Posted 6 days ago

Apply

8.0 - 12.0 years

0 Lacs

Hyderabad, Telangana

On-site

You will be joining Salesforce, the Customer Company, known for inspiring the future of business by combining AI, data, and CRM technologies. As part of the Marketing AI/ML Algorithms and Applications team, you will play a crucial role in enhancing Salesforce's marketing initiatives by implementing cutting-edge machine learning solutions. Your work will directly impact the effectiveness of marketing efforts, contributing to Salesforce's growth and innovation in the CRM and Agentic enterprise space.

In the position of Lead / Staff Machine Learning Engineer, you will be responsible for developing and deploying ML model pipelines that drive marketing performance and deliver customer value. Working closely with cross-functional teams, you will lead the design, implementation, and operation of end-to-end ML solutions at scale. Your role will involve establishing best practices, mentoring junior engineers, and ensuring the team remains at the forefront of ML innovation.

Key Responsibilities:
- Define and drive the technical ML strategy, emphasizing robust model architectures and MLOps practices
- Lead end-to-end ML pipeline development, focusing on automated retraining workflows and model optimization
- Implement infrastructure-as-code, CI/CD pipelines, and MLOps automation for model monitoring and drift detection
- Own the MLOps lifecycle, including model governance, testing standards, and incident response for production ML systems
- Establish engineering standards for model deployment, testing, version control, and code quality
- Design and implement monitoring solutions for model performance, data quality, and system health
- Collaborate with cross-functional teams to deliver scalable ML solutions with measurable impact
- Provide technical leadership in ML engineering best practices and mentor junior engineers in MLOps principles

Position Requirements:
- 8+ years of experience in building and deploying ML model pipelines with a focus on marketing
- Expertise in AWS services, particularly SageMaker and MLflow, for ML experiment tracking and lifecycle management
- Proficiency in containerization, workflow orchestration, Python programming, ML frameworks, and software engineering best practices
- Experience with MLOps practices, feature engineering, feature store implementations, and big data technologies
- Track record of leading ML initiatives with measurable marketing impact and strong collaboration skills

Join us at Salesforce to drive transformative business impact and shape the future of customer engagement through innovative AI solutions.
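The model monitoring and drift detection this role owns typically starts with a simple statistical check: compare a feature's live distribution against its training-time baseline. A minimal sketch using the Population Stability Index (PSI), a common drift metric; the 0.1/0.25 thresholds are industry convention, not Salesforce-specific:

```python
import math

def psi(baseline, live, bins=10):
    """Population Stability Index between two samples of a numeric feature."""
    lo, hi = min(baseline), max(baseline)
    step = (hi - lo) / bins
    edges = [lo + i * step for i in range(1, bins)]

    def histogram(xs):
        counts = [0] * bins
        for x in xs:
            i = sum(x > e for e in edges)  # bin index via edge comparisons
            counts[i] += 1
        # A small epsilon avoids log(0) when a bin is empty.
        return [max(c / len(xs), 1e-6) for c in counts]

    b, l = histogram(baseline), histogram(live)
    return sum((lb - bb) * math.log(lb / bb) for bb, lb in zip(b, l))

baseline = [i / 100 for i in range(100)]       # training-time feature sample
drifted = [0.5 + i / 200 for i in range(100)]  # live traffic shifted upward

print(psi(baseline, baseline) < 0.1)   # identical data: no drift flagged
print(psi(baseline, drifted) > 0.25)   # shifted data: candidate for retraining
```

In an automated retraining workflow, a PSI above the alert threshold on key features is a standard trigger for the retraining pipeline this listing describes.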

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
