Jobs
Interviews

1552 SageMaker Jobs - Page 15

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3 – 6 Years
Compensation: ₹20 – ₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company:
A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Their platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they're seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.

Role Overview:
As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.

Key Responsibilities:
- Design and develop robust AI product features aligned with user and business needs
- Maintain and enhance existing ML/AI systems
- Build and manage ML pipelines for training, deployment, monitoring, and experimentation
- Deploy scalable inference APIs and conduct A/B testing
- Optimize GPU architectures and fine-tune transformer/LLM models
- Build and deploy LLM applications tailored to real-world use cases
- Implement DevOps/MLOps best practices with tools like Docker and Kubernetes

Tech Stack & Tools:
- Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers, PyTorch, TensorFlow, Scikit-learn
- LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse, MLflow, SageMaker, LlamaIndex, AWS Bedrock, Azure AI
- Cloud & Infrastructure: AWS, Azure, Kubernetes, Docker
- Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
- Languages: Python, SQL, JavaScript

What You'll Do:
- Collaborate with product, research, and engineering teams to build scalable AI solutions
- Implement advanced NLP and Generative AI models (e.g., RAG, Transformers)
- Monitor and optimize model performance and deployment pipelines
- Build efficient, scalable data and feature pipelines
- Stay updated on industry trends and contribute to internal innovation
- Present key insights and ML solutions to technical and business stakeholders

Requirements (Must-Have):
- 3–6 years of experience in Machine Learning and software/data engineering
- Master's degree (or equivalent) in ML, AI, or related technical fields
- Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn
- Familiarity with MLOps, model deployment, and production pipelines
- Experience working with LLMs and modern NLP techniques
- Ability to work collaboratively in a fast-paced, product-driven environment
- Strong problem-solving and communication skills

Bonus Certifications:
- AWS Machine Learning Specialty
- AWS Solutions Architect – Professional
- Azure Solutions Architect Expert

Why Apply:
- Work directly with a high-caliber founding team
- Help shape the future of AI in the insurance space
- Gain ownership and visibility in a product-focused engineering role
- Opportunity to innovate with state-of-the-art AI/LLM tech
- Be part of a fast-moving team with real market traction

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.

Skills: MongoDB, PyTorch, AWS, JavaScript, Python, Azure, LLMs and modern NLP techniques, Computer Vision, ChromaDB, TensorFlow, Docker, Scikit-learn, MLOps, Kubernetes, ML, AI, NLP, software/data engineering, Pinecone, PostgreSQL, Machine Learning, SQL, DevOps
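As a loose illustration of the "deploy scalable inference APIs and conduct A/B testing" responsibility above, a deterministic hash-based traffic split between two model variants might look like the sketch below. The variant names and the 50/50 split are invented for the example; a real rollout would map buckets to deployed model endpoints.

```python
import hashlib

# Hypothetical variant names and traffic shares; in production these would
# point at two deployed model versions behind the inference API.
VARIANTS = {"control": 0.5, "candidate": 0.5}

def assign_variant(user_id: str) -> str:
    """Deterministically assign a user to an A/B bucket by hashing their id.

    The same user always lands in the same bucket, which keeps experiment
    exposure stable across requests.
    """
    digest = hashlib.sha256(user_id.encode()).hexdigest()
    point = int(digest, 16) / 16 ** len(digest)  # uniform in [0, 1)
    cumulative = 0.0
    for name, share in VARIANTS.items():
        cumulative += share
        if point < cumulative:
            return name
    return name  # guard against float rounding at the top edge

print(assign_variant("user-42"))  # stable bucket for this id
```

Because assignment is a pure function of the user id, no session store is needed to keep bucketing consistent.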

Posted 2 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Job Type: Full-Time
Experience Required: 8+ years in Software QA/Testing, 3+ years in Test Automation using Playwright, 2+ years in AI/ML project environments

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:

Test Automation Framework Design & Implementation
- Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
- Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
- Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
- Partner with Data Scientists and ML Engineers to understand model behaviors, inference workflows, and output formats.
- Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets.
- Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
- Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
- Lead the implementation of end-to-end automation for:
  - Web interfaces (React, Angular, or other SPA frameworks)
  - Backend services (REST, GraphQL, WebSockets)
  - ML model integration endpoints (real-time inference APIs, batch pipelines)
- Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
- Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
- Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
- Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
- Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
- Lead and mentor a team of automation and QA engineers across multiple projects.
- Act as the Quality Champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
- Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
- Collaborate across disciplines: Frontend, Backend, DevOps, MLOps, and Product Management to ensure complete testability.
- Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills:
- Strong hands-on expertise with Playwright (TypeScript/JavaScript).
- Experience building custom automation frameworks and utilities from scratch.
- Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
- Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
- Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
- Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
- Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
- Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
- Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
- Proven experience leading QA/Automation teams (4+ engineers).
- Strong documentation, code review, and stakeholder communication skills.
- Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
- Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
- Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
- Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
- Experience with GraphQL, Kafka, or event-driven architecture testing.
- QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
- Bachelor's or Master's degree in Computer Science, Software Engineering, or related technical discipline.
- Bonus for certifications or formal training in Machine Learning, Data Science, or MLOps.

Why Join Us?
- Work on cutting-edge AI platforms shaping the future of [industry/domain].
- Collaborate with world-class AI researchers and engineers.
- Drive the quality of products used by [millions of users / high-impact clients].
- Opportunity to define test automation practices for AI, one of the most exciting frontiers in tech.
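The listing's strategy of validating non-deterministic model outputs against a golden dataset with a tolerance, rather than demanding exact matches, can be sketched roughly as below. `fake_model`, the example sentences, and the 0.9 accuracy floor are all invented stand-ins; a real test would call the deployed inference endpoint.

```python
# Golden dataset of (input, expected label) pairs; in practice this is a
# curated, versioned file, not inline literals.
GOLDEN_SET = [
    ("claim approved without review", "positive"),
    ("policy cancelled after complaint", "negative"),
    ("renewal processed on time", "positive"),
    ("payout delayed for months", "negative"),
]

def fake_model(text: str) -> str:
    # Placeholder for a real inference API call.
    negative_cues = ("cancelled", "delayed", "complaint")
    return "negative" if any(cue in text for cue in negative_cues) else "positive"

def golden_accuracy(model, golden) -> float:
    hits = sum(model(text) == label for text, label in golden)
    return hits / len(golden)

accuracy = golden_accuracy(fake_model, GOLDEN_SET)
# Tolerance-based check: pass while accuracy stays above a floor, so minor
# model updates don't flake the suite the way exact-output asserts would.
assert accuracy >= 0.9, f"model regressed: accuracy={accuracy:.2f}"
print(f"golden-set accuracy: {accuracy:.2f}")
```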

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

Remote

Company: Atmos Climate
Location: Remote
Job Type: Full-time

About the Job:
Join our team as an innovative AI Engineer, where you will design and implement machine learning models and AI solutions. Your expertise will shape the future of intelligent systems for our organization.

Skills Required:
- Machine Learning & Deep Learning (TensorFlow, PyTorch)
- Python
- Data Preprocessing & Feature Engineering
- Cloud AI Services (AWS SageMaker, Google AI)
- Natural Language Processing (NLP)
- Computer Vision

Responsibilities:
- Develop and deploy machine learning models for real-world applications.
- Collaborate on AI-driven product features.
- Optimize models for accuracy and efficiency.
- Handle large datasets for training and testing.
- Stay ahead of advancements in AI technologies.

Qualifications:
- Bachelor's/Master's degree in AI, Machine Learning, or a related field.
- Proven experience in AI/ML development.
- Strong coding skills in Python.
- Familiarity with AI frameworks and tools.
- Excellent research and implementation skills.

Join us and be part of our mission to create a better future through innovative technology solutions!

Posted 2 weeks ago

Apply

2.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

This is a WFO (work-from-office) opportunity. Please do NOT apply if you are looking for a hybrid or WFH model. This is a Noida-based job. Please do NOT apply unless you are already in the NCR or actively looking to relocate to the NCR. We need a minimum of 2 years of experience. Do NOT apply unless you have a minimum of 2 years of hands-on experience in the job described in the JD below.

Position: NLP & Generative AI Engineer
Location: Noida
Department: AI/ML
Employment Type: Full-time

About Gigaforce
Gigaforce is a California-based InsurTech company delivering a next-generation, SaaS-based claims platform purpose-built for the Property and Casualty industry. Our blockchain-optimized solution integrates artificial intelligence (AI)-powered predictive models with deep domain expertise to streamline and accelerate subrogation and claims processing. Whether for insurers, recovery vendors, or other ecosystem participants, Gigaforce transforms the traditionally fragmented claims lifecycle into an intelligent, end-to-end digital experience.

Recognized as one of the most promising emerging players in the insurance technology space, Gigaforce has already achieved significant milestones. We were a finalist for InsurtechNY, a leading platform accelerating innovation in the insurance industry, and were twice named a Top 50 company by the TiE Silicon Valley community. Additionally, Plug and Play Tech Center, the world's largest early-stage investor and innovation accelerator, selected Gigaforce to join its prestigious global accelerator headquartered in Sunnyvale, California.

At the core of our platform is a commitment to cutting-edge innovation. We harness the power of technologies such as AI, Machine Learning, Robotic Process Automation, Blockchain, Big Data, and Cloud Computing, leveraging modern languages and frameworks like Java, Kotlin, Angular, and Node.js. We are driven by a culture of curiosity, excellence, and inclusion.

At Gigaforce, we hire top talent and provide an environment where every voice matters and every idea is valued. Our employees enjoy comprehensive medical benefits, equity participation, meal cards, and generous paid time off. As an equal opportunity employer, we are proud to foster a diverse, equitable, and inclusive workplace that empowers all team members to thrive.

We're seeking an NLP & Generative AI Engineer with 2-8 years of hands-on experience in traditional machine learning, natural language processing, and modern generative AI techniques. If you have experience deploying GenAI solutions to production, working with open-source technologies, and handling document-centric pipelines, this is the role for you. You'll work in a high-impact role, leading the design, development, and deployment of innovative AI/ML solutions for insurance claims processing and beyond. In this agile environment, you'll work within structured sprints and leverage data-driven insights and user feedback to guide decision-making. You'll balance strategic vision with tactical execution to ensure we continue to lead the industry in subrogation automation and claims optimization for the property and casualty insurance market.

Key Responsibilities
- Build and deploy end-to-end NLP and GenAI-driven products focused on document understanding, summarization, classification, and retrieval.
- Design and implement models leveraging LLMs (e.g., GPT, T5, BERT) with capabilities like fine-tuning, instruction tuning, and prompt engineering.
- Work on scalable, cloud-based pipelines for training, serving, and monitoring models.
- Handle unstructured data from insurance-related documents such as claims, legal texts, and contracts.
- Collaborate cross-functionally with data scientists, ML engineers, product managers, and developers.
- Utilize and contribute to open-source tools and frameworks in the ML ecosystem.
- Deploy production-ready solutions using MLOps practices: Docker, Kubernetes, Airflow, MLflow, etc.
- Work on distributed/cloud systems (AWS, GCP, or Azure) with GPU-accelerated workflows.
- Evaluate and experiment with open-source LLMs and embedding models (e.g., LangChain, Haystack, LlamaIndex, Hugging Face).
- Champion best practices in model validation, reproducibility, and responsible AI.

Required Skills & Qualifications
- 2-8 years of experience as a Data Scientist, NLP Engineer, or ML Engineer.
- Strong grasp of traditional ML algorithms (SVMs, gradient boosting, etc.) and NLP fundamentals (word embeddings, topic modeling, text classification).
- Proven expertise in modern NLP & GenAI models, including:
  - Transformer architectures (e.g., BERT, GPT, T5)
  - Generative tasks: summarization, QA, chatbots, etc.
  - Fine-tuning & prompt engineering for LLMs
- Experience with cloud platforms (especially AWS SageMaker, GCP, or Azure ML).
- Strong coding skills in Python, with libraries like Hugging Face, PyTorch, TensorFlow, Scikit-learn.
- Experience with open-source frameworks (LangChain, LlamaIndex, Haystack) preferred.
- Experience in document processing pipelines and understanding structured/unstructured insurance documents is a big plus.
- Familiarity with MLOps tools such as MLflow, DVC, FastAPI, Docker, Kubeflow, Airflow.
- Familiarity with distributed computing and large-scale data processing (Spark, Hadoop, Databricks).

Preferred Qualifications
- Experience deploying GenAI models in production environments.
- Contributions to open-source projects in the ML/NLP/LLM space.
- Background in insurance, legal, or financial domains involving text-heavy workflows.
- Strong understanding of data privacy, ethical AI, and responsible model usage.

(ref:hirist.tech)
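A stripped-down illustration of the retrieval step in a RAG-based document pipeline like the one this listing describes: rank a toy claims corpus against a query and stuff the best match into a prompt. The corpus, bag-of-words "embeddings", and prompt wording are stand-ins; a production system would use a vector store and learned embeddings via frameworks such as LangChain or LlamaIndex.

```python
import math
from collections import Counter

# Toy document store standing in for an insurance-claims corpus.
DOCS = [
    "The claim covers water damage to the insured property.",
    "Subrogation lets the insurer recover costs from the liable third party.",
    "The policy excludes damage caused by intentional acts.",
]

def bow(text: str) -> Counter:
    """Crude bag-of-words 'embedding' for the sketch."""
    return Counter(text.lower().strip(".").split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = bow(query)
    ranked = sorted(DOCS, key=lambda d: cosine(q, bow(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str) -> str:
    # Grounding: the LLM is told to answer only from retrieved context.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How does subrogation recover costs?"))
```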

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

We are seeking a highly skilled AI/ML Engineer to join our team. As an AI/ML Engineer, you will be responsible for designing, implementing, and optimizing machine learning solutions, encompassing traditional models, deep learning architectures, and generative AI systems. Your role will involve collaborating with data engineers and cross-functional teams to create scalable, ethical, and high-performance AI/ML solutions that contribute to business growth.

Key Responsibilities:
- Develop, implement, and optimize AI/ML models using both traditional machine learning and deep learning techniques.
- Design and deploy generative AI models for innovative business applications.
- Work closely with data engineers to establish and maintain high-quality data pipelines and preprocessing workflows.
- Integrate responsible AI practices to ensure ethical, explainable, and unbiased model behavior.
- Develop and maintain MLOps workflows to streamline training, deployment, monitoring, and continuous integration of ML models.
- Optimize large language models (LLMs) for efficient inference, memory usage, and performance.
- Collaborate with product managers, data scientists, and engineering teams to seamlessly integrate AI/ML into core business processes.
- Rigorously test, validate, and benchmark models to ensure accuracy, reliability, and robustness.

Requirements:
- Strong foundation in machine learning, deep learning, and statistical modeling techniques.
- Hands-on experience with TensorFlow, PyTorch, scikit-learn, or similar ML frameworks.
- Proficiency in Python and ML engineering tools such as MLflow, Kubeflow, or SageMaker.
- Experience deploying generative AI solutions and an understanding of responsible AI concepts.
- Solid experience with MLOps pipelines and with optimizing transformer models or LLMs for production workloads.
- Familiarity with cloud services (AWS, GCP, Azure) and containerized deployments (Docker, Kubernetes).
- Excellent problem-solving and communication skills, and the ability to work collaboratively with cross-functional teams.

Preferred Qualifications:
- Experience with data versioning tools like DVC or LakeFS.
- Exposure to vector databases and retrieval-augmented generation (RAG) pipelines.
- Knowledge of prompt engineering, fine-tuning, and quantization techniques for LLMs.
- Familiarity with Agile workflows and sprint-based delivery.
- Contributions to open-source AI/ML projects or published papers in conferences/journals.

About Lucent Innovation:
Join our team at Lucent Innovation, an India-based IT solutions provider, and enjoy a work environment that promotes work-life balance. With a focus on employee well-being, we offer 5-day workweeks, flexible working hours, and a range of indoor/outdoor activities, employee trips, and celebratory events throughout the year. At Lucent Innovation, we value our employees' growth and success, providing in-house training as well as quarterly and yearly rewards and appreciation.

Perks:
- 5-day workweeks
- Flexible working hours
- No hidden policies
- Friendly working environment
- In-house training
- Quarterly and yearly rewards & appreciation
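The "quantization techniques for LLMs" item above can be illustrated with a minimal symmetric int8 sketch: map a row of float weights onto [-127, 127] plus one scale factor, trading a bounded rounding error for roughly 4x smaller storage than float32. Real workloads use library kernels (e.g., bitsandbytes); the weight values here are made up.

```python
def quantize(weights: list[float]) -> tuple[list[int], float]:
    """Symmetric int8 quantization: ints in [-127, 127] plus a per-row scale."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # 1.0 guards an all-zero row
    return [round(w / scale) for w in weights], scale

def dequantize(q: list[int], scale: float) -> list[float]:
    return [v * scale for v in q]

row = [0.02, -0.51, 0.33, 0.0]
q, scale = quantize(row)
restored = dequantize(q, scale)
# Rounding error per weight is at most half a quantization step (scale / 2).
print(max(abs(a - b) for a, b in zip(row, restored)) < scale)  # True
```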

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

The Data Science and AI/ML team collaborates across the organization to identify, develop, and deliver Artificial Intelligence (AI) and Machine Learning (ML) powered software solutions that improve patient outcomes, delight partners and customers, and enhance business operations in an AI-first fashion. The team works on projects ranging from building sophisticated models to delivering personalized recommendations to patients, analyzing sleep data to optimize equipment and settings, proactively identifying health risk factors, and optimizing global supply chain operations.

As a Sr. Machine Learning Engineer, you will play a crucial role in contributing to and leading the development of ML architecture and operations. Your responsibilities include optimizing time to market and quality of AI/ML applications by working on projects within the AI operations team. You will ensure that global AI/ML systems are production-grade, scalable, and utilize cutting-edge technology and methodology. Additionally, you will collaborate with stakeholders, mentor junior team members, and engage with business stakeholders.

Key Responsibilities:
- Collaborate with Product Management, Engineering, and other stakeholders to create impactful AI/ML features and products.
- Work closely with Data Scientists and Data Engineers to own the end-to-end process, train junior team members, and maintain AI/ML architectures.
- Document model design, experiments, tests, and outcomes, and stay informed of industry trends.
- Implement process improvements, build production-level AI/ML systems, and support technical issues for stakeholders.
- Participate in code review and handle escalated incidents to resolution.

Requirements:
- 4+ years of industry experience in Machine Learning Engineering, Data Science, or Data Engineering.
- M.S. or PhD in Data Science/Machine Learning or related areas.
- Proficiency in data analytics systems development, model building, and online deployment.
- Experience in building scalable AI/ML systems for a variety of advanced problems.
- Hands-on experience with large datasets, Python, and cloud-based tools.
- Strong mathematical foundation and knowledge of machine learning techniques.

Joining the team means more than just a job: it's an opportunity to be part of a culture that values excellence, diversity, and innovation. If you are looking for a challenging and supportive environment where your ideas are encouraged, apply now to be part of our team dedicated to making the world a healthier place. We are committed to reviewing every application we receive.

Posted 2 weeks ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

- Independently design, develop, and implement machine learning and NLP models.
- Build and fine-tune LLM-based solutions (prompt engineering, few-shot prompting, chain-of-thought prompting).
- Develop robust, production-quality code for AI/ML applications using Python.
- Build, deploy, and monitor models using AWS services (SageMaker, Bedrock, Lambda, etc.).
- Conduct data cleaning, feature engineering, and model evaluation on large datasets.
- Experiment with new GenAI tools, LLM architectures, and APIs (Hugging Face, LangChain, OpenAI, etc.).
- Collaborate with senior data scientists for reviews but own end-to-end solutioning tasks.
- Document models, pipelines, experiments, and results clearly and systematically.
- Stay updated with the latest in AI/ML, GenAI, and cloud technologies.
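A minimal sketch of the few-shot prompting mentioned above: assemble a handful of labeled examples into a prompt before handing it to an LLM. The review texts and label names are invented for the illustration; the resulting string would go to whichever model API the team uses (e.g., a Bedrock or OpenAI call).

```python
# Invented few-shot examples; in practice these are curated for the task.
FEW_SHOT_EXAMPLES = [
    ("The battery died after two days.", "complaint"),
    ("Setup took five minutes, flawless.", "praise"),
]

def build_few_shot_prompt(query: str) -> str:
    """Assemble instruction + labeled examples + the unlabeled query."""
    lines = ["Classify each review as 'complaint' or 'praise'.", ""]
    for text, label in FEW_SHOT_EXAMPLES:
        lines.append(f"Review: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The prompt ends at the open slot the model is expected to complete.
    lines.append(f"Review: {query}")
    lines.append("Label:")
    return "\n".join(lines)

print(build_few_shot_prompt("Screen cracked within a week."))
```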

Posted 2 weeks ago

Apply

0.0 - 1.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

An extraordinarily talented group of individuals works together every day to drive TNS' success, from both professional and personal perspectives. Come join the excellence!

Overview
Transaction Network Services (TNS), a Koch Industries company, is seeking a talented and motivated data scientist to work within our AI Labs, India. As a Data Scientist, you will play a crucial role in analyzing complex datasets, building statistical and deep learning models, and implementing machine learning solutions. You will work closely with cross-functional teams to extract insights from data and contribute to data-driven decision-making processes.

Primary Responsibilities:
- Utilize expertise in statistical analysis, machine learning, and deep learning techniques to solve complex international business problems.
- Develop, train, and deploy predictive models using machine learning frameworks and tools.
- Perform data preprocessing, feature engineering, and exploratory data analysis to identify patterns and trends.
- Collaborate with domain experts and stakeholders to understand business requirements and translate them into analytical solutions.
- Apply cloud engineering principles to design and deploy scalable and efficient AI solutions in the cloud environment.
- Collaborate with software engineers to integrate machine learning models into production systems.
- Implement MLOps practices to automate model training, deployment, and monitoring processes.
- Communicate complex findings and insights to both technical and non-technical stakeholders through clear and concise reports and visualizations.

Education/Experience:
- Advanced degree in computer science, machine learning, statistical methods, or a related field.
- 0-1 years of industry experience in a data science role.
- Proficiency in the Python programming language and experience with popular data science and machine learning frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Knowledge of MLOps practices and experience with tools such as Docker, AWS EMR, or AWS SageMaker.
- Understanding of data preprocessing techniques, feature engineering, and exploratory data analysis.
- Demonstrated ability to work with large data sources with a focus on data privacy and security.
- Solid foundation in software engineering principles.
- Background in the software development process and tools, with a focus on Jira.
- Experience working in a geographically distributed team environment, including with leadership located in other countries and cultures and across large time-zone shifts.

Communications:
- Excellent communication and presentation skills.
- Strong teamwork, passion, creativity, productivity, and learning agility.
- Strong written and verbal communication skills when working with internationally based colleagues.
- Ability to articulate and interpret analytical results from developed programs.

If you are passionate about technology and love personal growth and opportunity, come see what TNS is all about!

TNS is an equal opportunity employer. TNS evaluates qualified applicants without regard to race, color, religion, gender, national origin, age, sexual orientation, gender identity or expression, protected veteran status, disability/handicap status, or any other legally protected characteristic.
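As a rough illustration of the data preprocessing and feature engineering duties above, here is z-scoring of a numeric column by hand. Real pipelines would use scikit-learn's StandardScaler, and the scaler must be fit on training data only to avoid leaking test statistics; the numbers are invented.

```python
import math

def fit_scaler(column: list[float]) -> tuple[float, float]:
    """Compute the mean and (population) standard deviation of a column."""
    mean = sum(column) / len(column)
    std = math.sqrt(sum((x - mean) ** 2 for x in column) / len(column))
    return mean, std or 1.0  # 1.0 guards a constant column

def transform(column: list[float], mean: float, std: float) -> list[float]:
    """Standardize values using statistics fit on the training split."""
    return [(x - mean) / std for x in column]

train = [10.0, 12.0, 14.0]
mean, std = fit_scaler(train)
print(transform(train, mean, std))  # centered at 0 with unit variance
```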

Posted 2 weeks ago

Apply

2.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Experience Range: 2 to 5 Years

Must Have:
- AWS services know-how: S3, Athena, API Gateway, SageMaker, SES, SNS, Lambda, RDS, SQS, Glue
- AWS certification: Solutions Architect Associate or Professional
- Data science and machine learning know-how
- Hands-on experience with the Python programming language
- Data analysis experience
- Requirements engineering
- Problem solving
- Generative AI development and application experience
- Dashboarding experience
- Good communication and presentation skills in English
- Continuous learning attitude

Posted 2 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Software Engineer - Backend (Python)

About The Role
Our team is responsible for building the backend components of an MLOps platform on AWS. The backend components we build are the fundamental building blocks for feature engineering, feature serving, model deployment, and model inference in both batch and online modes.

What You'll Do Here
- Design and build backend components of our MLOps platform on AWS.
- Collaborate with geographically distributed cross-functional teams.
- Participate in the on-call rotation with the rest of the team to handle production incidents.

Must Have Skills
- Experience with web development frameworks such as Flask, Django, or FastAPI.
- Experience working with WSGI and ASGI web servers such as Gunicorn, Uvicorn, etc.
- Experience with concurrent programming designs such as AsyncIO.
- Experience with unit and functional testing frameworks.
- Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
- Experience with CI/CD practices, tools, and frameworks.

Nice To Have Skills
- Experience with Apache Kafka and developing Kafka client applications in Python.
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Experience with big data processing frameworks, preferably Apache Spark.
- Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
- Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
- Experience with various Python packaging options such as Wheel, PEX, or Conda.
- Experience with metaprogramming techniques in Python.

Primary Skills
- Python development (Flask, Django, or FastAPI)
- WSGI & ASGI web servers (Gunicorn, Uvicorn, etc.)
- AWS
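The AsyncIO-style concurrency this role calls for can be sketched as fanning out several slow I/O calls and gathering their results, instead of waiting on each one in turn. The feature names are invented and `asyncio.sleep` stands in for real database or HTTP round trips, the kind an online feature-serving endpoint would make.

```python
import asyncio

async def fetch_feature(name: str) -> tuple[str, float]:
    """Simulated feature lookup; a real one would await a DB or HTTP client."""
    await asyncio.sleep(0.01)  # stands in for network latency
    return name, 1.0

async def serve_request(feature_names: list[str]) -> dict[str, float]:
    # Launch all lookups concurrently; total latency is roughly one round
    # trip rather than the sum of all of them.
    results = await asyncio.gather(*(fetch_feature(n) for n in feature_names))
    return dict(results)

features = asyncio.run(serve_request(["age", "income", "tenure"]))
print(features)  # {'age': 1.0, 'income': 1.0, 'tenure': 1.0}
```

Frameworks like FastAPI run handlers on exactly this event-loop model, which is why the listing pairs ASGI servers (Uvicorn) with AsyncIO experience.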

Posted 2 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Title: Software Engineer - Backend (Python)
Experience: 7+ Years
Location: Hyderabad

About the Role:
Our team is responsible for building the backend components of the GenAI Platform, which offers:
- Safe, compliant, and cost-efficient access to LLMs, including open-source and commercial ones, adhering to Experian standards and policies
- Reusable tools, frameworks, and coding patterns for the functions involved in fine-tuning an LLM or developing a RAG-based application

What You'll Do Here
- Design and build backend components of our GenAI platform on AWS.
- Collaborate with geographically distributed cross-functional teams.
- Participate in the on-call rotation with the rest of the team to handle production incidents.

Must Have Skills
- At least 7 years of professional backend web development experience with Python.
- Experience with AI and RAG.
- Experience with DevOps and IaC tools such as Terraform, Jenkins, etc.
- Experience with MLOps platforms such as AWS SageMaker, Kubeflow, or MLflow.
- Experience with web development frameworks such as Flask, Django, or FastAPI.
- Experience with concurrent programming designs such as AsyncIO.
- Experience with any of the public cloud platforms (AWS, Azure, GCP), preferably AWS.
- Experience with CI/CD practices, tools, and frameworks.

Nice To Have Skills
- Experience with Apache Kafka and developing Kafka client applications in Python.
- Experience with big data processing frameworks, preferably Apache Spark.
- Experience with containers (Docker) and container platforms like AWS ECS or AWS EKS.
- Experience with unit and functional testing frameworks.
- Experience with various Python packaging options such as Wheel, PEX, or Conda.
- Experience with metaprogramming techniques in Python.

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Work Mode: Hybrid
Work Location: Chennai / Hyderabad
Work Timing: 2 PM to 11 PM
Primary Skill: Data Scientist

We are seeking a skilled Data Scientist with strong expertise in Python programming and Amazon SageMaker to join our data team. The ideal candidate will have a solid foundation in machine learning, data analysis, and cloud-based model deployment. You will work closely with cross-functional teams to build, deploy, and optimize predictive models and data-driven solutions at scale.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field.
- 12+ years of experience in data science or machine learning roles.
- Proficiency in Python and popular ML libraries (e.g., scikit-learn, pandas, NumPy).
- Hands-on experience with Amazon SageMaker for model training, tuning, and deployment.
- Strong understanding of supervised and unsupervised learning techniques.
- Experience working with large datasets and cloud platforms (AWS preferred).
- Excellent problem-solving and communication skills.
- Experience with AWS services beyond SageMaker (e.g., S3, Lambda, Step Functions).
- Familiarity with deep learning frameworks like TensorFlow or PyTorch.
- Exposure to MLOps practices and tools (e.g., CI/CD for ML, MLflow, Kubeflow).
- Knowledge of version control (e.g., Git) and agile development practices.
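To illustrate the supervised-learning fundamentals this listing expects, here is a hand-rolled nearest-centroid classifier: average the training points per label, then predict the label of the closest centroid. Production work would instead train scikit-learn estimators and deploy them via SageMaker; the data points and label names are invented.

```python
def fit_centroids(X: list[list[float]], y: list[str]) -> dict[str, list[float]]:
    """Compute the per-class mean (centroid) of the training points."""
    groups: dict[str, list[list[float]]] = {}
    for features, label in zip(X, y):
        groups.setdefault(label, []).append(features)
    return {
        label: [sum(col) / len(col) for col in zip(*rows)]
        for label, rows in groups.items()
    }

def predict(centroids: dict[str, list[float]], x: list[float]) -> str:
    """Label a point by its nearest centroid (squared Euclidean distance)."""
    def dist(c: list[float]) -> float:
        return sum((a - b) ** 2 for a, b in zip(x, c))
    return min(centroids, key=lambda label: dist(centroids[label]))

X = [[1.0, 1.0], [1.2, 0.8], [8.0, 9.0], [9.0, 8.5]]
y = ["low", "low", "high", "high"]
centroids = fit_centroids(X, y)
print(predict(centroids, [1.1, 0.9]))  # low
print(predict(centroids, [8.5, 9.0]))  # high
```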

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

On-site

Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready. Key Responsibilities: Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments. Requirements: 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools. Scripting skills in Python, Bash, or similar for automation tasks. Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility. Preferred Qualifications: Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. 
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus.

What We Offer: Work on cutting-edge AI platforms and infrastructure. Cross-functional collaboration with top ML, research, and product teams. Competitive compensation package – no constraints for the right candidate.

Send mail to: thasleema@qcentro.com
Job Type: Permanent
Ability to commute/relocate: Thiruvananthapuram District, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps and MLOps: 5 years (Required)
Work Location: In person
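The "robust monitoring, logging, and alerting" responsibility above usually starts with something like this: a rolling window of prediction outcomes that raises an alert when accuracy degrades. The window size and threshold below are illustrative assumptions, not values from the posting:

```python
from collections import deque

# Hedged sketch of ML monitoring: track a rolling window of prediction
# outcomes and flag an alert when windowed accuracy drops below a
# threshold. Real systems would emit this to Prometheus/CloudWatch.

class AccuracyMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.9):
        self.outcomes = deque(maxlen=window)
        self.threshold = threshold

    def record(self, correct: bool) -> None:
        self.outcomes.append(1 if correct else 0)

    def accuracy(self) -> float:
        return sum(self.outcomes) / len(self.outcomes)

    def should_alert(self) -> bool:
        # Only alert once the window is full, to avoid noisy cold starts.
        full = len(self.outcomes) == self.outcomes.maxlen
        return full and self.accuracy() < self.threshold

monitor = AccuracyMonitor(window=10, threshold=0.8)
for ok in [True] * 7 + [False] * 3:  # 70% accuracy over the window
    monitor.record(ok)
```

The same shape generalizes to latency or drift metrics; the alerting backend (Grafana, CloudWatch alarms) just consumes the windowed value.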

Posted 2 weeks ago

Apply

15.0 years

5 - 10 Lacs

Gurgaon

On-site

Senior Assistant Vice President EXL/SAVP/1383449 Services, Gurgaon
Posted On 01 Jul 2025 | End Date 15 Aug 2025 | Required Experience 15 - 25 Years
Basic Section: Number Of Positions 1 | Band D2 | Band Name Senior Assistant Vice President | Cost Code D014685 | Campus/Non Campus NON CAMPUS | Employment Type Permanent | Requisition Type New | Max CTC 3000000.0000 - 4000000.0000 | Complexity Level Not Applicable | Work Type Hybrid – Working Partly From Home And Partly From Office
Organisational Group Analytics | Sub Group Analytics - UK & Europe | Organization Services | LOB Services | SBU Analytics | Country India | City Gurgaon | Center EXL - Gurgaon Center 38
Skills: ARTIFICIAL INTELLIGENCE, MACHINE LEARNING
Minimum Qualification: B.COM
Certification: No data available

Job Description
Job Summary: We are looking for a visionary Senior AVP – Generative AI Lead to spearhead our Generative AI initiatives. In this senior leadership role, you will design and execute cutting-edge AI strategies leveraging generative models to drive innovation, optimize business processes, and create new AI-driven products. You will lead a cross-functional team of AI researchers, engineers, and data scientists to develop scalable generative AI solutions that align with organizational goals.

Key Responsibilities: Lead the development and deployment of generative AI models (e.g., GPT, diffusion models, transformers) across various business units. Define the AI strategy focusing on generative models to enhance product offerings, customer experience, and operational efficiency. Collaborate with business leaders and technology teams to identify high-impact use cases and translate them into AI solutions. Oversee research and experimentation to stay ahead of advancements in generative AI and related technologies. Manage, mentor, and grow a high-performing team of AI specialists, fostering innovation and technical excellence. Ensure ethical AI practices, data privacy, and compliance in all generative AI projects.
Drive AI infrastructure development and integration with existing technology platforms. Present technical insights and strategic recommendations to C-suite executives and stakeholders. Build partnerships with external AI vendors, academic institutions, and industry consortia.

Qualifications: Master’s or PhD in Computer Science, AI, Machine Learning, or related technical field. 10+ years of experience in AI/ML, with at least 5 years focused on generative AI technologies. Proven track record in leading AI teams and delivering large-scale generative AI projects. Deep knowledge of generative AI architectures such as transformers, GANs, VAEs, diffusion models, etc. Strong programming skills in Python, TensorFlow, PyTorch, or similar frameworks. Experience with cloud AI services (AWS SageMaker, Azure AI, Google AI) and scalable deployment. Strong business acumen and ability to translate AI capabilities into measurable business outcomes. Excellent leadership, communication, and stakeholder management skills. Understanding of AI ethics, fairness, and regulatory considerations.

Preferred Skills: Experience in NLP, computer vision, or multimodal generative models. Familiarity with large language models (LLMs) like GPT-4, PaLM, or similar. Background in product innovation or AI-driven transformation initiatives. Exposure to Agile and DevOps practices for AI/ML workflows.

Workflow Type: Back Office
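Since the role centers on transformer architectures, a worked toy example helps make "deep knowledge of transformers" concrete: scaled dot-product attention, softmax(QK^T / sqrt(d))V, computed in pure Python on 2x2 matrices (all values invented for illustration):

```python
import math

# Toy scaled dot-product attention on tiny matrices, the core op of
# transformer architectures. Pure Python so it runs standalone.

def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def softmax(row):
    exps = [math.exp(v - max(row)) for v in row]  # shift for stability
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    d = len(Q[0])
    scores = matmul(Q, [list(col) for col in zip(*K)])   # Q K^T
    scaled = [[v / math.sqrt(d) for v in row] for row in scores]
    weights = [softmax(row) for row in scaled]           # rows sum to 1
    return weights, matmul(weights, V)                   # weighted mix of V

Q = [[1.0, 0.0], [0.0, 1.0]]
K = [[1.0, 0.0], [0.0, 1.0]]
V = [[1.0, 2.0], [3.0, 4.0]]
weights, out = attention(Q, K, V)
```

Each output row is a convex combination of the value rows, which is why attention is often described as a differentiable lookup.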

Posted 2 weeks ago

Apply

5.0 years

10 Lacs

Calcutta

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities: A Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. This role often involves leading and mentoring junior team members, and collaborating with cross-functional teams.

Key Responsibilities:
Data Infrastructure for AI/ML: Design and implement robust data pipelines that support data preprocessing, model training, and deployment. Ensure that the data pipeline is optimized for the high-volume and high-velocity data required by ML models. Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
AI/ML Model Integration: Collaborate with ML engineers and data scientists to integrate machine learning models into production environments. Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended). Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
Data Architecture & Design: Design and maintain scalable, efficient, and secure data pipelines and architectures. Develop data models (both OLTP and OLAP). Create and maintain ETL/ELT processes.
Data Pipeline Development: Build automated pipelines to collect, transform, and load data from various sources (internal and external). Optimize data flow and collection for cross-functional teams.
MLOps Support: Develop CI/CD pipelines to deploy models into production environments. Implement model monitoring, alerting, and logging for real-time model predictions.
Data Quality & Governance: Ensure high data quality, integrity, and availability. Implement data validation, monitoring, and alerting mechanisms. Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
Tooling & Infrastructure: Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc. Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
Team Collaboration & Mentorship: Collaborate with data scientists, analysts, product managers, and other engineers. Provide technical leadership and mentor junior data engineers.
Core Competencies:
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. 5+ years of experience in data engineering or related field. Strong understanding of data modeling, ETL/ELT concepts, and distributed systems. Experience with big data tools and cloud platforms.

Soft Skills: Strong problem-solving and critical-thinking skills. Excellent communication and collaboration abilities. Leadership experience and the ability to guide technical decisions.

How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!
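The "build and manage feature stores" responsibility above reduces, at its core, to writing features keyed by entity and serving consistent lookups. A minimal in-memory sketch, where the schema (entity id mapping to a feature dict) and the example feature names are assumptions for illustration:

```python
# Minimal in-memory feature store sketch: write features per entity,
# read a fixed feature vector back for training or serving. Real
# systems (e.g., SageMaker Feature Store) add persistence, history,
# and point-in-time correctness on top of this same interface.

class FeatureStore:
    def __init__(self):
        self._table = {}  # entity_id -> {feature_name: value}

    def write(self, entity_id, features):
        self._table.setdefault(entity_id, {}).update(features)

    def read(self, entity_id, feature_names):
        row = self._table.get(entity_id, {})
        # Missing features come back as None so downstream code can impute.
        return {name: row.get(name) for name in feature_names}

store = FeatureStore()
store.write("customer-42", {"avg_spend": 120.5, "tenure_months": 18})
vector = store.read(
    "customer-42", ["avg_spend", "tenure_months", "churn_score"]
)
```

Keeping the read path identical for training and serving is the design point: it prevents train/serve skew, one of the failure modes the posting's MLOps section is about.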

Posted 2 weeks ago

Apply

15.0 years

0 Lacs

India

On-site

Job Location: Hyderabad / Bangalore / Pune. Immediate joiners / less than 30 days.

About the Role: We are looking for a seasoned AI/ML Solutions Architect with deep expertise in designing and deploying scalable AI/ML and GenAI solutions on cloud platforms. The ideal candidate will have a strong track record in BFSI, leading end-to-end projects from use case discovery to productionization, while ensuring governance, compliance, and performance at scale.

Key Responsibilities: Lead the design and deployment of enterprise-scale AI/ML and GenAI architectures. Drive end-to-end AI/ML project delivery: discovery, prototyping, productionization. Architect solutions using leading cloud-native AI services (AWS, Azure, GCP). Implement MLOps/LLMOps pipelines for model lifecycle and automation. Guide teams in selecting and integrating GenAI/LLM frameworks (OpenAI, Cohere, Hugging Face, LangChain, etc.). Ensure robust AI governance, model risk management, and compliance practices. Collaborate with senior business stakeholders and cross-functional engineering teams.

Required Skills & Experience: 15+ years in AI/ML, cloud architecture, and data engineering. At least 10 end-to-end AI/ML project implementations. Hands-on expertise in one or more of the following:
ML frameworks: scikit-learn, XGBoost, TensorFlow, PyTorch
GenAI/LLM tools: OpenAI, Cohere, LangChain, Hugging Face, FAISS, Pinecone
Cloud platforms: AWS, Azure, GCP (AI/ML services)
MLOps: MLflow, SageMaker Pipelines, Kubeflow, Vertex AI
Strong understanding of data privacy, model governance, and compliance frameworks in BFSI. Proven leadership of cross-functional technical teams and stakeholder engagement.
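The GenAI/LLM tools listed above (FAISS, Pinecone) all center on the same retrieval step of a RAG pipeline: rank stored document vectors by similarity to a query embedding. A pure-Python sketch with toy embeddings (not output from any real model) and invented document ids:

```python
import math

# RAG retrieval sketch: rank documents by cosine similarity to a query
# embedding and return the top-k ids. Vector databases like FAISS or
# Pinecone do this at scale with approximate nearest-neighbor indexes.

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def top_k(query, docs, k=1):
    ranked = sorted(docs, key=lambda d: cosine(query, d["vec"]), reverse=True)
    return [d["id"] for d in ranked[:k]]

docs = [
    {"id": "kyc-policy", "vec": [0.9, 0.1]},   # toy 2-d embeddings
    {"id": "loan-pricing", "vec": [0.1, 0.9]},
]
hits = top_k([0.8, 0.2], docs, k=1)
```

The retrieved ids would then be used to fetch document text for the LLM prompt; the architecture decision is mostly about where this index lives and how it is refreshed.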

Posted 2 weeks ago

Apply

0.0 - 5.0 years

0 Lacs

Thiruvananthapuram District, Kerala

On-site

Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready. Key Responsibilities: Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments. Requirements: 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools. Scripting skills in Python, Bash, or similar for automation tasks. Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility. Preferred Qualifications: Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. 
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus.

What We Offer: Work on cutting-edge AI platforms and infrastructure. Cross-functional collaboration with top ML, research, and product teams. Competitive compensation package – no constraints for the right candidate.

Send mail to: thasleema@qcentro.com
Job Type: Permanent
Ability to commute/relocate: Thiruvananthapuram District, Kerala: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps and MLOps: 5 years (Required)
Work Location: In person

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Tamil Nadu, India

On-site

Role: Sr. AI/ML Engineer
Years of experience: 5+ years (with minimum 4 years of relevant experience)
Work mode: WFO - Chennai (mandatory)
Type: FTE
Notice Period: Immediate to 15 days ONLY
Key skills: Python, TensorFlow, Generative AI, Machine Learning, AWS, Agentic AI, OpenAI, Claude, FastAPI

JD: Experience in GenAI, CI/CD pipelines, and scripting languages, with a deep understanding of version control systems (e.g., Git), containerization (e.g., Docker), continuous integration/deployment tools (e.g., Jenkins), cloud computing platforms (e.g., AWS, GCP, Azure), Kubernetes and Kafka; third-party integration is a plus. Experience building production-grade ML pipelines. Proficient in Python and frameworks like TensorFlow, Keras, or PyTorch. Experience with cloud build, deployment, and orchestration tools. Experience with MLOps tools such as MLflow, Kubeflow, Weights & Biases, AWS SageMaker, Vertex AI, DVC, Airflow, Prefect, etc. Experience in statistical modeling, machine learning, data mining, and unstructured data analytics. Understanding of the ML lifecycle and MLOps, with hands-on experience productionizing ML models. Detail-oriented, with the ability to work both independently and collaboratively. Ability to work successfully with multi-functional teams, principals, and architects, across organizational boundaries and geographies. Equal comfort driving low-level technical implementation and high-level architecture evolution. Experience working with data engineering pipelines.

Posted 2 weeks ago

Apply

0 years

0 Lacs

India

On-site

Summary: As a Senior Software Engineer for FINEOS data and digital products you will be designing and implementing innovative products in AI, ML and the data platform. You will collaborate with other Engineers and Architects in FINEOS to deliver data engineering capabilities to integrate AI/ML data products into the core AdminSuite platform. Python, microservices and data engineering principles in a native AWS stack are the primary technical skills required to be successful in this position.

Responsibilities (Other duties may be assigned.)
Product engineering delivery – Translate high-level design into smaller components for end-to-end solution delivery. Ability to code and review code of peers to enforce good coding practices, sound data structure choices and efficient methods.
Product deployment – Well versed with AWS DevOps automation to drive CI/CD pipelines, unit tests, automated integration tests, version management and promotion strategy across different environments.
Product maintenance – Manage the current portfolio of AI/ML data products to ensure timely updates of underlying AWS components so that products stay on the current stack and remain marketable.

Education and/or Experience: Senior Python engineer with over seven years of experience in successfully developing and deploying Python cloud-based applications and services. Demonstrated proficiency in delivering scalable applications, optimizing application performance, and ensuring robust security measures.

Knowledge, Skills and Abilities: Building microservices and event-based applications in serverless architecture. Storing and managing large volumes of data in objects and databases. Continuous Integration/Continuous Deployment (CI/CD) pipelines for automated testing and deployment. Monitoring and logging tools for application performance and error tracking. Knowledge of best practices for securing AWS resources and data. Proficient in agile development practices.
Experience working in large, complex Enterprise solutions with cross-geography, cross-time-zone teams. Proficient in MS Office applications, such as Word, Excel, PowerPoint, etc. Familiar with operating systems, such as Windows, Success Factors, etc.

Technical Skills: Experience in frameworks and Python libraries such as Flask, Django, Pandas and NumPy. Working with NoSQL databases for high-speed, flexible data storage. Containerization for consistent deployment. Experience in operationalizing ML models in production or building GenAI applications using Textract, SageMaker, Bedrock.

Language Skills: Ability to speak the English language proficiently, both verbally and in writing, to collaborate with global teams.

Travel Requirements: This position does not require travel.

Work Environment: The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. Reasonable accommodations may be made to enable individuals with disabilities to perform the essential functions. Employee works primarily in a home office environment. The home office must be a well-defined work area, separate from normal domestic activity and complete with all essential technology including, but not limited to, separate phone, scanner, printer, computer, etc. as required in order to effectively perform their duties.

Work Requirements: Compliance with all relevant FINEOS Global policies and procedures related to Quality, Security, Safety, Business Continuity, and Environmental systems. Travel and fieldwork, including international travel, may be required. Therefore, employee must possess, or be able to acquire, a valid passport. Must be legally eligible to work in the country in which you are hired. FINEOS is an Equal Opportunity Employer.
FINEOS does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work.

Responsibilities: A Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. This role often involves leading and mentoring junior team members, and collaborating with cross-functional teams.

Key Responsibilities:
Data Infrastructure for AI/ML: Design and implement robust data pipelines that support data preprocessing, model training, and deployment. Ensure that the data pipeline is optimized for the high-volume and high-velocity data required by ML models. Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models.
AI/ML Model Integration: Collaborate with ML engineers and data scientists to integrate machine learning models into production environments. Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended). Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
Data Architecture & Design: Design and maintain scalable, efficient, and secure data pipelines and architectures. Develop data models (both OLTP and OLAP). Create and maintain ETL/ELT processes.
Data Pipeline Development: Build automated pipelines to collect, transform, and load data from various sources (internal and external). Optimize data flow and collection for cross-functional teams.
MLOps Support: Develop CI/CD pipelines to deploy models into production environments. Implement model monitoring, alerting, and logging for real-time model predictions.
Data Quality & Governance: Ensure high data quality, integrity, and availability. Implement data validation, monitoring, and alerting mechanisms. Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
Tooling & Infrastructure: Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc. Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
Team Collaboration & Mentorship: Collaborate with data scientists, analysts, product managers, and other engineers. Provide technical leadership and mentor junior data engineers.
Core Competencies:
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. 5+ years of experience in data engineering or related field. Strong understanding of data modeling, ETL/ELT concepts, and distributed systems. Experience with big data tools and cloud platforms.

Soft Skills: Strong problem-solving and critical-thinking skills. Excellent communication and collaboration abilities. Leadership experience and the ability to guide technical decisions.

How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice: Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Lexmark is now a proud part of Xerox, bringing together two trusted names and decades of expertise into a bold and shared vision. When you join us, you step into a technology ecosystem where your ideas, skills, and ambition can shape what comes next. Whether you’re just starting out or leading at the highest levels, this is a place to grow, stretch, and make real impact—across industries, countries, and careers. From engineering and product to digital services and customer experience, you’ll help connect data, devices, and people in smarter, faster ways. This is meaningful, connected work—on a global stage, with the backing of a company built for the future, and a robust benefits package designed to support your growth, well-being, and life beyond work. Responsibilities : A Senior Data Engineer with AI/ML focus combines traditional data engineering responsibilities with the technical requirements for supporting Machine Learning (ML) systems and artificial intelligence (AI) applications. This role involves not only designing and maintaining scalable data pipelines but also integrating advanced AI/ML models into the data infrastructure. The role is critical for enabling data scientists and ML engineers to efficiently train, test, and deploy models in production. This role is also responsible for designing, building, and maintaining scalable data infrastructure and systems to support advanced analytics and business intelligence. This role often involves leading data engineering projects, mentoring junior team members, and collaborating with cross-functional teams. Key Responsibilities: Data Infrastructure for AI/ML: Design and implement robust data pipelines that support data preprocessing, model training, and deployment. Ensure that the data pipeline is optimized for high-volume and high-velocity data required by ML models. Build and manage feature stores that can efficiently store, retrieve, and serve features for ML models. 
AI/ML Model Integration: Collaborate with ML engineers and data scientists to integrate machine learning models into production environments. Implement tools for model versioning, experimentation, and deployment (e.g., MLflow, Kubeflow, TensorFlow Extended). Support automated retraining and model monitoring pipelines to ensure models remain performant over time.
Data Architecture & Design: Design and maintain scalable, efficient, and secure data pipelines and architectures. Develop data models (both OLTP and OLAP). Create and maintain ETL/ELT processes.
Data Pipeline Development: Build automated pipelines to collect, transform, and load data from various sources (internal and external). Optimize data flow and collection for cross-functional teams.
MLOps Support: Develop CI/CD pipelines to deploy models into production environments. Implement model monitoring, alerting, and logging for real-time model predictions.
Data Quality & Governance: Ensure high data quality, integrity, and availability. Implement data validation, monitoring, and alerting mechanisms. Support data governance initiatives and ensure compliance with data privacy laws (e.g., GDPR, HIPAA).
Tooling & Infrastructure: Work with cloud platforms (AWS, Azure, GCP) and data engineering tools like Apache Spark, Kafka, Airflow, etc. Use containerization (Docker, Kubernetes) and CI/CD pipelines for data engineering deployments.
Team Collaboration & Mentorship: Collaborate with data scientists, analysts, product managers, and other engineers. Provide technical leadership and mentor junior data engineers.
Core Competencies:
Data Engineering: Apache Spark, Airflow, Kafka, dbt, ETL/ELT pipelines
ML/AI Integration: MLflow, Feature Store, TensorFlow, PyTorch, Hugging Face
GenAI: LangChain, OpenAI API, Vector DBs (FAISS, Pinecone, Weaviate)
Cloud Platforms: AWS (S3, SageMaker, Glue), GCP (BigQuery, Vertex AI)
Languages: Python, SQL, Scala, Bash
DevOps & Infra: Docker, Kubernetes, Terraform, CI/CD pipelines

Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or related field. 5+ years of experience in data engineering or related field. Strong understanding of data modeling, ETL/ELT concepts, and distributed systems. Experience with big data tools and cloud platforms.

Soft Skills: Strong problem-solving and critical-thinking skills. Excellent communication and collaboration abilities. Leadership experience and the ability to guide technical decisions.

How to Apply? Are you an innovator? Here is your chance to make your mark with a global technology leader. Apply now!

Global Privacy Notice: Lexmark is committed to appropriately protecting and managing any personal information you share with us. Click here to view Lexmark's Privacy Notice.

Posted 2 weeks ago

Apply

8.0 - 10.0 years

20 - 30 Lacs

Mysuru, Karnataka

Remote

Job Title: Solution Architect – Application & AI Engineering
Experience: 12+ years (Minimum 8 years of hands-on experience)
Location: Mysuru, Karnataka
Employment Type: Full-time

About the Role: We are seeking an experienced and forward-thinking Solution Architect with a strong background in application engineering and AI/ML systems. The ideal candidate should have deep technical expertise and hands-on experience in architecting scalable and secure solutions across web, API, database and cloud ecosystems (AWS or Azure). You will lead end-to-end architecture design efforts, transforming business requirements into robust, scalable, and secure digital products, while ensuring modern AI-driven capabilities are leveraged where applicable.

Key Responsibilities: Design and deliver scalable application architectures across microservices, APIs and backend databases. Collaborate with cross-functional teams to define solution blueprints combining application engineering and AI/ML requirements. Architect and lead implementation strategies for deploying applications on AWS or Azure using services such as ECS, AKS, Lambda, API Gateway, Azure App Services, Cosmos DB, etc. Guide engineering teams in application modernization, including monolith-to-microservices transitions, containerization and serverless. Define and enforce best practices around security, performance, and maintainability of solutions. Integrate AI/ML solutions (e.g., inference endpoints, custom LLMs, or MLOps pipelines) within broader enterprise applications. Evaluate and recommend third-party tools, frameworks, or platforms for optimizing application performance and AI integration. Support pre-sales activities and client engagements with architectural diagrams, PoCs, and strategy sessions. Mentor engineering teams and participate in code/design reviews when necessary.

Required Skills & Experience: 12+ years of total experience in software/application engineering.
8+ years of hands-on experience in designing and developing distributed applications. Strong knowledge in backend technologies like Python, Node.js, or .NET; and API-first design (REST/GraphQL). Strong understanding of relational and NoSQL databases (PostgreSQL, MySQL, MongoDB, DynamoDB, etc.). Experience with DevOps practices, CI/CD pipelines, and infrastructure as code (Terraform, CloudFormation, etc.). Proven experience in architecting and deploying cloud-native applications on AWS and/or Azure. Experience with integrating AI/ML models into production systems, including data pipelines, model inference, and MLOps. Deep understanding of security, authentication (OAuth, JWT), and compliance in cloud-based applications. Familiarity with LLMs, NLP, or generative AI is a strong advantage. Preferred Qualifications Cloud certifications (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert). Exposure to AI/ML platforms like Azure AI Studio, Amazon Bedrock, SageMaker, or Hugging Face. Understanding of multi-tenant architecture and SaaS platforms. Experience working in Agile/DevOps teams and with tools like Jira, Confluence, GitHub/GitLab, etc. Why Join Us? Work on innovative and enterprise-scale AI-powered applications. Influence product and architecture decisions with a long-term strategic lens. Collaborate with forward-thinking and cross-disciplinary teams. Opportunity to lead from the front and shape the engineering roadmap. Job Type: Full-time Pay: ₹2,000,000.00 - ₹3,000,000.00 per year Benefits: Flexible schedule Paid sick time Provident Fund Work from home Application Question(s): Have you led end-to-end architecture design efforts while ensuring modern AI-driven capabilities are leveraged. Do you have hands-on experience in architecting scalable and secure solutions across web, API, database, and cloud ecosystems (AWS or Azure). Experience: Software/Application Engineering: 10 years (Required) Work Location: In person

Posted 2 weeks ago

Apply

6.0 years

20 - 25 Lacs

Bengaluru, Karnataka, India

On-site

Job Title: Machine Learning Engineer – 2
Location: Onsite – Bengaluru, Karnataka, India
Experience Required: 3 – 6 Years
Compensation: ₹20 – ₹25 LPA
Employment Type: Full-Time
Work Mode: Onsite Only (No Remote)

About the Company:
A fast-growing Y Combinator-backed SaaS startup is revolutionizing underwriting in the insurance space through AI and Generative AI. Its platform empowers insurance carriers in the U.S. to make faster, more accurate decisions by automating key processes and enhancing risk assessment. As they expand their AI capabilities, they are seeking a Machine Learning Engineer – 2 to build scalable ML solutions using NLP, Computer Vision, and LLM technologies.

Role Overview:
As a Machine Learning Engineer – 2, you'll take ownership of designing, developing, and deploying ML systems that power critical features across the platform. You'll lead end-to-end ML workflows, working with cross-functional teams to deliver real-world AI solutions that directly impact business outcomes.

Key Responsibilities:
- Design and develop robust AI product features aligned with user and business needs
- Maintain and enhance existing ML/AI systems
- Build and manage ML pipelines for training, deployment, monitoring, and experimentation
- Deploy scalable inference APIs and conduct A/B testing
- Optimize GPU architectures and fine-tune transformer/LLM models
- Build and deploy LLM applications tailored to real-world use cases
- Implement DevOps/MLOps best practices with tools like Docker and Kubernetes

Tech Stack & Tools:
- Machine Learning & LLMs: GPT, LLaMA, Gemini, Claude, Hugging Face Transformers, PyTorch, TensorFlow, Scikit-learn
- LLMOps & MLOps: LangChain, LangGraph, LangFlow, Langfuse, MLflow, SageMaker, LlamaIndex, Amazon Bedrock, Azure AI
- Cloud & Infrastructure: AWS, Azure, Kubernetes, Docker
- Databases: MongoDB, PostgreSQL, Pinecone, ChromaDB
- Languages: Python, SQL, JavaScript

What You'll Do:
- Collaborate with product, research, and engineering teams to build scalable AI solutions
- Implement advanced NLP and Generative AI models (e.g., RAG, Transformers)
- Monitor and optimize model performance and deployment pipelines
- Build efficient, scalable data and feature pipelines
- Stay updated on industry trends and contribute to internal innovation
- Present key insights and ML solutions to technical and business stakeholders

Requirements (Must-Have):
- 3–6 years of experience in Machine Learning and software/data engineering
- Master's degree (or equivalent) in ML, AI, or related technical fields
- Strong hands-on experience with Python, PyTorch/TensorFlow, and Scikit-learn
- Familiarity with MLOps, model deployment, and production pipelines
- Experience working with LLMs and modern NLP techniques
- Ability to work collaboratively in a fast-paced, product-driven environment
- Strong problem-solving and communication skills

Bonus Certifications:
- AWS Machine Learning Specialty
- AWS Solutions Architect – Professional
- Azure Solutions Architect Expert

Why Apply:
- Work directly with a high-caliber founding team
- Help shape the future of AI in the insurance space
- Gain ownership and visibility in a product-focused engineering role
- Opportunity to innovate with state-of-the-art AI/LLM tech
- Be part of a fast-moving team with real market traction

📍 Note: This is an onsite-only role based in Bengaluru. Remote work is not available.

Skills: computer vision, LLMs, modern NLP techniques, TensorFlow, PyTorch, Scikit-learn, Python, SQL, JavaScript, AWS, Azure, MLOps, Docker, Kubernetes, software/data engineering, machine learning, NLP, MongoDB, PostgreSQL

Posted 2 weeks ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bengaluru, Karnataka, India
Job ID: R-231679
Date posted: 17/07/2025

Job Title: Senior MLOps Engineer

Introduction to role:
Are you ready to lead the charge in transforming machine learning operations? As a Senior MLOps Engineer at Alexion, you'll report directly to the IT Director of Insights and Analytics, playing a pivotal role in our IT RDU organization. Your mission? To develop and implement brand-new machine learning solutions that propel our business forward. With your expertise, you'll design, build, and deploy production-ready models at scale, ensuring they meet the highest standards.

Accountabilities:
- Lead the development and implementation of MLOps infrastructure and tools for machine learning models.
- Collaborate with multi-functional teams to identify, prioritize, and solve business problems using machine learning techniques.
- Design, develop, and implement production-grade machine learning models that meet business requirements.
- Oversee the training, testing, and validation of machine learning models.
- Ensure that machine learning models meet high quality standards, including scalability, maintainability, and performance.
- Design and implement efficient development environments and processes for ML applications.
- Coordinate with partners and senior management to communicate updates on the progress of machine learning projects.
- Develop assets, accelerators, and thought capital for your practice by providing best-in-class frameworks and reusable components.
- Develop and maintain MLOps pipelines to automate machine learning workflows and integrate them with existing IT systems.
- Integrate Generative AI model-based solutions within the broader machine learning ecosystem, ensuring they adhere to ethical guidelines and serve intended business purposes.
- Implement robust monitoring and governance mechanisms for Generative AI model-based solutions to ensure they evolve in alignment with business needs and regulatory standards.

Essential Skills/Experience:
- Bachelor's degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field.
- 4+ years of experience developing and deploying machine learning models in production environments.
- Hands-on experience building production models, with a focus on data science operations including serverless architectures, Kubernetes, Docker/containerization, and model upkeep and maintenance.
- Familiarity with API-based application architecture and API frameworks.
- Experience with CI/CD orchestration frameworks such as GitHub Actions, Jenkins, or Bitbucket Pipelines.
- Deep understanding of the software development lifecycle and maintenance.
- Extensive experience with one or more orchestration tools (e.g., Airflow, Flyte, Kubeflow).
- Experience with MLOps tools such as experiment tracking, model registries, and feature stores (e.g., MLflow, SageMaker, Azure).
- Strong programming skills in Python and experience with libraries such as TensorFlow, Keras, or PyTorch.
- Proficiency in MLOps standard methodologies, including model training, testing, deployment, and monitoring.
- Experience with cloud computing platforms such as AWS, Azure, or GCP.
- Proficient in standard software engineering processes and agile methodologies.
- Strong understanding of data structures, algorithms, and machine learning techniques.
- Excellent communication and collaboration skills, with the ability to work in a multi-functional team and partner well with business stakeholders.
- Ability to work independently and diligently, with strong problem-solving skills.

Desirable Skills/Experience:
- Experience in the pharmaceutical industry or related fields.
- Advanced degree in Computer Science, Electrical Engineering, Mathematics, Statistics, or a related field.
- Strong understanding of parallelization and asynchronous computation.
- Strong knowledge of data science techniques and tools, including statistical analysis, data visualization, and SQL.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca's Alexion division, you'll find yourself at the forefront of biomedical science. Our commitment to transparency and ethics drives us to push boundaries and translate complex biology into transformative medicines. With global reach and potent capabilities, we're shaping the future of rare disease treatment. Here, you'll grow in an energizing culture that values innovation and connection. Empowered by tailored development programs, you'll align your growth with our mission to make a difference for underserved patients worldwide. Ready to make an impact? Apply now to join our team!

Date Posted: 18-Jul-2025
Closing Date: 30-Jul-2025

Alexion is proud to be an Equal Employment Opportunity and Affirmative Action employer. We are committed to fostering a culture of belonging where every single person can belong because of their uniqueness.
The Company will not make decisions about employment, training, compensation, promotion, and other terms and conditions of employment based on race, color, religion, creed or lack thereof, sex, sexual orientation, age, ancestry, national origin, ethnicity, citizenship status, marital status, pregnancy (including childbirth, breastfeeding, or related medical conditions), parental status (including adoption or surrogacy), military status, protected veteran status, disability, medical condition, gender identity or expression, genetic information, mental illness, or other characteristics protected by law. Alexion provides reasonable accommodations to meet the needs of candidates and employees. To begin an interactive dialogue with Alexion regarding an accommodation, please contact accommodations@Alexion.com. Alexion participates in E-Verify.

Posted 2 weeks ago

Apply

5.0 - 12.0 years

0 Lacs

Karnataka

On-site

The Data Science Lead position at our company is a key role that requires a skilled and innovative individual to join our dynamic team. We are looking for a Data Scientist with hands-on experience in Copilot Studio, M365, Power Platform, AI Foundry, and integration services. In this role, you will collaborate closely with cross-functional teams to design, build, and deploy intelligent solutions that drive business value and facilitate digital transformation.

Key Responsibilities:
- Develop and deploy AI models and automation solutions using Copilot Studio and AI Foundry.
- Utilize M365 tools and services to integrate data-driven solutions within the organization's digital ecosystem.
- Design and implement workflows and applications using the Power Platform (Power Apps, Power Automate, Power BI).
- Establish, maintain, and optimize data pipelines and integration services to ensure seamless data flow across platforms.
- Engage with stakeholders to understand business requirements and translate them into actionable data science projects.
- Communicate complex analytical insights effectively to both technical and non-technical audiences.
- Continuously explore and assess new tools and technologies to enhance existing processes and solutions.

Required Skills and Qualifications:
- 5–12 years of overall experience.
- Demonstrated expertise in Copilot Studio, M365, Power Platform, and AI Foundry.
- Strong understanding of data integration services and APIs.
- Proficiency in data modeling, data visualization, and statistical analysis.
- Experience with cloud platforms such as Azure or AWS is advantageous.
- Exceptional problem-solving and critical-thinking abilities.
- Effective communication and collaboration skills.
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.

Preferred Skills:
- Familiarity with additional AI and machine learning frameworks.
- Knowledge of agile methodologies and collaborative tools.
- Certifications in Power Platform, Azure AI, or M365 are a plus.

If you are a proactive, results-driven individual with a passion for data science and a desire to contribute to cutting-edge projects, we encourage you to apply for this challenging and rewarding opportunity as a Data Science Lead with our team.

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

