
1552 Sagemaker Jobs - Page 19

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

40.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Amgen Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today. About The Role Role Description: We are seeking a Sr Machine Learning Engineer —Amgen’s most senior individual-contributor authority on building and scaling end-to-end machine-learning and generative-AI platforms. Sitting at the intersection of engineering excellence and data-science enablement, you will design the core services, infrastructure and governance controls that allow hundreds of practitioners to prototype, deploy and monitor models—classical ML, deep learning and LLMs—securely and cost-effectively. Acting as a “player-coach,” you will establish platform strategy, define technical standards, and partner with DevOps, Security, Compliance and Product teams to deliver a frictionless, enterprise-grade AI developer experience. Roles & Responsibilities: Engineer end-to-end ML pipelines—data ingestion, feature engineering, training, hyper-parameter optimisation, evaluation, registration and automated promotion—using Kubeflow, SageMaker Pipelines, Open AI SDK or equivalent MLOps stacks. Harden research code into production-grade micro-services, packaging models in Docker/Kubernetes and exposing secure REST, gRPC or event-driven APIs for consumption by downstream applications. Build and maintain full-stack AI applications by integrating model services with lightweight UI components, workflow engines or business-logic layers so insights reach users with sub-second latency. Optimise performance and cost at scale—selecting appropriate algorithms (gradient-boosted trees, transformers, time-series models, classical statistics), applying quantisation/pruning, and tuning GPU/CPU auto-scaling policies to meet strict SLA targets. Instrument comprehensive observability—real-time metrics, distributed tracing, drift & bias detection and user-behaviour analytics—enabling rapid diagnosis and continuous improvement of live models and applications. Embed security and responsible-AI controls (data encryption, access policies, lineage tracking, explainability and bias monitoring) in partnership with Security, Privacy and Compliance teams. Contribute reusable platform components—feature stores, model registries, experiment-tracking libraries—and evangelise best practices that raise engineering velocity across squads. Perform exploratory data analysis and feature ideation on complex, high-dimensional datasets to inform algorithm selection and ensure model robustness. Partner with data scientists to prototype and benchmark new algorithms, offering guidance on scalability trade-offs and production-readiness while co-owning model-performance KPIs. Must-Have Skills: 3-5 years in AI/ML and enterprise software. Comprehensive command of machine-learning algorithms—regression, tree-based ensembles, clustering, dimensionality reduction, time-series models, deep-learning architectures (CNNs, RNNs, transformers) and modern LLM/RAG techniques—with the judgment to choose, tune and operationalise the right method for a given business problem. Proven track record selecting and integrating AI SaaS/PaaS offerings and building custom ML services at scale. 
Expert knowledge of GenAI tooling: vector databases, RAG pipelines, prompt-engineering DSLs and agent frameworks (e.g., LangChain, Semantic Kernel). Proficiency in Python and Java; containerisation (Docker/K8s); cloud (AWS, Azure or GCP) and modern DevOps/MLOps (GitHub Actions, Bedrock/SageMaker Pipelines). Strong business-case skills—able to model TCO vs. NPV and present trade-offs to executives. Exceptional stakeholder management; can translate complex technical concepts into concise, outcome-oriented narratives. Good-to-Have Skills: Experience in Biotechnology or pharma industry is a big plus Published thought-leadership or conference talks on enterprise GenAI adoption. Master’s degree in Computer Science and or Data Science Familiarity with Agile methodologies and Scaled Agile Framework (SAFe) for project delivery. Education and Professional Certifications Master’s degree with 6-11 + years of experience in Computer Science, IT or related field OR Bachelor’s degree with 8-13 + years of experience in Computer Science, IT or related field Certifications on GenAI/ML platforms (AWS AI, Azure AI Engineer, Google Cloud ML, etc.) are a plus. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills Ability to work effectively with global, virtual teams High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Ability to learn quickly, be organized and detail oriented. Strong presentation and public speaking skills. EQUAL OPPORTUNITY STATEMENT Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request an accommodation.
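For readers unfamiliar with the SageMaker Pipelines workflow this posting references, below is a minimal, illustrative sketch of a preprocess-and-train pipeline. The script names, S3 paths, IAM role, and instance types are placeholders, not Amgen's actual stack.

```python
# Illustrative only: scripts, S3 paths, role ARN and instance types are placeholders.
import sagemaker
from sagemaker.sklearn.processing import SKLearnProcessor
from sagemaker.processing import ProcessingInput, ProcessingOutput
from sagemaker.estimator import Estimator
from sagemaker.inputs import TrainingInput
from sagemaker.workflow.pipeline import Pipeline
from sagemaker.workflow.steps import ProcessingStep, TrainingStep

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder

# Feature-engineering step: runs preprocess.py inside a managed scikit-learn container.
processor = SKLearnProcessor(framework_version="1.2-1", role=role,
                             instance_type="ml.m5.xlarge", instance_count=1)
preprocess = ProcessingStep(
    name="Preprocess",
    processor=processor,
    inputs=[ProcessingInput(source="s3://my-bucket/raw/",
                            destination="/opt/ml/processing/input")],
    outputs=[ProcessingOutput(output_name="train", source="/opt/ml/processing/train")],
    code="preprocess.py",
)

# Training step: any built-in or custom training image works here.
estimator = Estimator(
    image_uri=sagemaker.image_uris.retrieve("xgboost", session.boto_region_name,
                                            version="1.7-1"),
    role=role, instance_count=1, instance_type="ml.m5.xlarge",
    output_path="s3://my-bucket/models/",
)
train = TrainingStep(
    name="Train",
    estimator=estimator,
    inputs={"train": TrainingInput(
        preprocess.properties.ProcessingOutputConfig.Outputs["train"].S3Output.S3Uri)},
)

pipeline = Pipeline(name="demo-ml-pipeline", steps=[preprocess, train],
                    sagemaker_session=session)
pipeline.upsert(role_arn=role)  # register or update the pipeline definition
pipeline.start()                # kick off an execution
```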

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business. Job Description AI Quality Assurance Lead (Evaluation & Testing) This role requires working from our local Hyderabad office 2-3x a week. Location: Hyderabad, Telangana, India About The Team The Generative AI Quality & Safety team owns ABC Fitness’s evaluation frameworks, testing pipelines, and compliance tooling for AI-driven features. We partner with product, engineering, and legal teams to ensure every LLM interaction meets rigorous standards for accuracy, safety, and performance. As our AI Quality Assurance Lead, you’ll architect hybrid (automated + human) testing systems, define GenAI quality KPIs, and embed Responsible AI principles across our fitness-tech platform. At ABC Fitness, we love entrepreneurs because we are entrepreneurs. We know how much grit it takes to start your own business and grow it into something that lasts. We roll our sleeves up, we act fast, and we learn together. What You’ll Do Design and deploy evaluation pipelines for generative AI systems using tools like OpenAI Evals, Promptfoo, and custom test harnesses. Develop hallucination detection workflows and bias-analysis frameworks for LLM outputs across 10+ languages. Partner with AI researchers to translate model capabilities into testable requirements for product teams. Implement CI/CD-integrated regression testing for AI microservices on AWS/Azure, monitoring model drift and performance degradation. Lead bug triage sessions, prioritizing issues impacting user trust, legal compliance, or revenue. Document QA protocols, failure modes, and root-cause analyses in our internal knowledge base. What You’ll Need 7+ years in QA/testing roles, with 3+ years focused on AI/ML systems (LLMs, recommendation engines, or conversational AI). Hands-on experience with GenAI evaluation tools (LangSmith, Weights & Biases) and statistical analysis (Python, SQL). Proficiency in cloud platforms (AWS SageMaker, Azure ML) and containerized testing environments (Docker, Kubernetes). Deep understanding of Responsible AI principles—fairness, transparency, privacy—and adversarial testing methodologies. Ability to mentor junior engineers and communicate technical risks to non-technical stakeholders. Certifications like AWS Certified Machine Learning Specialty or Microsoft AI Engineer are a plus. WHAT’S IN IT FOR YOU: Purpose led company with a Values focused culture – Best Life, One Team, Growth Mindset Time Off – competitive PTO plans with 15 Earned accrued leave, 12 days Sick leave, and 12 days Casual leave per year 11 Holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parent-in-laws, and including EAP counseling Life Insurance and Personal Accident Insurance Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16 Support for working women with financial aid towards crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers. We’re committed to diversity and passion, and encourage you to apply, even if you don’t demonstrate all the listed skillsets! 
ABC’S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION: ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person’s diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com ABOUT ABC: ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
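As a rough illustration of the hybrid evaluation pipelines this role describes, here is a minimal regression-style test harness for LLM outputs. The generate() stub, test prompt, and pass criteria are placeholders for whichever model client and eval tooling (OpenAI Evals, Promptfoo, LangSmith) the team actually uses.

```python
# Minimal LLM regression harness: keyword inclusion/exclusion checks per test case.
from dataclasses import dataclass

@dataclass
class EvalCase:
    prompt: str
    must_contain: list[str]      # facts the answer is expected to mention
    must_not_contain: list[str]  # simple hallucination / safety tripwires

def generate(prompt: str) -> str:
    # Placeholder: swap in the real model client (OpenAI, Bedrock, etc.) here.
    return "Members can book up to two classes per day."

def run_suite(cases: list[EvalCase]) -> float:
    passed = 0
    for case in cases:
        answer = generate(case.prompt).lower()
        ok = all(s.lower() in answer for s in case.must_contain) and \
             not any(s.lower() in answer for s in case.must_not_contain)
        passed += ok
        print(f"{'PASS' if ok else 'FAIL'}: {case.prompt[:60]}")
    return passed / len(cases)

if __name__ == "__main__":
    suite = [EvalCase("How many classes can a member book per day?",
                      must_contain=["per day"], must_not_contain=["unlimited"])]
    print(f"pass rate: {run_suite(suite):.0%}")
```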

Posted 3 weeks ago

Apply

2.0 years

2 - 6 Lacs

Gurgaon

Remote

About ProCogia: We're a diverse, close-knit team with a common pursuit of providing top-class, end-to-end data solutions for our clients. In return for your talent and expertise, you will be rewarded with a competitive salary and generous benefits, along with ample opportunity for personal development. A 'growth mindset' is something we seek in all our new hires, and it has helped drive much of our recent growth across North America. Our distinct approach is to push the limits of the value derived from data. Working within ProCogia's thriving environment will allow you to unleash your full career potential. The core of our culture is maintaining a high level of cultural equality throughout the company. Our diversity and differences allow us to create innovative and effective data solutions for our clients.
Our Core Values: Trust, Growth, Innovation, Excellence, and Ownership
Location: India (Remote) Time Zone: 12pm to 9pm IST
Job Description: We are seeking a Senior MLOps Engineer with deep expertise in AWS CDK, MLOps, and data engineering tools to join a high-impact team focused on building reusable, scalable deployment pipelines for Amazon SageMaker workloads. This role combines hands-on engineering, automation, and infrastructure expertise with strong stakeholder engagement skills. You will work closely with Data Scientists, ML Engineers, and platform teams to accelerate ML productization using best-in-class DevOps practices.
Key Responsibilities: Design, implement, and maintain reusable CI/CD pipelines for SageMaker-based ML workflows. Develop Infrastructure as Code using AWS CDK for scalable and secure cloud deployments. Build and manage integrations with AWS Lambda, Glue, Step Functions, and open table formats (Apache Iceberg, Parquet, etc.). Support the MLOps lifecycle: model packaging, deployment, versioning, monitoring, and rollback strategies. Use GitLab to manage repositories, pipelines, and infrastructure automation. Enable logging, monitoring, and cost-effective scaling of SageMaker instances and jobs. Collaborate closely with stakeholders across Data Science, Cloud Platform, and Product teams to gather requirements, communicate progress, and iterate on infrastructure designs. Ensure operational excellence through well-tested, reliable, and observable deployments.
Required Skills: 2+ years of experience in MLOps and 4+ years of experience in DevOps or Cloud Engineering, ideally with a focus on machine learning workloads. Hands-on experience with GitLab CI pipelines, artifact scanning, vulnerability checks, and API management. Experience with Continuous Integration/Continuous Delivery (CI/CD) and Test-Driven Development (TDD). Experience building microservices and API architectures using FastAPI, GraphQL, and Pydantic. Proficiency in Python 3.6 or higher and experience with Python frameworks such as Pytest. Strong experience with AWS CDK (TypeScript or Python) for IaC. Hands-on experience with Amazon SageMaker, including pipeline creation and model deployment. Solid command of AWS Lambda, AWS Glue, open table formats (such as Iceberg/Parquet), and event-driven architectures. Practical knowledge of MLOps best practices: reproducibility, metadata management, model drift, etc. Experience deploying production-grade data and ML systems. Comfortable working in a consulting/client-facing environment, with strong stakeholder management and communication skills.
Preferred Qualifications: Experience with feature stores, ML model registries, or custom SageMaker containers.
Familiarity with data lineage, cost optimization, and cloud security best practices. Background in ML frameworks (TensorFlow, PyTorch, etc.). Education: Bachelor's or master's degree in any of the following: statistics, data science, computer science, or another mathematically intensive field. ProCogia is proud to be an equal-opportunity employer. We are committed to creating a diverse and inclusive workspace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.
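As a hedged illustration of the "Infrastructure as Code using AWS CDK" work this posting describes, the sketch below defines a SageMaker real-time endpoint with the CDK v2 Python bindings. The account ID, container image, model artifact, and instance type are placeholders.

```python
# Illustrative CDK (Python) stack standing up a SageMaker real-time endpoint.
from aws_cdk import App, Stack
from aws_cdk import aws_sagemaker as sagemaker
from constructs import Construct

class SageMakerEndpointStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Model definition: points at a serving image and a packaged model artifact.
        model = sagemaker.CfnModel(
            self, "Model",
            execution_role_arn="arn:aws:iam::123456789012:role/SageMakerExecutionRole",
            primary_container=sagemaker.CfnModel.ContainerDefinitionProperty(
                image="123456789012.dkr.ecr.us-east-1.amazonaws.com/my-inference:latest",
                model_data_url="s3://my-bucket/models/model.tar.gz",
            ),
        )

        # Endpoint config: one variant, fixed instance count (auto scaling added separately).
        config = sagemaker.CfnEndpointConfig(
            self, "EndpointConfig",
            production_variants=[sagemaker.CfnEndpointConfig.ProductionVariantProperty(
                model_name=model.attr_model_name,
                variant_name="AllTraffic",
                initial_instance_count=1,
                instance_type="ml.m5.large",
                initial_variant_weight=1.0,
            )],
        )

        sagemaker.CfnEndpoint(self, "Endpoint",
                              endpoint_config_name=config.attr_endpoint_config_name)

app = App()
SageMakerEndpointStack(app, "sagemaker-endpoint-dev")
app.synth()
```

Deploying this through a GitLab CI job (cdk deploy) is what makes the pipeline reusable across environments.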

Posted 3 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

New Delhi, Delhi, India

On-site

Company Description Aestriks is a full-service software development company headquartered in Delhi NCR, India. We specialize in building scalable, reliable web, mobile, and backend systems for startups, enterprises, and side hustlers. Role Description Experience - 1-5 years AI/ML engineering (even freshers can apply) This is a full-time on-site role for an Artificial Intelligence Engineer, located in New Delhi. The AI Engineer will be responsible for designing, developing, and implementing AI-based solutions. Core Technologies: Python, SQL PyTorch or TensorFlow LangChain or LlamaIndex Hugging Face ecosystem (Transformers, PEFT, Datasets) Vector databases (Pinecone, Qdrant, Chroma) Cloud platforms (AWS Bedrock/SageMaker, GCP Vertex AI, or Azure OpenAI) LLM/GenAI Stack: Fine-tuning techniques (LoRA, QLoRA) RAG implementation and optimization LLM APIs (OpenAI, Anthropic, Google Gemini) Embedding models and similarity search Evaluation frameworks (DeepEval, LLM-as-a-Judge) MLOps tools (MLflow, Weights & Biases) Qualifications Proficiency in Pattern Recognition and Neural Networks Strong background in Computer Science and Software Development Experience with Natural Language Processing (NLP) technologies Excellent problem-solving and analytical skills Bachelor’s or Master’s degree in Computer Science, Engineering, or related field Understanding of data structures and algorithms Ability to work collaboratively in a team environment Experience in the software development lifecycle is a plus
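For context on the RAG and similarity-search items in this stack, a toy retrieval step might look like the following; the embedding model, corpus, and prompt format are illustrative assumptions rather than a prescribed implementation.

```python
# Toy RAG retrieval: embed documents, pick the most similar chunk for a query,
# then hand it to an LLM as grounding context.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")
corpus = [
    "Invoices are processed within 3 business days.",
    "Refunds require manager approval above Rs. 10,000.",
]
doc_vecs = model.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 1) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)
    scores = (doc_vecs @ q.T).ravel()  # cosine similarity (vectors are normalized)
    return [corpus[i] for i in np.argsort(-scores)[:k]]

context = retrieve("How fast are invoices handled?")[0]
prompt = f"Answer using only this context:\n{context}\n\nQuestion: How fast are invoices handled?"
print(prompt)  # this prompt would then be sent to the chosen LLM API
```

A production setup would swap the in-memory matrix for one of the vector databases listed above (Pinecone, Qdrant, Chroma).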

Posted 3 weeks ago

Apply

8.0 - 20.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Dear Aspirant, Greetings from TCS! TCS presents an excellent opportunity for a Data Science & AI/ML Architect (Traditional AI & Generative AI).
Exp: 8-20 years
Job Location: Chennai / Bangalore / Hyderabad / Mumbai / Pune / Kolkata / Delhi / Noida / Gurgaon
● Develop scalable AI/ML solutions that integrate seamlessly with existing systems and align with business objectives.
● Experience in defining and designing robust AI/ML architectures on cloud platforms such as Azure, AWS, or Google Cloud.
● Hands-on experience implementing solutions using RAG, agentic AI, LangChain, and MLOps.
● Experience in implementing ethical AI practices and ensuring responsible AI usage in solutions.
● Proficient in using tools like TensorFlow, PyTorch, Hugging Face Transformers, OpenAI GPT, Stable Diffusion, DALL-E, AWS SageMaker, Azure ML, and Azure Databricks to develop and deploy generative AI models across cloud environments.
● Experience with industry-renowned tools for AI/ML workload implementation such as Dataiku, DataRobot, RapidMiner, etc.
● Exposure to complex AI/ML solutions involving computer vision, NLP, etc.
● Collaborates with Infrastructure and Security Architects to ensure alignment with enterprise standards and designs.
● Strong oral and written communication skills; good presentation skills.
● Analytical skills; business orientation and acumen.

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position - Senior AI Engineer and Data Scientist – Palantir Platform Location - Gurgaon/Chennai/Hyderabad/Pune/Kolkata/Mumbai/Bangalore Experience - 7 + Years ABOUT HASHEDIN We are software engineers who solve business problems with a Product Mindset for leading global organizations. By combining engineering talent with business insight, we build software and products that can create new enterprise value. The secret to our success is a fast-paced learning environment, an extreme ownership spirit, and a fun culture. WHY SHOULD YOU JOIN US? With the agility of a start-up and the opportunities of an enterprise, every day at HashedIn, your work will make an impact that matters. So, if you are a problem solver looking to thrive in a dynamic fun culture of inclusion, collaboration, and high performance – HashedIn is the place to be! From learning to leadership, this is your chance to take your software engineering career to the next level. JOB TITLE - Senior AI Engineer and Data Scientist – Palantir Platform About the Role We are seeking a highly skilled Senior Data Scientist & AI Engineer to architect, develop, and deploy advanced analytics and AI/ML solutions on the Palantir platform (Foundry, AIP) and/or leading cloud platforms (AWS, Azure, GCP). You will drive end-to-end data science and AI engineering initiatives, leveraging both traditional and cutting-edge technologies to deliver impactful business outcomes. Key Responsibilities Lead end-to-end solution development for data science and AI/ML projects on Palantir Foundry/AIP or major cloud platforms (AWS, Azure, GCP). Own the full data science lifecycle: Data ingestion, cleansing, transformation, and integration from diverse sources Feature engineering, selection, and ontology/schema design Exploratory data analysis (EDA) and visualization Model development, training, tuning, validation, and deployment Production monitoring, drift detection, and model retraining Design and implement AI engineering solutions using Palantir AIP, including: Building and optimizing AIP Logic Functions Leveraging LLMs for translation, classification, and document/image parsing Implementing semantic search, schema matching, and data validation Developing and deploying computer vision models for media analysis Creating feedback loops and cross-validation workflows to enhance model performance Collaborate with cross-functional teams to translate business requirements into robust technical solutions. Mentor and guide junior team members in data science and AI engineering best practices. Stay current with emerging trends in AI/ML, cloud, and Palantir technologies. Required Skills & Experience Education: Bachelor’s or Master’s in Computer Science, Data Science, Engineering, Mathematics, or related field. Experience: 7–12 years in data science, machine learning, or AI engineering roles. Palantir Platform: 1–2 years hands-on experience with Palantir Foundry and/or AIP (preferred), or strong willingness to learn. Cloud Platforms: Proven experience with AWS, Azure, or GCP AI/ML services (e.g., SageMaker, Azure ML, Vertex AI). 
Core Data Science Competencies: Data ingestion, transformation, and integration Feature engineering and selection Ontology/schema creation and management EDA and data visualization Model training, evaluation, hyperparameter tuning, and deployment AI Engineering Competencies: Object relations and data modeling Building and deploying AI logic functions (AIP Logic) Working with LLMs (translation, classification, document parsing) Computer vision model development and image clustering Schema matching, semantic search, and data validation Feedback loop implementation and model retraining Programming: Proficiency in Python, SQL, and at least one additional language (e.g., Java, Scala, R). ML/AI Frameworks: Experience with Kubeflow, TensorFlow, PyTorch, scikit-learn, or similar. Visualization: Familiarity with matplotlib, seaborn, Power BI, Tableau, or Palantir visualization modules (Contour, Quiver). DevOps/MLOps: Experience with CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes) is a plus. Soft Skills: Strong analytical, problem-solving, and communication skills. Ability to work independently and collaboratively. Preferred Skills Experience with advanced LLMs (e.g., GPT-4, GPT-4o, Gemini, Claude, etc.) and production deployment. Familiarity with data governance, security, and compliance best practices (e.g., RBAC, audit logs, privacy). Certifications (Nice to Have) Palantir Foundry or AIP certifications AWS, Azure, or GCP ML certifications
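As a small, generic example of the train/tune/evaluate loop listed under the core data science competencies, here is a scikit-learn sketch on a public dataset; the estimator and parameter grid are arbitrary choices for illustration only.

```python
# Minimal hyperparameter tuning and evaluation example with scikit-learn.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import GridSearchCV, train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

search = GridSearchCV(
    GradientBoostingClassifier(random_state=42),
    param_grid={"n_estimators": [100, 300], "max_depth": [2, 3]},
    scoring="roc_auc", cv=5,
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("test AUC:", roc_auc_score(y_test, search.best_estimator_.predict_proba(X_test)[:, 1]))
```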

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Senior Artificial Intelligence Developer Location: Pune Experience: 3–8 Years Company: Asmadiya Technologies Pvt. Ltd. About the Role Asmadiya Technologies is seeking a Senior AI Developer to lead the design and deployment of advanced AI solutions across enterprise-grade applications. You will architect intelligent systems, mentor junior engineers, and drive innovation in the areas of machine learning, deep learning, computer vision, and large language models. If you're ready to turn AI research into impactful production systems, we want to work with you. Key Responsibilities Lead end-to-end design, development, and deployment of scalable AI/ML solutions in production environments. Architect AI pipelines and integrate models with enterprise systems and APIs. Collaborate cross-functionally with product managers, data engineers, and software teams to align AI initiatives with business goals. Optimize models for performance, scalability, and interpretability using MLOps practices. Conduct deep research and experimentation with the latest AI techniques (e.g., Transformers, Reinforcement Learning, GenAI). Review code, mentor team members, and set technical direction for AI projects. Own model governance, ethical AI considerations, and post-deployment monitoring. Required Skills & Qualifications Bachelor’s/Master’s in Computer Science, Artificial Intelligence, Data Science, or a related field. 3–8 years of hands-on experience in AI/ML, including production model deployment. Advanced Python skills and deep expertise in libraries such as TensorFlow, PyTorch, Hugging Face, and Scikit-learn. Proven experience in deploying models to production (REST APIs, containers, cloud ML services). Deep understanding of ML algorithms, optimization, statistical modeling, and deep learning. Familiarity with tools like MLflow, Docker, Kubernetes, Airflow, and CI/CD pipelines for ML. Experience with cloud AI/ML services (AWS SageMaker, GCP Vertex AI, or Azure ML). Preferred Skills Hands-on with LLMs and GenAI tools (OpenAI, LangChain, RAG architecture, vector DBs). Experience in NLP, computer vision, or recommendation systems at scale. Knowledge of model explainability (SHAP, LIME), bias detection, and AI ethics. Strong understanding of software engineering best practices, microservices, and API architecture. What We Offer ✅ Leadership role in cutting-edge AI product development ✅ Influence on AI strategy and technical roadmap ✅ Exposure to enterprise and global AI projects ✅ Fast-paced, growth-focused work environment ✅ Flexible work hours, supportive leadership, and a collaborative team Apply now by sending your resume to: careers@asmadiya.com
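As a brief illustration of the MLflow-based experiment tracking mentioned above, the sketch below logs parameters, a metric, and a model artifact; the experiment name and the model itself are placeholders.

```python
# Sketch of experiment tracking with MLflow: params, a metric, and a versioned model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

mlflow.set_experiment("demo-ai-experiments")

with mlflow.start_run(run_name="baseline-logreg"):
    X, y = load_iris(return_X_y=True)
    params = {"C": 1.0, "max_iter": 500}
    model = LogisticRegression(**params).fit(X, y)

    mlflow.log_params(params)
    mlflow.log_metric("cv_accuracy", cross_val_score(model, X, y, cv=5).mean())
    mlflow.sklearn.log_model(model, "model")  # stored artifact can later be registered/deployed
```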

Posted 3 weeks ago

Apply

2.0 years

0 Lacs

Gurgaon, Haryana, India

Remote

About ProCogia: We're a diverse, close-knit team with a common pursuit of providing top-class, end-to-end data solutions for our clients. In return for your talent and expertise, you will be rewarded with a competitive salary and generous benefits, along with ample opportunity for personal development. A 'growth mindset' is something we seek in all our new hires, and it has helped drive much of our recent growth across North America. Our distinct approach is to push the limits of the value derived from data. Working within ProCogia's thriving environment will allow you to unleash your full career potential. The core of our culture is maintaining a high level of cultural equality throughout the company. Our diversity and differences allow us to create innovative and effective data solutions for our clients.
Our Core Values: Trust, Growth, Innovation, Excellence, and Ownership
Location: India (Remote) Time Zone: 12pm to 9pm IST
Job Description: We are seeking a Senior MLOps Engineer with deep expertise in AWS CDK, MLOps, and data engineering tools to join a high-impact team focused on building reusable, scalable deployment pipelines for Amazon SageMaker workloads. This role combines hands-on engineering, automation, and infrastructure expertise with strong stakeholder engagement skills. You will work closely with Data Scientists, ML Engineers, and platform teams to accelerate ML productization using best-in-class DevOps practices.
Key Responsibilities: Design, implement, and maintain reusable CI/CD pipelines for SageMaker-based ML workflows. Develop Infrastructure as Code using AWS CDK for scalable and secure cloud deployments. Build and manage integrations with AWS Lambda, Glue, Step Functions, and open table formats (Apache Iceberg, Parquet, etc.). Support the MLOps lifecycle: model packaging, deployment, versioning, monitoring, and rollback strategies. Use GitLab to manage repositories, pipelines, and infrastructure automation. Enable logging, monitoring, and cost-effective scaling of SageMaker instances and jobs. Collaborate closely with stakeholders across Data Science, Cloud Platform, and Product teams to gather requirements, communicate progress, and iterate on infrastructure designs. Ensure operational excellence through well-tested, reliable, and observable deployments.
Required Skills: 2+ years of experience in MLOps and 4+ years of experience in DevOps or Cloud Engineering, ideally with a focus on machine learning workloads. Hands-on experience with GitLab CI pipelines, artifact scanning, vulnerability checks, and API management. Experience with Continuous Integration/Continuous Delivery (CI/CD) and Test-Driven Development (TDD). Experience building microservices and API architectures using FastAPI, GraphQL, and Pydantic. Proficiency in Python 3.6 or higher and experience with Python frameworks such as Pytest. Strong experience with AWS CDK (TypeScript or Python) for IaC. Hands-on experience with Amazon SageMaker, including pipeline creation and model deployment. Solid command of AWS Lambda, AWS Glue, open table formats (such as Iceberg/Parquet), and event-driven architectures. Practical knowledge of MLOps best practices: reproducibility, metadata management, model drift, etc. Experience deploying production-grade data and ML systems. Comfortable working in a consulting/client-facing environment, with strong stakeholder management and communication skills.
Preferred Qualifications: Experience with feature stores, ML model registries, or custom SageMaker containers.
Familiarity with data lineage, cost optimization, and cloud security best practices. Background in ML frameworks (TensorFlow, PyTorch, etc.). Education: Bachelor's or master's degree in any of the following: statistics, data science, computer science, or another mathematically intensive field. ProCogia is proud to be an equal-opportunity employer. We are committed to creating a diverse and inclusive workspace. All qualified applicants will receive consideration for employment without regard to race, national origin, gender, gender identity, sexual orientation, protected veteran status, disability, age, or other legally protected status.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

0 Lacs

Andhra Pradesh, India

On-site

At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in data science and machine learning engineering at PwC will focus on leveraging advanced analytics and machine learning techniques to extract insights from large datasets and drive data-driven decision making. You will work on developing predictive models, conducting statistical analysis, and creating data visualisations to solve complex business problems. Focused on relationships, you are building meaningful client connections, and learning how to manage and inspire others. Navigating increasingly complex situations, you are growing your personal brand, deepening technical expertise and awareness of your strengths. You are expected to anticipate the needs of your teams and clients, and to deliver quality. Embracing increased ambiguity, you are comfortable when the path forward isn’t clear, you ask questions, and you use these moments as opportunities to grow. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Respond effectively to the diverse perspectives, needs, and feelings of others. Use a broad range of tools, methodologies and techniques to generate new ideas and solve problems. Use critical thinking to break down complex concepts. Understand the broader objectives of your project or role and how your work fits into the overall strategy. Develop a deeper understanding of the business context and how it is changing. Use reflection to develop self awareness, enhance strengths and address development areas. Interpret data to inform insights and recommendations. Uphold and reinforce professional and technical standards (e.g. refer to specific PwC tax and audit guidance), the Firm's code of conduct, and independence requirements. Role Overview We are seeking a Senior Associate – AI Engineer / MLOps / LLMOps with a passion for building resilient, cloud-native AI systems. In this role, you’ll collaborate with data scientists, researchers, and product teams to build infrastructure, automate pipelines, and deploy models that power intelligent applications at scale. If you enjoy solving real-world engineering challenges at the convergence of AI and software systems, this role is for you. Key Responsibilities Architect and implement AI/ML/GenAI pipelines, automating end-to-end workflows from data ingestion to model deployment and monitoring. Develop scalable, production-grade APIs and services using FastAPI, Flask, or similar frameworks for AI/LLM model inference. Design and maintain containerized AI applications using Docker and Kubernetes. Operationalize Large Language Models (LLMs) and other GenAI models via cloud-native deployment (e.g., Azure ML, AWS Sagemaker, GCP Vertex AI). Manage and monitor model performance post-deployment, applying concepts of MLOps and LLMOps including model versioning, A/B testing, and drift detection. Build and maintain CI/CD pipelines for rapid and secure deployment of AI solutions using tools such as GitHub Actions, Azure DevOps, GitLab CI. Implement security, governance, and compliance standards in AI pipelines. Optimize model serving infrastructure for speed, scalability, and cost-efficiency. 
Collaborate with AI researchers to translate prototypes into robust production-ready solutions. Required Skills & Experience 4 to 9 years of hands-on experience in AI/ML engineering, MLOps, or DevOps for data science products. Bachelor's degree in Computer Science, Engineering, or related technical field (BE/BTech/MCA). Strong software engineering foundation with hands-on experience in Python, Shell scripting, and familiarity with ML libraries (scikit-learn, transformers, etc.). Experience deploying and maintaining LLM-based applications, including prompt orchestration, fine-tuned models, and agentic workflows. Deep understanding of containerization and orchestration (Docker, Kubernetes, Helm). Experience with CI/CD pipelines, infrastructure-as-code tools (Terraform, CloudFormation), and automated deployment practices. Proficiency in cloud platforms: Azure (preferred), AWS, or GCP – including AI/ML services (e.g., Azure ML, AWS Sagemaker, GCP Vertex AI). Experience managing and monitoring ML lifecycle (training, validation, deployment, feedback loops). Solid understanding of APIs, microservices, and event-driven architecture. Experience with model monitoring/orchestration tools (e.g, Kubeflow, MLflow). Exposure to LLMOps-specific orchestration tools such as LangChain, LangGraph, Haystack, or PromptLayer. Experience with serverless deployments (AWS Lambda, Azure Functions) and GPU-enabled compute instances. Knowledge of data pipelines using tools like Apache Airflow, Prefect, or Azure Data Factory. Exposure to logging and observability tools like ELK stack, Azure Monitor, or Datadog. Good to Have Experience implementing multi-model architecture, serving GenAI models alongside traditional ML models. Knowledge of data versioning tools like DVC, Delta Lake, or LakeFS. Familiarity with distributed systems and optimizing inference pipelines for throughput and latency. Experience with infrastructure cost monitoring and optimization strategies for large-scale AI workloads. It would be great if the candidate has exposure to full-stack ML/DL. Soft Skills & Team Expectations Strong communication and documentation skills; ability to clearly articulate technical concepts to both technical and non-technical audiences. Demonstrated ability to work independently as well as collaboratively in a fast-paced environment. A builder's mindset with a strong desire to innovate, automate, and scale. Comfortable in an agile, iterative development environment. Willingness to mentor junior engineers and contribute to team knowledge growth. Proactive in identifying tech stack improvements, security enhancements, and performance bottlenecks.
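To make the "production-grade APIs and services using FastAPI" responsibility concrete, here is a minimal inference-service sketch; the model file name and feature schema are assumptions for illustration, not the actual PwC deliverable.

```python
# Minimal FastAPI inference service wrapping a pre-trained model artifact.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-inference")
model = joblib.load("model.joblib")  # assumes a pre-trained scikit-learn model on disk

class PredictRequest(BaseModel):
    features: list[float]

class PredictResponse(BaseModel):
    prediction: float

@app.post("/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    x = np.asarray(req.features).reshape(1, -1)
    return PredictResponse(prediction=float(model.predict(x)[0]))

# Run locally with:  uvicorn app:app --host 0.0.0.0 --port 8080
```

Containerizing this service with Docker and deploying it behind Kubernetes or a managed endpoint is the next step the posting describes.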

Posted 3 weeks ago

Apply

1.0 - 2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

JR0124664 Junior Associate, Solution Engineering (Data Science) – Pune, India
Are you excited by the opportunity of using your knowledge of development to lead a team to success? Are you interested in joining a globally diverse organization where our unique contributions are recognized and celebrated, allowing each of us to thrive? Then it’s time to join Western Union as a Junior Associate, Solution Engineering.
Western Union powers your pursuit. You will be working with a team of professionals with a broad range of responsibilities covering all aspects of software engineering: requirements understanding and validation, solution design, detailed design, development, testing, and software configuration management. Build products, systems, and services that are optimized, well organized, and maintainable, and that have a high impact on our end users.
Role Responsibilities: Applying data science methods to solve business use cases, preferably in the Banking and Financial Services, Payments, and Fintech domains. Demonstrate strong capabilities in assessing business needs while providing creative and effective solutions in conformance with emerging technology standards. Partner with key business stakeholders throughout projects and ensure smooth knowledge transfer for better utilization of the end work product in business decisioning. Work with cross-functional teams to develop and implement AI/ML solutions for frequent use in decisioning. Design, build, deploy, and measure the performance of AI/ML solutions that align with stakeholder expectations, using the most appropriate and effective ML methods. Utilize expertise in analytics, machine learning, and AI to build solutions that leverage relevant data sources. Strong emphasis on customer journey, product quality, performance tuning, troubleshooting, and continuous development. Breaks down the problem into its constituent parts and evaluates the available solution options while solving problems. Can prioritize individual tasks based on project criticalities, proactively plans based on critical inputs from historical data, analyses work output, and plans for contingencies.
Role Requirements: 1-2 years of experience in Machine Learning (both supervised and unsupervised), Classification, Data/Text Mining, NLP, Decision Trees, Random Forest, Model Explainability, Bias Detection, and ML model deployment. Hands-on expertise in building and deploying machine learning models using tools like Python, AWS SageMaker, and Dataiku. Familiarity with building data pipelines in ETL tools like Matillion/Talend to support AWS model deployments would be a plus. Extra points for experience building GenAI solutions for business use cases using AWS services like Bedrock, with a good understanding of the LLM models commonly used today. Team player with strong analytical, verbal, and written communication skills. Being curious, inquisitive, and creative is what will help you excel in the role. Ability to work in a fast-paced, iterative development environment, adapt to changing business priorities, and thrive under pressure.
We make financial services accessible to humans everywhere. Join us for what’s next. Western Union is positioned to become the world’s most accessible financial services company, transforming lives and communities. We’re a diverse and passionate customer-centric team of over 8,000 employees serving 200 countries and territories, reaching customers and receivers around the globe.
More than moving money, we design easy-to-use products and services for our digital and physical financial ecosystem that help our customers move forward. Just as we help our global customers prosper, we support our employees in achieving their professional aspirations. You’ll have plenty of opportunities to learn new skills and build a career, as well as receive a great compensation package. If you’re ready to help drive the future of financial services, it’s time for Western Union. Learn more about our purpose and people at https://careers.westernunion.com/. Benefits You will also have access to short-term incentives, multiple health insurance options, accident and life insurance, and access to best-in-class development platforms, to name a few(https://careers.westernunion.com/global-benefits/). Please see the location-specific benefits below and note that your Recruiter may share additional role-specific benefits during your interview process or in an offer of employment. Your India Specific Benefits Include Employees Provident Fund [EPF] Gratuity Payment Public holidays Annual Leave, Sick leave, Compensatory leave, and Maternity / Paternity leave Annual Health Check-up Hospitalization Insurance Coverage (Mediclaim) Group Life Insurance, Group Personal Accident Insurance Coverage, Business Travel Insurance Cab Facility Relocation Benefit Western Union values in-person collaboration, learning, and ideation whenever possible. We believe this creates value through common ways of working and supports the execution of enterprise objectives which will ultimately help us achieve our strategic goals. By connecting face-to-face, we are better able to learn from our peers, solve problems together, and innovate. Our Hybrid Work Model categorizes each role into one of three categories. Western Union has determined the category of this role to be Hybrid. This is defined as a flexible working arrangement that enables employees to divide their time between working from home and working from an office location. The expectation for Hybrid roles in the Philippines is to work from the office at least 70% of the employee’s working days per month. We are passionate about diversity. Our commitment is to provide an inclusive culture that celebrates the unique backgrounds and perspectives of our global teams while reflecting the communities we serve. We do not discriminate based on race, color, national origin, religion, political affiliation, sex (including pregnancy), sexual orientation, gender identity, age, disability, marital status, or veteran status. The company will provide accommodation for applicants, including those with disabilities, during the recruitment process, following applicable laws. Estimated Job Posting End Date 07-13-2025 This application window is a good-faith estimate of the time that this posting will remain open. This posting will be promptly updated if the deadline is extended or the role is filled.

Posted 3 weeks ago

Apply

1.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Specialist
Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: We are looking for an AWS Cloud Engineer with DevOps expertise and hands-on experience supporting AI/ML workloads. You will design, automate, and manage AWS infrastructure, implement DevOps best practices, and enable scalable AI/ML solutions.
Responsibilities:
· Design, deploy, and manage AWS cloud infrastructure.
· Build and maintain CI/CD pipelines (Jenkins, CodePipeline, etc.).
· Use Infrastructure as Code (Terraform, CloudFormation).
· Containerize and orchestrate applications (Docker, Kubernetes/EKS).
· Support and optimize AI/ML workloads (SageMaker, Bedrock).
· Collaborate with data scientists on MLOps and model deployment.
· Ensure security, monitoring, and cost optimization.
Mandatory skill sets: AWS services (EC2, S3, Lambda, SageMaker, EKS, Fargate, API Gateway) - CI/CD: Jenkins, Git, automation scripting (Python/Bash) - IaC: Terraform, CloudFormation - Containers: Docker, Kubernetes/EKS - AI/ML: SageMaker, MLOps tools - Security: IAM, encryption - Monitoring: CloudWatch, Prometheus
Preferred skill sets: AWS certifications, experience with AI/ML pipelines, and strong communication skills.
Years of experience required: 1-4 Years of experience Education qualification: Graduate Engineer or Management Graduate Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master Degree, Bachelor Degree Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills AWS Devops Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Reasoning, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Documentation Development, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Optimism, Performance Assessment, Performance Management Software, Problem Solving, Product Management, Product Operations, Project Delivery {+ 11 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Available for Work Visa Sponsorship? Government Clearance Required? Job Posting End Date
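As a small automation example in the spirit of the Python scripting and SageMaker workload support listed above, the following boto3 sketch checks an endpoint's status and sends a test payload. The endpoint name and payload shape are placeholders.

```python
# Check a SageMaker endpoint's health, then invoke it with a sample JSON payload.
import json
import boto3

ENDPOINT = "my-model-endpoint"  # placeholder name

sm = boto3.client("sagemaker")
status = sm.describe_endpoint(EndpointName=ENDPOINT)["EndpointStatus"]
print(f"{ENDPOINT}: {status}")

if status == "InService":
    runtime = boto3.client("sagemaker-runtime")
    resp = runtime.invoke_endpoint(
        EndpointName=ENDPOINT,
        ContentType="application/json",
        Body=json.dumps({"instances": [[0.1, 0.2, 0.3]]}),
    )
    print(resp["Body"].read().decode())
```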

Posted 3 weeks ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: OpZen is an innovative early-stage startup founded by a team of visionary entrepreneurs with a stellar track record of building successful ventures such as Mitchell Madison, Opera Solutions, and Zenon. Our mission is to revolutionize the finance industry through the creation of groundbreaking AI-driven products and the provision of elite consulting services. We are committed to harnessing the power of advanced technology to deliver transformative solutions that drive unparalleled efficiency, foster innovation, and spur growth for our clients. Join us on our exciting journey to redefine the future of finance and leave an indelible mark on the industry.
Role: Lead/Manager
Overview: We are seeking a visionary and dynamic individual to lead our AI initiatives and data-driven strategies. This role is crucial in shaping the future of our company by leveraging advanced technologies to drive innovation and growth. The ideal candidate will possess a deep understanding of AI, machine learning, and data analytics, along with a proven track record in leadership and strategic execution.
Key Responsibilities: Self-Driven Initiative: Take ownership of projects and drive them to successful completion with minimal supervision, demonstrating a proactive and entrepreneurial mindset. Stakeholder Communication: Present insights, findings, and strategic recommendations to senior management and key stakeholders, fostering a data-driven decision-making culture. Executive Collaboration: Report directly to the founders and collaborate with other senior leaders to shape the company's direction and achieve our ambitious goals. Innovation & Problem-Solving: Foster a culture of innovative thinking and creative problem-solving to tackle complex challenges and drive continuous improvement. AI Research & Development: Oversee AI research and development initiatives, ensuring the integration of cutting-edge technologies and methodologies. Data Management: Ensure effective data collection, management, and analysis to support AI-driven decision-making and product development.
Required Skills and Qualifications: Bachelor's degree from a Tier 1 institution or an MBA from a recognized institution. Proven experience in a managerial role, preferably in a startup environment. Strong leadership and team management skills. Excellent strategic thinking and problem-solving abilities. Exceptional communication and interpersonal skills. Ability to thrive in a fast-paced, dynamic environment. Entrepreneurial mindset with a passion for innovation and growth. Extensive experience with AI technologies, machine learning, and data analytics. Proficiency in programming languages such as Python, R, or similar. Familiarity with data visualization tools like Tableau, Power BI, or similar. Strong understanding of data governance, privacy, and security best practices.
Technical Skills: Machine Learning Frameworks: Expertise in frameworks such as TensorFlow, PyTorch, or Scikit-learn. Data Processing: Proficiency in using tools like Apache Kafka, Apache Flink, or Apache Beam for real-time data processing. Database Management: Experience with SQL and NoSQL databases, including MySQL, PostgreSQL, MongoDB, or Cassandra. Big Data Technologies: Hands-on experience with Hadoop, Spark, Hive, or similar big data technologies. Cloud Computing: Strong knowledge of cloud services and infrastructure, including AWS (S3, EC2, SageMaker), Google Cloud (BigQuery, Dataflow), or Azure (Data Lake, Machine Learning).
DevOps and MLOps: Familiarity with CI/CD pipelines, containerization (Docker, Kubernetes), and orchestration tools for deploying and managing machine learning models. Data Visualization: Advanced skills in data visualization tools such as Tableau, Power BI, or D3.js to create insightful and interactive dashboards. Natural Language Processing (NLP): Experience with NLP techniques and tools like NLTK, SpaCy, or BERT for text analysis and processing. Large Language Models (LLMs): Proficiency in working with LLMs such as GPT-3, GPT-4, or similar for natural language understanding and generation tasks. Computer Vision: Knowledge of computer vision technologies and libraries such as OpenCV, YOLO, or TensorFlow Object Detection API. Preferred Experience: Proven Track Record: Demonstrated success in scaling businesses or leading teams through significant growth phases, showcasing your ability to drive impactful results. AI Expertise: Deep familiarity with the latest AI tools and technologies, including Generative AI applications, with a passion for staying at the forefront of technological advancements. Startup Savvy: Hands-on experience in early-stage startups, with a proven ability to navigate the unique challenges and seize the opportunities that come with building a company from the ground up. Finance Industry Insight: Extensive experience in the finance industry, with a comprehensive understanding of its dynamics, challenges, and opportunities, enabling you to drive innovation and deliver exceptional value to clients. Why Join Us: Opportunity to work closely with experienced founders and learn from their entrepreneurial journey. Make a significant impact on a growing company and shape its future. Collaborative and innovative work environment. Lucrative compensation package including competitive salary and equity options.

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Greater Kolkata Area

On-site

Intelligent Image Management Inc. (IIMI) is an IT services company that reimagines and digitizes data through document automation using modern, cloud-native app development. It is one of the world's leading multinational IT services companies, with offices in the USA, Singapore, India, Sri Lanka, Bangladesh, Nepal, and Kenya. IIMI employs over 7,000 people worldwide, and its mission is to advance data process automation. US and European Fortune 500 companies are among our clients. Become part of a team that puts its people first. Founded in 1996, Intelligent Image Management Inc. has always believed in its people. We strive to foster an environment where all feel welcome, supported, and empowered to be innovative and reach their full potential. Website: https://www.iimdirect.com/
About the Role: We are looking for a highly experienced and driven Senior Data Scientist to join our advanced AI and Data Science team. You will play a key role in building and deploying machine learning models, especially in the areas of computer vision, document image processing, and large language models (LLMs). This role requires a combination of hands-on technical skills and the ability to design scalable ML solutions that solve real-world business problems.
Key Responsibilities: Design and develop end-to-end machine learning pipelines, from data preprocessing and feature engineering to model training, evaluation, and deployment. Lead complex ML projects using deep learning, computer vision, and document analysis methods (e.g., object detection, image classification, segmentation, layout analysis). Build solutions for document image processing using tools like Google Cloud Vision, AWS Textract, and OCR libraries. Apply large language models (LLMs), both open-source (e.g., LLaMA, Mistral, Falcon, GPT-NeoX) and closed-source (e.g., OpenAI GPT, Claude, Gemini), to automate text understanding, extraction, summarization, classification, and question-answering tasks. Integrate LLMs into applications for intelligent document processing, NER, semantic search, embeddings, and chat-based interfaces. Use Python (along with libraries such as OpenCV, PyTorch, TensorFlow, and Hugging Face Transformers) to build scalable, multi-threaded data processing pipelines. Implement and maintain MLOps practices using tools such as MLflow, AWS SageMaker, GCP AI Platform, and containerized deployments. Collaborate with engineering and product teams to embed ML models into scalable production systems. Stay up to date with emerging research and best practices in machine learning, LLMs, and document AI.
Required Qualifications: Bachelor's or master's degree in computer science, mathematics, statistics, engineering, or a related field. Minimum 5 years of experience in machine learning, data science, or AI engineering roles. Strong background in deep learning, computer vision, and document image processing. Practical experience with LLMs (open and closed source), including fine-tuning, prompt engineering, and inference optimization. Solid grasp of MLOps, model versioning, and model lifecycle management. Expertise in Python, with strong knowledge of ML and CV libraries. Experience with Java and multi-threading is a plus. Familiarity with NLP tasks including Named Entity Recognition, classification, embeddings, and text summarization. Experience with cloud platforms (AWS/GCP) and their ML toolkits.
Preferred Skills: • Experience with retrieval-augmented generation (RAG), vector databases, and LLM evaluation tools.
• Exposure to CI/CD for ML workflows and best practices in production ML. • Ability to mentor junior team members and lead cross-functional AI projects. Work Location: Work from Office Send cover letter, complete resume, and references to email: tech.jobs@iimdirect.com Industry: Outsourcing/Offshoring Employment Type Full-time
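For illustration of the document-image-processing workflow described above, here is a minimal preprocessing-plus-OCR sketch. It uses OpenCV with pytesseract as a simple stand-in for the managed services named in the posting (Google Cloud Vision, AWS Textract), and the input file is a placeholder; real pipelines would add deskewing and layout analysis.

```python
# Clean up a scanned document image, then extract text with OCR.
import cv2
import pytesseract

image = cv2.imread("invoice_scan.png")                 # placeholder input file
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Otsu binarization usually improves OCR quality on scanned documents.
_, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
denoised = cv2.medianBlur(binary, 3)                   # remove salt-and-pepper noise

text = pytesseract.image_to_string(denoised)
print(text)
```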

Posted 3 weeks ago

Apply

7.0 - 10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Location: Pune/Bangalore/Noida/Hyderabad/Mumbai/Chennai
Experience: 7-10 years
Shift: UK shift
JD
• Bachelor’s degree in computer science, engineering, or a related field, or equivalent practical experience, with at least 7-10 years of combined experience as a Python and MLOps Engineer or in similar roles.
• Strong programming skills in Python.
• Proficiency with AWS and/or Azure cloud platforms, including services such as EC2, S3, Lambda, SageMaker, Azure ML, etc.
• Solid understanding of API programming and integration.
• Hands-on experience with CI/CD pipelines, version control systems (e.g., Git), and code repositories.
• Knowledge of containerization using Docker and Kubernetes, and of orchestration tools.
• Proficiency in creating data visualizations, specifically for graphs and networks, using tools like Matplotlib, Seaborn, or Plotly.
• Understanding of data manipulation and analysis using libraries such as Pandas and NumPy.
• Problem-solving, analytical expertise, and troubleshooting abilities with attention to detail.
• Demonstrates VACC (Visionary, Catalyst, Architect, Coach) leadership behaviors:
• Good self-awareness as well as system awareness.
• Proactively asks for and gives feedback.
• Strives to demonstrate strategic thinking as well as good insight into business and external trends.
• Focuses on outcomes; defines and delivers the highest-impact pipeline, team, talent, and organizational outcomes.
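As a short example of the graph-and-network visualization requirement above, the sketch below renders a small dependency graph with Matplotlib; networkx is an assumed helper for building the graph, and the edge list is made up.

```python
# Draw a toy service-dependency graph and save it as an image.
import matplotlib.pyplot as plt
import networkx as nx

edges = [("api", "auth"), ("api", "db"), ("auth", "db"), ("api", "cache")]
g = nx.Graph(edges)

pos = nx.spring_layout(g, seed=42)               # deterministic layout for reproducibility
nx.draw_networkx_nodes(g, pos, node_color="#4c72b0", node_size=900)
nx.draw_networkx_edges(g, pos)
nx.draw_networkx_labels(g, pos, font_color="white")
plt.axis("off")
plt.tight_layout()
plt.savefig("service_graph.png", dpi=150)
```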

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

On-site

About Us: Soul AI is a pioneering company founded by IIT Bombay and IIM Ahmedabad alumni, with a strong founding team from IITs, NITs, and BITS. We specialize in delivering high-quality human-curated data and AI-first scaled operations services. Based in San Francisco and Hyderabad, we are a fast-moving team on a mission to build AI for Good, driving innovation and societal impact.
Role Overview: We are seeking a skilled AI/ML Engineer to join our client's team (Top Tier Consulting Firm) and operationalize our machine learning workflows. You will work closely with data scientists, engineers, and product teams to design, deploy, monitor, and maintain scalable ML pipelines in production. The ideal candidate has a strong foundation in ML systems, DevOps principles, and cloud-native technologies.
Key Responsibilities: Collaborate with cross-functional teams to define ML problem statements and translate them into technical tasks. Design and implement robust data pipelines for collecting, cleaning, and validating large datasets. Develop, train, and evaluate machine learning models using appropriate algorithms and frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Package and deploy ML models as scalable services or APIs, ensuring performance, security, and reliability. Monitor and maintain models in production, including retraining and performance tuning. Implement best practices in MLOps: experiment tracking, versioning, CI/CD pipelines, and model monitoring. Document methodologies, workflows, and technical decisions clearly for both technical and non-technical audiences. Stay up to date with industry trends and contribute to evaluating and adopting new tools or frameworks where relevant.
Required Skills: Strong programming skills in Python and experience with ML libraries (scikit-learn, PyTorch, TensorFlow). Solid understanding of machine learning fundamentals, including data preprocessing, feature engineering, model selection, and evaluation. Hands-on experience with SQL and data manipulation using tools like pandas. Experience deploying models in production environments, including serving models via REST APIs. Familiarity with containerization (Docker) and orchestration (Kubernetes is a plus). Knowledge of cloud platforms (AWS, GCP, or Azure) and experience with relevant ML tools (e.g., SageMaker, Vertex AI) is a plus. Good understanding of software engineering best practices: version control, testing, code reviews.
Nice-to-Have: Experience with big data frameworks (e.g., Spark) for processing large datasets. Knowledge of experiment tracking tools (e.g., MLflow, Weights & Biases). Familiarity with MLOps workflows and tools for monitoring data/model drift. Domain-specific expertise in NLP, Computer Vision, or Time Series Analysis.
Educational Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, or a related field.
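To make "package and deploy ML models as scalable services or APIs" concrete, here is a minimal serving sketch with FastAPI and a pre-trained scikit-learn model; the model file name and feature layout are illustrative, not taken from the posting.

```python
# Minimal sketch: serve a pre-trained scikit-learn model behind a REST endpoint.
# The model file and feature layout are illustrative assumptions.
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-service")
model = joblib.load("model.joblib")  # trained elsewhere and packaged with the container image

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    X = np.array(req.features, dtype=float).reshape(1, -1)
    return {"prediction": model.predict(X).tolist()}

# Run locally with: uvicorn service:app --host 0.0.0.0 --port 8000
```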

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems. PwC US - Acceleration Center is seeking a highly skilled MLOps/LLMOps Engineer who will play a critical role in the deployment, scaling, and maintenance of Generative AI models. This position involves close collaboration with data scientists, ML/GenAI engineers, and DevOps teams to ensure seamless integration and operation of GenAI models within production environments at PwC as well as our clients. The ideal candidate will have a strong background in MLOps practices, along with experience and interest in Generative AI technologies.
Years of Experience: Candidates with 4+ years of hands-on experience.
Core Qualifications: 3+ years of hands-on experience developing and deploying AI models in production environments, with 1 year of experience in developing proofs of concept and prototypes. Strong background in software development, with experience in building and maintaining scalable, distributed systems. Strong programming skills in languages like Python and familiarity with ML frameworks and libraries (e.g., TensorFlow, PyTorch). Knowledge of containerization and orchestration tools like Docker and Kubernetes. Familiarity with cloud platforms (AWS, GCP, Azure) and their ML/AI service offerings. Experience with continuous integration and delivery tools such as Jenkins, GitLab CI/CD, or CircleCI. Experience with infrastructure-as-code tools like Terraform or CloudFormation.
Technical Skills (Must Have): Proficiency with MLOps tools such as MLflow, Kubeflow, Airflow, or similar for managing machine learning workflows and lifecycle. Practical understanding of generative AI frameworks (e.g., Hugging Face Transformers, OpenAI GPT, DALL-E). Expertise in containerization technologies like Docker and orchestration tools such as Kubernetes for scalable model deployment. Expertise in MLOps and LLMOps practices, including CI/CD for ML models. Strong knowledge of one or more cloud-based AI services (e.g., AWS SageMaker, Azure ML, Google Vertex AI).
Nice To Have: Experience with advanced GenAI applications such as natural language generation, image synthesis, and creative AI. Familiarity with experiment tracking and model registry tools. Knowledge of high-performance computing and parallel processing techniques. Contributions to open-source MLOps or GenAI projects.
Key Responsibilities: Develop and implement MLOps strategies tailored for Generative AI models to ensure robustness, scalability, and reliability. Design and manage CI/CD pipelines specialized for ML workflows, including the deployment of generative models such as GANs, VAEs, and Transformers. Monitor and optimize the performance of AI models in production, employing tools and techniques for continuous validation, retraining, and A/B testing. Collaborate with data scientists and ML researchers to understand model requirements and translate them into scalable operational frameworks.
Implement best practices for version control, containerization, infrastructure automation, and orchestration using industry-standard tools (e.g., Docker, Kubernetes). Ensure compliance with data privacy regulations and company policies during model deployment and operation. Troubleshoot and resolve issues related to ML model serving, data anomalies, and infrastructure performance. Stay up to date with the latest developments in MLOps and Generative AI, bringing innovative solutions to enhance our AI capabilities.
Project Delivery: Design and implement scalable and reliable deployment pipelines for ML/GenAI models to move them from development to production environments. Ensure models are deployed with appropriate versioning and rollback mechanisms to maintain stability and ease of updates. Oversee cloud infrastructure setup and automated data ingestion pipelines, ensuring they meet the needs of GenAI workloads in terms of compute power, storage, and network requirements. Create detailed documentation for deployment pipelines, monitoring setups, and operational procedures to ensure transparency and ease of maintenance. Actively participate in retrospectives to identify areas for improvement in the deployment process.
Client Engagement: Collaborate with clients to understand their business needs, goals, and specific requirements for Generative AI solutions. Collaborate with solution architects to design MLOps/LLMOps architectures that meet client needs. Present technical approaches and results to both technical and non-technical stakeholders. Conduct training sessions and workshops for client teams to help them understand, operate, and maintain the deployed AI models. Create comprehensive documentation and user guides to assist clients in managing and leveraging the Generative AI solutions effectively.
Innovation And Knowledge Sharing: Stay updated with the latest trends, research, and advancements in MLOps/LLMOps and Generative AI, and apply this knowledge to improve existing systems and processes. Develop internal tools and frameworks to accelerate ML/GenAI model development and deployment. Mentor junior team members on MLOps/LLMOps best practices. Contribute to technical blog posts and whitepapers on MLOps/LLMOps.
Professional And Educational Background: Any graduate / BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA
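As one concrete shape of the model-versioning and "CI/CD for ML models" practices listed above, the sketch below logs a training run and registers the resulting model with MLflow; the experiment name, model name, and data are illustrative, and registering a model assumes a tracking server with a model registry is configured.

```python
# Minimal sketch: track a training run and register the model in the MLflow registry.
# Experiment name, model name and data are illustrative; the model registry requires
# a registry-backed MLflow tracking server.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("genai-support-classifier")
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    # Registering under a name lets a CI/CD job promote specific versions to staging/production.
    mlflow.sklearn.log_model(model, "model", registered_model_name="support-classifier")
```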

Posted 3 weeks ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary. Location: Pune/Bangalore/Hyderabad/Chennai/Mumbai/Noida. Experience: 9+ Yrs.
Position Summary: Looking for a Solution Designer experienced in GenAI, AWS SageMaker, GenAI Gateway, and Bedrock to design, implement, and optimize GenAI-based solutions. This role requires expertise in designing workflows for embedding generation, LLM-based responses, and API integrations, and the candidate should be well versed in traditional AI/ML models. The Solution Designer will also evaluate various embedding and LLM/AI models to ensure optimal performance and accuracy for client needs.
Key Responsibilities: Solution Design: Architect GenAI/AI solutions on AWS (SageMaker, GenAI Gateway, Bedrock), designing workflows for embedding generation, LLM-based document processing, and storing embeddings in vector databases (e.g., OpenSearch, Pinecone). API Integration: Configure secure API access and environment settings to enable seamless SageMaker/Bedrock integration. Model Evaluation: Assess and select suitable embedding and LLM/AI models to meet specific client requirements, ensuring performance, accuracy, and efficiency. Documentation & Collaboration: Maintain comprehensive documentation, work closely with stakeholders, and provide technical guidance on solution implementation.
Skills: Technical Skills: Proficiency in AWS, API integrations, model evaluation, and LLMs. AI/ML Knowledge: Skilled in traditional AI/ML models, NLP, prompt engineering, and pattern recognition. Programming: Expertise in Python, REST APIs, and secure environment configuration. Soft Skills: Strong communication, organization, and problem-solving abilities for effective collaboration and documentation. Domain: Telecom Networks (especially Network Performance Management)
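A minimal sketch of the embedding-generation-to-vector-store workflow this role describes, assuming Amazon Titan text embeddings via the Bedrock runtime API and an OpenSearch index that already defines a knn_vector field; the model ID, endpoint, index name, and field names are illustrative assumptions.

```python
# Minimal sketch: generate an embedding with Bedrock and store it in OpenSearch.
# Model ID, endpoint, index name and field names are assumptions for illustration;
# the request/response shape shown is for Amazon Titan text embeddings.
import json
import boto3
from opensearchpy import OpenSearch

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
search = OpenSearch(hosts=[{"host": "my-opensearch-domain", "port": 443}], use_ssl=True)

def embed(text: str) -> list[float]:
    # Titan text embeddings expect {"inputText": ...} and return an "embedding" array.
    response = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v1",
        body=json.dumps({"inputText": text}),
    )
    return json.loads(response["body"].read())["embedding"]

doc = "Cell site KPI report for week 32: throughput degraded on sector B."
search.index(
    index="network-docs",  # index assumed to define a knn_vector mapping for "embedding"
    body={"content": doc, "embedding": embed(doc)},
)
```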

Posted 3 weeks ago

Apply

0.0 - 2.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Job Description. In this role, your responsibilities will be: Determine coding design requirements from functional and detailed specifications. Analyze software bugs and effect code repairs. Design, develop, and deliver specified software features. Produce usable documentation and test procedures. Deal directly with end clients to assist in software validation and deployment. Explore and evaluate opportunities to integrate AI/ML capabilities into the LMS suite, particularly for predictive analytics, optimization, and automation.
Who You Are: You act quickly and decisively in constantly evolving, unexpected situations. You adjust communication content and style to meet the needs of diverse partners. You always keep the end in sight and put in extra effort to meet deadlines. You analyze multiple and diverse sources of information to define problems accurately before moving to solutions. You observe situational and group dynamics and select the best-fit approach.
For This Role, You Will Need: BS in Computer Science, Engineering, Mathematics or technical equivalent. 0 to 2 years of experience required. Strong problem-solving skills. Strong programming skills (.NET stack, C#, ASP.NET, web development technologies, HTML5, JavaScript, WCF, MS SQL Server Transact-SQL). Strong communication skills (client facing). Flexibility to work harmoniously with a small development team. Familiarity with AI/ML concepts and techniques, including traditional machine learning algorithms (e.g., regression, classification, clustering) and modern Large Language Models (LLMs). Experience with machine learning libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Experience in developing and deploying machine learning models. Understanding of data preprocessing, feature engineering, and model evaluation techniques.
Preferred Qualifications that Set You Apart: Experience with liquid pipeline operations or volumetric accounting is a plus. Knowledge of the oil and gas pipeline industry is also a plus. Experience with cloud-based AI/ML services (e.g., Azure Machine Learning, AWS SageMaker, Google Cloud AI Platform) is a plus.
Our Offer to You: By joining Emerson, you will be given the opportunity to make a difference through the work you do. Emerson's compensation and benefits programs are designed to be competitive within the industry and local labor markets. We also offer comprehensive medical and insurance coverage to meet the needs of our employees. We are committed to creating a global workplace that supports diversity and equity and embraces inclusion. We welcome foreign nationals to join us through our Work Authorization Sponsorship. We attract, develop, and retain exceptional people in an inclusive environment, where all employees can reach their greatest potential. We are dedicated to the ongoing development of our employees because we know that it is critical to our success as a global company. We have established our Remote Work Policy for eligible roles to promote work-life balance through a hybrid work set-up where our team members can take advantage of working both from home and at the office. Safety is paramount to us, and we are relentless in our pursuit to provide a safe working environment across our global network and facilities. Through our benefits, development opportunities, and an inclusive and safe work environment, we aim to create an organization our people are proud to represent.
Our Commitment to Diversity, Equity & Inclusion At Emerson, we are committed to fostering a culture where every employee is valued and respected for their unique experiences and perspectives. We believe a diverse and inclusive work environment contributes to the rich exchange of ideas and diversity of thoughts, that inspires innovation and brings the best solutions to our customers. This philosophy is fundamental to living our company’s values and our responsibility to leave the world in a better place. Learn more about our Culture & Values and about Diversity, Equity & Inclusion at Emerson . If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com . WHY EMERSON Our Commitment to Our People At Emerson, we are motivated by a spirit of collaboration that helps our diverse, multicultural teams across the world drive innovation that makes the world healthier, safer, smarter, and more sustainable. And we want you to join us in our bold aspiration. We have built an engaged community of inquisitive, dedicated people who thrive knowing they are welcomed, trusted, celebrated, and empowered to solve the world’s most complex problems — for our customers, our communities, and the planet. You’ll contribute to this vital work while further developing your skills through our award-winning employee development programs. We are a proud corporate citizen in every city where we operate and are committed to our people, our communities, and the world at large. We take this responsibility seriously and strive to make a positive impact through every endeavor. At Emerson, you’ll see firsthand that our people are at the center of everything we do. So, let’s go. Let’s think differently. Learn, collaborate, and grow. Seek opportunity. Push boundaries. Be empowered to make things better. Speed up to break through. Let’s go, together. About Emerson Emerson is a global leader in automation technology and software. Through our deep domain expertise and legacy of flawless execution, Emerson helps customers in critical industries like life sciences, energy, power and renewables, chemical and advanced factory automation operate more sustainably while improving productivity, energy security and reliability. With global operations and a comprehensive portfolio of software and technology, we are helping companies implement digital transformation to measurably improve their operations, conserve valuable resources and enhance their safety. We offer equitable opportunities, celebrate diversity, and embrace challenges with confidence that, together, we can make an impact across a broad spectrum of countries and industries. Whether you’re an established professional looking for a career change, an undergraduate student exploring possibilities, or a recent graduate with an advanced degree, you’ll find your chance to make a difference with Emerson. Join our team – let’s go!

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Line of Service: Advisory. Industry/Sector: Not Applicable. Specialism: SAP. Management Level: Senior Manager.
Job Description & Summary: At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a SAP consulting generalist at PwC, you will focus on providing consulting services across various SAP applications to clients, analysing their needs, implementing software solutions, and offering training and support for effective utilisation of SAP applications. Your versatile knowledge will allow you to assist clients in optimising operational efficiency and achieving their strategic objectives.
Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.
Job Description & Summary: A career within….
Responsibilities:
AI Architecture & Development: Design and implement generative AI models (e.g., Transformers, GANs, VAEs, Diffusion Models). Architect Retrieval-Augmented Generation (RAG) systems and multi-agent frameworks. Fine-tune pre-trained models for domain-specific tasks (e.g., NLP, vision, genomics). Ensure model scalability, performance, and interpretability.
System Integration & Deployment: Integrate AI models into full-stack applications using modern frameworks (React, Node.js, Django). Deploy models using cloud platforms (AWS SageMaker, Azure ML, GCP Vertex AI). Implement CI/CD pipelines and containerization (Docker, Kubernetes).
Collaboration & Leadership: Work with data scientists, engineers, and domain experts to translate business/scientific needs into AI solutions. Lead architectural decisions across the model lifecycle: training, deployment, monitoring, and versioning. Provide technical mentorship and guidance to junior team members.
Compliance & Documentation: Ensure compliance with data privacy standards (HIPAA, GDPR). Maintain comprehensive documentation for models, systems, and workflows.
Required Qualifications: Bachelor's or Master's in Computer Science, Engineering, Data Science, or a related field. 5+ years in AI/ML development; 3+ years in architecture or technical leadership roles.
· Proficiency in Python, JavaScript, and frameworks like TensorFlow, PyTorch. · Experience with cloud platforms (AWS, Azure, GCP) and DevOps tools. · Strong understanding of NLP, computer vision, or life sciences applications. - Preferred Qualifications: · Experience in domains like marketing, capital markets, or life sciences (e.g., drug discovery, genomics). · Familiarity with Salesforce Einstein and other enterprise AI tools. · Knowledge of regulatory standards (FDA, EMA) and ethical AI practices. · Experience with multimodal data (text, image, genomic, clinical). Mandatory skill sets: Gen AI Architect Preferred skill sets: Gen AI Years of experience required: 10+yrs Education qualification: Btech MBA MCA MTECH Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Master of Business Administration, Chartered Accountant Diploma, Bachelor of Engineering, Master of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills SAP Gen AI Hub Optional Skills Accepting Feedback, Accepting Feedback, Active Listening, Analytical Thinking, Application Software, Business Model Development, Business Process Modeling, Business Systems, Coaching and Feedback, Communication, Creativity, Developing Training Materials, Embracing Change, Emerging Technologies, Emotional Regulation, Empathy, Enterprise Integration, Enterprise Software, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Influence, Innovative Design, Intellectual Curiosity {+ 26 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date

Posted 3 weeks ago

Apply

0 years

6 - 8 Lacs

Hyderābād

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change—we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant - MLOps Engineer! In this role, you will lead the automation and orchestration of our machine learning infrastructure and CI/CD pipelines on public cloud (preferably AWS). This role is essential for enabling scalable, secure, and reproducible deployments of both classical AI/ML models and Generative AI solutions in production environments.
Responsibilities: Develop and maintain CI/CD pipelines for AI/GenAI models on AWS using GitHub Actions and CodePipeline (not limited to these). Automate infrastructure provisioning using IaC tools (Terraform, Bicep, etc.) on any cloud platform (Azure or AWS). Package and deploy AI/GenAI models on SageMaker, Lambda, and API Gateway. Write Python scripts for automation, deployment, and monitoring. Engage in the design, development and maintenance of data pipelines for various AI use cases. Actively contribute to key deliverables as part of an agile development team. Set up model monitoring, logging, and alerting (e.g., drift, latency, failures). Ensure model governance, versioning, and traceability across environments. Collaborate with others to source, analyse, test and deploy data processes. Experience in GenAI projects.
Qualifications we seek in you! Minimum Qualifications: Experience with MLOps practices. Degree/qualification in Computer Science or a related field, or equivalent work experience. Experience developing, testing, and deploying data pipelines. Strong Python programming skills. Hands-on experience deploying 2-3 AI/GenAI models on AWS. Familiarity with LLM APIs (e.g., OpenAI, Bedrock) and vector databases. Clear and effective communication skills to interact with team members, stakeholders and end users.
Preferred Qualifications/Skills: Experience with Docker-based deployments. Exposure to model monitoring tools (Evidently, CloudWatch). Familiarity with RAG stacks or fine-tuning LLMs. Understanding of GitOps practices. Knowledge of governance and compliance policies, standards, and procedures.
Why join Genpact?
Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color , religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job Principal Consultant Primary Location India-Hyderabad Schedule Full-time Education Level Bachelor's / Graduation / Equivalent Job Posting Jul 10, 2025, 6:48:24 AM Unposting Date Ongoing Master Skills List Digital Job Category Full Time
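To ground the "package and deploy AI/GenAI models on SageMaker, Lambda, API Gateway" responsibility above, here is a minimal sketch of deploying a packaged model to a SageMaker real-time endpoint with the SageMaker Python SDK and calling it with boto3; the image URI, S3 artifact path, IAM role, and endpoint name are placeholders, not details from the posting.

```python
# Minimal sketch: deploy a packaged model to a real-time SageMaker endpoint and call it.
# The image URI, S3 artifact path, IAM role and endpoint name are placeholders.
import boto3
import sagemaker
from sagemaker.model import Model

session = sagemaker.Session()
model = Model(
    image_uri="<inference-container-image-uri>",
    model_data="s3://<bucket>/models/model.tar.gz",
    role="<sagemaker-execution-role-arn>",
    sagemaker_session=session,
)
model.deploy(
    initial_instance_count=1,
    instance_type="ml.m5.large",
    endpoint_name="genai-demo-endpoint",
)

# Downstream services (e.g., a Lambda behind API Gateway) typically call the endpoint via boto3.
runtime = boto3.client("sagemaker-runtime")
response = runtime.invoke_endpoint(
    EndpointName="genai-demo-endpoint",
    ContentType="application/json",
    Body=b'{"inputs": "example payload"}',
)
print(response["Body"].read().decode())
```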

Posted 3 weeks ago

Apply

25.0 years

2 - 4 Lacs

Cochin

On-site

Company Overview: Milestone Technologies is a global IT managed services firm that partners with organizations to scale their technology, infrastructure and services to drive specific business outcomes such as digital transformation, innovation, and operational agility. Milestone is focused on building an employee-first, performance-based culture, and for over 25 years we have had a demonstrated history of supporting category-defining enterprise clients that are growing ahead of the market. The company specializes in providing solutions across Application Services and Consulting, Digital Product Engineering, Digital Workplace Services, Private Cloud Services, AI/Automation, and ServiceNow. Milestone's culture is built to provide a collaborative, inclusive environment that supports employees and empowers them to reach their full potential. Our seasoned professionals deliver services based on Milestone's best practices and service delivery framework. By leveraging our vast knowledge base to execute initiatives, we deliver both short-term and long-term value to our clients and apply continuous service improvement to deliver transformational benefits to IT. With Intelligent Automation, Milestone helps businesses further accelerate their IT transformation. The result is a sharper focus on business objectives and a dramatic improvement in employee productivity. Through our key technology partnerships and our people-first approach, Milestone continues to deliver industry-leading innovation to our clients. With more than 3,000 employees serving over 200 companies worldwide, we are following our mission of revolutionizing the way IT is deployed.
Job Overview: We are seeking a Full Stack Developer with a minimum of 5 years of experience in Python, React, and AI/ML, who also has hands-on experience with application hosting on cloud platforms (VMs, App Services, Containers). This is a lead role where you will guide a team of 5 developers and work on building and deploying modern, intelligent web applications using AWS, Azure, and scalable backend/frontend architecture.
Responsibilities: Lead a team of 5 engineers across backend, frontend, and AI/ML components. Design and develop scalable full stack solutions using Python (FastAPI/Django/Flask) and React.js. Deploy and host applications using VMs (EC2, Azure VMs), App Services, and Containers (Docker/K8s). Integrate and operationalize ML/LLM models into production systems. Own infrastructure setup for CI/CD, application monitoring, and secure deployments. Collaborate cross-functionally with data scientists, DevOps engineers, and business stakeholders. Conduct code reviews, lead sprint planning, and ensure delivery velocity.
Tech Stack & Tools:
Frontend: React, Redux, Tailwind CSS / Material UI
Backend: Python (FastAPI/Django/Flask), REST APIs
AI/ML: scikit-learn, TensorFlow, PyTorch, Hugging Face, LangChain
LLM: Azure OpenAI, Cohere
Cloud: AWS (EC2, Lambda, S3, RDS, SageMaker, EKS, Elastic Beanstalk); Azure (App Services, AKS, Azure ML, Azure Functions, Azure VMs)
LLM stack: OpenAI / Azure OpenAI (GPT-4, GPT-3.5); Hugging Face Transformers; LangChain / LlamaIndex / Haystack; Vector DBs: Chroma, Pinecone, FAISS, Weaviate, Qdrant; RAG (Retrieval-Augmented Generation) pipelines
App Hosting: VMs (EC2, Azure VMs), Azure App Services, Docker, Kubernetes
Database: PostgreSQL, MongoDB, Redis
DevOps: GitHub Actions, Jenkins, Terraform (optional), Monitoring (e.g., Prometheus, Azure Monitor)
Tools: Git, Jira, Confluence, Slack
Key Requirements: 5–8 years of experience in full stack development with Python and React. Proven experience in deploying and managing applications on VMs, App Services, and Docker/Kubernetes. Strong cloud experience on both AWS and Azure platforms. Solid understanding of AI/ML integration into web apps (end-to-end lifecycle). Experience leading small engineering teams and delivering high-quality products. Strong communication, collaboration, and mentoring skills. LLM and Generative AI exposure (OpenAI, Azure OpenAI, RAG pipelines). Familiarity with vector search engines. Microservices architecture and message-driven systems (Kafka/Event Grid). Security-first mindset and hands-on experience with authentication/authorization flows.
Lead Full Stack Developer – Python, React, AI/ML. Location: Kochi. Experience: 5+ years. Team Leadership: Yes, team of 5 developers. Employment Type: Full-time.
Compensation: Estimated Pay Range: Exact compensation and offers of employment are dependent on the circumstances of each case and will be determined based on job-related knowledge, skills, experience, licenses or certifications, and location.
Our Commitment to Diversity & Inclusion: At Milestone we strive to create a workplace that reflects the communities we serve and work with, where we all feel empowered to bring our full, authentic selves to work. We know creating a diverse and inclusive culture that champions equity and belonging is not only the right thing to do for our employees but is also critical to our continued success. Milestone Technologies provides equal employment opportunity for all applicants and employees. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of race, color, religion, gender, gender identity, marital status, age, disability, veteran status, sexual orientation, national origin, or any other category protected by applicable federal and state law, or local ordinance. Milestone also makes reasonable accommodations for disabled applicants and employees. We welcome the unique background, culture, experiences, knowledge, innovation, self-expression and perspectives you can bring to our global community. Our recruitment team is looking forward to meeting you.
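As a compact illustration of the RAG pipeline pattern in this stack, here is a framework-free sketch using the OpenAI Python client and cosine similarity over an in-memory list; the model names, documents, and the in-memory "vector store" are illustrative, and a production system would typically use one of the vector databases listed above.

```python
# Minimal RAG sketch: embed documents, retrieve the closest one, and ground the answer on it.
# Model names and documents are illustrative; a real system would use a vector DB (Chroma, Pinecone, ...).
import numpy as np
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

docs = [
    "Invoices are approved by the finance team within 3 business days.",
    "VPN access requests are handled by IT through the self-service portal.",
]

def embed(texts: list[str]) -> np.ndarray:
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data])

doc_vectors = embed(docs)

def answer(question: str) -> str:
    q = embed([question])[0]
    scores = doc_vectors @ q / (np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q))
    context = docs[int(np.argmax(scores))]  # top-1 retrieval for brevity
    chat = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Answer using only the provided context."},
            {"role": "user", "content": f"Context: {context}\n\nQuestion: {question}"},
        ],
    )
    return chat.choices[0].message.content

print(answer("Who approves invoices?"))
```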

Posted 3 weeks ago

Apply

12.0 years

4 - 9 Lacs

Gurgaon

On-site

We are looking for a Principal Technical Consultant – Data Engineering & AI who can lead modern data and AI initiatives end-to-end — from enterprise data strategy to scalable AI/ML solutions and emerging Agentic AI systems. This role demands deep expertise in cloud-native data architectures, advanced machine learning, and AI solution delivery, while also staying at the frontier of technologies like LLMs, RAG pipelines, and AI agents. You'll work with C-level clients to translate AI opportunities into engineered outcomes.
Roles and Responsibilities
AI Solution Architecture & Delivery: Design and implement production-grade AI/ML systems, including predictive modeling, NLP, computer vision, and time-series forecasting. Architect and operationalize end-to-end ML pipelines using MLflow, SageMaker, Vertex AI, or Azure ML — covering feature engineering, training, monitoring, and CI/CD. Deliver retrieval-augmented generation (RAG) solutions combining LLMs with structured and unstructured data for high-context enterprise use cases.
Data Platform & Engineering Leadership: Build scalable data platforms with modern lakehouse patterns using: Ingestion: Kafka, Azure Event Hubs, Kinesis; Storage & Processing: Delta Lake, Iceberg, Snowflake, BigQuery, Spark, dbt; Workflow Orchestration: Airflow, Dagster, Prefect (a minimal orchestration example appears after this posting); Infrastructure: Terraform, Kubernetes, Docker, CI/CD pipelines. Implement observability and reliability features into data pipelines and ML systems.
Agentic AI & Autonomous Workflows (Emerging Focus): Explore and implement LLM-powered agents using frameworks like LangChain, Semantic Kernel, AutoGen, or CrewAI. Develop prototypes of task-oriented AI agents capable of planning, tool use, and inter-agent collaboration for domains such as operations, customer service, or analytics automation. Integrate agents with enterprise tools, vector databases (e.g., Pinecone, Weaviate), and function-calling APIs to enable context-rich decision making.
Governance, Security, and Responsible AI: Establish best practices in data governance, access controls, metadata management, and auditability. Ensure compliance with security and regulatory requirements (GDPR, HIPAA, SOC 2). Champion Responsible AI principles including fairness, transparency, and safety.
Consulting, Leadership & Practice Growth: Lead large, cross-functional delivery teams (10–30+ FTEs) across data, ML, and platform domains. Serve as a trusted advisor to clients' senior stakeholders (CDOs, CTOs, Heads of AI). Mentor internal teams and contribute to the development of accelerators, reusable components, and thought leadership.
Key Skills: 12+ years of experience across data platforms, AI/ML systems, and enterprise solutioning. Cloud-native design experience on Azure, AWS, or GCP. Expert in Python, SQL, Spark, and ML frameworks (scikit-learn, PyTorch, TensorFlow). Deep understanding of MLOps, orchestration, and cloud AI tooling. Hands-on with LLMs, vector DBs, RAG pipelines, and foundational GenAI principles. Strong consulting acumen: client engagement, technical storytelling, stakeholder alignment.
Qualifications: Master's or PhD in Computer Science, Data Science, or AI/ML. Certifications: Azure AI-102, AWS ML Specialty, GCP ML Engineer, or equivalent. Exposure to agentic architectures, LLM fine-tuning, or multi-agent collaboration frameworks. Experience with open-source contributions, conference talks, or whitepapers in AI/Data.
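One small, concrete slice of the workflow-orchestration responsibility above: a minimal Airflow DAG wiring ingestion, training, and evaluation tasks. The DAG id, schedule, and task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Minimal sketch: an Airflow DAG that chains ingestion, training and evaluation steps.
# Task bodies, DAG id and schedule are placeholders; assumes Airflow 2.4+.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest():
    print("pulling raw data from the lakehouse")

def train():
    print("training the model and logging it to the registry")

def evaluate():
    print("scoring the candidate model against the holdout set")

with DAG(
    dag_id="ml_training_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_ingest = PythonOperator(task_id="ingest", python_callable=ingest)
    t_train = PythonOperator(task_id="train", python_callable=train)
    t_eval = PythonOperator(task_id="evaluate", python_callable=evaluate)

    t_ingest >> t_train >> t_eval  # linear dependency: ingest, then train, then evaluate
```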

Posted 3 weeks ago

Apply

16.0 years

2 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: WHAT Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines. Knows & brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment. HOW Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customer Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 8+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLFlow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, Langchain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 3 weeks ago

Apply

5.0 years

15 - 20 Lacs

Thiruvananthapuram Taluk, India

Remote

Are you passionate about building AI systems that create real-world impact? We are hiring a Senior AI Engineer with 5+ years of experience to design, develop, and deploy cutting-edge AI/ML solutions. 📍 Location: [Trivandrum / Kochi / Remote – customize based on your need] 💼 Experience: 5+ years 💰 Salary: ₹15–20 LPA 🚀 Immediate Joiners Preferred
🔧 What You'll Do: Design and implement ML/DL models for real business problems. Build data pipelines and perform preprocessing for large datasets. Use advanced techniques like NLP, computer vision, and reinforcement learning. Deploy AI models using MLOps best practices. Collaborate with data scientists, developers and product teams. Stay ahead of the curve with the latest research and tools.
✅ What We're Looking For: 5+ years of hands-on AI/ML development experience. Strong in Python, with experience in TensorFlow, PyTorch, scikit-learn, Hugging Face. Knowledge of NLP, CV, and DL architectures (CNNs, RNNs, Transformers). Experience with cloud platforms (AWS/GCP/Azure) and AI services. Solid grasp of MLOps, model versioning, deployment, and monitoring. Strong problem-solving, communication, and mentoring skills.
💻 Tech Stack You'll Work With: Languages: Python, SQL. Libraries: TensorFlow, PyTorch, Keras, Transformers, scikit-learn. Tools: Git, Docker, Kubernetes, MLflow, Airflow. Platforms: AWS, GCP, Azure, Vertex AI, SageMaker.
Skills: cloud platforms (AWS, GCP, Azure), Docker, computer vision, Git, PyTorch, Airflow, Hugging Face, NLP, ML, AI, deep learning, Kubernetes, MLflow, MLOps, TensorFlow, scikit-learn, Python, machine learning
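A minimal sketch of the kind of model-training work in this stack: a short PyTorch training loop on synthetic data. The architecture, data, and hyperparameters are purely illustrative.

```python
# Minimal sketch: a tiny PyTorch training loop on synthetic data.
# Architecture, data and hyperparameters are illustrative only.
import torch
from torch import nn

torch.manual_seed(0)
X = torch.randn(256, 20)                      # synthetic features
y = (X.sum(dim=1) > 0).float().unsqueeze(1)   # synthetic binary labels

model = nn.Sequential(nn.Linear(20, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(20):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()
    if epoch % 5 == 0:
        print(f"epoch {epoch:02d}  loss {loss.item():.4f}")
```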

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Indore, Madhya Pradesh, India

On-site

Job Title: Python Developer (AI/ML Projects – 5–7 Years Experience) Location: Onsite – Indore, India Job Type: Full-time Experience Required: 5 to 7 years Notice Period: Immediate Joiners Preferred About the Role: We are seeking an experienced Python Developer with 5–7 years of professional experience, including hands-on project work in Artificial Intelligence (AI) and Machine Learning (ML) . The ideal candidate should have strong backend development skills along with a solid foundation in AI/ML, capable of designing scalable solutions and deploying intelligent systems. Key Responsibilities: Design, develop, and maintain backend applications using Python. Build and integrate RESTful APIs and third-party services. Work on AI/ML projects including model development, training, deployment, and performance tuning. Collaborate with Data Scientists and ML Engineers to implement and productionize machine learning models. Manage data pipelines and model lifecycle using tools like MLflow or similar. Write clean, testable, and efficient code using Python best practices. Work with relational and NoSQL databases such as PostgreSQL, MySQL, MongoDB, etc. Participate in code reviews, architecture discussions, and agile ceremonies. Required Skills & Experience: 5–7 years of hands-on Python development experience. Strong experience with frameworks such as Django, Flask, or FastAPI. Proven track record of working on AI/ML projects (end-to-end model lifecycle). Good understanding of machine learning libraries like Scikit-learn, TensorFlow, Keras, PyTorch, etc. Experience with data preprocessing, model training, evaluation, and deployment. Familiarity with data handling tools: Pandas, NumPy, etc. Working knowledge of REST API development and integration. Experience with Docker, Git, CI/CD, and cloud platforms (AWS/GCP/Azure). Familiarity with databases – SQL and NoSQL. Experience with model tracking tools like MLflow or DVC is a plus. Preferred Qualifications: Bachelor's or Master’s degree in Computer Science, Engineering, or a related field. Experience with cloud-based AI/ML services (AWS SageMaker, Azure ML, GCP AI Platform). Exposure to MLOps practices and tools is highly desirable. Understanding of NLP, Computer Vision, or Generative AI concepts is a plus.

Posted 3 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies