
1576 Sagemaker Jobs - Page 46

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

2.0 years

5 - 9 Lacs

Noida

On-site

Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India; Hyderabad, Telangana, India

Qualification:
Strong experience in Python
2+ years of experience building feature/data pipelines using PySpark
Understanding of and experience in data science
Exposure to AWS cloud services such as SageMaker, Bedrock, Kendra, etc.
Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices
Experience with statistical models, e.g., multinomial logistic regression
Technical architecture, design, deployment, and operational-level knowledge
Exploratory data analysis
Knowledge of model building, hyperparameter tuning, and model performance metrics
Statistics knowledge (probability distributions, hypothesis testing)
Time series modelling, forecasting, image/video analytics, and natural language processing (NLP)

Good to have:
Experience researching and applying large language and generative AI models
Experience with LangChain, LlamaIndex, foundation model tuning, data augmentation, and performance evaluation frameworks
Ability to provide analytical expertise in model development, refinement, and implementation across a variety of analytics problems
Knowledge of Docker and Kubernetes

Skills Required: Machine Learning, Natural Language Processing, AWS SageMaker, Python

Role:
Generate actionable insights for business improvements
Understand business requirements
Write clean, efficient, and reusable code following best practices
Troubleshoot and debug applications to ensure optimal performance
Write unit test cases
Collaborate with cross-functional teams to define and deliver new features
Derive use cases and create solutions from structured/unstructured data
Actively drive a culture of knowledge-building and sharing within the team
Experience applying theoretical models in an applied environment

Additional skills: MLOps, Data Pipeline, Data Engineering, Statistics Knowledge (Probability Distributions, Hypothesis Testing)
Experience: 4 to 5 years
Job Reference Number: 13027
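The posting above asks for experience with statistical models such as multinomial logistic regression. As a minimal illustration of the scoring step of such a model, here is a pure-Python sketch: raw per-class scores are converted to probabilities with a softmax, and the most probable class is selected. The weights and inputs below are made-up toy values, not a trained model.

```python
import math

def softmax(scores):
    """Convert raw class scores (logits) into a probability distribution."""
    m = max(scores)                      # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict_class(weights, bias, x):
    """Score each class as w.x + b, then pick the most probable one."""
    logits = [sum(w_i * x_i for w_i, x_i in zip(w, x)) + b
              for w, b in zip(weights, bias)]
    probs = softmax(logits)
    return probs.index(max(probs)), probs

# Toy 3-class model over 2 features (weights are illustrative only)
W = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]
b = [0.0, 0.0, 0.0]
label, probs = predict_class(W, b, [2.0, 0.5])
print(label, round(sum(probs), 6))   # 0 1.0
```

In a real pipeline the weights would come from a fitted estimator (e.g., scikit-learn's `LogisticRegression` with a multinomial loss); the sketch only shows the probability computation the posting alludes to.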

Posted 1 month ago

2.0 - 4.0 years

2 - 8 Lacs

Noida

On-site

Noida, Uttar Pradesh, India; Gurugram, Haryana, India; Indore, Madhya Pradesh, India; Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India

Qualification:
2-4 years of experience in designing, developing, and training machine learning models using diverse algorithms and techniques, including deep learning, NLP, computer vision, and time series analysis
Proven ability to optimize model performance through experimentation with architectures, hyperparameter tuning, and evaluation metrics
Hands-on experience processing large datasets, including preprocessing, feature engineering, and data augmentation
Demonstrated ability to deploy trained AI/ML models to production using frameworks like Kubernetes and cloud-based ML platforms
Solid understanding of monitoring and logging for performance tracking
Experience exploring new AI/ML methodologies and documenting the development and deployment lifecycle, including performance metrics
Familiarity with AWS services, particularly SageMaker, is expected
Excellent communication, presentation, and interpersonal skills are essential

Good to have:
Knowledge of GenAI (LangChain, foundation model tuning, and GPT-3)
AWS Certified Machine Learning - Specialty certification

Skills Required: Machine Learning, LangChain, AWS SageMaker, Python

Role:
Explore different models and transform data science prototypes for a given problem
Analyze datasets; perform data enrichment, feature engineering, and model training
Able to write code using Python, Pandas, and DataFrame APIs
Develop machine learning applications according to requirements
Perform statistical analysis and fine-tuning using test results
Collaborate with data engineers and architects to implement and deploy scalable solutions
Encourage continuous innovation and out-of-the-box thinking
Experience applying theoretical models in an applied environment

Experience: 1 to 3 years
Job Reference Number: 13047
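This posting explicitly calls out hyperparameter tuning. The core idea can be sketched in a few lines of pure Python: exhaustively evaluate every combination in a parameter grid and keep the best-scoring one. The `fake_train_eval` objective below is a stand-in for "train a model and return its validation score"; its peak location and the grid values are invented for illustration.

```python
from itertools import product

def grid_search(train_eval, grid):
    """Exhaustively score every hyperparameter combination; keep the best."""
    best_params, best_score = None, float("-inf")
    keys = sorted(grid)
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_eval(params)       # e.g., mean validation accuracy
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Synthetic objective peaking at lr=0.1, depth=3 (illustrative only)
def fake_train_eval(p):
    return -abs(p["lr"] - 0.1) - 0.01 * abs(p["depth"] - 3)

grid = {"lr": [0.01, 0.1, 1.0], "depth": [2, 3, 5]}
best, score = grid_search(fake_train_eval, grid)
print(best)   # {'depth': 3, 'lr': 0.1}
```

Libraries such as scikit-learn provide the same pattern as `GridSearchCV`, with cross-validation folded into the scoring step.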

Posted 1 month ago

16.0 years

1 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:

WHAT
Business Knowledge: Capable of understanding the requirements for the entire project (not just own features). Works closely with PMG during the design phase to drill down into detailed nuances of the requirements. Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them.
Design: Can design and implement machine learning models and algorithms. Can articulate and evaluate pros/cons of different AI/ML approaches. Can generate cost estimates for model training and deployment.
Coding/Testing: Builds and optimizes machine learning pipelines. Knows and brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment.

HOW
Quality: Solves cross-functional problems using data-driven approaches. Identifies impacts/side effects of models outside the immediate scope of work. Identifies cross-module issues related to data integration and model performance. Identifies problems predictively using data analysis.
Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them.
Process: Enforces process standards for model development and deployment.
Independence: Acts independently to determine methods and procedures on new or special assignments. Prioritizes large tasks and projects effectively.
Agility:
Release Planning: Works with the PO on high-level release commitment and estimation. Works with the PO on defining stories of appropriate size for model development.
Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration. Shows Agile leadership qualities and leads by example.

WITH
Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO) that have significantly different technical skill sets, and managing the discussions based on their needs.
Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution. Capable of thinking outside the box to view the system as it should be rather than only how it is. Proactively generates a continual stream of ideas and pushes to review and advance ideas that make sense. Takes the initiative to learn how AI/ML technology is evolving outside the organization and how the system can be improved for customers. Should make problems open new doors for innovation.
Communication: Communicates complex AI/ML concepts internally with ease.
Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play.
Leadership: Disagrees without being disagreeable. Uses conflict as a way to drill deeper and arrive at better decisions. Provides frequent mentorship. Builds ad-hoc cross-department teams for specific projects or problems. Can achieve broad 'buy-in' across project teams and departments. Takes calculated risks.

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
5+ years of experience working on multiple layers of technology
Experience deploying and maintaining ML models in production
Experience in Agile teams
Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.)
Working experience with or good knowledge of cloud platforms (e.g., Azure, AWS, OCI)
Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
Familiarity with traditional software monitoring, scaling, and quality management (QMS)
Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms
Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
Demonstrated hands-on knowledge of open-source adoption and use cases
Good understanding of data/information security
Proficient in data structures, ML algorithms, and the ML lifecycle

Product/Project/Program Related Tech Stack:
Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
Programming Languages: Python, R, Java
Data Processing: Pandas, NumPy, Spark
Visualization: Matplotlib, Seaborn, Plotly
Model versioning tools: MLflow, etc.
Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
GenAI: OpenAI, LangChain, RAG, etc.

Demonstrates good knowledge of engineering practices. Demonstrates excellent problem-solving skills. Proven excellent verbal, written, and interpersonal communication skills.

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
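The posting above asks for knowledge of model versioning and deployment with tools like MLflow or DVC. To make the concept concrete, here is a toy in-memory sketch of what such a registry tracks: each registered model gets an auto-incremented version and a lifecycle stage, and promoting one version demotes the previous production one. This is an illustration of the idea, not MLflow's actual API; the model name and artifact URIs are invented.

```python
class ModelRegistry:
    """Toy in-memory stand-in for a model registry (MLflow-style concept):
    each registered model gets an auto-incremented version plus a stage."""
    def __init__(self):
        self._models = {}   # name -> list of {"version", "artifact", "stage"}

    def register(self, name, artifact):
        versions = self._models.setdefault(name, [])
        entry = {"version": len(versions) + 1, "artifact": artifact,
                 "stage": "Staging"}
        versions.append(entry)
        return entry["version"]

    def promote(self, name, version):
        """Move one version to Production; archive the previous one."""
        for e in self._models[name]:
            if e["stage"] == "Production":
                e["stage"] = "Archived"
        self._models[name][version - 1]["stage"] = "Production"

    def production(self, name):
        return next(e for e in self._models[name]
                    if e["stage"] == "Production")

reg = ModelRegistry()
reg.register("churn", "s3://bucket/churn/v1")   # version 1
reg.register("churn", "s3://bucket/churn/v2")   # version 2
reg.promote("churn", 2)
print(reg.production("churn")["artifact"])      # s3://bucket/churn/v2
```

Real registries add audit metadata, access control, and artifact storage, but the version/stage bookkeeping shown here is the core of what "model versioning" means in the posting.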

Posted 1 month ago

12.0 years

5 - 6 Lacs

Indore

On-site

Indore, Madhya Pradesh, India

Qualification:
BTech degree in computer science, engineering, or a related field of study, or 12+ years of related work experience
7+ years of design and implementation experience with large-scale, data-centric distributed applications
Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, and databases
Good understanding of data engineering concepts like storage, governance, cataloging, data quality, and data modeling
Good understanding of architecture patterns like data lake, data lakehouse, and data mesh
Good understanding of data warehousing concepts; hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata
Experience migrating or transforming legacy customer solutions to the cloud
Experience with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone
Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, and HBase, and other competent tools and technologies
Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, and Rekognition in combination with SageMaker is good to have
Experience with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code
Experience with a programming or scripting language: Python/Java/Scala
AWS Professional/Specialty certification or relevant cloud expertise

Skills Required: AWS, Big Data, Spark, Technical Architecture

Role:
Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries
Capable of leading a technology team, inculcating an innovative mindset, and enabling fast-paced deliveries
Able to adapt to new technologies, learn quickly, and manage high ambiguity
Work with business stakeholders; attend/drive various architectural, design, and status calls with multiple stakeholders
Exhibit good presentation skills, with a high degree of comfort speaking with executives, IT management, and developers
Drive technology/software sales or pre-sales consulting discussions
Ensure end-to-end ownership of all tasks assigned
Ensure high-quality software development with complete documentation and traceability
Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups)
Conduct technical trainings/sessions, and write whitepapers/case studies/blogs

Experience: 10 to 18 years
Job Reference Number: 12895
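The posting above lists data quality among its data engineering concepts. A minimal sketch of what a data-quality check does in practice: scan rows for missing required fields and type violations, and report each problem. The column names, rules, and sample rows below are invented for illustration; production systems express the same idea through tools like Glue Data Quality or Deequ.

```python
def check_quality(rows, required, types):
    """Flag rows that are missing required fields or have wrong types."""
    issues = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) is None:
                issues.append((i, col, "missing"))
            elif not isinstance(row[col], types[col]):
                issues.append((i, col, "wrong type"))
    return issues

rows = [
    {"id": 1, "amount": 10.5},
    {"id": 2, "amount": None},      # fails the completeness rule
    {"id": "3", "amount": 7.0},     # id should be an int
]
problems = check_quality(rows, required=["id", "amount"],
                         types={"id": int, "amount": float})
print(problems)   # [(1, 'amount', 'missing'), (2, 'id', 'wrong type')]
```

The same rule-per-column structure scales up naturally: governance tooling adds rule catalogs, thresholds, and alerting on top of exactly this kind of check.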

Posted 1 month ago

10.0 years

0 Lacs

Kochi, Kerala, India

On-site

Role Description

Roles and Responsibilities:

Architecture & Infrastructure Design: Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch. Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines. Optimize cost and performance of cloud resources used for AI workloads.

AI Project Leadership: Translate business objectives into actionable AI strategies and solutions. Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring. Drive roadmap planning, delivery timelines, and project success metrics.

Model Development & Deployment: Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases. Implement frameworks for bias detection, explainability, and responsible AI. Enhance model performance through tuning and efficient resource utilization.

Security & Compliance: Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks. Perform regular audits and vulnerability assessments to ensure system integrity.

Team Leadership & Collaboration: Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts. Promote cross-functional collaboration with business and technical stakeholders. Conduct technical reviews and ensure delivery of production-grade solutions.

Monitoring & Maintenance: Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability. Ensure ongoing optimization of infrastructure and ML pipelines.

Must-Have Skills:
10+ years of experience in IT, with 4+ years in AI/ML leadership roles
Strong hands-on experience with AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch
Expertise in Python for ML development and automation
Solid understanding of Terraform, Docker, Git, and CI/CD pipelines
Proven track record of delivering AI/ML projects into production environments
Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines
Experience implementing Responsible AI practices, including fairness, explainability, and bias mitigation
Knowledge of cloud security best practices and IAM role configuration
Excellent leadership, communication, and stakeholder management skills

Good-to-Have Skills:
AWS certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect
Familiarity with data privacy laws and frameworks (GDPR, HIPAA)
Experience with AI governance and ethical AI frameworks
Expertise in cost optimization and performance tuning for AI on the cloud
Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services

Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
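The posting above calls for bias detection and fairness as part of Responsible AI. One of the simplest fairness metrics, the demographic parity gap, can be sketched in pure Python: compare the positive-prediction rate across groups and report the spread. The predictions and group labels below are toy data for illustration only.

```python
def demographic_parity_gap(predictions, groups):
    """Difference in positive-prediction rate between the most- and
    least-favoured groups; 0 means parity on this metric."""
    counts = {}
    for pred, group in zip(predictions, groups):
        n_pos, n = counts.get(group, (0, 0))
        counts[group] = (n_pos + (1 if pred == 1 else 0), n + 1)
    rates = {g: p / n for g, (p, n) in counts.items()}
    return max(rates.values()) - min(rates.values()), rates

preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
gap, rates = demographic_parity_gap(preds, groups)
print(round(gap, 2), rates)   # 0.5 {'a': 0.75, 'b': 0.25}
```

Demographic parity is only one of several fairness definitions (equalized odds and calibration are others, and they can conflict), which is why the posting frames this as a framework rather than a single check.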

Posted 1 month ago

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About Us: Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena's vision is to help students become the best version of themselves. Athena's transformative, holistic life coaching program embraces both depth and breadth, the sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and University of Chicago, among others. Learn more about Athena: https://www.athenaeducation.co.in/article.aspx

Role Overview: We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You'll help scholars explore the frontiers of AI, from machine learning models to generative AI systems, while coaching them in best practices and applied engineering.

Key Responsibilities:
Guide scholars through the full AI/ML development cycle, from problem definition, data exploration, and model selection to evaluation and deployment.
Teach and assist in building: supervised and unsupervised machine learning models; deep learning networks (CNNs, RNNs, Transformers); NLP tasks such as classification, summarization, and Q&A systems.
Provide mentorship in prompt engineering: craft optimized prompts for generative models like GPT-4 and Claude; teach the principles of few-shot, zero-shot, and chain-of-thought prompting; experiment with fine-tuning and embeddings in LLM applications.
Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows.
Conduct internal training and code reviews, ensuring technical rigor in projects.
Stay updated with the latest research, frameworks, and tools in the AI ecosystem.

Technical Requirements:
Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy.
Experience with deep learning frameworks: TensorFlow, PyTorch, Keras.
Strong command of machine learning theory, including: bias-variance tradeoff, regularization, and model tuning; cross-validation, hyperparameter optimization, and ensemble techniques.
Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly).

Advanced AI & NLP:
Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA).
Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face.
Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG).
Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI).

Prompt Engineering & GenAI:
Proficiency in crafting effective prompts using instruction tuning, role-playing and system prompts, and prompt chaining tools like LangChain or LlamaIndex.
Understanding of AI safety, bias mitigation, and interpretability.

Required Qualifications:
Bachelor's degree from a Tier-1 engineering college in Computer Science, Engineering, or a related field.
2-5 years of relevant experience in ML/AI roles.
Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.).
Passion for education, mentoring, and working with high school scholars.
Excellent communication skills, with the ability to convey complex concepts to a diverse audience.

Preferred Qualifications:
Prior experience in student mentorship, teaching, or edtech.
Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects.
Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.
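Among the theory topics this mentoring role names is cross-validation. The mechanics reduce to index bookkeeping: split the data into k folds, and for each fold use it once for validation while training on the rest. A pure-Python sketch of generating those index splits (the kind of exercise one might walk a scholar through before reaching for scikit-learn's `KFold`):

```python
def k_fold_indices(n, k):
    """Split indices 0..n-1 into k folds; yield (train, validation) pairs.
    Earlier folds absorb the remainder when n is not divisible by k."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        folds.append((train, val))
        start += size
    return folds

for train, val in k_fold_indices(10, 5):
    print(val)   # each index lands in exactly one validation fold
```

Each example is validated exactly once and trained on k-1 times, which is what makes the averaged validation score a lower-variance estimate than a single train/test split.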

Posted 1 month ago

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Job Title: AI Lead - Video & Image Analytics, GenAI & NLP (AWS)
Location: Chennai
Company: Datamoo AI

About Us: Datamoo AI is an innovative AI-driven company focused on developing cutting-edge solutions in workforce and contract management, leveraging AI for automation, analytics, and optimization. We are building intelligent systems that enhance business efficiency through advanced AI models in video analytics, image processing, Generative AI, and NLP, deployed on AWS.

Job Overview: We are seeking a highly skilled and experienced AI Lead to drive the development of our AI capabilities. This role requires expertise in video analytics, image analytics, Generative AI, and Natural Language Processing (NLP), along with hands-on experience deploying AI solutions on AWS. The AI Lead will be responsible for leading a team of AI engineers, researchers, and data scientists, overseeing AI strategy, and ensuring the successful execution of AI-powered solutions.

Key Responsibilities:
Lead and mentor a team of AI engineers and data scientists to develop innovative AI-driven solutions.
Design and implement AI models for video analytics, image processing, and NLP applications.
Drive the development of Generative AI applications tailored to our product needs.
Optimize and deploy AI/ML models on AWS using cloud-native services like SageMaker, Lambda, and EC2.
Collaborate with cross-functional teams to integrate AI solutions into Datamoo AI's workforce and contract management applications.
Ensure AI solutions are scalable, efficient, and aligned with business objectives.
Stay updated with the latest advancements in AI and ML, and drive adoption of new technologies where applicable.
Define AI research roadmaps and contribute to intellectual property development.

Required Skills & Qualifications:
4+ years of experience in AI, ML, or Data Science with a focus on video/image analytics, NLP, and GenAI.
Strong hands-on experience with deep learning frameworks such as TensorFlow, PyTorch, or OpenCV.
Expertise in Generative AI, including transformer models (GPT, BERT, DALL·E, etc.).
Proficiency in computer vision techniques, including object detection, recognition, and tracking.
Strong experience with NLP models, including text summarization, sentiment analysis, and chatbot development.
Proven track record of deploying AI solutions on AWS (SageMaker, EC2, Lambda, S3, etc.).
Strong leadership skills, with experience managing AI/ML teams.
Proficiency in Python, SQL, and cloud computing architectures.
Excellent problem-solving skills and the ability to drive AI strategy and execution.

Preferred Qualifications:
Experience with MLOps, model monitoring, and AI governance.
Knowledge of blockchain and AI-powered contract management systems.
Understanding of edge AI deployment for real-time analytics.
Published research papers or contributions to open-source AI projects.

What We Offer:
Opportunity to lead AI innovation in a fast-growing AI startup.
Collaborative work environment with cutting-edge AI technologies.
Competitive salary and stock options.
Flexible work environment (remote/hybrid options available).
Access to AI research conferences and continuous learning programs.

If you are an AI expert passionate about pushing the boundaries of AI and leading a dynamic team, we'd love to hear from you!

How to Apply: Send your resume and a cover letter to hr@datamoo.ai.
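This posting spans NLP tasks such as similarity-based retrieval alongside sentiment analysis. A classic baseline underneath many of these tasks is cosine similarity over bag-of-words vectors; here is a pure-Python sketch (the sample sentences are made up, and real systems would use learned embeddings rather than raw word counts):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between two bag-of-words count vectors."""
    a = Counter(text_a.lower().split())
    b = Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(c * c for c in a.values())) *
            math.sqrt(sum(c * c for c in b.values())))
    return dot / norm if norm else 0.0

# High overlap in words, opposite sentiment: word counts alone score 0.75
print(cosine_similarity("the service was great", "the service was terrible"))
# No shared words at all: similarity is 0.0
print(cosine_similarity("video analytics pipeline", "contract management"))
```

The first pair illustrates exactly why sentiment analysis needs more than surface similarity: the sentences share most of their words yet mean opposite things, which is the gap that learned embeddings and transformer models close.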

Posted 1 month ago

15.0 - 20.0 years

15 - 20 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Job description

Location: Pan India
Grade: E1

The Opportunity: Capgemini is seeking a Director/Senior Director level executive for AWS Practice Lead. This person should have:
15+ years of experience, with at least 10 in the Data and Analytics domain, of which a minimum of 3 years on big data and cloud.
Multi-skilled professional with strong experience in architecture and advisory, offer and asset creation, and people hiring and training.
Experience on at least 3 sizeable AWS engagements spanning over 18 months as a Managing Architect/Advisor, preferably covering both migration and cloud-native implementations.
Hands-on experience with at least 4 native services, such as EMR, S3, Glue, Lambda, RDS, Redshift, SageMaker, QuickSight, Athena, Kinesis.
Client-facing, with strong communication and articulation skills; should be able to engage with CXO-level audiences.
Must be hands-on in writing solutions and doing estimations in support of RFPs.
Strong in data architecture and management: DW, data lake, data governance, MDM.
Able to translate business and technical requirements into architectural components.

Nice to have:
Multi-skilled professional with strong experience in deal solutioning, creating GTM strategy, and delivery handholding.
Must be aware of relevant leading tools and concepts in the industry.
Must be flexible for short-term travel of up to 3 months across countries.
Must be able to define new service offerings and support GTM strategy.
Must have exposure to initial setup activities, including infrastructure setup like connectivity, security policies, configuration management, DevOps, etc.
Architecture certification preferred: TOGAF or another industry-acknowledged certification.
Experience with replication, high availability, archiving, backup and restore, and disaster recovery/business continuity data best practices.

Our Ideal Candidate: Strong behavioral and collaboration skills. Excellent verbal and written communication skills. Should be a good listener, logical, and composed in explaining points of view. Ability to work in collaborative, cross-functional, and multicultural teams. Excellent leadership skills, with the ability to generate stakeholder buy-in and lead through influence at a senior management level. Should have very good negotiation skills and the ability to handle conflict situations.

Posted 1 month ago

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

🔧 Job Opening: MLOps & DevOps Engineer
📍 Location: Pune, India | 🧠 Experience: 3-5 Years | 🕒 Immediate Joiners Preferred
Company: Asmadiya Technologies Pvt. Ltd.

About Us: Asmadiya Technologies is a dynamic technology company delivering innovative solutions in AI/ML, Cloud Computing, and Digital Transformation. We are seeking an experienced MLOps & DevOps Engineer to join our engineering team and lead the development, deployment, and monitoring of machine learning systems in production.

✅ Key Responsibilities:
Design and implement CI/CD pipelines for ML models and microservices.
Manage end-to-end MLOps workflows, from model training, versioning, and deployment to monitoring.
Automate infrastructure using Terraform, CloudFormation, or similar tools.
Integrate ML workflows with cloud platforms like AWS, Azure, or GCP.
Implement model registry, artifact tracking, and monitoring (e.g., MLflow, Weights & Biases).
Collaborate with data scientists, ML engineers, and DevOps teams to ensure scalable and reliable deployments.
Set up and maintain Kubernetes (EKS/AKS/GKE) clusters and orchestrate model serving.
Ensure compliance, security, and reliability of deployed systems.
Proactively identify performance bottlenecks and implement optimizations.

🧩 Required Skills & Experience:
3-5 years of experience in DevOps and MLOps environments.
Strong expertise in Docker, Kubernetes, and Jenkins/GitHub Actions/GitLab CI.
Hands-on experience with ML lifecycle tools: MLflow, Kubeflow, SageMaker, or similar.
Proficient in scripting with Python and Bash, and in using Linux-based systems.
Experience with infrastructure-as-code (Terraform, Ansible, Helm).
Working knowledge of cloud platforms (AWS preferred) and model deployment at scale.
Familiar with observability tools like Prometheus, Grafana, or the ELK stack.

🌟 What We're Looking For:
A problem-solver who can bridge the gap between data science and operations.
A hands-on contributor who can own deployments end-to-end.
A collaborative team player who thrives in fast-paced, agile environments.
Passionate about building production-ready ML systems.

📬 Apply Now: Looking to lead the charge in deploying intelligent systems at scale? Join Asmadiya Technologies as we build the future of AI. 📩 Send your resume to careers@asmadiya.com with subject: MLOps & DevOps Engineer - Pune
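A recurring building block in the CI/CD-for-ML pipelines this posting describes is a promotion gate: a candidate model only replaces the deployed one if its metrics do not regress beyond a tolerance. Here is a minimal pure-Python sketch of that gate; the metric names, values, and threshold are invented for illustration.

```python
def promotion_gate(candidate_metrics, baseline_metrics, max_regression=0.01):
    """Allow deployment only if no tracked metric regresses by more than
    max_regression versus the currently deployed baseline."""
    failures = []
    for name, baseline in baseline_metrics.items():
        candidate = candidate_metrics.get(name)
        if candidate is None or candidate < baseline - max_regression:
            failures.append(name)
    return len(failures) == 0, failures

ok, failed = promotion_gate(
    candidate_metrics={"accuracy": 0.91, "recall": 0.84},
    baseline_metrics={"accuracy": 0.90, "recall": 0.88},
)
print(ok, failed)   # False ['recall'] - recall dropped more than the tolerance
```

In practice this check runs as a CI step (e.g., a Jenkins or GitHub Actions job) that reads both metric sets from the experiment tracker and fails the pipeline when the gate returns False, keeping regressed models out of production automatically.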

Posted 1 month ago

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a seasoned and visionary Lead AI Engineer to drive the design, development, and delivery of high-impact AI/ML solutions. As a technical leader, you will guide a team of AI developers in executing large-scale AI/ML projects, mentor them to build expertise, and foster a culture of innovation and excellence. You will collaborate with sales teams during pre-sales calls to articulate technical solutions and work closely with leadership to translate strategic vision into actionable, production-ready AI systems.

Responsibilities:
Architect and lead the end-to-end development of impactful AI/ML models and systems, ensuring scalability, reliability, and performance.
Provide hands-on technical guidance to a team of AI/ML engineers, fostering skill development and promoting best practices in coding, model design, and experimentation.
Collaborate with cross-functional teams, including data scientists, product managers, and software developers, to define AI product strategies and roadmaps.
Partner with the sales team during pre-sales calls to understand client needs, propose AI-driven solutions, and communicate technical feasibility.
Translate leadership's strategic vision into technical requirements and executable project plans.
Design and implement scalable MLOps infrastructure for data ingestion, model training, evaluation, deployment, and monitoring.
Lead research and experimentation in advanced AI domains such as NLP, computer vision, large language models (LLMs), or generative AI, tailoring solutions to business needs.
Evaluate and integrate open-source or commercial AI frameworks/tools to accelerate development and ensure robust solutions.
Monitor and optimize deployed models for performance, fairness, interpretability, and cost-efficiency, driving continuous improvement.
Mentor and nurture new talent, building a high-performing AI team capable of delivering complex projects over time.

Qualifications:
Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, or a related field.
5+ years of hands-on experience in machine learning or deep learning, with a proven track record of delivering large-scale AI/ML projects to production.
Demonstrated ability to lead and mentor early-career engineers, fostering technical growth and team collaboration.
Strong proficiency in Python and ML frameworks/libraries (e.g., TensorFlow, PyTorch, Hugging Face, scikit-learn).
Extensive experience deploying AI models in production environments using tools like AWS SageMaker, Google Vertex AI, Docker, Kubernetes, or similar.
Solid understanding of data pipelines, APIs, MLOps practices, and software engineering principles.
Experience collaborating with non-technical stakeholders (e.g., sales, leadership) to align technical solutions with business objectives.
Familiarity with advanced AI domains such as NLP, computer vision, LLMs, or generative AI is a plus.
Excellent communication skills to articulate complex technical concepts to diverse audiences, including clients and executives.
Strong problem-solving skills, with a proactive approach to driving innovation and overcoming challenges.

Posted 1 month ago

Apply

0.0 - 16.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Category: Software Development/Engineering
Main location: India, Karnataka, Bangalore
Position ID: J0625-0079
Employment Type: Full Time
Position Description:
Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.
Position: Manager Consulting Expert - AI Architect
Experience: 13-16 years
Category: Software Development/Engineering
Shift Timing: General Shift
Location: Bangalore
Position ID: J0625-0079
Employment Type: Full Time
Education Qualification: Bachelor's degree in Computer Science or related field or higher with minimum 13 years of relevant experience.
We are looking for an experienced and visionary AI Architect with a strong engineering background and hands-on implementation experience to lead the development and deployment of AI-powered solutions. The ideal candidate will have a minimum of 13–16 years of experience in software and AI systems design, including extensive exposure to large language models (LLMs), vector databases, and modern AI frameworks such as LangChain. This role requires a balance of strategic architectural planning and tactical engineering execution, working across teams to bring intelligent applications to life. Your future duties and responsibilities: Design robust, scalable architectures for AI/ML systems, including LLM-based and generative AI solutions.
Lead the implementation of AI features and services in enterprise-grade products with clear, maintainable code. Develop solutions using LangChain, orchestration frameworks, and vector database technologies. Collaborate with product managers, data scientists, ML engineers, and business stakeholders to gather requirements and translate them into technical designs. Guide teams on best practices for AI system integration, deployment, and monitoring. Define and implement architecture governance, patterns, and reusable frameworks for AI applications. Stay current with emerging AI trends, tools, and methodologies to continuously enhance architecture strategy. Oversee development of Proof-of-Concepts (PoCs) and Minimum Viable Products (MVPs) to validate innovative ideas. Ensure systems are secure, scalable, and high-performing in production environments. Mentor junior engineers and architects to build strong AI and engineering capabilities within the team. Required qualifications to be successful in this role: Must-have skills: 13–16 years of overall experience in software development, with at least 5+ years in AI/ML system architecture and delivery. Proven expertise in developing and deploying AI/ML models in production environments. Deep knowledge of LLMs, LangChain, prompt engineering, RAG (retrieval-augmented generation), and vector search. Strong programming and system design skills with a solid engineering foundation. Exceptional ability to communicate complex concepts clearly to technical and non-technical stakeholders. Experience with Agile methodologies and cross-functional team leadership.
Programming Languages: Python, Java, Scala, SQL
AI/ML Frameworks: LangChain, TensorFlow, PyTorch, Scikit-learn, Hugging Face Transformers
Data Processing: Apache Spark, Kafka, Pandas, Dask
Vector Stores & Retrieval Systems: FAISS, Pinecone, Weaviate, Chroma
Cloud Platforms: AWS (SageMaker, Lambda), Azure (ML Studio, OpenAI), Google Cloud AI
MLOps & DevOps: Docker, Kubernetes, MLflow, Kubeflow, Airflow, CI/CD tools (GitHub Actions, Jenkins)
Databases: PostgreSQL, MongoDB, Redis, BigQuery, Snowflake
Tools & Platforms: Databricks, Jupyter Notebooks, Git, Terraform
Good-to-have skills: Solution engineering and implementation experience in AI projects.
Skills: AWS, Machine Learning, English, GitHub, Python, Jenkins, Kubernetes, Prometheus, Snowflake
What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
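The RAG and vector-search expertise this role asks for boils down to one core operation: ranking stored document embeddings by similarity to a query embedding. A toy sketch of that retrieval step, using brute-force cosine similarity and tiny hand-made vectors; in practice a vector store such as FAISS, Pinecone, or Weaviate replaces this loop, and the embeddings come from a real model:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """Return the k document ids most similar to the query vector."""
    ranked = sorted(docs, key=lambda d: cosine(query, docs[d]), reverse=True)
    return ranked[:k]

# Invented 3-dimensional "embeddings" purely for illustration.
docs = {
    "refund-policy": [0.9, 0.1, 0.0],
    "shipping-faq":  [0.1, 0.9, 0.1],
    "api-guide":     [0.0, 0.2, 0.9],
}
print(top_k([1.0, 0.0, 0.1], docs, k=1))  # ['refund-policy']
```

The retrieved documents would then be stuffed into the LLM prompt, which is the "augmented generation" half of RAG.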

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka

On-site

- Bachelor's degree in computer science, engineering, mathematics, statistics or a related field
- 3+ years of data engineering experience
- Experience with ML
- Experience with data modeling, warehousing and building ETL pipelines
- Knowledge of distributed systems
- Knowledge of professional software engineering & best practices for full software development life cycle, including coding standards, software architectures, code reviews, source control management, continuous deployments, testing, and operational excellence

The Amazon Regulatory Intelligence Safety and Risk (RISC) team's mission is to protect customers from products that are unsafe, illegal, illegally marketed, controversial or otherwise in violation of Amazon’s policies while enabling our Selling Partners to offer their broadest selection of safe and compliant products. We achieve these objectives worldwide by: (1) taking a science-first approach to offer trustworthy listings to our customers, (2) inventing intuitive and precise tools to simplify our selling partners’ compliance journey and (3) innovating to reduce our cost to serve. The RISC Data Engineering team is seeking an experienced Data Engineer with solid engineering skills and machine learning background (MLOps) to join our team. In this role, you will be responsible for designing, building, and maintaining large-scale robust data pipelines and infrastructure to empower our machine learning, data science and analytics initiatives. You will collaborate closely with Applied Scientists, Machine Learning Scientists, and business stakeholders to understand their requirements and support AI/ML solutions. Join our expert team to build scalable data solutions, improving Amazon business efficiency and simplifying our selling partners' compliance journey.

Key job responsibilities
1. Design, build, and maintain scalable, fault-tolerant, and efficient data pipelines and infrastructure for machine learning operations (MLOps) leveraging AWS technologies such as Lambda, Glue, EMR/Spark, Step Functions, Airflow, DynamoDB and AWS Batch.
2. Automate infrastructure deployment, maintenance processes, and incorporate CI/CD principles to streamline the MLOps ecosystem, using AWS services and scripting languages like Python or Scala.
3. Develop optimized data models, ETL/ELT processes, data transformations, and data warehouses to ensure high-quality, well-structured data for ML and analytics, using S3, Redshift, Glue, Athena and Lake Formation.
4. Collaborate closely with Applied Scientists, Machine Learning Scientists, and analytics teams to understand data requirements, and provide scalable data solutions.
5. Adopt genAI solutions to transform and enhance data engineering and MLOps processes.
6. Continuously monitor, optimize, and enhance data pipelines, processes, and infrastructure to support ML and analytics.
7. Implement and enforce rigorous data governance, security, and compliance standards for our data, including data validation, cleansing, and lineage tracking.
8. Mentor junior engineers, promoting best practices and knowledge sharing in data engineering and MLOps.
9. Stay updated with emerging technologies, tools, and trends, incorporating them into the existing ecosystem for continuous improvement.

About the team
Who Are We: We are a team of scientists and engineers building AI/ML and data solutions to improve Amazon business efficiency and simplify our selling partners' compliance journey.
Work/Life Balance: We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there’s nothing we can’t achieve in the cloud.
Mentorship and Career Growth: We’re continuously raising our performance bar as we strive to become Earth’s Best Employer. That’s why you’ll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, Step Functions, Airflow, DynamoDB and AWS Batch, SageMaker, IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Experience with advanced ML system design, implementation and maintenance
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Strong problem-solving and engineering skills, with the ability to translate business requirements into technical solutions
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
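The data validation and cleansing responsibilities described above (routing bad records out, normalising the rest) are the kind of logic that runs inside Glue or Spark jobs. A plain-Python sketch under an invented record schema; the field names `asin` and `price` are stand-ins for illustration:

```python
def cleanse(records):
    """Drop records missing required fields; normalise the rest."""
    clean, rejected = [], []
    for rec in records:
        if not rec.get("asin") or rec.get("price") is None:
            rejected.append(rec)  # would be routed to a quarantine table
            continue
        clean.append({
            "asin": rec["asin"].strip().upper(),   # normalise identifier
            "price": round(float(rec["price"]), 2),  # coerce and round price
        })
    return clean, rejected

raw = [{"asin": " b0abc ", "price": "19.999"}, {"asin": "", "price": 5}]
clean, rejected = cleanse(raw)
print(clean)  # [{'asin': 'B0ABC', 'price': 20.0}]
```

At scale the same validate/normalise/quarantine split would be expressed as Spark DataFrame transformations rather than a Python loop.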

Posted 1 month ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. What Primary Responsibilities: Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them. Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines Knows & brings in external ML frameworks and libraries Consistently avoids common pitfalls in model development and deployment How Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment. 
Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customers Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 5+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) 
Demonstrate hands-on knowledge of OpenSource adoption and use cases
Good understanding of Data/Information security
Proficient in Data Structures, ML Algorithms, and ML lifecycle
Product/Project/Program Related Tech Stack:
Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
Programming Languages: Python, R, Java
Data Processing: Pandas, NumPy, Spark
Visualization: Matplotlib, Seaborn, Plotly
Familiarity with model versioning tools (MLflow, etc.)
Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
GenAI: OpenAI, LangChain, RAG, etc.
Demonstrate good knowledge in Engineering Practices
Demonstrates excellent problem-solving skills
Proven excellent verbal, written, and interpersonal communication skills
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
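This posting asks for model versioning and deployment with tools like MLflow or DVC. The core idea, one production version per model with superseded versions archived, can be sketched in a few lines of plain Python. This is NOT the MLflow API, just an illustration of the concept; all names are invented:

```python
class ModelRegistry:
    """Toy in-memory model registry with versioned stage transitions."""

    def __init__(self):
        self._versions = {}  # name -> list of {"version", "stage", "metric"}

    def register(self, name, metric):
        """Add a new version of the model, starting in Staging."""
        versions = self._versions.setdefault(name, [])
        entry = {"version": len(versions) + 1, "stage": "Staging", "metric": metric}
        versions.append(entry)
        return entry["version"]

    def promote(self, name, version):
        """Move a version to Production, archiving any previous Production version."""
        for entry in self._versions[name]:
            if entry["stage"] == "Production":
                entry["stage"] = "Archived"
        self._versions[name][version - 1]["stage"] = "Production"

    def production_version(self, name):
        for entry in self._versions[name]:
            if entry["stage"] == "Production":
                return entry["version"]

reg = ModelRegistry()
reg.register("risk-model", metric=0.81)
v2 = reg.register("risk-model", metric=0.86)
reg.promote("risk-model", v2)
print(reg.production_version("risk-model"))  # 2
```

Real registries add what this sketch omits: persistent storage, audit logs, and access control on stage transitions.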

Posted 1 month ago

Apply

16.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. What Primary Responsibilities: Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them. Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines Knows & brings in external ML frameworks and libraries Consistently avoids common pitfalls in model development and deployment How Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment. 
Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customers Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 4+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) 
Demonstrate hands-on knowledge of OpenSource adoption and use cases
Good understanding of Data/Information security
Proficient in Data Structures, ML Algorithms, and ML lifecycle
Product/Project/Program Related Tech Stack:
Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
Programming Languages: Python, R, Java
Data Processing: Pandas, NumPy, Spark
Visualization: Matplotlib, Seaborn, Plotly
Familiarity with model versioning tools (MLflow, etc.)
Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
GenAI: OpenAI, LangChain, RAG, etc.
Demonstrate good knowledge in Engineering Practices
Demonstrates excellent problem-solving skills
Proven excellent verbal, written, and interpersonal communication skills
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Looking for a skilled Senior Data Science Engineer with 6-12 years of experience to lead the development of advanced computer vision models and systems. The ideal candidate will have hands-on experience with state-of-the-art architectures and a deep understanding of the complete ML lifecycle. This position is based in Bengaluru. Roles and Responsibilities Lead the development and implementation of computer vision models for tasks such as object detection, tracking, image retrieval, and scene understanding. Design and execute end-to-end pipelines for data preparation, model training, evaluation, and deployment. Perform fine-tuning and transfer learning on large-scale vision-language models to meet application-specific needs. Optimize deep learning models for edge inference (NVIDIA Jetson, TensorRT, OpenVINO) and real-time performance. Develop scalable and maintainable ML pipelines using tools such as MLflow, DVC, and Kubeflow. Automate experimentation and deployment processes using CI/CD workflows. Collaborate cross-functionally with MLOps, backend, and product teams to align technical efforts with business needs. Monitor, debug, and enhance model performance in production environments. Stay up-to-date with the latest trends in CV/AI research and rapidly prototype new ideas for real-world use. Job Requirements 6-7+ years of hands-on experience in data science and machine learning, with at least 4 years focused on computer vision. Strong experience with deep learning frameworks: PyTorch (preferred), TensorFlow, Hugging Face Transformers. In-depth understanding and practical experience with class-incremental learning and lifelong learning systems. Proficient in Python, including data processing libraries like NumPy, Pandas, and OpenCV. Strong command of version control and reproducibility tools (e.g., MLflow, DVC, Weights & Biases). Experience with training and optimizing models for GPU inference and edge deployment (Jetson, Coral, etc.).
Familiarity with ONNX, TensorRT, and model quantization/conversion techniques. Demonstrated ability to analyze and work with large-scale visual datasets in real-time or near-real-time systems. Experience working in fast-paced startup environments with ownership of production AI systems. Exposure to cloud platforms such as AWS (SageMaker, Lambda), GCP, or Azure for ML workflows. Experience with video analytics, real-time inference, and event-based vision systems. Familiarity with monitoring tools for ML systems (e.g., Prometheus, Grafana, Sentry). Prior work in domains such as retail analytics, healthcare, or surveillance/IoT-based CV applications. Contributions to open-source computer vision libraries or publications in top AI/ML conferences (e.g., CVPR, NeurIPS, ICCV). Comfortable mentoring junior engineers and collaborating with cross-functional stakeholders.
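Class-incremental learning, called out in the requirements above, means adding new classes over time without retraining on old data. A toy nearest-class-mean classifier, one simple strategy used in lifelong-learning systems, shows the idea; the two-dimensional feature vectors below are hand-made stand-ins for real CNN embeddings:

```python
import math

class NearestClassMean:
    """Toy class-incremental classifier: store one mean vector per class."""

    def __init__(self):
        self.means = {}

    def add_class(self, label, features):
        """Register a new class from its example feature vectors."""
        n, dim = len(features), len(features[0])
        self.means[label] = [sum(f[i] for f in features) / n for i in range(dim)]

    def predict(self, x):
        """Return the label whose class mean is closest to x."""
        def dist(m):
            return math.sqrt(sum((a - b) ** 2 for a, b in zip(x, m)))
        return min(self.means, key=lambda label: dist(self.means[label]))

clf = NearestClassMean()
clf.add_class("cat", [[1.0, 0.0], [0.8, 0.2]])
clf.add_class("dog", [[0.0, 1.0], [0.2, 0.8]])  # added later, no retraining
print(clf.predict([0.9, 0.1]))  # 'cat'
```

The appeal for production CV systems is that adding a class touches only one stored mean, avoiding the catastrophic forgetting that naive fine-tuning causes.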

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: DevOps Engineer – AI/ML Infrastructure Location: Meril Healthcare Pvt. Ltd, IITM Research Park, Chennai. Parent company – Meril (https://www.merillife.com/). Shift: General shift - Monday to Saturday (9.30 am to 6.00 pm). Summary We are seeking a skilled DevOps Engineer with expertise in managing cloud-based AI/ML infrastructure, automation, CI/CD pipelines, and containerized deployments. The ideal candidate will work on AWS-based AI model deployment, database management, API integrations, and scalable infrastructure for AI inference workloads. Experience in ML model serving (MLflow, TensorFlow Serving, Triton Inference Server, BentoML) and on-prem/cloud DevOps will be highly valued. Key Responsibilities Cloud & Infrastructure Management: Manage and optimize cloud infrastructure on AWS (SageMaker, EC2, Lambda, RDS, DynamoDB, S3, CloudFormation). Design, implement, and maintain highly available, scalable AI/ML model deployment pipelines. Set up Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible. CI/CD & Automation: Develop and manage CI/CD pipelines using GitLab CI/CD, Jenkins, and AWS CodeBuild. Automate deployment of AI models and applications using Docker, Kubernetes (EKS). Write automation scripts in Bash, Python, or PowerShell for system tasks. APIs & AI Model Deployment: Deploy and manage Flask/FastAPI-based APIs for AI inference. Optimize ML model serving using MLflow, TensorFlow Serving, Triton Inference Server, and BentoML. Implement monitoring for AI workloads to ensure inference reliability and performance. Security, Monitoring & Logging: Implement AWS security best practices (IAM, VPC, Security Groups, Access Controls). Monitor infrastructure using Prometheus, Grafana, CloudWatch, or ELK Stack. Set up backup and disaster recovery strategies for databases, storage, and models. Database & Storage Management: Maintain and optimize MySQL (RDS) and MongoDB (DynamoDB) databases.
Handle structured (RDS) and unstructured (S3, DynamoDB) AI data storage. Improve data synchronization between AI models, applications, and web services. On-Prem & Hybrid Cloud Integration (Optional): Manage on-prem AI workloads with GPU acceleration. Optimize AI workloads across cloud and edge devices. Required Skills and Qualifications 3 to 5 years of experience in DevOps, Cloud Infrastructure, or AI/ML Ops. Expertise in AWS (SageMaker, EC2, Lambda, RDS, DynamoDB, S3). Experience with Docker & Kubernetes (EKS) for container orchestration. Proficiency in CI/CD tools (Jenkins, GitLab CI/CD, AWS CodeBuild). Strong scripting skills in Bash, Python, or PowerShell. Knowledge of Linux ecosystem (Ubuntu, RHEL, CentOS). Hands-on experience with ML model deployment (MLflow, TensorFlow Serving, Triton, BentoML). Strong understanding of networking, security, and monitoring. Experience with database management (MySQL, PostgreSQL, MongoDB). Preferred Skills AWS Certified DevOps Engineer, CKA (Kubernetes), or Terraform certification. Experience with hybrid cloud (AWS + on-prem GPU servers). Knowledge of edge AI deployment and real-time AI inference optimization. Interested? Please share your resume to priyadharshini.sridhar@merillife.com
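Monitoring AI workloads for inference reliability, as this role requires, usually means tracking latency percentiles against an SLO. A self-contained sketch of the idea; real deployments would export these numbers to Prometheus or CloudWatch, and the 250 ms SLO is an invented example:

```python
from collections import deque

class LatencyMonitor:
    """Track a rolling window of request latencies against a p95 SLO."""

    def __init__(self, window=100, slo_ms=250.0):
        self.samples = deque(maxlen=window)
        self.slo_ms = slo_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        """95th-percentile latency of the current window (nearest-rank)."""
        ordered = sorted(self.samples)
        return ordered[int(0.95 * (len(ordered) - 1))]

    def breached(self):
        """True when the rolling p95 exceeds the SLO, i.e. time to alert."""
        return self.p95() > self.slo_ms

mon = LatencyMonitor(slo_ms=250.0)
for ms in [120, 130, 110, 300, 125]:
    mon.record(ms)
print(mon.p95(), mon.breached())  # 130 False
```

A percentile-based gate like this is preferred over averaging because a single slow outlier (the 300 ms sample here) should not page anyone on its own.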

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Responsibilities Manage Data: Extract, clean, and structure both structured and unstructured data. Coordinate Pipelines: Utilize tools such as Airflow, Step Functions, or Azure Data Factory to orchestrate data workflows. Deploy Models: Develop, fine-tune, and deploy models using platforms like SageMaker, Azure ML, or Vertex AI. Scale Solutions: Leverage Spark or Databricks to handle large-scale data processing tasks. Automate Processes: Implement automation using tools like Docker, Kubernetes, CI/CD pipelines, MLFlow, Seldon, and Kubeflow. Collaborate Effectively: Work alongside engineers, architects, and business stakeholders to address and resolve real-world problems efficiently. Qualifications 3+ years of hands-on experience in MLOps (4-5 years of overall software development experience). Extensive experience with at least one major cloud provider (AWS, Azure, or GCP). Proficiency in using Databricks, Spark, Python, SQL, TensorFlow, PyTorch, and Scikit-learn. Expertise in debugging Kubernetes and creating efficient Dockerfiles. Experience in prototyping with open-source tools and scaling solutions effectively. Strong analytical skills, humility, and a proactive approach to problem-solving. Preferred Qualifications Experience with SageMaker, Azure ML, or Vertex AI in a production environment. Commitment to writing clean code, creating clear documentation, and maintaining concise pull requests. Skills: sql,kubeflow,spark,docker,databricks,ml,gcp,mlflow,kubernetes,aws,pytorch,azure,ci/cd,tensorflow,scikit-learn,seldon,python,mlops
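Orchestrators such as Airflow, Step Functions, or Azure Data Factory, named in the responsibilities above, fundamentally do one thing: run tasks in dependency order. A minimal sketch of that core behaviour with an invented three-task extract/clean/train pipeline (this is the concept, not any orchestrator's actual API):

```python
def run_dag(tasks, deps):
    """Run callables in dependency order. tasks: name -> callable;
    deps: name -> list of upstream task names that must run first."""
    done, order = set(), []

    def run(name):
        if name in done:
            return
        for upstream in deps.get(name, []):
            run(upstream)          # recurse into dependencies first
        tasks[name]()
        done.add(name)
        order.append(name)

    for name in tasks:
        run(name)
    return order

log = []
tasks = {
    "train":   lambda: log.append("train"),
    "extract": lambda: log.append("extract"),
    "clean":   lambda: log.append("clean"),
}
deps = {"clean": ["extract"], "train": ["clean"]}
order = run_dag(tasks, deps)
print(order)  # ['extract', 'clean', 'train']
```

Production orchestrators layer scheduling, retries, backfills, and distributed execution on top of exactly this topological-order core.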

Posted 1 month ago

Apply

5.0 years

4 - 7 Lacs

Thiruvananthapuram

On-site

Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence. Role Overview: We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready. What we are looking for in an ideal candidate: Design and manage CI/CD pipelines for both software applications and machine learning workflows. Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar. Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.). Build robust monitoring, logging, and alerting systems for AI applications. Manage containerized services with Docker and orchestration platforms like Kubernetes. Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment. Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure). Ensure system reliability, scalability, and security across all environments. Preferred Skills: What skills do you need? 5+ years of experience in DevOps, MLOps, or infrastructure engineering roles. Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads. Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI). Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks. Experience with ML pipelines, model versioning, and ML monitoring tools.
Scripting skills in Python, Bash, or similar for automation tasks. Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.). Understanding of ML lifecycle management and reproducibility. Preferred Qualifications: Experience with Kubeflow, MLflow, DVC, or Triton Inference Server. Exposure to data versioning, feature stores, and model registries. Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus. Background in software engineering, data engineering, or ML research is a bonus. What We Offer: Work on cutting-edge AI platforms and infrastructure. Cross-functional collaboration with top ML, research, and product teams. Competitive compensation package – no constraints for the right candidate.
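The role above centers on model versioning and registry workflows (MLflow, SageMaker Model Registry). As a concept-only illustration, not any real library's API, here is a minimal stdlib Python sketch of a model registry with staging-to-production promotion; all names (ModelRegistry, the "churn-model" entry, the S3 URIs) are hypothetical:

```python
# Minimal sketch of the model-versioning idea behind MLOps registries.
# Illustrative only: real systems (MLflow, SageMaker) persist this state.

class ModelRegistry:
    """Tracks model versions and their deployment stage."""

    def __init__(self):
        self._versions = {}  # model name -> list of version records

    def register(self, name, artifact_uri):
        """Add a new version in 'staging'; versions auto-increment."""
        versions = self._versions.setdefault(name, [])
        version = len(versions) + 1
        versions.append({"version": version,
                         "artifact_uri": artifact_uri,
                         "stage": "staging"})
        return version

    def promote(self, name, version):
        """Move one version to production, archiving any current one."""
        for v in self._versions[name]:
            if v["stage"] == "production":
                v["stage"] = "archived"
        self._versions[name][version - 1]["stage"] = "production"

    def production_version(self, name):
        for v in self._versions[name]:
            if v["stage"] == "production":
                return v["version"]
        return None


registry = ModelRegistry()
registry.register("churn-model", "s3://example-bucket/churn/v1")  # hypothetical URI
registry.register("churn-model", "s3://example-bucket/churn/v2")
registry.promote("churn-model", 2)
print(registry.production_version("churn-model"))  # 2
```

The key design point the sketch captures: promotion is an atomic stage transition, so at most one version of a model is ever "production".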

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 10 Lacs

Hyderābād

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. AWS Data Engineer - Senior We are seeking a highly skilled and motivated hands-on AWS Data Engineer with 5-10 years of experience in AWS Glue, PySpark, AWS Redshift, S3, and Python to join our dynamic team. As a Data Engineer, you will be responsible for designing, developing, and optimizing data pipelines and solutions that support business intelligence, analytics, and large-scale data processing. You will work closely with data scientists, analysts, and other engineering teams to ensure seamless data flow across our systems. Technical Skills: Must have strong experience in AWS data services such as Glue, Lambda, EventBridge, Kinesis, S3/EMR, Redshift, RDS, Step Functions, Airflow, and PySpark. Strong exposure to IAM, CloudTrail, cluster optimization, Python, and SQL. Should have expertise in data design, STTM, understanding of data models, data component design, automated testing, code coverage, UAT support, deployment and go-live. Experience with version control systems like SVN and Git. Create and manage AWS Glue crawlers and jobs to automate data cataloging and ingestion processes across various structured and unstructured data sources. Strong experience with AWS Glue: building ETL pipelines, managing crawlers, and working with the Glue Data Catalog. Proficiency in AWS Redshift: designing and managing Redshift clusters, writing complex SQL queries, and optimizing query performance. Enable data consumption from reporting and analytics business applications using AWS services (e.g., QuickSight, SageMaker, JDBC/ODBC connectivity, etc.)
Behavioural skills: Willing to work 5 days a week from ODC/client location (based on the project, can be hybrid 3 days a week). Ability to lead developers and engage with client stakeholders to drive technical decisions. Ability to do technical design and POCs: help build/analyse the logical data model, required entities, relationships, data constraints and dependencies focused on enabling reporting and analytics business use cases. Should be able to work in an Agile environment. Should have strong communication skills. Good to have: Exposure to Financial Services, Wealth and Asset Management; exposure to Data Science; exposure to full-stack technologies; GenAI will be an added advantage. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
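The posting above is built around ETL pipelines (Glue jobs feeding Redshift, consumed by reporting tools). Stripped of the AWS services, an ETL pass is extract, clean/transform, load, aggregate; a toy stdlib-only sketch with sqlite3 standing in for the warehouse (the rows and table name are invented for illustration):

```python
# Toy extract-transform-load pass; sqlite3 stands in for Redshift here.
import sqlite3

def run_etl(rows):
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    # Transform: drop malformed rows, normalise region names.
    clean = [(r["region"].strip().upper(), float(r["amount"]))
             for r in rows if r.get("amount") is not None]
    conn.executemany("INSERT INTO sales VALUES (?, ?)", clean)
    # Aggregate, as a reporting layer (e.g. QuickSight) would consume it.
    return dict(conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"))

totals = run_etl([
    {"region": " apac ", "amount": "100.0"},
    {"region": "APAC", "amount": "50"},
    {"region": "emea", "amount": None},   # malformed row: filtered out
    {"region": "EMEA", "amount": "75"},
])
print(totals)
```

In a Glue job the same shape appears with DynamicFrames and a Redshift sink instead of an in-memory table; the cleaning and aggregation logic is the part that carries over.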

Posted 1 month ago

Apply

5.0 years

3 - 5 Lacs

Hyderābād

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. ML Ops Engineer (Senior Consultant) Key Responsibilities: Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization. Required Skills and Experience: Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models. Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams. Beneficial Skills and Experience: Experience with containerization and orchestration tools (Docker, Kubernetes).
Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
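The MLOps role above lists monitoring deployed models among its responsibilities. One common monitoring check is input-drift detection: comparing live feature statistics against the training baseline. A minimal stdlib sketch of that idea only (production setups would use SageMaker Model Monitor or similar; the numbers and threshold here are invented):

```python
# Naive input-drift check: how far has the live mean moved, measured
# in baseline standard deviations?
import statistics

def drift_score(baseline, live):
    mu = statistics.fmean(baseline)
    sigma = statistics.stdev(baseline)
    return abs(statistics.fmean(live) - mu) / sigma

baseline = [10.0, 11.0, 9.0, 10.5, 9.5]   # feature values at training time
live_ok = [10.2, 9.8, 10.1]               # production traffic, no drift
live_shifted = [14.0, 15.0, 14.5]         # production traffic, drifted

THRESHOLD = 3.0  # alert if the mean moved more than 3 sigmas
print(drift_score(baseline, live_ok) > THRESHOLD)       # False
print(drift_score(baseline, live_shifted) > THRESHOLD)  # True
```

Real monitors track many features, use distribution tests rather than a mean shift, and feed alerts into retraining pipelines, but the compare-against-baseline structure is the same.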

Posted 1 month ago

Apply

150.0 years

5 - 7 Lacs

Gurgaon

On-site

You are as unique as your background, experience and point of view. Here, you’ll be encouraged, empowered and challenged to be your best self. You'll work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you'll have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families and communities around the world. Job Description: Principal Consultant - DevOps Are you ready to shine? At Sun Life, we empower you to be your most brilliant self. Who are we? Sun Life is a leading financial services company with a history of 150+ years that helps our clients achieve lifetime financial security and live healthier lives. We serve millions in Canada, the U.S., Asia, the U.K., and other parts of the world. We have a network of Sun Life advisors, third-party partners, and other distributors. Through them, we’re helping set our clients free to live their lives their way, from now through retirement. We’re working hard to support their wellness and health management goals, too. That way, they can enjoy what matters most to them. And that’s anything from running a marathon to helping their grandchildren learn to ride a bike. To do this, we offer a broad range of protection and wealth products and services to individuals, businesses, and institutions, including: Insurance. Life, health, wellness, disability, critical illness, stop-loss, and long-term care insurance. Investments. Mutual funds, segregated funds, annuities, and guaranteed investment products. Advice. Financial planning and retirement planning services. Asset management.
Pooled funds, institutional portfolios, and pension funds With innovative technology, a strong distribution network and long-standing relationships with some of the world’s largest employers, we are today providing financial security to millions of people globally. Sun Life is a leading financial services company that helps our clients achieve lifetime financial security and live healthier lives, with strong insurance, asset management, investments, and financial advice portfolios. At Sun Life, our asset management business draws on the talent and experience of professionals from around the globe. Sun Life Global Solutions (SLGS) Established in the Philippines in 1991 and in India in 2006, Sun Life Global Solutions, (formerly Asia Service Centres), a microcosm of Sun Life, is poised to harness the regions’ potential in a significant way - from India and the Philippines to the world. We are architecting and executing a BOLDER vision: being a Digital and Innovation Hub, shaping the Business, driving Transformation and superior Client experience by providing expert Technology, Business and Knowledge Services and advanced Solutions. We help our clients achieve lifetime financial security and live healthier lives – our core purpose and mission. Drawing on our collaborative and inclusive culture, we are reckoned as a ‘Great Place to Work’, ‘Top 100 Best Places to Work for Women’ and stand among the ‘Top 11 Global Business Services Companies’ across India and the Philippines. The technology function at Sun Life Global Solutions is geared towards growing our existing business, deepening our client understanding, managing new-age technology systems, and demonstrating thought leadership. We are committed to building greater domain expertise and engineering ability, delivering end to end solutions for our clients, and taking a lead in intelligent automation. 
Tech services at Sun Life Global Solutions have evolved in areas such as application development and management, Support, Testing, Digital, Data Engineering and Analytics, Infrastructure Services and Project Management. We are constantly expanding our strength in Information technology and are looking for fresh talents who can bring ideas and values aligning with our Digital strategy. Our Client Impact strategy is motivated by the need to create an inclusive culture, empowered by highly engaged people. We are entering a new world that focuses on doing purpose driven work. The kind that fills your day with excitement and determination, because when you love what you do, it never feels like work. We want to create an environment where you feel empowered to act and are surrounded by people who challenge you, support you and inspire you to become the best version of yourself. As an employer, we not only want to attract top talent, but we want you to have the best Sun Life Experience. We strive to Shine Together, Make Life Brighter & Shape the Future! What will you do? You will help implement automation, security, and speed of delivery solutions across Sun Life and act as a change agent for the adoption of a DevOps mindset. You will coach and mentor teams, IT leaders and business leaders and create and maintain ongoing learning journeys. You will play a critical role in supporting and guiding DevOps Engineers and technical leaders to ensure that DevOps practices are employed globally. You will act as a role model by demonstrating the right mindset including a test and learn attitude, a bias for action, a passion to innovate and a willingness to learn. You will lead a team of highly skilled and collaborative individuals and will lead new hire on-boarding, talent development, retention, and succession planning. 
Our engineering career framework helps our engineers to understand the scope, collaborative reach, and levers for impact at every job role and defines the key behaviors and deliverables specific to one’s role and team and plan their career with Sun Life. Your scope of work / key responsibilities: Analyze, investigate, and recommend solutions for continuous improvements, process enhancements, identify pain points, and more efficient workflows. Create templates, standards, and models to facilitate future implementations and adjust priorities when necessary. Demonstrate that you are a collaborative communicator with architects, designers, business system analysts, application analysts, operation teams and testing specialists to deliver fully automated ALM systems. Confidently speaking up, bringing people together, facilitating meetings, recording minutes and actions, and rallying the team towards a common goal Deploy, configure, manage, and perform ongoing maintenance of technical infrastructure including all DevOps tooling used by our Canadian IT squads Set-up and maintain fully automated CI/CD pipeline for multiple Java / .NET environments using tools like Bitbucket, Jenkins, Ansible, Docker etc. Guide development teams with the preparation of releases for production. 
This may include assisting in the automation of performance tests, validation of infrastructure requirements, and guiding the team with respect to system decisions. Create or improve the automated deployment processes, techniques, and tools. Troubleshoot and resolve technical operational issues related to IT infrastructure. Review and analyze organizational needs and goals to determine future impacts to applications and systems. Ensure information security standards and requirements are incorporated into all solutions. Stay current with trends in emerging technologies and how they could apply to Sun Life. Key Qualifications and experience: 10+ years of continuous integration and delivery (CI/CD) experience in a systems development life cycle environment using Bitbucket, Jenkins, CDD, etc. Self-sufficient and experienced with either modern programming languages (e.g. Java or C#), or scripting languages such as Python, YAML, or similar. Working knowledge of SQL, Tableau, Grafana. Advanced knowledge of DevOps with a security and automation mindset. Knowledge of using and configuring build tools and orchestration such as Jenkins, SonarQube, Checkmarx, Snyk, Artifactory, Azure DevOps, Docker, Kubernetes, OpenShift, Ansible, Continuous Delivery Director (CDD). Advanced knowledge of deployment (e.g., Ansible, Chef) and containerization (Docker/Kubernetes) tooling. IaaS/PaaS/SaaS deployment and operations experience. Experience with native mobile development on iOS and/or Android is an asset. Experience with source code management tools such as Bitbucket, Git, TFS. Technical Credentials: Java/Python, Jenkins, Ansible, Kubernetes, and so on. Primary Location: Gurugram/Bengaluru. Schedule: 12:00 PM to 8:30 PM. Job Category: IT - Application Development. Posting End Date: 29/06/2025

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

OPENTEXT OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. Your Impact We are seeking a skilled and experienced Software Engineer with expertise in Large Language Models (LLM), Java, Python, Kubernetes, Helm and cloud technologies like AWS. The ideal candidate will contribute to designing, developing, and maintaining scalable software solutions using microservices architecture. This role offers an exciting opportunity to work with cutting-edge technologies in a collaborative environment. What The Role Offers Design, develop, troubleshoot and debug software programs for software enhancements and new products. Integrate Large Language Models (LLMs) into business applications to enhance functionality and user experience. Develop and maintain transformer-based models. Develop RESTful APIs and ensure seamless integration across services. Collaborate with cross-functional teams to gather requirements and translate them into technical solutions. Implement best practices for cloud-native development using AWS services like EC2, Lambda, SageMaker, S3 etc. Deploy, manage, and scale containerized applications using Kubernetes (K8S) and Helm. Designs enhancements, updates, and programming changes for portions and subsystems of application software, utilities, databases, and Internet-related tools. Analyses design and determines coding, programming, and integration activities required based on general objectives and knowledge of overall architecture of product or solution. Collaborates and communicates with management, internal, and outsourced development partners regarding software systems design status, project progress, and issue resolution. 
Represents the software systems engineering team for all phases of larger and more complex development projects. Ensure system reliability, security, and performance through effective monitoring and troubleshooting. Write clean, efficient, and maintainable code following industry standards. Participate in code reviews, mentorship, and knowledge-sharing within the team. What You Need To Succeed Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent. Typically 3-5 years of experience. Strong understanding of Large Language Models (LLM) and experience applying them in real-world applications. Expertise in Elastic Search or similar search and indexing technologies. Expertise in designing and implementing microservices architecture. Solid experience with AWS services like EC2, VPC, ECR, EKS, SageMaker etc. for cloud deployment and management. Proficiency in container orchestration tools such as Kubernetes (K8S) and packaging/deployment tools like Helm. Strong problem-solving skills and the ability to troubleshoot complex issues. Strong experience in Java and Python development, with proficiency in frameworks like Spring Boot or Java EE. Should have good hands-on experience in designing and writing modular object-oriented code. Good knowledge of REST APIs, Spring, Spring Boot, Hibernate. Excellent analytical, troubleshooting and problem-solving skills. Ability to demonstrate effective teamwork both within the immediate team and across teams. Experience in working with version control and build tools like Git, GitLab, Maven, Jenkins, and GitLab CI. Excellent communication and collaboration skills. Familiarity with Python for LLM-related tasks. Working knowledge of RAG. Experience working with NLP frameworks such as Hugging Face, OpenAI, or similar. Knowledge of database systems like PostgreSQL, MongoDB, or DynamoDB. Experience with observability tools like Prometheus, Grafana, or ELK Stack.
Experience in working with event-driven architectures and messaging systems (e.g., Kafka, RabbitMQ). Experience with CI/CD pipelines, DevOps practices, and infrastructure as code (e.g., Terraform, CloudFormation). Familiar with Agile framework/SCRUM development methodologies One Last Thing OpenText is more than just a corporation, it's a global community where trust is foundational, the bar is raised, and outcomes are owned. Join us on our mission to drive positive change through privacy, technology, and collaboration. At OpenText, we don't just have a culture; we have character. Choose us because you want to be part of a company that embraces innovation and empowers its employees to make a difference. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace. 46999
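The OpenText role above asks for working knowledge of RAG. The retrieval step of RAG is simply "find the most relevant document and put it in the prompt"; here is a deliberately naive stdlib sketch using bag-of-words cosine similarity (real systems use embeddings and a vector store, and the two documents here are invented examples):

```python
# Naive retrieval step of a RAG pipeline: bag-of-words cosine similarity.
from collections import Counter
import math

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0) for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs):
    """Return the document most similar to the query."""
    q = Counter(query.lower().split())
    return max(docs, key=lambda d: cosine(q, Counter(d.lower().split())))

docs = [
    "Helm charts package Kubernetes manifests for repeatable deploys.",
    "Spring Boot applications expose REST APIs over embedded Tomcat.",
]
query = "how do I deploy to kubernetes with helm"
context = retrieve(query, docs)
# Augmentation step: retrieved context is prepended to the LLM prompt.
prompt = f"Answer using this context:\n{context}\n\nQuestion: {query}"
print(context)
```

Swapping the Counter similarity for dense embeddings (e.g. from a Hugging Face model) and the list for an index is what turns this sketch into a production retriever; the retrieve-then-augment shape stays the same.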

Posted 1 month ago

Apply

0 years

4 - 7 Lacs

Pune

Remote

Infrastructure Engineering Convera is looking for motivated and experienced Voice Engineers and professionals who are eager to expand their expertise into the dynamic world of Amazon Connect, a cutting-edge, cloud-based contact center solution that offers complete customization with scalable cloud technology. If you're looking to advance your career in software development, AWS, or AI, this is the perfect opportunity to upskill and work on innovative solutions. As a Voice Engineer, you will be responsible for: Implement and optimize Amazon Connect cloud-based contact center solutions, including call and queue flows, agent experience, call recording, metrics analysis, Contact Lens, and CTR data insights. Act as a consultative technology expert, guiding the planning, design, implementation, and maintenance of Amazon Connect architecture. Develop seamless interconnectivity between Amazon Connect services and related applications. Build and integrate applications using AWS services, such as CloudWatch, Kinesis, S3, Lex, and Polly. Design robust software solutions, algorithms, and cloud architectures tailored to product requirements. Participate in all phases of the software development lifecycle, from requirement analysis and technical design to prototyping, coding, testing, deployment, and support. Collaborate with Scrum Masters, QA teams, and developers to ensure agile delivery of projects. Troubleshoot and resolve performance issues and software bugs efficiently. Minimum Qualifications: Expertise in AWS Connect, Amazon Lex, Lambda Integration, S3, DynamoDB, CloudWatch, CloudFormation, IAM, CloudFront, JavaScript, Node.js, and Python (Amazon Connect / Amazon Lex experience is mandatory). Strong background in technical architecture, design, and implementation of Amazon Connect. Hands-on experience with telephony systems, VoIP technologies, and UCaaS solutions like Zoom Phone.
Familiarity with contact center technologies, IVR solutions, and automation strategies. Proficiency in modern DevOps tools and techniques, including GitHub, CI/CD pipelines. Knowledge of object-oriented programming languages (Java, C#, C++, Python, Ruby). Experience working with SQL databases and fundamental database concepts. Understanding of AI/ML cloud services such as Amazon SageMaker, Bedrock, and Amazon Q. Bachelor’s degree in Computer Science or a related field. Strong analytical, problem-solving, and communication skills. Ability to collaborate effectively with globally distributed teams. Preferred Qualifications: Experience working in an Agile DevOps environment. Knowledge of automated provisioning & maintenance in cloud environments. Innovative, self-motivated, and results-driven approach. Ability to thrive under pressure and meet tight deadlines. Location: Remote, India (WFH) About Convera Convera is the largest non-bank B2B cross-border payments company in the world. Formerly Western Union Business Solutions, we leverage decades of industry expertise and technology-led payment solutions to deliver smarter money movements to our customers – helping them capture more value with every transaction. Convera serves more than 30,000 customers ranging from small business owners to enterprise treasurers to educational institutions to financial institutions to law firms to NGOs. Our teams care deeply about the value we bring to our customers which makes Convera a rewarding place to work. This is an exciting time for our organization as we build our team with growth-minded, results-oriented people who are looking to move fast in an innovative environment. As a truly global company with employees in over 20 countries, we are passionate about diversity; we seek and celebrate people from different backgrounds, lifestyles, and unique points of view. We want to work with the best people and ensure we foster a culture of inclusion and belonging.
We offer an abundance of competitive perks and benefits including: Competitive salary Opportunity to earn an annual bonus. Great career growth and development opportunities in a global organization A flexible approach to work There are plenty of amazing opportunities at Convera for talented, creative problem solvers who never settle for good enough and are looking to transform Business to Business payments. #LI-KP1
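The Voice Engineer role above builds Amazon Connect call and queue flows. A contact flow is essentially a routing decision tree over caller input and business rules; a stdlib Python sketch of that idea only (the real artifact is Connect's flow definition, and the queue names and menu digits here are invented):

```python
# Toy contact-flow routing: keypad digit -> queue, with an
# after-hours fallback, mimicking an IVR menu's decision tree.

QUEUES = {"1": "billing", "2": "support"}

def route_call(dtmf_digit, business_hours=True):
    """Map caller keypad input to a queue name."""
    if not business_hours:
        return "voicemail"
    return QUEUES.get(dtmf_digit, "general")  # unknown input -> default queue

print(route_call("1"))                        # billing
print(route_call("9"))                        # general
print(route_call("2", business_hours=False))  # voicemail
```

In Connect the same branches become "Get customer input" and "Check hours of operation" blocks feeding "Transfer to queue" blocks; Lambda integrations let a function like this one make the decision dynamically.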

Posted 1 month ago

Apply

2.0 - 3.0 years

4 - 6 Lacs

Bengaluru

On-site

Job Information: Number of Positions: 1; Industry: Engineering; Date Opened: 06/09/2025; Job Type: Permanent; Work Experience: 2-3 years; City: Bangalore; State/Province: Karnataka; Country: India; Zip/Postal Code: 560037; Location: Bangalore. About Us CloudifyOps is a company with DevOps and Cloud in our DNA. CloudifyOps enables businesses to become more agile and innovative through a comprehensive portfolio of services that addresses hybrid IT transformation, Cloud transformation, and end-to-end DevOps Workflows. We are a proud Advanced Partner of Amazon Web Services and have deep expertise in Microsoft Azure and Google Cloud Platform solutions. We are passionate about what we do. The novelty and the excitement of helping our customers accomplish their goals drives us to become excellent at what we do. Job Description Culture at CloudifyOps: Working at CloudifyOps is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. About the Role: We are seeking a proactive and technically skilled AI/ML Engineer with 2–3 years of experience to join our growing technology team. The ideal candidate will have hands-on expertise in AWS-based machine learning, Agentic AI, and Generative AI tools, especially within the Amazon AI ecosystem. You will play a key role in building intelligent, scalable solutions that address complex business challenges. Key Responsibilities: 1. AWS-Based Machine Learning Develop, train, and fine-tune ML models on AWS SageMaker, Bedrock, and EC2. Implement serverless ML workflows using Lambda, Step Functions, and EventBridge. Optimize models for cost/performance using AWS Inferentia/Trainium. 2. MLOps & Productionization Build CI/CD pipelines for ML using AWS SageMaker Pipelines, MLflow, or Kubeflow. Containerize models with Docker and deploy via AWS EKS/ECS/Fargate.
Monitor models in production using AWS CloudWatch, SageMaker Model Monitor. 3. Agentic AI Development Design autonomous agent systems (e.g., AutoGPT, BabyAGI) for task automation. Integrate multi-agent frameworks (LangChain, AutoGen) with AWS services. Implement RAG (Retrieval-Augmented Generation) for agent knowledge enhancement. 4. Generative AI & LLMs Fine-tune and deploy LLMs (GPT-4, Claude, Llama 2/3) using LoRA/QLoRA. Build Generative AI apps (chatbots, content generators) with LangChain, LlamaIndex. Optimize prompts and evaluate LLM performance using AWS Bedrock/Amazon Titan. 5. Collaboration & Innovation Work with cross-functional teams to translate business needs into AI solutions. Collaborate with DevOps and Cloud Engineering teams to develop scalable, production-ready AI systems. Stay updated with cutting-edge AI research (arXiv, NeurIPS, ICML). 6. Governance & Documentation Implement model governance frameworks to ensure ethical AI/ML deployments. Design reproducible ML pipelines following MLOps best practices (versioning, testing, monitoring). Maintain detailed documentation for models, APIs, and workflows (Markdown, Sphinx, ReadTheDocs). Create runbooks for model deployment, troubleshooting, and scaling. Technical Skills Programming: Python (PyTorch, TensorFlow, Hugging Face Transformers). AWS: SageMaker, Lambda, ECS/EKS, Bedrock, S3, IAM. MLOps: MLflow, Kubeflow, Docker, GitHub Actions/GitLab CI. Generative AI: Prompt engineering, LLM fine-tuning, RAG, LangChain. Agentic AI: AutoGPT, BabyAGI, multi-agent orchestration. Data Engineering: SQL, PySpark, AWS Glue/EMR. Soft Skills Strong problem-solving and analytical thinking. Ability to explain complex AI concepts to non-technical stakeholders. What We’re Looking For Bachelor’s/Master’s in CS, AI, Data Science, or related field. 2-3 years of industry experience in AI/ML engineering. Portfolio of deployed ML/AI projects (GitHub, blog, case studies).
Good to have an AWS Certified Machine Learning Specialty certification. Why Join Us? Innovative Projects: Work on cutting-edge AI applications that push the boundaries of technology. Collaborative Environment: Join a team of passionate engineers and researchers committed to excellence. Career Growth: Opportunities for professional development and advancement in the rapidly evolving field of AI. Equal opportunity employer CloudifyOps is proud to be an equal opportunity employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, color, sex, religion, national origin, disability, pregnancy, marital status, sexual orientation, gender reassignment, veteran status, or other protected category.
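The CloudifyOps role above mentions multi-agent orchestration (LangChain, AutoGen). Underneath those frameworks is a dispatch loop: each task goes to the first agent that declares it can handle it. A minimal stdlib sketch of that pattern only; the Agent class, skill names, and tasks are all hypothetical, not any framework's API:

```python
# Toy multi-agent orchestration: dispatch each task to a capable agent.

class Agent:
    def __init__(self, name, skill):
        self.name, self.skill = name, skill

    def can_handle(self, task):
        return task["type"] == self.skill

    def run(self, task):
        return f"{self.name} completed {task['type']}"

def orchestrate(tasks, agents):
    """Route each task to the first agent that can handle it."""
    log = []
    for task in tasks:
        agent = next((a for a in agents if a.can_handle(task)), None)
        log.append(agent.run(task) if agent else f"no agent for {task['type']}")
    return log

agents = [Agent("researcher", "search"), Agent("writer", "summarize")]
log = orchestrate([{"type": "search"}, {"type": "summarize"},
                   {"type": "translate"}], agents)
print(log)
# ['researcher completed search', 'writer completed summarize',
#  'no agent for translate']
```

Frameworks like AutoGen replace the skill check with LLM-driven routing and let agents emit new tasks back into the loop, which is where the "autonomous" behaviour comes from.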

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Title: AI/ML Engineer Company: Cyfuture India Pvt. Ltd. Industry: IT Services and IT Consulting Location: Sector 81, NSEZ, Noida (5 Days Work From Office) Website: www.cyfuture.com About Cyfuture Cyfuture is a trusted name in IT services and cloud infrastructure, offering state-of-the-art data center solutions and managed services across platforms like AWS, Azure, and VMWare. We are expanding rapidly in system integration and managed services, building strong alliances with global OEMs like VMWare, AWS, Azure, HP, Dell, Lenovo, and Palo Alto. Position Overview We are hiring an experienced AI/ML Engineer to lead and shape our AI/ML initiatives. The ideal candidate will have hands-on experience in machine learning and artificial intelligence, with strong leadership capabilities and a passion for delivering production-ready solutions. This role involves end-to-end ownership of AI/ML projects, from strategy development to deployment and optimization of large-scale systems. Key Responsibilities Lead and mentor a high-performing AI/ML team. Design and execute AI/ML strategies aligned with business goals. Collaborate with product and engineering teams to identify impactful AI opportunities. Build, train, fine-tune, and deploy ML models in production environments. Manage operations of LLMs and other AI models using modern cloud and MLOps tools. Implement scalable and automated ML pipelines (e.g., with Kubeflow or MLRun). Handle containerization and orchestration using Docker and Kubernetes. Optimize GPU/TPU resources for training and inference tasks. Develop efficient RAG pipelines with low latency and high retrieval accuracy. Automate CI/CD workflows for continuous integration and delivery of ML systems. Key Skills & Expertise 1. Cloud Computing & Deployment Proficiency in AWS, Google Cloud, or Azure for scalable model deployment. Familiarity with cloud-native services like AWS SageMaker, Google Vertex AI, or Azure ML.
Expertise in Docker and Kubernetes for containerized deployments Experience with Infrastructure as Code (IaC) using tools like Terraform or CloudFormation. 2. Machine Learning & Deep Learning Strong command of frameworks: TensorFlow, PyTorch, Scikit-learn, XGBoost. Experience with MLOps tools for integration, monitoring, and automation. Expertise in pre-trained models, transfer learning, and designing custom architectures. 3. Programming & Software Engineering Strong skills in Python (NumPy, Pandas, Matplotlib, SciPy) for ML development. Backend/API development with FastAPI, Flask, or Django. Database handling with SQL and NoSQL (PostgreSQL, MongoDB, BigQuery). Familiarity with CI/CD pipelines (GitHub Actions, Jenkins). 4. Scalable AI Systems Proven ability to build AI-driven applications at scale. Handle large datasets, high-throughput requests, and real-time inference. Knowledge of distributed computing: Apache Spark, Dask, Ray. 5. Model Monitoring & Optimization Hands-on with model compression, quantization, and pruning. A/B testing and performance tracking in production. Knowledge of model retraining pipelines for continuous learning. 6. Resource Optimization Efficient use of compute resources: GPUs, TPUs, CPUs. Experience with serverless architectures to reduce cost. Auto-scaling and load balancing for high-traffic systems. 7. Problem-Solving & Collaboration Translate complex ML models into user-friendly applications. Work effectively with data scientists, engineers, and product teams. Write clear technical documentation and architecture reports. Show more Show less
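The RAG-pipeline responsibility above boils down to a retrieve-then-generate loop: embed the query, rank candidate documents by similarity, and pass the top hits to a generator. The sketch below illustrates only the retrieval step, using a toy bag-of-words cosine similarity; the function names (`embed`, `retrieve`) are illustrative, and a production pipeline would use a learned embedding model and a vector store instead.

```python
import math
from collections import Counter

def embed(text):
    # Toy bag-of-words "embedding"; real RAG systems use a trained
    # encoder (e.g. a sentence-transformer) producing dense vectors.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse term-count vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, docs, k=2):
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

docs = [
    "Kubernetes schedules containers across a cluster",
    "SageMaker hosts trained models behind an endpoint",
    "PostgreSQL is a relational database",
]
print(retrieve("which service hosts models", docs, k=1))
```

The retrieved passages would then be concatenated into the LLM prompt; latency and retrieval accuracy, both called out above, are tuned at this ranking step (index structure, embedding quality, value of k).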
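The model-compression skills listed above (quantization in particular) can be illustrated with a minimal sketch of symmetric linear post-training quantization, written in plain Python under simplifying assumptions: real deployments use framework tooling (e.g. PyTorch or TensorFlow quantization APIs) and per-channel scales, and the helper names here are hypothetical.

```python
def quantize(weights, bits=8):
    # Symmetric linear quantization: scale floats so the largest
    # magnitude maps to the signed-integer range, then round.
    qmax = 2 ** (bits - 1) - 1          # 127 for int8
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    # Recover approximate float weights from the integer codes.
    return [v * scale for v in q]

w = [0.52, -1.27, 0.003, 0.9]
q, s = quantize(w)
restored = dequantize(q, s)
```

The round-trip error per weight is bounded by half the scale, which is the trade-off this section of the role is about: smaller integer types shrink memory and speed up inference at a controlled cost in accuracy.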

Posted 1 month ago
