Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
1.0 - 3.0 years
4 - 8 Lacs
Hyderabad
Work from Office
What you will do
In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing. Be a key team member assisting in the design and development of the data pipeline. Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems. Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions. Take ownership of data pipeline projects from inception to deployment, managing scope, timelines, and risks. Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs. Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency. Implement data security and privacy measures to protect sensitive data. Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions. Collaborate and communicate effectively with product teams.
What we expect of you
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications and Experience:
Master's degree and 1 to 3 years of experience in Computer Science, IT, or related field OR Bachelor's degree and 3 to 5 years of experience in Computer Science, IT, or related field OR Diploma and 7 to 9 years of experience in Computer Science, IT, or related field.
Must-Have Skills:
Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, SparkSQL), workflow orchestration, and performance tuning of big data processing. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Excellent problem-solving skills and the ability to work with large, complex datasets.
Preferred Qualifications:
Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.
Professional Certifications: Certified Data Engineer / Data Analyst (preferred on Databricks or cloud environments). Certified Data Scientist (preferred on Databricks or cloud environments). Machine Learning Certification (preferred on Databricks or cloud environments).
Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills.
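The ETL and data-quality duties above can be sketched framework-agnostically; the plain-Python gate below maps directly onto a PySpark `filter`/`count` over a DataFrame. All function names, field names, and thresholds are illustrative, not from the posting:

```python
# Illustrative ETL step with a simple data-quality gate.
# validate_rows, transform, and the salary fields are hypothetical names.

def validate_rows(rows, required_fields, max_null_rate=0.05):
    """Reject the whole batch if too many rows are missing required fields."""
    if not rows:
        raise ValueError("empty batch")
    bad = sum(1 for r in rows if any(r.get(f) is None for f in required_fields))
    null_rate = bad / len(rows)
    if null_rate > max_null_rate:
        raise ValueError(f"null rate {null_rate:.2%} exceeds threshold")
    # Keep only complete rows.
    return [r for r in rows if all(r.get(f) is not None for f in required_fields)]

def transform(rows):
    # Example transformation: express an INR salary in lakhs.
    return [{**r, "salary_lacs": r["salary_inr"] / 100_000} for r in rows]

batch = [
    {"id": 1, "salary_inr": 600000},
    {"id": 2, "salary_inr": 800000},
    {"id": 3, "salary_inr": None},
]
clean = validate_rows(batch, ["id", "salary_inr"], max_null_rate=0.5)
result = transform(clean)
```

Failing the whole batch (rather than silently dropping rows) is the usual choice when downstream reports must not be built on partial data.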
Posted 6 days ago
0 years
0 Lacs
India
Remote
AI Opportunities with Soul AI’s Expert Community! Are you an MLOps Engineer ready to take your expertise to the next level? Soul AI (by Deccan AI) is building an elite network of AI professionals, connecting top-tier talent with cutting-edge projects.
Why Join?
- Above market-standard compensation
- Contract-based or freelance opportunities (2–12 months)
- Work with industry leaders solving real AI challenges
- Flexible work locations – Remote | Onsite | Hyderabad/Bangalore
Your Role:
- Architect and optimize ML infrastructure with Kubeflow, MLflow, SageMaker Pipelines
- Build CI/CD pipelines (GitHub Actions, Jenkins, GitLab CI/CD)
- Automate ML workflows (feature engineering, retraining, deployment)
- Scale ML models with Docker, Kubernetes, Airflow
- Ensure model observability, security, and cost optimization in cloud (AWS/GCP/Azure)
Must-Have Skills:
1. Proficiency in Python, TensorFlow, PyTorch, CI/CD pipelines
2. Hands-on experience with cloud ML platforms (AWS SageMaker, GCP Vertex AI, Azure ML)
3. Expertise in monitoring tools (MLflow, Prometheus, Grafana)
4. Knowledge of distributed data processing (Spark, Kafka)
(Bonus: Experience in A/B testing, canary deployments, serverless ML)
Next Steps:
1. Register on Soul AI’s website
2. Get shortlisted & complete screening rounds
3. Join our Expert Community and get matched with top AI projects
Don’t just find a job. Build your future in AI with Soul AI!
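The "automate ML workflows (retraining, deployment)" responsibility usually reduces to a scheduled policy like the one below; the metric names and thresholds are invented for illustration:

```python
# Sketch of an automated retraining trigger of the kind an MLOps pipeline
# might evaluate on a schedule; all metric names and thresholds are invented.

def should_retrain(live, baseline, max_accuracy_drop=0.02, max_drift=0.2):
    """Retrain when live accuracy degrades past a budget, or input drift
    exceeds a budget, relative to the last accepted baseline."""
    accuracy_drop = baseline["accuracy"] - live["accuracy"]
    return accuracy_drop > max_accuracy_drop or live.get("drift_score", 0.0) > max_drift

stable = {"accuracy": 0.95}
degraded = {"accuracy": 0.90, "drift_score": 0.05}   # accuracy fell 5 points
drifted = {"accuracy": 0.95, "drift_score": 0.35}    # inputs shifted
```

In a real pipeline this predicate would gate an orchestrator task (Airflow, Kubeflow) that kicks off the training DAG.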
Posted 6 days ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
ML Ops Engineer (Senior Consultant)
Key Responsibilities:
Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization.
Required Skills and Experience:
Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models. Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams.
Beneficial Skills and Experience:
Experience with containerization and orchestration tools (Docker, Kubernetes).
Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
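Model deployment with SageMaker/ECR typically revolves around a registry with versioning and stage promotion; the in-memory miniature below is a hypothetical illustration of that pattern, not the SageMaker or MLflow API:

```python
# Minimal, purely illustrative model registry with stage promotion, mimicking
# the Staging -> Production -> Archived flow that managed registries provide.

class ModelRegistry:
    def __init__(self):
        self._versions = {}   # name -> {version: stage}
        self._latest = {}     # name -> highest version number issued

    def register(self, name):
        """Register a new version of a model; new versions start in Staging."""
        v = self._latest.get(name, 0) + 1
        self._latest[name] = v
        self._versions.setdefault(name, {})[v] = "Staging"
        return v

    def promote(self, name, version):
        """Move a version to Production, archiving the previous one."""
        for v, stage in self._versions[name].items():
            if stage == "Production":
                self._versions[name][v] = "Archived"
        self._versions[name][version] = "Production"

    def production_version(self, name):
        for v, stage in self._versions[name].items():
            if stage == "Production":
                return v
        return None

registry = ModelRegistry()
v1 = registry.register("churn-model")
v2 = registry.register("churn-model")
registry.promote("churn-model", v1)   # v1 -> Production
registry.promote("churn-model", v2)   # v2 -> Production, v1 -> Archived
```

Archiving rather than deleting the old version is what makes instant rollback possible, which is the main operational reason registries exist.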
Posted 6 days ago
5.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
ML Ops Engineer (Senior Consultant)
Key Responsibilities:
Lead the design, implementation, and maintenance of scalable ML infrastructure. Collaborate with data scientists to deploy, monitor, and optimize machine learning models. Automate complex data processing workflows and ensure data quality. Optimize and manage cloud resources for cost-effective operations. Develop and maintain robust CI/CD pipelines for ML models. Troubleshoot and resolve advanced issues related to ML infrastructure and deployments. Mentor and guide junior team members, fostering a culture of continuous learning. Work closely with cross-functional teams to understand requirements and deliver innovative solutions. Drive best practices and standards for ML Ops within the organization.
Required Skills and Experience:
Minimum 5 years of experience in infrastructure engineering. Proficiency in using EMR (Elastic MapReduce) for large-scale data processing. Extensive experience with SageMaker, ECR, S3, Lambda functions, cloud capabilities, and deployment of ML models. Strong proficiency in Python scripting and other programming languages. Experience with CI/CD tools and practices. Solid understanding of the machine learning lifecycle and best practices. Strong problem-solving skills and attention to detail. Excellent communication skills and ability to work collaboratively in a team environment. Demonstrated ability to take ownership and drive projects to completion. Proven experience in leading and mentoring teams.
Beneficial Skills and Experience:
Experience with containerization and orchestration tools (Docker, Kubernetes).
Familiarity with data visualization tools and techniques. Knowledge of big data technologies (Spark, Hadoop). Experience with version control systems (Git). Understanding of data governance and security best practices. Experience with monitoring and logging tools (Prometheus, Grafana). Stakeholder management skills and ability to communicate technical concepts to non-technical audiences. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 6 days ago
5.0 years
6 - 8 Lacs
Hyderābād
Remote
Your opportunity
As a crucial member of our team, you'll play a pivotal role across the entire machine learning lifecycle, contributing to our conversational AI bots, RAG system, and traditional ML problem solving for our observability platform. Your tasks will encompass both operational and engineering aspects, including building production-ready inference pipelines, deploying and versioning models, and implementing continuous validation processes. On the LLM side you'll fine-tune generative AI models, design agentic language chains, and prototype recommender system experiments.
What you'll do
In this role, you'll have the opportunity to contribute significantly to our machine learning initiatives, shaping the future of AI-driven solutions in various domains. If you're passionate about pushing the boundaries of what's possible in machine learning and ready to take on diverse challenges, we encourage you to apply and join us in our journey towards innovation.
This role requires
Proficiency in software engineering design practices. Experience working with transformer models and text embeddings. Proven track record of deploying and managing ML models in production environments. Familiarity with common ML/NLP libraries such as PyTorch, TensorFlow, HuggingFace Transformers, and spaCy. 5+ years of developing production-grade applications in Python. Proficiency in Kubernetes and containers. Familiarity with concepts/libraries such as scikit-learn, Kubeflow, Argo, and Seldon. Expertise in Python, C++, Kotlin, or similar programming languages. Experience designing, developing, and testing scalable distributed systems. Familiarity with message broker systems (e.g., Kafka, RabbitMQ). Knowledge of application instrumentation and monitoring practices. Experience with ML workflow management tools such as Airflow and SageMaker. Fine-tuning generative AI models to enhance performance. Designing AI agents for conversational AI applications.
Experimenting with new techniques to develop models for observability use cases. Building and maintaining inference pipelines for efficient model deployment. Managing deployment and model versioning pipelines for seamless updates. Developing tooling to continuously validate models in production environments.
Bonus points if you have
Familiarity with the AWS ecosystem. Past projects involving the construction of agentic language chains.
Please note that visa sponsorship is not available for this position.
Fostering a diverse, welcoming and inclusive environment is important to us. We work hard to make everyone feel comfortable bringing their best, most authentic selves to work every day. We celebrate our talented Relics’ different backgrounds and abilities, and recognize the different paths they took to reach us – including nontraditional ones. Their experiences and perspectives inspire us to make our products and company the best they can be. We’re looking for people who feel connected to our mission and values, not just candidates who check off all the boxes. If you require a reasonable accommodation to complete any part of the application or recruiting process, please reach out to resume@newrelic.com. We believe in empowering all Relics to achieve professional and business success through a flexible workforce model. This model allows us to work in a variety of workplaces that best support our success, including fully office-based, fully remote, or hybrid.
Our hiring process
In compliance with applicable law, all persons hired will be required to verify identity and eligibility to work and to complete employment eligibility verification. Note: Our stewardship of the data of thousands of customers means that a criminal background check is required to join New Relic.
We will consider qualified applicants with arrest and conviction records based on individual circumstances and in accordance with applicable law including, but not limited to, the San Francisco Fair Chance Ordinance. Headhunters and recruitment agencies may not submit resumes/CVs through this website or directly to managers. New Relic does not accept unsolicited headhunter and agency resumes, and will not pay fees to any third-party agency or company that does not have a signed agreement with New Relic. Candidates are evaluated based on qualifications, regardless of race, religion, ethnicity, national origin, sex, sexual orientation, gender expression or identity, age, disability, neurodiversity, veteran or marital status, political viewpoint, or other legally protected characteristics. Review our Applicant Privacy Notice at https://newrelic.com/termsandconditions/applicant-privacy-policy
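The transformer-and-embeddings work described above (the RAG system in particular) rests on nearest-neighbour retrieval by cosine similarity. This sketch uses hand-written 2-D vectors in place of a real text-embedding model; document texts and vectors are invented:

```python
# Core of embedding-based retrieval as used in RAG pipelines. A production
# system would embed query and documents with a transformer model and use a
# vector index; the ranking logic is the same.
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec, corpus, k=1):
    """Return the k documents whose embeddings are most similar to the query."""
    ranked = sorted(corpus, key=lambda d: cosine(query_vec, d["vec"]), reverse=True)
    return ranked[:k]

docs = [
    {"text": "error budgets", "vec": [1.0, 0.0]},
    {"text": "latency alerts", "vec": [0.0, 1.0]},
]
top = retrieve([0.9, 0.1], docs, k=1)   # query vector "close to" error budgets
```

The retrieved snippets are then prepended to the LLM prompt, which is the "augmented" part of retrieval-augmented generation.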
Posted 6 days ago
4.0 years
11 Lacs
Mohali
On-site
Skill Sets:
Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow). Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models. Strong experience in NLP, fine-tuning transformer models, and dataset preparation. Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (SageMaker, Vertex AI). Experience in containerization (Docker, Kubernetes) and CI/CD pipelines. Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning). Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection.
Roles and Responsibilities:
Design and implement end-to-end ML pipelines from data ingestion to production. Develop, fine-tune, and optimize ML models, ensuring high performance and scalability. Compare and evaluate models using key metrics (F1-score, AUC-ROC, BLEU, etc.). Automate model retraining, monitoring, and drift detection. Collaborate with engineering teams for seamless ML integration. Mentor junior team members and enforce best practices.
Job Type: Full-time
Pay: Up to ₹1,100,000.00 per year
Schedule: Day shift, Monday to Friday
Application Question(s): How soon can you join us?
Experience: Total: 4 years (Required); Data Science roles: 3 years (Required)
Work Location: In person
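Of the evaluation metrics listed (F1-score, AUC-ROC, BLEU), F1 is the easiest to show from first principles; `sklearn.metrics.f1_score` computes the same quantity from the same confusion-matrix counts:

```python
# F1 from confusion-matrix counts: the harmonic mean of precision and recall.

def f1_score(y_true, y_pred):
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# One true positive, one false positive, one false negative:
# precision = recall = 0.5, so F1 = 0.5.
f1 = f1_score([1, 1, 0, 0], [1, 0, 1, 0])
```

Unlike accuracy, F1 ignores true negatives, which is why it is the default metric when the positive class is rare.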
Posted 6 days ago
2.0 years
5 - 9 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India; Hyderabad, Telangana, India
Qualification:
Strong experience in Python. 2+ years’ experience of working on feature/data pipelines using PySpark. Understanding and experience around data science. Exposure to AWS cloud services such as SageMaker, Bedrock, Kendra, etc. Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices. Experience with statistical models, e.g., multinomial logistic regression. Technical architecture, design, deployment, and operational-level knowledge. Exploratory data analysis. Knowledge of model building, hyperparameter tuning, and model performance metrics. Statistics knowledge (probability distributions, hypothesis testing). Time series modelling, forecasting, image/video analytics, and natural language processing (NLP).
Good To Have:
Experience researching and applying large language and generative AI models. Experience with LangChain, LlamaIndex, foundation model tuning, data augmentation, and performance evaluation frameworks. Able to provide analytical expertise in the process of model development, refining, and implementation in a variety of analytics problems. Knowledge of Docker and Kubernetes.
Skills Required: Machine Learning, Natural Language Processing, AWS SageMaker, Python
Role:
Generate actionable insights for business improvements. Ability to understand business requirements. Write clean, efficient, and reusable code following best practices. Troubleshoot and debug applications to ensure optimal performance. Write unit test cases. Collaborate with cross-functional teams to define and deliver new features. Use case derivation and solution creation from structured/unstructured data. Actively drive a culture of knowledge-building and sharing within the team. Experience applying theoretical models in an applied environment.
MLOps, Data Pipeline, Data Engineering. Statistics knowledge (probability distributions, hypothesis testing).
Experience: 4 to 5 years
Job Reference Number: 13027
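The hypothesis-testing knowledge this listing asks for often shows up as A/B comparisons of rates. A two-proportion z-test can be written from scratch; a real analysis would also convert z to a p-value (e.g. via `scipy.stats.norm`), and all counts below are invented:

```python
# Two-proportion z-test: is conversion rate A different from rate B?
# Under H0 (equal rates), z is approximately standard normal, so |z| > 1.96
# corresponds to significance at the usual 5% level (two-sided).
import math

def two_proportion_z(successes_a, n_a, successes_b, n_b):
    p_a, p_b = successes_a / n_a, successes_b / n_b
    p_pool = (successes_a + successes_b) / (n_a + n_b)       # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

z_null = two_proportion_z(50, 1000, 50, 1000)    # identical rates -> z = 0
z_lift = two_proportion_z(120, 1000, 100, 1000)  # 12% vs 10%
```

Here `z_lift` comes out around 1.4, below 1.96, so a 2-point lift on 1000 samples per arm is not yet significant; that is exactly the kind of call this test exists to make.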
Posted 6 days ago
2.0 - 4.0 years
2 - 8 Lacs
Noida
On-site
Noida, Uttar Pradesh, India; Gurugram, Haryana, India; Indore, Madhya Pradesh, India; Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India
Qualification:
2-4 years of experience in designing, developing, and training machine learning models using diverse algorithms and techniques, including deep learning, NLP, computer vision, and time series analysis. Proven ability to optimize model performance through experimentation with architectures, hyperparameter tuning, and evaluation metrics. Hands-on experience in processing large datasets, including preprocessing, feature engineering, and data augmentation. Demonstrated ability to deploy trained AI/ML models to production using frameworks like Kubernetes and cloud-based ML platforms. Solid understanding of monitoring and logging for performance tracking. Experience in exploring new AI/ML methodologies and documenting the development and deployment lifecycle, including performance metrics. Familiarity with AWS services, particularly SageMaker, is expected. Excellent communication, presentation, and interpersonal skills are essential.
Good to have: Knowledge of GenAI (LangChain, foundation model tuning, and GPT-3). Amazon AWS Certified Machine Learning - Specialty certification.
Skills Required: Machine Learning, LangChain, AWS SageMaker, Python
Role:
Explore different models and transform data science prototypes for a given problem. Analyze datasets and perform data enrichment, feature engineering, and model training. Able to write code using Python, Pandas, and DataFrame APIs. Develop machine learning applications according to requirements. Perform statistical analysis and fine-tuning using test results. Collaborate with data engineers & architects to implement and deploy scalable solutions. Encourage continuous innovation and out-of-the-box thinking. Experience applying theoretical models in an applied environment.
Experience: 1 to 3 years
Job Reference Number: 13047
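Hyperparameter tuning, mentioned in this listing, is in its simplest form an exhaustive grid search. The toy objective below stands in for "train a model and return its validation score" (libraries like scikit-learn package the same loop as `GridSearchCV`):

```python
# Exhaustive grid search: evaluate every combination, keep the best.
from itertools import product

def grid_search(train_fn, param_grid):
    """Call train_fn with every parameter combination; return the best."""
    best_score, best_params = float("-inf"), None
    keys = list(param_grid)
    for values in product(*(param_grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = train_fn(**params)
        if score > best_score:
            best_score, best_params = score, params
    return best_params, best_score

# Invented stand-in for training: peaks at lr=0.1, depth=4.
def toy_score(lr, depth):
    return -(lr - 0.1) ** 2 - (depth - 4) ** 2

best, score = grid_search(toy_score, {"lr": [0.01, 0.1, 1.0], "depth": [2, 4, 8]})
```

The cost is the product of the grid sizes (here 3 × 3 = 9 training runs), which is why random or Bayesian search replaces grids as the parameter count grows.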
Posted 6 days ago
16.0 years
1 - 6 Lacs
Noida
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: WHAT Business Knowledge: Capable of understanding the requirements for the entire project (not just own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them. Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines Knows & brings in external ML frameworks and libraries Consistently avoids common pitfalls in model development and deployment HOW Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment. 
Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customers Should make problems open new doors for innovations Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) 
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 5+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) 
Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLflow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, LangChain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
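The data-versioning tools this posting lists (DVC, Delta Lake, LakeFS) share one core idea, content addressing: a version id is derived from the data itself, so identical data deduplicates and any change yields a new id. A miniature, purely illustrative version:

```python
# DVC-style content addressing in miniature. A real tool hashes files and
# stores them in a cache keyed by hash; here we hash an in-memory record list.
import hashlib
import json

def dataset_version(records):
    """Deterministic short version id for a list of JSON-serializable records."""
    # Canonical serialization so field order never changes the hash.
    canonical = json.dumps(records, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:12]

v1 = dataset_version([{"id": 1, "label": "a"}])
v2 = dataset_version([{"id": 1, "label": "a"}])   # same data, same id
v3 = dataset_version([{"id": 1, "label": "b"}])   # any change, new id
```

Pinning a model run to such an id is what makes "which data trained this model?" answerable months later.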
Posted 6 days ago
12.0 years
5 - 6 Lacs
Indore
On-site
Indore, Madhya Pradesh, India
Qualification:
BTech degree in computer science, engineering or related field of study, or 12+ years of related work experience. 7+ years design & implementation experience with large-scale data-centric distributed applications. Professional experience architecting and operating cloud-based solutions with a good understanding of core disciplines like compute, networking, storage, security, databases, etc. Good understanding of data engineering concepts like storage, governance, cataloging, data quality, data modeling, etc. Good understanding of various architecture patterns like data lake, data lakehouse, data mesh, etc. Good understanding of data warehousing concepts, with hands-on experience working with tools like Hive, Redshift, Snowflake, Teradata, etc. Experience migrating or transforming legacy customer solutions to the cloud. Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, DataZone, etc. Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, HBase, etc., and other competent tools and technologies. Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, Rekognition, etc., in combination with SageMaker is good to have. Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, infrastructure-as-code, and more. Experience with a programming or scripting language – Python/Java/Scala. AWS Professional/Specialty certification or relevant cloud expertise.
Skills Required: AWS, Big Data, Spark, Technical Architecture
Role:
Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. Capable of leading a technology team, inculcating an innovative mindset, and enabling fast-paced deliveries.
Able to adapt to new technologies, learn quickly, and manage high ambiguity. Ability to work with business stakeholders, attend/drive various architectural, design and status calls with multiple stakeholders. Exhibit good presentation skills with a high degree of comfort speaking with executives, IT Management, and developers. Drive technology/software sales or pre-sales consulting discussions Ensure end-to-end ownership of all tasks being aligned. Ensure high quality software development with complete documentation and traceability. Fulfil organizational responsibilities (sharing knowledge & experience with other teams / groups) Conduct technical training(s)/session(s), write whitepapers/ case studies / blogs etc. Experience : 10 to 18 years Job Reference Number : 12895
Posted 6 days ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Responsibilities:
Development of AI/ML models and workflows to apply advanced algorithms and machine learning. Enable the team to run an automated design engine. Create design standards and assurance processes for easily deployable and scalable models. Ensure successful developments: be a technical leader through strong example and training of more junior engineers, documenting all relevant product and design information to educate others on novel design techniques and provide guidance on product usage. CI/CD pipeline (Azure DevOps/Git) integration as code repository.
Minimum Qualifications (Experience And Skills)
5+ years of data science experience. A strong software engineering background with emphasis on C/C++ or Python. 1+ years of experience with AWS SageMaker services. Exposure to AWS Lambda, API Gateway, AWS Amplify, AWS Serverless, AWS Cognito, and AWS security. Experience in debugging complex issues with a focus on object-oriented software design and development. Experience with optimization techniques and algorithms. Experience developing artificial neural networks and deep neural networks. Previous experience working in an Agile environment and collaborating with multi-disciplinary teams. Ability to communicate and document design work with clarity and completeness. Previous experience working on machine learning projects. Team player with a strong sense of urgency to meet product requirements with punctuality and professionalism.
Preferred Qualifications
Programming experience in Perl / Python / R / Matlab / shell scripting. Knowledge of neural networks, with hands-on experience using ML frameworks such as TensorFlow or PyTorch. Knowledge of Convolutional Neural Networks (CNNs) and RNN/LSTMs. Knowledge of data management fundamentals and data storage principles. Knowledge of distributed systems as it pertains to data storage and computing. Knowledge of reinforcement learning techniques. Knowledge of evolutionary algorithms. AWS certification.
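The CNN knowledge this listing asks for bottoms out in the 2-D convolution, written out here in plain Python over nested lists (frameworks like PyTorch apply the same operation to batched tensors, with learned kernels):

```python
# A single 'valid' 2-D convolution: slide the kernel over the image and take
# the elementwise product-sum at each position. No padding, stride 1.

def conv2d(image, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out_h = len(image) - kh + 1
    out_w = len(image[0]) - kw + 1
    out = [[0] * out_w for _ in range(out_h)]
    for i in range(out_h):
        for j in range(out_w):
            out[i][j] = sum(
                image[i + di][j + dj] * kernel[di][dj]
                for di in range(kh) for dj in range(kw)
            )
    return out

# A horizontal gradient kernel responds exactly at the dark-to-bright edge.
edge = conv2d(
    [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [0, 0, 1, 1]],
    [[-1, 1]],
)
```

In a trained CNN the kernel values are not hand-picked like this one; they are learned by backpropagation, and dozens of them run in parallel per layer.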
Posted 1 week ago
10.0 years
0 Lacs
Kochi, Kerala, India
On-site
Role Description Roles and Responsibilities: Architecture & Infrastructure Design: Architect scalable, resilient, and secure AI/ML infrastructure on AWS using services like EC2, SageMaker, Bedrock, VPC, RDS, DynamoDB, and CloudWatch. Develop Infrastructure as Code (IaC) using Terraform, and automate deployments with CI/CD pipelines. Optimize cost and performance of cloud resources used for AI workloads. AI Project Leadership: Translate business objectives into actionable AI strategies and solutions. Oversee the entire AI lifecycle, from data ingestion, model training, and evaluation to deployment and monitoring. Drive roadmap planning, delivery timelines, and project success metrics. Model Development & Deployment: Lead selection and development of AI/ML models, particularly for NLP, GenAI, and AIOps use cases. Implement frameworks for bias detection, explainability, and responsible AI. Enhance model performance through tuning and efficient resource utilization. Security & Compliance: Ensure data privacy, security best practices, and compliance with IAM policies, encryption standards, and regulatory frameworks. Perform regular audits and vulnerability assessments to ensure system integrity. Team Leadership & Collaboration: Lead and mentor a team of cloud engineers, ML practitioners, software developers, and data analysts. Promote cross-functional collaboration with business and technical stakeholders. Conduct technical reviews and ensure delivery of production-grade solutions. Monitoring & Maintenance: Establish robust model monitoring, alerting, and feedback loops to detect drift and maintain model reliability. Ensure ongoing optimization of infrastructure and ML pipelines. Must-Have Skills: 10+ years of experience in IT with 4+ years in AI/ML leadership roles. Strong hands-on experience in AWS services: EC2, SageMaker, Bedrock, RDS, VPC, DynamoDB, CloudWatch. Expertise in Python for ML development and automation. Solid understanding of Terraform, Docker, Git, and CI/CD pipelines.
Proven track record in delivering AI/ML projects into production environments. Deep understanding of MLOps, model versioning, monitoring, and retraining pipelines. Experience in implementing Responsible AI practices, including fairness, explainability, and bias mitigation. Knowledge of cloud security best practices and IAM role configuration. Excellent leadership, communication, and stakeholder management skills. Good-to-Have Skills: AWS Certifications such as AWS Certified Machine Learning - Specialty or AWS Certified Solutions Architect. Familiarity with data privacy laws and frameworks (GDPR, HIPAA). Experience with AI governance and ethical AI frameworks. Expertise in cost optimization and performance tuning for AI on the cloud. Exposure to LangChain, LLMs, Kubeflow, or GCP-based AI services. Skills: Enterprise Architecture, Enterprise Architect, AWS, Python
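The monitoring-and-drift responsibility above can be sketched in a few lines: compare a live window of a feature against a reference window and flag large shifts. A toy illustration, not a production drift detector; the threshold and window values are invented:

```python
# Toy data-drift check: how far has the live feature mean moved from the
# reference mean, measured in reference standard deviations?
import statistics

def drift_score(reference, live):
    """Absolute mean shift of `live` in units of the reference stdev."""
    ref_mean = statistics.fmean(reference)
    ref_sd = statistics.stdev(reference)
    return abs(statistics.fmean(live) - ref_mean) / ref_sd

def has_drifted(reference, live, threshold=3.0):
    """Alert when the shift exceeds the (illustrative) threshold."""
    return drift_score(reference, live) > threshold
```

Real systems layer this kind of statistic into scheduled jobs with alerting and feedback loops, as the posting describes.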
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Us: Athena is India's largest institution in the "premium undergraduate study abroad" space. Founded 10 years ago by two Princeton graduates, Poshak Agrawal and Rahul Subramaniam, Athena is headquartered in Gurgaon, with offices in Mumbai and Bangalore, and caters to students from 26 countries. Athena’s vision is to help students become the best version of themselves. Athena’s transformative, holistic life coaching program embraces both depth and breadth, sciences and the humanities. Athena encourages students to deepen their theoretical knowledge and apply it to address practical issues confronting society, both locally and globally. Through our flagship program, our students have gotten into various universities, including Harvard University, Princeton University, Yale University, Stanford University, University of Cambridge, MIT, Brown, Cornell University, University of Pennsylvania, and the University of Chicago, among others. Learn more about Athena: https://www.athenaeducation.co.in/article.aspx Role Overview We are looking for an AI/ML Engineer who can mentor high-potential scholars in creating impactful technology projects. This role requires a blend of strong engineering expertise, the ability to distill complex topics into digestible concepts, and a deep passion for student-driven innovation. You’ll help scholars explore the frontiers of AI, from machine learning models to generative AI systems, while coaching them in best practices and applied engineering. Key Responsibilities: Guide scholars through the full AI/ML development cycle, from problem definition, data exploration, and model selection to evaluation and deployment. Teach and assist in building: Supervised and unsupervised machine learning models. Deep learning networks (CNNs, RNNs, Transformers). NLP tasks such as classification, summarization, and Q&A systems. Provide mentorship in Prompt Engineering: Craft optimized prompts for generative models like GPT-4 and Claude.
Teach the principles of few-shot, zero-shot, and chain-of-thought prompting. Experiment with fine-tuning and embeddings in LLM applications. Support scholars with real-world datasets (e.g., Kaggle, open data repositories) and help integrate APIs, automation tools, or MLOps workflows. Conduct internal training and code reviews, ensuring technical rigor in projects. Stay updated with the latest research, frameworks, and tools in the AI ecosystem. Technical Requirements: Proficiency in Python and ML libraries: scikit-learn, XGBoost, Pandas, NumPy. Experience with deep learning frameworks: TensorFlow, PyTorch, Keras. Strong command of machine learning theory, including: bias-variance tradeoff, regularization, and model tuning; cross-validation, hyperparameter optimization, and ensemble techniques. Solid understanding of data processing pipelines, data wrangling, and visualization (Matplotlib, Seaborn, Plotly). Advanced AI & NLP: Experience with transformer architectures (e.g., BERT, GPT, T5, LLaMA). Hands-on with LLM APIs: OpenAI (ChatGPT), Anthropic, Cohere, Hugging Face. Understanding of embedding-based retrieval, vector databases (e.g., Pinecone, FAISS), and Retrieval-Augmented Generation (RAG). Familiarity with AutoML tools, MLflow, Weights & Biases, and cloud AI platforms (AWS SageMaker, Google Vertex AI). Prompt Engineering & GenAI: Proficiency in crafting effective prompts using instruction tuning, role-playing and system prompts, and prompt chaining tools like LangChain or LlamaIndex. Understanding of AI safety, bias mitigation, and interpretability. Required Qualifications: Bachelor’s degree from a Tier-1 engineering college in Computer Science, Engineering, or a related field. 2-5 years of relevant experience in ML/AI roles. Portfolio of projects or publications in AI/ML (GitHub, blogs, competitions, etc.). Passion for education, mentoring, and working with high school scholars.
Excellent communication skills, with the ability to convey complex concepts to a diverse audience. Preferred Qualifications: Prior experience in student mentorship, teaching, or edtech. Exposure to Arduino, Raspberry Pi, or IoT for integrated AI/ML projects. Strong storytelling and documentation abilities to help scholars write compelling project reports and research summaries.
Posted 1 week ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Job Title: AI Lead - Video & Image Analytics, GenAI & NLP (AWS) Location: Chennai Company: Datamoo AI About Us: Datamoo AI is an innovative AI-driven company focused on developing cutting-edge solutions in workforce and contract management, leveraging AI for automation, analytics, and optimization. We are building intelligent systems that enhance business efficiency through advanced AI models in video analytics, image processing, Generative AI, and NLP, deployed on AWS. Job Overview: We are seeking a highly skilled and experienced AI Lead to drive the development of our AI capabilities. This role requires expertise in video analytics, image analytics, Generative AI, and Natural Language Processing (NLP), along with hands-on experience in deploying AI solutions on AWS. The AI Lead will be responsible for leading a team of AI engineers, researchers, and data scientists, overseeing AI strategy, and ensuring the successful execution of AI-powered solutions. Key Responsibilities: Lead and mentor a team of AI engineers and data scientists to develop innovative AI-driven solutions. Design and implement AI models for video analytics, image processing, and NLP applications. Drive the development of Generative AI applications tailored to our product needs. Optimize and deploy AI/ML models on AWS using cloud-native services like SageMaker, Lambda, and EC2. Collaborate with cross-functional teams to integrate AI solutions into Datamoo AI’s workforce and contract management applications. Ensure AI solutions are scalable, efficient, and aligned with business objectives. Stay updated with the latest advancements in AI and ML and drive adoption of new technologies where applicable. Define AI research roadmaps and contribute to intellectual property development. Required Skills & Qualifications: 4+ years of experience in AI, ML, or Data Science with a focus on video/image analytics, NLP, and GenAI. 
Strong hands-on experience with deep learning frameworks such as TensorFlow, PyTorch, or OpenCV. Expertise in Generative AI, including transformer models (GPT, BERT, DALL·E, etc.). Proficiency in computer vision techniques, including object detection, recognition, and tracking. Strong experience in NLP models, including text summarization, sentiment analysis, and chatbot development. Proven track record of deploying AI solutions on AWS (SageMaker, EC2, Lambda, S3, etc.). Strong leadership skills with experience in managing AI/ML teams. Proficiency in Python, SQL, and cloud computing architectures. Excellent problem-solving skills and ability to drive AI strategy and execution. Preferred Qualifications: Experience with MLOps, model monitoring, and AI governance. Knowledge of blockchain and AI-powered contract management systems. Understanding of edge AI deployment for real-time analytics. Published research papers or contributions to open-source AI projects. What We Offer: Opportunity to lead AI innovation in a fast-growing AI startup. Collaborative work environment with cutting-edge AI technologies. Competitive salary and stock options. Flexible work environment (remote/hybrid options available). Access to AI research conferences and continuous learning programs. If you are an AI expert passionate about pushing the boundaries of AI and leading a dynamic team, we’d love to hear from you! How to Apply: Send your resume and a cover letter to hr@datamoo.ai.
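The object-detection work this role covers is typically scored with intersection-over-union (IoU), the overlap ratio between a predicted and a ground-truth box. A minimal self-contained sketch, with boxes given as (x1, y1, x2, y2):

```python
# Intersection-over-Union between two axis-aligned boxes.
# Illustrative only; detection frameworks ship their own vectorised versions.

def iou(a, b):
    """IoU of boxes a, b given as (x1, y1, x2, y2) corner coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0
```

A detection is usually counted as correct when its IoU with a ground-truth box exceeds a threshold such as 0.5.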
Posted 1 week ago
15.0 - 20.0 years
15 - 20 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Job description Location: Pan India Grade: E1 The Opportunity: Capgemini is seeking a Director/Senior Director level Executive for AWS Practice Lead. This person should have: 15+ years of experience with at least 10 in the Data and Analytics domain, of which a minimum of 3 years on big data and Cloud. Multi-skilled professional with strong experience in architecture and advisory, offer and asset creation, and people hiring and training. Experience on at least 3 sizeable AWS engagements spanning over 18 months as a Managing Architect/Advisor, preferably both migration and cloud-native implementations. Hands-on experience on at least 4 native services like EMR, S3, Glue, Lambda, RDS, Redshift, SageMaker, QuickSight, Athena, Kinesis. Client-facing with strong communication and articulation skills. Should be able to engage with CXO-level audiences. Must be hands-on in writing solutions and doing estimations in support of RFPs. Strong in data architecture and management: DW, Data Lake, Data Governance, MDM. Able to translate business and technical requirements into architectural components. Nice to have: Multi-skilled professional with strong experience in deal solutioning, creating GTM strategy, and delivery handholding. Must be aware of relevant leading tools and concepts in the industry. Must be flexible for short-term travel up to 3 months across countries. Must be able to define new service offerings and support GTM strategy. Must have exposure to initial setup of activities, including infrastructure setup like connectivity, security policies, configuration management, DevOps, etc. Architecture certification preferred - TOGAF or other industry-acknowledged certifications. Experience with replication, high availability, archiving, backup & restore, and disaster recovery/business continuity data best practices. Our Ideal Candidate: Strong behavioral and collaboration skills. Excellent verbal and written communication skills. Should be a good listener, logical and composed in explaining points of view.
Ability to work in collaborative, cross-functional, and multi-cultural teams. Excellent leadership skills, with the ability to generate stakeholder buy-in and lead through influence at a senior management level. Strong negotiation skills and the ability to handle conflict situations.
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
🔧 Job Opening: MLOps & DevOps Engineer 📍 Location: Pune, India | 🧠 Experience: 3–5 Years 🕒 Immediate Joiners Preferred Company: Asmadiya Technologies Pvt. Ltd. About Us: Asmadiya Technologies is a dynamic technology company delivering innovative solutions in AI/ML, Cloud Computing, and Digital Transformation. We are seeking an experienced MLOps & DevOps Engineer to join our engineering team and lead the development, deployment, and monitoring of machine learning systems in production. ✅ Key Responsibilities: Design and implement CI/CD pipelines for ML models and microservices. Manage end-to-end MLOps workflows, from model training and versioning to deployment and monitoring. Automate infrastructure using Terraform, CloudFormation, or similar tools. Integrate ML workflows with cloud platforms like AWS, Azure, or GCP. Implement model registry, artifact tracking, and monitoring (e.g., MLflow, Weights & Biases). Collaborate with Data Scientists, ML Engineers, and DevOps teams to ensure scalable and reliable deployments. Set up and maintain Kubernetes (EKS/AKS/GKE) clusters and orchestrate model serving. Ensure compliance, security, and reliability of deployed systems. Proactively identify performance bottlenecks and implement optimizations. 🧩 Required Skills & Experience: 3–5 years of experience in DevOps and MLOps environments. Strong expertise in Docker, Kubernetes, and Jenkins/GitHub Actions/GitLab CI. Hands-on experience with ML lifecycle tools: MLflow, Kubeflow, SageMaker, or similar. Proficient in scripting with Python and Bash, and in using Linux-based systems. Experience with infrastructure-as-code (Terraform, Ansible, Helm). Working knowledge of cloud platforms (AWS preferred) and model deployment at scale. Familiar with observability tools like Prometheus, Grafana, or the ELK stack. 🌟 What We’re Looking For: A problem-solver who can bridge the gap between data science and operations. A hands-on contributor who can own deployments end-to-end. A collaborative team player who thrives in
fast-paced, agile environments Passionate about building production-ready ML systems 📬 Apply Now: Looking to lead the charge in deploying intelligent systems at scale? Join Asmadiya Technologies as we build the future of AI. 📩 Send your resume to careers@asmadiya.com with subject: MLOps & DevOps Engineer – Pune
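The "model registry" responsibility above centres on one idea: every trained model gets a version, and exactly one version per model serves production at a time. A toy in-memory sketch of that register/promote workflow, loosely modelled on tools like MLflow; the class and method names here are invented, not any tool's real API:

```python
# Minimal in-memory model registry: register versions, promote one to
# Production, archive the previous Production version.

class ModelRegistry:
    def __init__(self):
        self.versions = {}  # model name -> list of version records

    def register(self, name, artifact):
        """Store an artifact under the next version number (starting at 1)."""
        vers = self.versions.setdefault(name, [])
        record = {"version": len(vers) + 1, "artifact": artifact, "stage": "Staging"}
        vers.append(record)
        return record["version"]

    def promote(self, name, version):
        """Move one version to Production, archiving the current one."""
        for v in self.versions[name]:
            if v["stage"] == "Production":
                v["stage"] = "Archived"
        self.versions[name][version - 1]["stage"] = "Production"

    def production(self, name):
        """Return the artifact currently serving Production, if any."""
        for v in self.versions[name]:
            if v["stage"] == "Production":
                return v["artifact"]
```

A real registry adds persistence, access control, and lineage metadata, but the stage transitions are the core contract a CI/CD pipeline automates.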
Posted 1 week ago
5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a seasoned and visionary Lead AI Engineer to drive the design, development, and delivery of high-impact AI/ML solutions. As a technical leader, you will guide a team of AI developers in executing large-scale AI/ML projects, mentor them to build expertise, and foster a culture of innovation and excellence. You will collaborate with sales teams during pre-sales calls to articulate technical solutions and work closely with leadership to translate strategic vision into actionable, production-ready AI systems. Responsibilities Architect and lead the end-to-end development of impactful AI/ML models and systems, ensuring scalability, reliability, and performance. Provide hands-on technical guidance to a team of AI/ML engineers, fostering skill development and promoting best practices in coding, model design, and experimentation. Collaborate with cross-functional teams, including data scientists, product managers, and software developers, to define AI product strategies and roadmaps. Partner with the sales team during pre-sales calls to understand client needs, propose AI-driven solutions, and communicate technical feasibility. Translate leadership’s strategic vision into technical requirements and executable project plans. Design and implement scalable MLOps infrastructure for data ingestion, model training, evaluation, deployment, and monitoring. Lead research and experimentation in advanced AI domains such as NLP, computer vision, large language models (LLMs), or generative AI, tailoring solutions to business needs. Evaluate and integrate open-source or commercial AI frameworks/tools to accelerate development and ensure robust solutions. Monitor and optimize deployed models for performance, fairness, interpretability, and cost-efficiency, driving continuous improvement. Mentor and nurture new talent, building a high-performing AI team capable of delivering complex projects over time. Qualifications Bachelor’s, Master’s, or Ph.D. 
in Computer Science, Artificial Intelligence, or a related field. 5+ years of hands-on experience in machine learning or deep learning, with a proven track record of delivering large-scale AI/ML projects to production. Demonstrated ability to lead and mentor early-career engineers, fostering technical growth and team collaboration. Strong proficiency in Python and ML frameworks/libraries (e.g., TensorFlow, PyTorch, HuggingFace, Scikit-learn). Extensive experience deploying AI models in production environments using tools like AWS SageMaker, Google Vertex AI, Docker, Kubernetes, or similar. Solid understanding of data pipelines, APIs, MLOps practices, and software engineering principles. Experience collaborating with non-technical stakeholders (e.g., sales, leadership) to align technical solutions with business objectives. Familiarity with advanced AI domains such as NLP, computer vision, LLMs, or generative AI is a plus. Excellent communication skills to articulate complex technical concepts to diverse audiences, including clients and executives. Strong problem-solving skills, with a proactive approach to driving innovation and overcoming challenges.
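The "monitor and optimize deployed models" duty above usually means tracking a live quality signal and alerting when it degrades. A toy sketch: an exponential moving average of a quality score with a fixed alert floor; the smoothing factor and tolerance are illustrative values, not recommendations:

```python
# Toy production-model monitor: smooth a live quality signal with an
# exponential moving average and alert when it drops below a floor.

class QualityMonitor:
    def __init__(self, baseline, alpha=0.1, tolerance=0.05):
        self.ema = baseline            # start the average at the baseline
        self.alpha = alpha             # smoothing factor for new observations
        self.floor = baseline - tolerance

    def observe(self, score):
        """Fold in one observation; return True if an alert should fire."""
        self.ema = self.alpha * score + (1 - self.alpha) * self.ema
        return self.ema < self.floor
```

Smoothing keeps a single bad batch from paging anyone, while a sustained drop still crosses the floor within a few observations.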
Posted 1 week ago
3.0 years
0 Lacs
Mohali, Punjab
On-site
Company: Chicmic Studios Job Role: Python Machine Learning & AI Developer Experience Required: 3+ Years We are looking for a highly skilled and experienced Python Developer to join our dynamic team. The ideal candidate will have a robust background in developing web applications using Django and Flask, with expertise in deploying and managing applications on AWS. Proficiency in Django Rest Framework (DRF), a solid understanding of machine learning concepts, and hands-on experience with tools like PyTorch, TensorFlow, and transformer architectures are essential. Key Responsibilities Develop and maintain web applications using Django and Flask frameworks. Design and implement RESTful APIs using Django Rest Framework (DRF). Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation. Build and integrate APIs for AI/ML models into existing systems. Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn. Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases. Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization. Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker. Ensure the scalability, performance, and reliability of applications and deployed models. Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions. Write clean, maintainable, and efficient code following best practices. Conduct code reviews and provide constructive feedback to peers. Stay up-to-date with the latest industry trends and technologies, particularly in AI/ML. Required Skills and Qualifications Bachelor’s degree in Computer Science, Engineering, or a related field. 3+ years of professional experience as a Python Developer.
Proficient in Python with a strong understanding of its ecosystem. Extensive experience with Django and Flask frameworks. Hands-on experience with AWS services for application deployment and management. Strong knowledge of Django Rest Framework (DRF) for building APIs. Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn. Experience with transformer architectures for NLP and advanced AI solutions. Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB). Familiarity with MLOps practices for managing the machine learning lifecycle. Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus. Excellent problem-solving skills and the ability to work independently and as part of a team. Strong communication skills and the ability to articulate complex technical concepts to non-technical stakeholders. Contact: 9875952836 Office Location: F273, Phase 8b Industrial Area Mohali, Punjab. Job Type: Full-time Schedule: Day shift Monday to Friday Work Location: In person
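Of the optimization techniques this posting names, quantization is the easiest to show in miniature: map float weights to small integers plus a scale factor, trading a little precision for memory. A toy symmetric int8 sketch, not any framework's API:

```python
# Toy symmetric int8 quantisation: weights -> integers in [-127, 127]
# plus one float scale; dequantisation multiplies back.

def quantize(weights, bits=8):
    """Return (quantised ints, scale). Max |weight| maps to the int limit."""
    qmax = 2 ** (bits - 1) - 1                    # 127 for int8
    scale = max(abs(w) for w in weights) / qmax or 1.0
    return [round(w / scale) for w in weights], scale

def dequantize(q, scale):
    """Recover approximate float weights from the integers."""
    return [v * scale for v in q]
```

Each recovered weight differs from the original by at most one quantisation step (the scale), which is why 8-bit storage often costs little accuracy.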
Posted 1 week ago
0.0 - 16.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Category: Software Development/ Engineering Main location: India, Karnataka, Bangalore Position ID: J0625-0079 Employment Type: Full Time Position Description: Company Profile: Founded in 1976, CGI is among the largest independent IT and business consulting services firms in the world. With 94,000 consultants and professionals across the globe, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com. Position: Manage Consulting Expert- AI Architect Experience: 13-16 years Category: Software Development/ Engineering Shift Timing: General Shift Location: Bangalore Position ID: J0625-0079 Employment Type: Full Time Education Qualification: Bachelor's degree in Computer Science or related field or higher with minimum 13 years of relevant experience. We are looking for an experienced and visionary AI Architect with a strong engineering background and hands-on implementation experience to lead the development and deployment of AI-powered solutions. The ideal candidate will have a minimum of 13–16 years of experience in software and AI systems design, including extensive exposure to large language models (LLMs), vector databases, and modern AI frameworks such as LangChain. This role requires a balance of strategic architectural planning and tactical engineering execution, working across teams to bring intelligent applications to life. Your future duties and responsibilities: Design robust, scalable architectures for AI/ML systems, including LLM-based and generative AI solutions. 
Lead the implementation of AI features and services in enterprise-grade products with clear, maintainable code. Develop solutions using LangChain, orchestration frameworks, and vector database technologies. Collaborate with product managers, data scientists, ML engineers, and business stakeholders to gather requirements and translate them into technical designs. Guide teams on best practices for AI system integration, deployment, and monitoring. Define and implement architecture governance, patterns, and reusable frameworks for AI applications. Stay current with emerging AI trends, tools, and methodologies to continuously enhance architecture strategy. Oversee development of Proof-of-Concepts (PoCs) and Minimum Viable Products (MVPs) to validate innovative ideas. Ensure systems are secure, scalable, and high-performing in production environments. Mentor junior engineers and architects to build strong AI and engineering capabilities within the team. Required qualifications to be successful in this role: Must-have Skills: 13–16 years of overall experience in software development, with at least 5+ years in AI/ML system architecture and delivery. Proven expertise in developing and deploying AI/ML models in production environments. Deep knowledge of LLMs, LangChain, prompt engineering, RAG (retrieval-augmented generation), and vector search. Strong programming and system design skills with a solid engineering foundation. Exceptional ability to communicate complex concepts clearly to technical and non-technical stakeholders. Experience with Agile methodologies and cross-functional team leadership.
Programming Languages: Python, Java, Scala, SQL AI/ML Frameworks: LangChain, TensorFlow, PyTorch, Scikit-learn, Hugging Face Transformers Data Processing: Apache Spark, Kafka, Pandas, Dask Vector Stores & Retrieval Systems: FAISS, Pinecone, Weaviate, Chroma Cloud Platforms: AWS (SageMaker, Lambda), Azure (ML Studio, OpenAI), Google Cloud AI MLOps & DevOps: Docker, Kubernetes, MLflow, Kubeflow, Airflow, CI/CD tools (GitHub Actions, Jenkins) Databases: PostgreSQL, MongoDB, Redis, BigQuery, Snowflake Tools & Platforms: Databricks, Jupyter Notebooks, Git, Terraform Good-to-have Skills: Solution engineering and implementation experience in AI projects. Skills: AWS, Machine Learning, English, GitHub, Python, Jenkins, Kubernetes, Prometheus, Snowflake What you can expect from us: Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because… You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team, one of the largest IT and business consulting services firms in the world.
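The RAG and vector-search expertise this role requires reduces, at its core, to ranking stored text chunks by embedding similarity to a query. A pure-Python sketch with hand-made 3-dimensional "embeddings"; a real system would use a trained embedding model and a vector store such as FAISS or Pinecone from the stack above:

```python
# Sketch of the retrieval step in RAG: rank (text, embedding) pairs by
# cosine similarity to a query vector and return the top-k texts.
import math

def cosine(a, b):
    """Cosine similarity of two equal-length, non-zero vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def retrieve(query_vec, store, k=1):
    """store: list of (text, embedding); return the k most similar texts."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [text for text, _ in ranked[:k]]
```

The retrieved chunks are then stuffed into the LLM prompt, which is the "augmented generation" half of RAG.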
Posted 1 week ago
16.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Business Knowledge: Capable of understanding the requirements for the entire project (not just own features). Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements. Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them. Design: Can design and implement machine learning models and algorithms. Can articulate and evaluate pros/cons of different AI/ML approaches. Can generate cost estimates for model training and deployment. Coding/Testing: Builds and optimizes machine learning pipelines. Knows & brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment. Quality: Solves cross-functional problems using data-driven approaches. Identifies impacts/side effects of models outside of immediate scope of work. Identifies cross-module issues related to data integration and model performance. Identifies problems predictively using data analysis. Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them. Process: Enforces process standards for model development and deployment.
Independence: Acts independently to determine methods and procedures on new or special assignments. Prioritizes large tasks and projects effectively. Agility: Release Planning: Works with the PO to do high-level release commitment and estimation. Works with the PO on defining stories of appropriate size for model development. Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration. Shows Agile leadership qualities and leads by example. Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs. Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution. Capable of thinking outside-the-box to view the system as it should be rather than only how it is. Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense. Takes initiative to learn how AI/ML technology is evolving outside the organization. Takes initiative to learn how the system can be improved for the customers. Should make problems open new doors for innovations. Communication: Communicates complex AI/ML concepts internally with ease. Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.)
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 5+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Experience with one or more data-oriented workflow orchestration frameworks (Airflow, KubeFlow etc.) Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.) 
Demonstrate hands-on knowledge of OpenSource adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLFlow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, Langchain, RAG etc. Demonstrate good knowledge in Engineering Practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission. Show more Show less
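The data versioning tools named in the qualifications (DVC, LakeFS, Delta Lake) share one core idea: datasets are tracked by content hash, so any change produces a new immutable version id. A minimal stdlib-only sketch of that idea; the class and function names here are illustrative, not any tool's actual API:

```python
import hashlib
import json

def dataset_fingerprint(rows):
    """Hash a dataset's canonical JSON form; any change yields a new version id."""
    canonical = json.dumps(rows, sort_keys=True).encode("utf-8")
    return hashlib.sha256(canonical).hexdigest()[:12]

class DataRegistry:
    """Map human-readable tags to immutable, content-addressed snapshots."""
    def __init__(self):
        self.snapshots = {}   # fingerprint -> rows
        self.tags = {}        # tag -> fingerprint

    def commit(self, tag, rows):
        fp = dataset_fingerprint(rows)
        self.snapshots[fp] = rows
        self.tags[tag] = fp
        return fp

    def checkout(self, tag):
        return self.snapshots[self.tags[tag]]

registry = DataRegistry()
v1 = registry.commit("train", [{"x": 1, "y": 0}])
v2 = registry.commit("train", [{"x": 1, "y": 0}, {"x": 2, "y": 1}])
assert v1 != v2                               # changed data -> new version id
assert len(registry.checkout("train")) == 2   # tag points at the latest snapshot
```

Because every snapshot stays addressable by its fingerprint, older dataset versions remain reproducible even after the tag moves on, which is what makes model training runs auditable.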
Posted 1 week ago
16.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.
Primary Responsibilities:
What:
Business Knowledge: Capable of understanding the requirements for the entire project (not just own features). Works closely with PMG during the design phase to drill down into the detailed nuances of the requirements. Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them.
Design: Can design and implement machine learning models and algorithms. Can articulate and evaluate the pros and cons of different AI/ML approaches. Can generate cost estimates for model training and deployment.
Coding/Testing: Builds and optimizes machine learning pipelines. Knows and brings in external ML frameworks and libraries. Consistently avoids common pitfalls in model development and deployment.
How:
Quality: Solves cross-functional problems using data-driven approaches. Identifies the impacts and side effects of models outside the immediate scope of work. Identifies cross-module issues related to data integration and model performance. Identifies problems predictively using data analysis.
Productivity: Capable of working on multiple AI/ML projects simultaneously and context-switching between them.
Process: Enforces process standards for model development and deployment.
Independence: Acts independently to determine methods and procedures on new or special assignments. Prioritizes large tasks and projects effectively.
Agility:
Release Planning: Works with the PO on high-level release commitments and estimation. Works with the PO to define appropriately sized stories for model development.
Agile Maturity: Able to drive the team to a high level of accomplishment on the committed stories for each iteration. Shows Agile leadership qualities and leads by example.
With:
Teamwork: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO) that have significantly different technical skill sets and managing discussions based on their needs.
Initiative: Capable of creating innovative AI/ML solutions, including proposing requirement changes that lead to a better solution. Thinks outside the box to view the system as it should be rather than only as it is. Proactively generates a continual stream of ideas and pushes to review and advance the ones that make sense. Takes the initiative to learn how AI/ML technology is evolving outside the organization and how the system can be improved for customers. Turns problems into openings for innovation.
Communication: Communicates complex AI/ML concepts internally with ease.
Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.) and aware of all components in play.
Leadership: Disagrees without being disagreeable. Uses conflict as a way to drill deeper and arrive at better decisions. Mentors frequently. Builds ad-hoc cross-department teams for specific projects or problems. Can achieve broad 'buy-in' across project teams and departments. Takes calculated risks.
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.
Required Qualifications:
B.E/B.Tech/MCA/MSc/MTech (minimum 16 years of formal education; correspondence courses are not relevant)
4+ years of experience working across multiple layers of technology
Experience deploying and maintaining ML models in production
Experience in Agile teams
Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.)
Working experience with, or good knowledge of, cloud platforms (e.g., Azure, AWS, OCI)
Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps functions
Familiarity with traditional software monitoring, scaling, and quality management systems (QMS)
Knowledge of model versioning and deployment using tools such as MLflow, DVC, or similar platforms
Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
Demonstrated hands-on knowledge of open-source adoption and use cases
Good understanding of data/information security
Proficient in data structures, ML algorithms, and the ML lifecycle
Product/Project/Program Related Tech Stack:
Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch
Programming Languages: Python, R, Java
Data Processing: Pandas, NumPy, Spark
Visualization: Matplotlib, Seaborn, Plotly
Model Versioning Tools: MLflow, etc.
Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI
GenAI: OpenAI, LangChain, RAG, etc.
Demonstrated good knowledge of engineering practices
Excellent problem-solving skills
Proven excellent verbal, written, and interpersonal communication skills
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location, and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
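The workflow orchestration frameworks named in the qualifications (Airflow, Kubeflow) reduce, at their core, to executing tasks in dependency order. A toy scheduler illustrating that idea with the standard library's `graphlib` (Python 3.9+); the task names are hypothetical:

```python
from graphlib import TopologicalSorter

# Each task lists the tasks it depends on, much like edges in an Airflow DAG.
deps = {
    "extract": [],
    "clean": ["extract"],
    "train": ["clean"],
    "evaluate": ["train"],
    "deploy": ["evaluate"],
}

def run_pipeline(deps):
    """Execute tasks in an order that respects every dependency edge."""
    order = list(TopologicalSorter(deps).static_order())
    for task in order:
        pass  # a real orchestrator would invoke the task's operator here
    return order

order = run_pipeline(deps)
assert order == ["extract", "clean", "train", "evaluate", "deploy"]
```

Real orchestrators add retries, scheduling, and parallel execution of independent branches on top of this ordering, but the DAG semantics are the same.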
Posted 1 week ago
6.0 - 11.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Looking for a skilled Senior Data Science Engineer with 6-12 years of experience to lead the development of advanced computer vision models and systems. The ideal candidate will have hands-on experience with state-of-the-art architectures and a deep understanding of the complete ML lifecycle. This position is based in Bengaluru.
Roles and Responsibilities:
Lead the development and implementation of computer vision models for tasks such as object detection, tracking, image retrieval, and scene understanding.
Design and execute end-to-end pipelines for data preparation, model training, evaluation, and deployment.
Perform fine-tuning and transfer learning on large-scale vision-language models to meet application-specific needs.
Optimize deep learning models for edge inference (NVIDIA Jetson, TensorRT, OpenVINO) and real-time performance.
Develop scalable and maintainable ML pipelines using tools such as MLflow, DVC, and Kubeflow.
Automate experimentation and deployment processes using CI/CD workflows.
Collaborate cross-functionally with MLOps, backend, and product teams to align technical efforts with business needs.
Monitor, debug, and enhance model performance in production environments.
Stay up to date with the latest trends in CV/AI research and rapidly prototype new ideas for real-world use.
Job Requirements:
6+ years of hands-on experience in data science and machine learning, with at least 4 years focused on computer vision.
Strong experience with deep learning frameworks: PyTorch (preferred), TensorFlow, Hugging Face Transformers.
In-depth understanding of, and practical experience with, class-incremental learning and lifelong learning systems.
Proficient in Python, including data processing libraries such as NumPy, Pandas, and OpenCV.
Strong command of version control and reproducibility tools (e.g., MLflow, DVC, Weights & Biases).
Experience training and optimizing models for GPU inference and edge deployment (Jetson, Coral, etc.).
Familiarity with ONNX, TensorRT, and model quantization/conversion techniques.
Demonstrated ability to analyze and work with large-scale visual datasets in real-time or near-real-time systems.
Experience working in fast-paced startup environments with ownership of production AI systems.
Exposure to cloud platforms such as AWS (SageMaker, Lambda), GCP, or Azure for ML workflows.
Experience with video analytics, real-time inference, and event-based vision systems.
Familiarity with monitoring tools for ML systems (e.g., Prometheus, Grafana, Sentry).
Prior work in domains such as retail analytics, healthcare, or surveillance/IoT-based CV applications.
Contributions to open-source computer vision libraries or publications in top AI/ML conferences (e.g., CVPR, NeurIPS, ICCV).
Comfortable mentoring junior engineers and collaborating with cross-functional stakeholders.
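The model quantization mentioned alongside ONNX and TensorRT typically maps float weights to 8-bit integers via a scale and zero-point. A stdlib-only sketch of affine post-training quantization; real toolkits calibrate per-channel, fuse operators, and handle activations, none of which is shown here:

```python
def quantize_int8(weights):
    """Affine-quantize a list of floats to uint8 with a scale and zero-point."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / 255 or 1.0          # guard against constant tensors
    zero_point = round(-lo / scale)
    q = [max(0, min(255, round(w / scale) + zero_point)) for w in weights]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    return [(v - zero_point) * scale for v in q]

w = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, s, z = quantize_int8(w)
w_hat = dequantize(q, s, z)
assert all(0 <= v <= 255 for v in q)
assert max(abs(a - b) for a, b in zip(w, w_hat)) <= s  # error bounded by one step
```

The 4x size reduction (32-bit floats to 8-bit integers) plus integer arithmetic is what makes this attractive for edge targets like Jetson, at the cost of the bounded rounding error shown in the final assertion.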
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: DevOps Engineer – AI/ML Infrastructure
Location: Meril Healthcare Pvt. Ltd, IITM Research Park, Chennai. Parent company: Meril (https://www.merillife.com/).
Shift: General shift, Monday to Saturday (9.30 am to 6.00 pm).
Summary
We are seeking a skilled DevOps Engineer with expertise in managing cloud-based AI/ML infrastructure, automation, CI/CD pipelines, and containerized deployments. The ideal candidate will work on AWS-based AI model deployment, database management, API integrations, and scalable infrastructure for AI inference workloads. Experience in ML model serving (MLflow, TensorFlow Serving, Triton Inference Server, BentoML) and on-prem/cloud DevOps will be highly valued.
Key Responsibilities
Cloud & Infrastructure Management
Manage and optimize cloud infrastructure on AWS (SageMaker, EC2, Lambda, RDS, DynamoDB, S3, CloudFormation).
Design, implement, and maintain highly available, scalable AI/ML model deployment pipelines.
Set up Infrastructure as Code (IaC) using Terraform, CloudFormation, or Ansible.
CI/CD & Automation
Develop and manage CI/CD pipelines using GitLab CI/CD, Jenkins, and AWS CodeBuild.
Automate deployment of AI models and applications using Docker and Kubernetes (EKS).
Write automation scripts in Bash, Python, or PowerShell for system tasks.
APIs & AI Model Deployment
Deploy and manage Flask/FastAPI-based APIs for AI inference.
Optimize ML model serving using MLflow, TensorFlow Serving, Triton Inference Server, and BentoML.
Implement monitoring for AI workloads to ensure inference reliability and performance.
Security, Monitoring & Logging
Implement AWS security best practices (IAM, VPC, Security Groups, access controls).
Monitor infrastructure using Prometheus, Grafana, CloudWatch, or the ELK Stack.
Set up backup and disaster recovery strategies for databases, storage, and models.
Database & Storage Management
Maintain and optimize MySQL (RDS) and MongoDB (DynamoDB) databases.
Handle structured (RDS) and unstructured (S3, DynamoDB) AI data storage.
Improve data synchronization between AI models, applications, and web services.
On-Prem & Hybrid Cloud Integration (Optional)
Manage on-prem AI workloads with GPU acceleration.
Optimize AI workloads across cloud and edge devices.
Required Skills and Qualifications
3 to 5 years of experience in DevOps, cloud infrastructure, or AI/ML Ops.
Expertise in AWS (SageMaker, EC2, Lambda, RDS, DynamoDB, S3).
Experience with Docker & Kubernetes (EKS) for container orchestration.
Proficiency in CI/CD tools (Jenkins, GitLab CI/CD, AWS CodeBuild).
Strong scripting skills in Bash, Python, or PowerShell.
Knowledge of the Linux ecosystem (Ubuntu, RHEL, CentOS).
Hands-on experience with ML model deployment (MLflow, TensorFlow Serving, Triton, BentoML).
Strong understanding of networking, security, and monitoring.
Experience with database management (MySQL, PostgreSQL, MongoDB).
Preferred Skills
AWS Certified DevOps Engineer, CKA (Kubernetes), or Terraform certification.
Experience with hybrid cloud (AWS + on-prem GPU servers).
Knowledge of edge AI deployment and real-time AI inference optimization.
Interested? Please share your resume to priyadharshini.sridhar@merillife.com
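The inference monitoring duties above (Prometheus/CloudWatch-style reliability checks) come down to tracking rolling latency percentiles against an alert threshold. A minimal stdlib sketch of that pattern; the window size and p95 limit are illustrative values, not anything from this posting:

```python
from collections import deque

class LatencyMonitor:
    """Track a rolling window of inference latencies and flag p95 breaches."""
    def __init__(self, window=100, p95_limit_ms=200.0):
        self.samples = deque(maxlen=window)   # old samples fall off automatically
        self.p95_limit_ms = p95_limit_ms

    def record(self, latency_ms):
        self.samples.append(latency_ms)

    def p95(self):
        ordered = sorted(self.samples)
        return ordered[int(0.95 * (len(ordered) - 1))]

    def should_alert(self):
        # Require a minimum sample count so a single slow request can't page anyone.
        return len(self.samples) >= 20 and self.p95() > self.p95_limit_ms

mon = LatencyMonitor()
for ms in [50] * 30:
    mon.record(ms)
assert not mon.should_alert()   # healthy traffic
for ms in [500] * 30:
    mon.record(ms)
assert mon.should_alert()       # sustained slow inferences trip the alert
```

Production systems export these numbers as metrics and let Prometheus or CloudWatch evaluate the threshold, but the sliding-window percentile logic is the same.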
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsibilities
Manage Data: Extract, clean, and structure both structured and unstructured data.
Coordinate Pipelines: Use tools such as Airflow, Step Functions, or Azure Data Factory to orchestrate data workflows.
Deploy Models: Develop, fine-tune, and deploy models using platforms like SageMaker, Azure ML, or Vertex AI.
Scale Solutions: Leverage Spark or Databricks to handle large-scale data processing tasks.
Automate Processes: Implement automation using tools like Docker, Kubernetes, CI/CD pipelines, MLflow, Seldon, and Kubeflow.
Collaborate Effectively: Work alongside engineers, architects, and business stakeholders to address and resolve real-world problems efficiently.
Qualifications
3+ years of hands-on experience in MLOps (4-5 years of overall software development experience).
Extensive experience with at least one major cloud provider (AWS, Azure, or GCP).
Proficiency with Databricks, Spark, Python, SQL, TensorFlow, PyTorch, and Scikit-learn.
Expertise in debugging Kubernetes and creating efficient Dockerfiles.
Experience prototyping with open-source tools and scaling solutions effectively.
Strong analytical skills, humility, and a proactive approach to problem-solving.
Preferred Qualifications
Experience with SageMaker, Azure ML, or Vertex AI in a production environment.
Commitment to writing clean code, creating clear documentation, and maintaining concise pull requests.
Skills: SQL, Kubeflow, Spark, Docker, Databricks, ML, GCP, MLflow, Kubernetes, AWS, PyTorch, Azure, CI/CD, TensorFlow, Scikit-learn, Seldon, Python, MLOps
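The extract/clean/structure duties above can be sketched end to end with the standard library alone: pull raw CSV, drop malformed rows, and load the result into a queryable store. The column names and data here are made up purely for illustration:

```python
import csv
import io
import sqlite3

RAW = """record_id,age,score
r1,34,0.82
r2,,0.91
r3,58,
r4,41,0.67
"""

def extract(text):
    """Parse raw CSV text into a list of dict rows."""
    return list(csv.DictReader(io.StringIO(text)))

def clean(rows):
    """Drop rows with missing fields and cast the survivors to proper types."""
    return [
        {"record_id": r["record_id"], "age": int(r["age"]), "score": float(r["score"])}
        for r in rows
        if r["age"] and r["score"]
    ]

def load(rows):
    """Load cleaned rows into an in-memory SQLite table."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE scores (record_id TEXT, age INT, score REAL)")
    db.executemany("INSERT INTO scores VALUES (:record_id, :age, :score)", rows)
    return db

db = load(clean(extract(RAW)))
count, = db.execute("SELECT COUNT(*) FROM scores").fetchone()
assert count == 2   # r2 and r3 were dropped for missing fields
```

The same three-stage shape scales up directly: swap the CSV string for an S3 object, the list comprehension for a Spark job, and SQLite for a warehouse, and the pipeline becomes one an orchestrator can schedule.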
Posted 1 week ago
5.0 years
4 - 7 Lacs
Thiruvananthapuram
On-site
Techvantage.ai is a next-generation technology and product engineering company at the forefront of innovation in Generative AI, Agentic AI, and autonomous intelligent systems. We build intelligent, cutting-edge solutions designed to scale and evolve with the future of artificial intelligence.
Role Overview:
We are looking for a skilled and versatile AI Infrastructure Engineer (DevOps/MLOps) to build and manage the cloud infrastructure, deployment pipelines, and machine learning operations behind our AI-powered products. You will work at the intersection of software engineering, ML, and cloud architecture to ensure that our models and systems are scalable, reliable, and production-ready.
What we expect from an ideal candidate:
Design and manage CI/CD pipelines for both software applications and machine learning workflows.
Deploy and monitor ML models in production using tools like MLflow, SageMaker, Vertex AI, or similar.
Automate the provisioning and configuration of infrastructure using IaC tools (Terraform, Pulumi, etc.).
Build robust monitoring, logging, and alerting systems for AI applications.
Manage containerized services with Docker and orchestration platforms like Kubernetes.
Collaborate with data scientists and ML engineers to streamline model experimentation, versioning, and deployment.
Optimize compute resources and storage costs across cloud environments (AWS, GCP, or Azure).
Ensure system reliability, scalability, and security across all environments.
What skills do you need?
5+ years of experience in DevOps, MLOps, or infrastructure engineering roles.
Hands-on experience with cloud platforms (AWS, GCP, or Azure) and services related to ML workloads.
Strong knowledge of CI/CD tools (e.g., GitHub Actions, Jenkins, GitLab CI).
Proficiency in Docker, Kubernetes, and infrastructure-as-code frameworks.
Experience with ML pipelines, model versioning, and ML monitoring tools.
Scripting skills in Python, Bash, or similar for automation tasks.
Familiarity with monitoring/logging tools (Prometheus, Grafana, ELK, CloudWatch, etc.).
Understanding of ML lifecycle management and reproducibility.
Preferred Qualifications:
Experience with Kubeflow, MLflow, DVC, or Triton Inference Server.
Exposure to data versioning, feature stores, and model registries.
Certification in AWS/GCP DevOps or Machine Learning Engineering is a plus.
Background in software engineering, data engineering, or ML research is a bonus.
What We Offer:
Work on cutting-edge AI platforms and infrastructure.
Cross-functional collaboration with top ML, research, and product teams.
Competitive compensation package – no constraints for the right candidate.
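The model registries mentioned in the preferred qualifications (MLflow's, for example) follow a register-then-promote pattern: each registered artifact gets an incrementing version, and exactly one version per model holds a given stage. A toy in-memory registry showing those semantics; this is not MLflow's actual API, and the URIs are invented:

```python
class ModelRegistry:
    """Register versioned model artifacts and promote one version per stage."""
    STAGES = ("None", "Staging", "Production")

    def __init__(self):
        self.versions = {}   # model name -> list of {"version", "uri", "stage"}

    def register(self, name, artifact_uri):
        entries = self.versions.setdefault(name, [])
        entry = {"version": len(entries) + 1, "uri": artifact_uri, "stage": "None"}
        entries.append(entry)
        return entry["version"]

    def promote(self, name, version, stage):
        assert stage in self.STAGES
        for entry in self.versions[name]:
            if entry["stage"] == stage:
                entry["stage"] = "None"      # demote the previous holder
            if entry["version"] == version:
                entry["stage"] = stage

    def production_uri(self, name):
        for entry in self.versions[name]:
            if entry["stage"] == "Production":
                return entry["uri"]

reg = ModelRegistry()
reg.register("churn", "s3://models/churn/1")
v2 = reg.register("churn", "s3://models/churn/2")
reg.promote("churn", v2, "Production")
assert reg.production_uri("churn") == "s3://models/churn/2"
```

Serving infrastructure then resolves "the Production version of churn" at deploy time instead of hard-coding an artifact path, which is what makes rollbacks a one-line promotion of the previous version.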
Posted 1 week ago