
1576 SageMaker Jobs - Page 47

JobPe aggregates listings for easy access; you apply directly on the original job portal.

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Job
We are seeking a highly skilled AI/ML Engineer with expertise in AWS AI/ML services and a strong understanding of Generative AI using Amazon Bedrock. The ideal candidate will have experience in building, deploying, and optimizing AI/ML models on AWS, integrating LLMs into applications, and leveraging AWS services for scalable AI solutions.

Experience Required: 4+ years

Key Responsibilities:
- Design, develop, and deploy AI/ML models on AWS, leveraging SageMaker, Bedrock, and related services.
- Build LLM-based applications using Amazon Bedrock and fine-tune models for specific use cases.
- Implement RAG (Retrieval-Augmented Generation) and integrate vector databases like OpenSearch, Pinecone, or FAISS.
- Develop scalable, production-ready ML pipelines using AWS services (Lambda, Step Functions, S3, DynamoDB, etc.).
- Utilize Bedrock, SageMaker, and custom fine-tuned models to deliver business-driven AI solutions.
- Work with cross-functional teams to integrate ML models into real-world applications.
- Ensure AI solutions adhere to best practices for security, compliance, and cost optimization.
- Stay updated with the latest trends in GenAI, prompt engineering, and AI model optimization.

Required Skills:
- Strong expertise in the AWS AI/ML stack: Amazon Bedrock, SageMaker, Lambda, Step Functions, S3, DynamoDB, etc.
- Experience with Generative AI models (GPT, Claude, Mistral, LLaMA, etc.) and fine-tuning techniques.
- Hands-on experience in Python and TensorFlow, PyTorch, or Hugging Face.
- Knowledge of vector databases and embedding models.
- Experience in building secure and scalable AI applications using AWS.
- Familiarity with MLOps practices, CI/CD for ML models, and cloud automation.
- Strong problem-solving skills and the ability to work in a fast-paced environment.

Good to Have:
- Experience with LangChain, prompt engineering, and RAG techniques.
- Understanding of data governance, AI ethics, and responsible AI practices.
- AWS Machine Learning Specialty/Associate certification or other relevant AI certifications.

Relevant Skills: vector databases, models, MLOps, ML, PyTorch, S3, Lambda, TensorFlow, CI/CD, Amazon
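For a sense of the Bedrock-plus-retrieval pattern this role centers on, here is a minimal RAG sketch using boto3. The model IDs, documents, and region are illustrative assumptions, not details from the posting; a real system would use OpenSearch or Pinecone instead of an in-memory corpus.

```python
import json
import boto3
import numpy as np

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

def embed(text: str) -> np.ndarray:
    """Embed text with Amazon Titan embeddings (model ID is an assumption)."""
    resp = bedrock.invoke_model(
        modelId="amazon.titan-embed-text-v2:0",
        body=json.dumps({"inputText": text}),
    )
    return np.array(json.loads(resp["body"].read())["embedding"])

# Tiny in-memory corpus standing in for an OpenSearch/Pinecone/FAISS store.
docs = ["Refunds are processed within 5 business days.",
        "Support hours are 9am-5pm IST, Monday to Friday."]
doc_vecs = np.stack([embed(d) for d in docs])

def answer(question: str) -> str:
    q = embed(question)
    # Cosine-similarity retrieval, then ground the LLM on the best match.
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q))
    context = docs[int(np.argmax(sims))]
    resp = bedrock.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder model ID
        messages=[{"role": "user",
                   "content": [{"text": f"Context: {context}\n\nQuestion: {question}"}]}],
    )
    return resp["output"]["message"]["content"][0]["text"]

print(answer("How long do refunds take?"))
```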

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Lead
Project Role Description: Develop and configure software systems, either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must-have skills: Python (Programming Language)
Good-to-have skills: AWS Administration
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As a Software Engineer with Python expertise, you will develop data-driven applications on AWS and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master's degree preferred).
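As a small illustration of the Lambda-based pipeline work this family of listings describes, here is a sketch of an AWS Lambda handler that summarizes a newly landed S3 object into DynamoDB. The bucket trigger, table name, and telemetry fields are assumptions for illustration only.

```python
import json
import boto3

s3 = boto3.client("s3")
table = boto3.resource("dynamodb").Table("vehicle-insights")  # assumed table name

def handler(event, context):
    """Triggered by an S3 put event; writes a summary record per object."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        readings = json.loads(body)  # assume a JSON list of telemetry readings
        table.put_item(Item={
            "object_key": key,
            "reading_count": len(readings),
            "max_speed": max((r.get("speed_kmh", 0) for r in readings), default=0),
        })
    return {"status": "ok"}
```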

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Responsibilities:
- Evaluate and source appropriate cloud infrastructure solutions for machine learning needs, ensuring cost-effectiveness and scalability based on project requirements.
- Automate and manage the deployment of machine learning models into production environments, ensuring version control for models and datasets using tools like Docker and Kubernetes.
- Set up monitoring tools to track model performance and data drift, conduct regular maintenance, and implement updates for production models.
- Work closely with data scientists, software engineers, and stakeholders to align on project goals, facilitate knowledge sharing, and communicate findings and updates to cross-functional teams.
- Design, implement, and maintain scalable ML infrastructure, optimizing cloud and on-premise resources for training and inference.
- Document ML processes, pipelines, and best practices while preparing reports on model performance, resource utilization, and system issues.
- Provide training and support for team members on MLOps tools and methodologies, and stay updated on industry trends and emerging technologies.
- Diagnose and resolve issues related to model performance, infrastructure, and data quality, implementing solutions to enhance model robustness and reliability.

Education, Technical Skills & Other Critical Requirements:
- 10+ years of relevant experience in AI/analytics product and solution delivery
- Bachelor's/master's degree in information technology, computer science, engineering, or an equivalent field
- Proficiency in frameworks such as TensorFlow, PyTorch, or Scikit-learn
- Strong skills in Python and/or R; familiarity with Java, Scala, or Go is a plus
- Experience with cloud services such as AWS, Azure, or Google Cloud Platform, particularly their ML services (e.g., AWS SageMaker, Azure ML)
- CI/CD tools (e.g., Jenkins, GitLab CI), containerization (e.g., Docker), and orchestration (e.g., Kubernetes)
- Experience with databases (SQL and NoSQL), data pipelines, ETL processes, and ML pipeline orchestration (Airflow)
- Familiarity with monitoring and logging tools such as Prometheus, Grafana, or the ELK stack
- Proficient in using Git for version control
- Strong analytical and troubleshooting abilities to diagnose and resolve issues effectively
- Good communication skills for working with cross-functional teams and conveying technical concepts to non-technical stakeholders
- Ability to manage multiple projects and prioritize tasks in a fast-paced environment
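The data-drift monitoring called out above can start as a simple two-sample statistical test on incoming feature values. A minimal sketch with SciPy; the synthetic data and alerting threshold are made-up assumptions.

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
training_feature = rng.normal(loc=0.0, scale=1.0, size=5_000)    # reference window
production_feature = rng.normal(loc=0.4, scale=1.0, size=1_000)  # live window (shifted)

# Kolmogorov-Smirnov test: a small p-value suggests the distributions differ.
stat, p_value = ks_2samp(training_feature, production_feature)
DRIFT_P_THRESHOLD = 0.01  # the alerting threshold is a policy choice, assumed here

if p_value < DRIFT_P_THRESHOLD:
    print(f"Data drift suspected (KS={stat:.3f}, p={p_value:.2e}); review for retraining.")
else:
    print(f"No significant drift (KS={stat:.3f}, p={p_value:.2e}).")
```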

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

🚨 We are Hiring 🚨
https://grhombustech.com/jobs/job-description-senior-test-automation-lead-playwright-ai-ml-focus/

Job Title: Senior Test Automation Lead – Playwright (AI/ML Focus)
Location: Hyderabad
Experience: 10-12 years
Job Type: Full-Time

Company Overview:
GRhombus Technologies Pvt Ltd is a pioneer in software solutions, especially test automation, cyber security, full stack development, DevOps, Salesforce, performance testing, and manual testing. GRhombus delivery centres in India are located in Hyderabad, Chennai, Bengaluru, and Pune. In the Middle East, we are located in Dubai. Our partner offices are located in the USA and the Netherlands.

About the Role:
We are seeking a passionate and technically skilled Senior Test Automation Lead with deep experience in Playwright-based frameworks and a solid understanding of AI/ML-driven applications. In this role, you will lead the automation strategy and quality engineering practices for next-generation AI products that integrate large-scale machine learning models, data pipelines, and dynamic, intelligent UIs. You will define, architect, and implement scalable automation solutions across AI-enhanced features such as recommendation engines, conversational UIs, real-time analytics, and predictive workflows, ensuring both functional correctness and intelligent behavior consistency.

Key Responsibilities:

Test Automation Framework Design & Implementation
- Design and implement robust, modular, and extensible Playwright automation frameworks using TypeScript/JavaScript.
- Define automation design patterns and utilities that can handle complex AI-driven UI behaviors (e.g., dynamic content, personalization, chat interfaces).
- Implement abstraction layers for easy test data handling, reusable components, and multi-browser/platform execution.

AI/ML-Specific Testing Strategy
- Partner with data scientists and ML engineers to understand model behaviors, inference workflows, and output formats.
- Develop strategies for testing non-deterministic model outputs (e.g., chat responses, classification labels) using tolerance ranges, confidence intervals, or golden datasets (see the sketch after this listing).
- Design tests to validate ML integration points: REST/gRPC API calls, feature flags, model versioning, and output accuracy.
- Include bias, fairness, and edge-case validations in test suites where applicable (e.g., fairness in recommendation engines or NLP sentiment analysis).

End-to-End Test Coverage
- Lead the implementation of end-to-end automation for:
  - Web interfaces (React, Angular, or other SPA frameworks)
  - Backend services (REST, GraphQL, WebSockets)
  - ML model integration endpoints (real-time inference APIs, batch pipelines)
- Build test utilities for mocking, stubbing, and simulating AI inputs and datasets.

CI/CD & Tooling Integration
- Integrate automation suites into CI/CD pipelines using GitHub Actions, Jenkins, GitLab CI, or similar.
- Configure parallel execution, containerized test environments (e.g., Docker), and test artifact management.
- Establish real-time dashboards and historical reporting using tools like Allure, ReportPortal, TestRail, or custom Grafana integrations.

Quality Engineering & Leadership
- Define KPIs and QA metrics for AI/ML product quality: functional accuracy, model regression rates, test coverage %, time-to-feedback, etc.
- Lead and mentor a team of automation and QA engineers across multiple projects.
- Act as the quality champion across the AI platform by influencing engineering, product, and data science teams on quality ownership and testing best practices.

Agile & Cross-Functional Collaboration
- Work in Agile/Scrum teams; participate in backlog grooming, sprint planning, and retrospectives.
- Collaborate across disciplines (frontend, backend, DevOps, MLOps, and product management) to ensure complete testability.
- Review feature specs, AI/ML model update notes, and data schemas for impact analysis.

Required Skills and Qualifications:

Technical Skills:
- Strong hands-on expertise with Playwright (TypeScript/JavaScript).
- Experience building custom automation frameworks and utilities from scratch.
- Proficiency in testing AI/ML-integrated applications: inference endpoints, personalization engines, chatbots, or predictive dashboards.
- Solid knowledge of HTTP protocols and API testing (Postman, Supertest, RestAssured).
- Familiarity with MLOps and model lifecycle management (e.g., via MLflow, SageMaker, Vertex AI).
- Experience in testing data pipelines (ETL, streaming, batch), synthetic data generation, and test data versioning.

Domain Knowledge:
- Exposure to NLP, CV, recommendation engines, time-series forecasting, or tabular ML models.
- Understanding of key ML metrics (precision, recall, F1-score, AUC), model drift, and concept drift.
- Knowledge of bias/fairness auditing, especially in UI/UX contexts where AI decisions are shown to users.

Leadership & Communication:
- Proven experience leading QA/automation teams (4+ engineers).
- Strong documentation, code review, and stakeholder communication skills.
- Experience collaborating in Agile/SAFe environments with cross-functional teams.

Preferred Qualifications:
- Experience with AI explainability frameworks like LIME, SHAP, or the What-If Tool.
- Familiarity with test data management platforms (e.g., Tonic.ai, Delphix) for ML training/inference data.
- Background in performance and load testing for AI systems using tools like Locust, JMeter, or k6.
- Experience with GraphQL, Kafka, or event-driven architecture testing.
- QA certifications (ISTQB, Certified Selenium Engineer) or cloud certifications (AWS, GCP, Azure).

Education:
- Bachelor's or master's degree in computer science, software engineering, or a related technical discipline.
- Certifications or formal training in machine learning, data science, or MLOps are a bonus.

Why Join Us?
At GRhombus, we are redefining quality assurance and software testing with cutting-edge methodologies and a commitment to innovation. As a test automation lead, you will play a pivotal role in shaping the future of automated testing, optimizing frameworks, and driving efficiency across our engineering ecosystem. Be part of a workplace that values experimentation, learning, and professional growth. Contribute to an organisation where your ideas drive innovation and make a tangible impact.
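One way to read the "non-deterministic model outputs" strategy above: score responses against a golden dataset with a similarity tolerance rather than exact matches. This pytest sketch illustrates the idea in Python (the posting's own framework is Playwright/TypeScript); the endpoint stub, dataset, and 0.6 cutoff are all illustrative assumptions.

```python
import difflib
import pytest

GOLDEN = [  # tiny stand-in for a versioned golden dataset
    ("What is your refund window?", "Refunds are processed within 5 business days."),
    ("Do you ship overseas?", "Yes, we ship to most countries worldwide."),
]

def call_chatbot(prompt: str) -> str:
    """Placeholder for the real inference API call (assumed, not from the posting)."""
    if "refund" in prompt.lower():
        return "Refunds are processed in about 5 business days."
    return "Yes, we ship worldwide."

@pytest.mark.parametrize("prompt,expected", GOLDEN)
def test_chat_response_within_tolerance(prompt, expected):
    # Non-deterministic outputs: require similarity above a threshold, not equality.
    actual = call_chatbot(prompt)
    similarity = difflib.SequenceMatcher(None, actual.lower(), expected.lower()).ratio()
    assert similarity >= 0.6, f"Response drifted: {similarity:.2f} < 0.6\n{actual!r}"
```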

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Lead
Project Role Description: Develop and configure software systems, either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must-have skills: Python (Programming Language)
Good-to-have skills: AWS Architecture
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As a Software Engineer with Python expertise, you will develop data-driven applications on AWS and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master's degree preferred).

Posted 1 month ago

Apply

7.5 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Lead
Project Role Description: Develop and configure software systems, either end-to-end or for a specific stage of the product lifecycle. Apply knowledge of technologies, applications, methodologies, processes, and tools to support a client, project, or entity.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As a Software Engineer with Python expertise, you will develop data-driven applications on AWS and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master's degree preferred).

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

India

Remote

Role: Data Science Developer
Location: Remote

Responsibilities:
- Develop and productionize cloud-based services and full-stack applications utilizing NLP solutions, including GenAI models.
- Implement and manage CI/CD pipelines to ensure efficient and reliable software delivery.
- Automate cloud infrastructure using Terraform.
- Write unit tests, integration tests, and performance tests.
- Work in a team environment using agile practices.
- Support administration of the data science experimentation environment, including AWS SageMaker and Nvidia GPU servers.
- Monitor and optimize application performance and infrastructure costs.
- Collaborate with data scientists and other developers to integrate and deploy data science models into production environments.
- Educate others to improve coding standards, code quality, test coverage, and documentation.
- Work closely with cross-functional teams to ensure seamless integration and operation of services.

What We're Looking For:

Basic Required Qualifications:
- 5-8 years of experience in software engineering
- Proficiency in Python and JavaScript for full-stack development
- Experience in writing and maintaining high-quality code, utilizing techniques like unit testing and code reviews
- Strong understanding of object-oriented design and programming concepts
- Strong experience with AWS cloud services, including EKS, Lambda, and S3
- Knowledge of Docker containers and orchestration tools, including Kubernetes
- Experience with monitoring, logging, and tracing tools (e.g., Datadog, Kibana, Grafana)
- Knowledge of message queues and event-driven architectures (e.g., AWS SQS, Kafka)
- Experience with CI/CD pipelines in Azure DevOps and GitHub Actions

Additional Preferred Qualifications:
- Experience writing front-end web applications using JavaScript and React
- Familiarity with infrastructure as code (IaC) using Terraform
- Experience with Azure or GCP cloud services
- Proficiency in C# or Java
- Experience with SQL and NoSQL databases
- Knowledge of machine learning concepts
- Experience with large language models
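A minimal sketch of the event-driven pattern this listing references (AWS SQS): a worker that polls a queue and hands each message to a model-scoring stub. The queue URL, message shape, and scoring function are assumptions for illustration.

```python
import json
import boto3

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/score-requests"  # placeholder

def score(payload: dict) -> float:
    """Stand-in for invoking a deployed NLP/GenAI model."""
    return len(payload.get("text", "")) / 100.0

def poll_once() -> None:
    # Long polling (WaitTimeSeconds) reduces empty receives and cost.
    resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=10, WaitTimeSeconds=20)
    for msg in resp.get("Messages", []):
        result = score(json.loads(msg["Body"]))
        print(f"scored message {msg['MessageId']}: {result:.2f}")
        # Delete only after successful processing, so failures are retried.
        sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])

if __name__ == "__main__":
    poll_once()
```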

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As a Software Engineer with Python expertise, you will develop data-driven applications on AWS and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master's degree preferred).

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary:
As a Sr. Full Stack Engineer, you will develop data-driven applications on AWS for the client and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. At least 5 years of experience in Python programming, with web framework expertise (Django, Flask, or FastAPI).
2. Exposure to database technologies (SQL and NoSQL) and API development.
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform).
4. Exposure to Test-Driven Development (TDD).
5. Practices DevOps in software solutions and is well-versed in Agile methodologies.
6. AWS certification is a plus.
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale.

Additional Information:
1. The candidate should have a minimum of 5 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (bachelor's degree in computer science, software engineering, or a related field).

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary:
As an IoT Engineer with Python expertise, you will develop data-driven applications on AWS IoT for the client and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Collaborate with data scientists and analysts to implement data processing pipelines
4. Participate in architecture discussions and contribute to technical decision-making
5. Ensure the scalability, reliability, and performance of Python applications on AWS
6. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. At least 3 years of experience in Python programming, with integration against AWS IoT Core.
2. Exposure to database technologies (SQL and NoSQL) and API development.
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform).
4. Exposure to Test-Driven Development (TDD).
5. Practices DevOps in software solutions and is well-versed in Agile methodologies.
6. AWS certification is a plus.
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale.

Additional Information:
1. The candidate should have a minimum of 7 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (bachelor's degree in computer science, software engineering, or a related field).

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: Bachelor of Engineering in Electronics or any related stream

Summary:
As a Sr. Backend Engineer, you will develop data-driven applications on AWS for the client and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. At least 3 years of experience in Python programming, with web framework expertise (Django, Flask, or FastAPI).
2. Exposure to database technologies (SQL and NoSQL) and API development.
3. Significant experience working with AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR) and Infrastructure as Code (e.g., AWS CloudFormation, Terraform).
4. Exposure to Test-Driven Development (TDD).
5. Practices DevOps in software solutions and is well-versed in Agile methodologies.
6. AWS certification is a plus.
7. Well-developed analytical skills; rigorous but pragmatic, able to justify decisions with solid rationale.

Additional Information:
1. The candidate should have a minimum of 7 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (bachelor's degree in computer science, software engineering, or a related field).

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: Python (Programming Language)
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As a Software Engineer with Python expertise, you will develop data-driven applications on AWS and be responsible for creating scalable data pipelines and algorithms that process and deliver actionable vehicle data insights.

Roles & Responsibilities:
1. Lead the design and development of Python-based applications and services
2. Architect and implement cloud-native solutions using AWS services
3. Mentor and guide the Python development team, promoting best practices and code quality
4. Collaborate with data scientists and analysts to implement data processing pipelines
5. Participate in architecture discussions and contribute to technical decision-making
6. Ensure the scalability, reliability, and performance of Python applications on AWS
7. Stay current with Python ecosystem developments, AWS services, and industry best practices

Professional & Technical Skills:
1. Python programming
2. Web framework expertise (Django, Flask, or FastAPI)
3. Data processing and analysis
4. Database technologies (SQL and NoSQL)
5. API development
6. Significant experience working with AWS Lambda
7. AWS services (e.g., EC2, S3, RDS, Lambda, SageMaker, EMR); any AWS certification is a plus
8. Infrastructure as Code (e.g., AWS CloudFormation, Terraform)
9. Test-Driven Development (TDD)
10. DevOps practices
11. Agile methodologies
12. Experience with big data technologies and data warehousing solutions on AWS (e.g., Redshift, EMR, Athena)
13. Strong knowledge of the AWS platform and services (e.g., EC2, S3, RDS, Lambda, API Gateway, VPC, IAM)

Additional Information:
1. The candidate should have a minimum of 3 years of experience in Python programming.
2. This position is based at our Hyderabad office.
3. 15 years of full-time education is required (Bachelor of Computer Science or any related stream; master's degree preferred).

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

On-site

WhizzHR is hiring: Media Solution Architect – AI/ML & Automation Focus

Role Summary:
We are seeking a Media Solution Architect to lead the strategic design of AI-driven and automation-centric solutions across digital media operations. This role involves architecting intelligent, scalable systems that enhance efficiency across campaign setup, trafficking, reporting, QA, and billing processes. The ideal candidate will bring a strong blend of automation, AI/ML, and digital marketing expertise to drive innovation and operational excellence.

Key Responsibilities:
- Identify and assess opportunities to apply AI/ML and automation across media operations workflows (e.g., intelligent campaign setup, anomaly detection in QA, dynamic taxonomy validation).
- Design scalable, intelligent architectures using a combination of machine learning models, RPA, Python-based automation, and media APIs (e.g., Meta, DV360, YouTube).
- Develop or integrate machine learning models for use cases such as performance prediction, media mix modeling, and anomaly detection in reporting or billing.
- Ensure adherence to best practices in data governance, compliance, and security, particularly around AI system usage.
- Partner with business stakeholders to prioritize high-impact AI/automation use cases and define clear ROI and success metrics.
- Stay informed on emerging trends in AI/ML and translate innovations into actionable media solutions.

Ideal Profile:
- 7+ years of experience in automation, AI/ML, or data science, including 3+ years in marketing, ad tech, or digital media.
- Strong understanding of machine learning frameworks for predictive modeling, anomaly detection, and NLP-based insight generation.
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, pandas, or PyTorch.
- Experience with cloud-based AI platforms (e.g., Google Vertex AI, Azure ML, AWS SageMaker) and media API integrations.
- Ability to architect AI-enhanced automations that improve forecasting, QA, and decision-making in media operations.
- Familiarity with RPA tools (e.g., UiPath, Automation Anywhere); AI-first automation experience is a plus.
- Demonstrated success in developing or deploying ML models for campaign optimization, fraud detection, or process intelligence.
- Familiarity with digital media ecosystems such as Google Ads, Meta, TikTok, DSPs, and ad servers.
- Excellent communication and stakeholder management skills, with the ability to translate technical solutions into business value.

Kindly share your resume at Hello@whizzhr.com
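A toy sketch of the billing/reporting anomaly detection this role mentions, using scikit-learn's IsolationForest on daily campaign metrics. The synthetic data and contamination rate are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Columns: daily spend (USD), impressions (thousands) for one campaign.
normal_days = np.column_stack([rng.normal(500, 50, 200), rng.normal(120, 15, 200)])
anomalies = np.array([[2500.0, 118.0], [480.0, 5.0]])  # overspend; delivery collapse
X = np.vstack([normal_days, anomalies])

model = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = model.predict(X)  # -1 marks suspected anomalies

for row in X[flags == -1]:
    print(f"review day: spend=${row[0]:.0f}, impressions={row[1]:.0f}k")
```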

Posted 1 month ago

Apply

7.5 years

0 Lacs

Pune, Maharashtra, India

On-site

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: AWS Glue
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary:
As part of a data transformation programme, you will join the Data Marketplace team. In this team you will be responsible for architecture and design for automating data management compliance validation, monitoring, and reporting through rule-based and AI-driven mechanisms, integrating with metadata repositories and governance tools for real-time policy enforcement; and for delivering design specifications for real-time metadata integration, enhanced automation, audit logging, monitoring capabilities, and lifecycle management (including version control, decommissioning, and rollback). Experience with the implementation and adaptation of data management and data governance controls around Data Product implementations, preferably on AWS, is preferred; experience with AI is appreciated. Example skills: Data Architecture, Data Marketplace, Data Governance, Data Engineering, AWS DataZone, AWS SageMaker Unified Studio.

As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team in implementing effective solutions. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that all stakeholders are informed and involved in the decision-making process. Your role will require a balance of technical expertise and leadership skills to drive project success and foster a collaborative team environment.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have: proficiency in AWS Glue.
- Strong understanding of data integration and ETL processes.
- Experience with cloud computing platforms and services.
- Familiarity with data warehousing concepts and best practices.
- Ability to troubleshoot and optimize data workflows.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in AWS Glue.
- This position is based in Pune.
- 15 years of full-time education is required.
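For context on the AWS Glue skill this listing centers on, here is a minimal Glue ETL script sketch. It only runs inside a Glue job (not locally), and the database, table, and bucket names are placeholder assumptions.

```python
# Minimal AWS Glue ETL sketch: read a catalogued table, apply a basic
# quality rule, and write curated Parquet back to S3.
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

dyf = glue_context.create_dynamic_frame.from_catalog(
    database="marketplace_raw",  # placeholder catalog database
    table_name="orders",         # placeholder table
)
df = dyf.toDF().filter("order_id IS NOT NULL")  # drop rows failing a quality rule

df.write.mode("overwrite").parquet("s3://example-curated-bucket/orders/")
job.commit()
```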

Posted 1 month ago

Apply

20.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Staff AI Engineer – MLOps
Company: Rapid7
Team: AI Center of Excellence

Team Overview:
- Cross-functional team of data scientists and AI engineers
- Mission: leverage AI/ML to protect customer attack surfaces
- Partners with Detection and Response teams, including MDR
- Encourages creativity, collaboration, and research publication
- Draws on 20+ years of threat analysis and a growing patent portfolio

Tech Stack:
- Cloud/Infra: AWS (SageMaker, Bedrock), EKS, Terraform
- Languages/Tools: Python, Jupyter, NumPy, Pandas, Scikit-learn
- ML Focus: anomaly detection, unlabeled data

Role Summary:
- Build and deploy ML production systems
- Manage end-to-end data pipelines and ensure data quality
- Implement ML guardrails and robust monitoring
- Deploy web apps and REST APIs with strong data security
- Share knowledge, mentor engineers, collaborate cross-functionally
- Embrace agile, iterative development

Requirements:
- 8-12 years in software engineering (3+ in ML deployment on AWS)
- Strong in Python, Flask/FastAPI, API development
- Skilled in CI/CD, Docker, Kubernetes, MLOps, cloud AI tools
- Experience in data pre-processing, feature engineering, model monitoring
- Strong communication and documentation skills
- Collaborative mindset, growth-oriented problem-solving

Preferred Qualifications:
- Experience with Java
- Background in the security industry
- Familiarity with AI/ML model operations, LLM experimentation
- Knowledge of model risk management (drift monitoring, hyperparameter tuning, registries)

About Rapid7:
Rapid7 is committed to securing the digital world through passion, collaboration, and innovation. With over 10,000 customers globally, it offers a dynamic, growth-focused workplace and tackles major cybersecurity challenges with diverse teams and a mission-driven approach.
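A minimal sketch of the "REST API with guardrails" idea from this listing: a FastAPI endpoint that validates inputs with Pydantic before scoring and checks the output before returning it. The model and its feature schema are stand-in assumptions.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel, Field

app = FastAPI(title="anomaly-scorer")  # run with: uvicorn app:app

class Features(BaseModel):
    # Input guardrail: reject out-of-range values before they reach the model.
    bytes_sent: float = Field(ge=0)
    login_failures: int = Field(ge=0, le=10_000)

def model_score(f: Features) -> float:
    """Stand-in for a real anomaly model loaded at startup."""
    return min(1.0, 0.0001 * f.bytes_sent + 0.05 * f.login_failures)

@app.post("/score")
def score(features: Features):
    s = model_score(features)
    if s != s:  # output guardrail: catch NaN before it reaches callers
        raise HTTPException(status_code=500, detail="model returned NaN")
    return {"anomaly_score": s}
```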

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Machine Learning Engineer (Python, AWS)

We are seeking an experienced Machine Learning Engineer with 5+ years of hands-on experience in developing and deploying ML solutions. The ideal candidate will have strong Python programming skills and a proven track record working with AWS services for machine learning.

Responsibilities:
- Design, develop, and deploy scalable machine learning models.
- Implement and optimize ML algorithms using Python.
- Leverage AWS services (e.g., SageMaker, EC2, S3, Lambda) for ML model training, deployment, and monitoring.
- Collaborate with data scientists and other engineers to bring ML solutions to production.
- Ensure the performance, reliability, and scalability of ML systems.

Qualifications:
- Bachelor's or master's degree in computer science, engineering, or a related field.
- 5+ years of professional experience as a Machine Learning Engineer.
- Expertise in Python programming for machine learning.
- Strong experience with AWS services for ML (SageMaker, EC2, S3, Lambda, etc.).
- Solid understanding of machine learning algorithms and principles.
- Experience with MLOps practices is a plus.
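To make the "training, deployment, and monitoring on SageMaker" requirement concrete, here is a sketch of the SageMaker Python SDK train-then-deploy flow. The IAM role ARN, S3 paths, and entry-point script are placeholder assumptions.

```python
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::123456789012:role/SageMakerExecutionRole"  # placeholder ARN

estimator = SKLearn(
    entry_point="train.py",        # your training script (assumed to exist)
    framework_version="1.2-1",
    instance_type="ml.m5.large",
    instance_count=1,
    role=role,
    sagemaker_session=session,
)

# Launches a managed training job against data already staged in S3.
estimator.fit({"train": "s3://example-bucket/training-data/"})

# Stands up a real-time HTTPS endpoint for inference.
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.endpoint_name)
```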

Posted 1 month ago

Apply

2.0 - 5.0 years

0 Lacs

Delhi, India

Remote

Job Title: AI Engineer
Location: Remote
Employment Type: Full-time

About the Role:
We are seeking a skilled and motivated AI Engineer to help us build intelligent, agentic systems that drive real-world impact. In this role, you will develop, deploy, and maintain AI models and pipelines, working with large language models (LLMs), vector databases, and orchestration frameworks like LangChain. You will collaborate across teams to build robust, scalable AI-driven solutions.

Key Responsibilities:
- Design and develop intelligent systems using LLMs, retrieval-augmented generation (RAG), and agentic frameworks.
- Build and deploy AI pipelines using LangChain, vector stores, and custom tools.
- Integrate models with production APIs and backend systems.
- Monitor, fine-tune, and improve the performance of deployed AI systems.
- Collaborate with data engineers, product managers, and UX designers to deliver AI-first user experiences.
- Stay up to date with advancements in generative AI, LLMs, and orchestration frameworks.

Required Qualifications:
- 2-5 years of experience in building and deploying machine learning or AI-based systems.
- Hands-on experience with LangChain for building agent workflows or RAG pipelines.
- Proficiency in Python and frameworks such as PyTorch, TensorFlow, or Scikit-learn.
- Experience with cloud platforms (AWS, GCP, Azure) and containerization (Docker, Kubernetes).
- Strong understanding of prompt engineering, embeddings, and vector database operations (e.g., FAISS, Pinecone, Weaviate).
- Familiarity with MLOps tools such as MLflow, SageMaker, or Vertex AI.

Preferred Qualifications:
- Experience with large language models (e.g., GPT, Claude, LLaMA) and GenAI platforms (e.g., OpenAI, Bedrock, Anthropic).
- Background in NLP, RAG architectures, or autonomous agents.
- Experience in deploying AI applications via APIs and microservices.
- Contributions to open-source LangChain or GenAI ecosystems.

Why Join Us?
- Remote-first company working on frontier AI systems.
- Opportunity to shape production-grade AI experiences used globally.
- Dynamic, collaborative, and intellectually curious team.
- Competitive compensation with fast growth potential.
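To illustrate the "vector database operations" skill named above, here is a small FAISS sketch: build an index over normalized embeddings and run a top-k cosine-similarity search. The embedding dimension and random vectors are stand-ins for a real embedding model's output.

```python
import faiss
import numpy as np

rng = np.random.default_rng(7)
dim = 384  # e.g., a typical sentence-embedding size (assumed)

# Pretend these are document embeddings from your embedding model.
doc_vectors = rng.standard_normal((1_000, dim)).astype("float32")
faiss.normalize_L2(doc_vectors)  # normalize so inner product == cosine similarity

index = faiss.IndexFlatIP(dim)   # exact inner-product search
index.add(doc_vectors)

query = rng.standard_normal((1, dim)).astype("float32")
faiss.normalize_L2(query)

scores, ids = index.search(query, k=5)
print("top-5 doc ids:", ids[0], "cosine scores:", np.round(scores[0], 3))
```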

Posted 1 month ago

Apply

12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About the Role
We are seeking a Director of Software Engineering to lead our engineering team. This role requires a strategic and hands-on leader with deep expertise in Java and Amazon Web Services (AWS), with experience in modernizing platforms, cloud-native migrations, and hybrid strategies. The ideal candidate will have a strong product mindset, extensive experience in building scalable cloud-native applications, and the ability to drive engineering excellence in a fast-paced environment.

Key Responsibilities
• Technical Leadership: Define and implement best practices for Java-based architectures and scalable backend systems.
• Team Management: Lead, mentor, and grow a high-performing team of software engineers and engineering managers.
• Cloud & Infrastructure: Design, deploy, and optimize AWS-based solutions, leveraging services like EC2, Lambda, S3, RDS, and DynamoDB.
• Performance & Scalability: Ensure high availability, security, and performance of distributed systems on AWS and in our data centers.
• APIs: Architect, design, and document RESTful APIs as a product for both internal and external customers.
• Agile Development: Foster an engineering culture of excellence with a focus on product delivery with quality and technological advantage.
• Technology Roadmap: Stay ahead of industry trends, identifying opportunities for modernization and innovation.
• Stakeholder Collaboration: Work closely with leadership, product, and operations teams to align engineering efforts with business goals.

Required Qualifications
• Experience: 12+ years in software engineering, with at least 5 years in a leadership role.
• Technical Expertise:
  - Strong background in Java, the JDK, and its ecosystem
  - Hands-on expertise in both data center and AWS architectures, deployments, and automation
  - Strong experience with SQL/NoSQL databases (Oracle, PostgreSQL, MySQL, DynamoDB)
  - Proficiency in RESTful APIs, event-driven architecture (Kafka, SNS/SQS), and service design
  - Strong grasp of security best practices, IAM roles, and compliance standards on AWS
• Leadership & Strategy: Proven track record of scaling engineering teams and aligning technology with business goals.
• Problem-Solving Mindset: Ability to diagnose complex technical issues and optimize outcomes.

Preferred Qualifications
• Experience in high-scale SaaS applications using Java and AWS.
• Knowledge of AI/ML services on AWS (SageMaker, Bedrock) and data engineering pipelines.
• Agile & DevOps: Experience implementing DevOps pipelines, CI/CD, and Infrastructure as Code (Terraform, CloudFormation).
• Background in fintech, e-commerce, or enterprise software is a plus.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking a highly driven Client Services Manager – Data Science Lead to lead and deliver end-to-end data science solutions for pharmaceutical clients. This is a consulting-focused role requiring hands-on technical expertise, leadership, and the ability to translate business problems into data science solutions.

Key Responsibilities:
- Lead a team of data scientists, analysts, and engineers on complex healthcare and pharma data projects.
- Partner with senior client stakeholders and contribute to technical solutioning and proposal discussions.
- Deliver full-lifecycle data science projects, including data cleaning, feature engineering, model development, validation, and deployment.
- Apply advanced ML/statistical techniques such as regression models, neural networks, and NLP (including BERT).
- Stay up to date on AI/ML advancements and guide the team on new approaches.
- Ensure models are scalable, optimized, and implemented within sustainable frameworks.

Preferred Experience & Skills:
- Strong programming skills in Python and SQL.
- Familiarity with relational and NoSQL databases such as Postgres and Redshift.
- Hands-on experience with AWS and Azure, and tools like SageMaker and Athena.
- Experience working with healthcare data (e.g., HEOR, RWE, claims data).
- Exposure to large language models (LLMs) like ChatGPT, and prompt engineering.
- Knowledge of visualization tools such as Tableau or Power BI is a plus.
- Previous experience in a consulting environment and building data science products is highly desirable.

Qualifications:
- Bachelor's or master's degree in computer science, data science, or a related field from a reputed institution
- Experience: 7-10 years
- Preferred industry: healthcare, pharma, life sciences

Posted 1 month ago

Apply

0.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Indore, Madhya Pradesh, India

Qualification:
- BTech degree in computer science, engineering, or a related field of study, or 12+ years of related work experience
- 7+ years of design and implementation experience with large-scale, data-centric distributed applications
- Professional experience architecting and operating cloud-based solutions, with a good understanding of core disciplines like compute, networking, storage, security, and databases
- Good understanding of data engineering concepts like storage, governance, cataloging, data quality, and data modeling
- Good understanding of architecture patterns like data lake, data lakehouse, and data mesh
- Good understanding of data warehousing concepts; hands-on experience working with tools like Hive, Redshift, Snowflake, and Teradata
- Experience migrating or transforming legacy customer solutions to the cloud
- Experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone
- Thorough understanding of Big Data ecosystem technologies like Hadoop, Spark, Hive, and HBase, and other competent tools and technologies
- Understanding of designing analytical solutions leveraging AWS cognitive services like Textract, Comprehend, and Rekognition, in combination with SageMaker, is good to have
- Experience working with modern development workflows, such as git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code
- Experience with a programming or scripting language: Python/Java/Scala
- AWS Professional/Specialty certification or relevant cloud expertise

Skills Required: AWS, Big Data, Spark, Technical Architecture

Role:
- Drive innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries.
- Capable of leading a technology team, inculcating an innovative mindset, and enabling fast-paced deliveries.
- Able to adapt to new technologies, learn quickly, and manage high ambiguity.
- Ability to work with business stakeholders and attend/drive various architectural, design, and status calls with multiple stakeholders.
- Exhibit good presentation skills, with a high degree of comfort speaking with executives, IT management, and developers.
- Drive technology/software sales or pre-sales consulting discussions.
- Ensure end-to-end ownership of all tasks being aligned.
- Ensure high-quality software development with complete documentation and traceability.
- Fulfil organizational responsibilities (sharing knowledge and experience with other teams/groups).
- Conduct technical trainings/sessions, and write whitepapers, case studies, and blogs.

Experience: 10 to 18 years
Job Reference Number: 12895

Posted 1 month ago

Apply

0.0 - 5.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Noida, Uttar Pradesh, India; Bangalore, Karnataka, India; Gurugram, Haryana, India; Indore, Madhya Pradesh, India; Pune, Maharashtra, India; Hyderabad, Telangana, India

Qualification:
- Strong experience in Python
- 2+ years of experience working on feature/data pipelines using PySpark
- Understanding of and experience around data science
- Exposure to AWS cloud services such as SageMaker, Bedrock, and Kendra
- Experience with machine learning model lifecycle management tools, and an understanding of MLOps principles and best practices
- Experience with statistical models, e.g., multinomial logistic regression
- Technical architecture, design, deployment, and operational-level knowledge
- Exploratory data analysis
- Knowledge of model building, hyperparameter tuning, and model performance metrics
- Statistics knowledge (probability distributions, hypothesis testing)
- Time series modelling, forecasting, image/video analytics, and natural language processing (NLP)

Good to Have:
- Experience researching and applying large language and Generative AI models
- Experience with LangChain, LlamaIndex, foundation model tuning, data augmentation, and performance evaluation frameworks
- Able to provide analytical expertise in the process of model development, refinement, and implementation across a variety of analytics problems
- Knowledge of Docker and Kubernetes

Skills Required: Machine Learning, Natural Language Processing, AWS SageMaker, Python

Role:
- Generate actionable insights for business improvements.
- Understand business requirements.
- Write clean, efficient, and reusable code following best practices.
- Troubleshoot and debug applications to ensure optimal performance.
- Write unit test cases.
- Collaborate with cross-functional teams to define and deliver new features.
- Derive use cases and create solutions from structured/unstructured data.
- Actively drive a culture of knowledge-building and sharing within the team.
- Experience applying theoretical models in an applied environment.
- MLOps, data pipelines, and data engineering.
- Statistics knowledge (probability distributions, hypothesis testing).

Experience: 4 to 5 years
Job Reference Number: 13027
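Since this listing names multinomial logistic regression specifically, here is a minimal scikit-learn sketch of that model on a toy dataset; the Iris data stands in for the output of a real feature pipeline.

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)  # toy stand-in for real pipeline features
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0, stratify=y)

# Multinomial (softmax) logistic regression, as named in the qualification list.
clf = LogisticRegression(multi_class="multinomial", solver="lbfgs", max_iter=500)
clf.fit(X_train, y_train)

print(classification_report(y_test, clf.predict(X_test)))
```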

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

Noida, Uttar Pradesh, India; Gurugram, Haryana, India; Indore, Madhya Pradesh, India; Bengaluru, Karnataka, India; Pune, Maharashtra, India; Hyderabad, Telangana, India

Qualification:
- 2-4 years of experience in designing, developing, and training machine learning models using diverse algorithms and techniques, including deep learning, NLP, computer vision, and time series analysis.
- Proven ability to optimize model performance through experimentation with architectures, hyperparameter tuning, and evaluation metrics.
- Hands-on experience in processing large datasets, including preprocessing, feature engineering, and data augmentation.
- Demonstrated ability to deploy trained AI/ML models to production using frameworks like Kubernetes and cloud-based ML platforms.
- Solid understanding of monitoring and logging for performance tracking.
- Experience in exploring new AI/ML methodologies and documenting the development and deployment lifecycle, including performance metrics.
- Familiarity with AWS services, particularly SageMaker, is expected.
- Excellent communication, presentation, and interpersonal skills are essential.

Good to Have:
- Knowledge of GenAI (LangChain, foundation model tuning, and GPT-3)
- AWS Certified Machine Learning – Specialty certification

Skills Required: Machine Learning, LangChain, AWS SageMaker, Python

Role:
- Explore different models and transform data science prototypes for a given problem.
- Analyze datasets and perform data enrichment, feature engineering, and model training.
- Able to write code using Python, Pandas, and DataFrame APIs.
- Develop machine learning applications according to requirements.
- Perform statistical analysis and fine-tuning using test results.
- Collaborate with data engineers and architects to implement and deploy scalable solutions.
- Encourage continuous innovation and out-of-the-box thinking.
- Experience applying theoretical models in an applied environment.

Experience: 1 to 3 years
Job Reference Number: 13047
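The hyperparameter tuning mentioned above is often a cross-validated grid search in its simplest form. A small scikit-learn sketch; the synthetic dataset and tiny grid are assumptions for illustration.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

X, y = make_classification(n_samples=2_000, n_features=20, random_state=0)

# Small illustrative grid; real searches cover more axes or use
# randomized/Bayesian search for efficiency.
grid = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    scoring="f1",
    cv=5,
)
grid.fit(X, y)
print("best params:", grid.best_params_, "best CV F1:", round(grid.best_score_, 3))
```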

Posted 1 month ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Job Information Number of Positions 1 Industry Engineering Date Opened 06/09/2025 Job Type Permanent Work Experience 2-3 years City Bangalore State/Province Karnataka Country India Zip/Postal Code 560037 Location Bangalore About Us CloudifyOps is a company with DevOps and Cloud in our DNA. CloudifyOps enables businesses to become more agile and innovative through a comprehensive portfolio of services that addresses hybrid IT transformation, Cloud transformation, and end-to-end DevOps Workflows. We are a proud Advance Partner of Amazon Web Services and have deep expertise in Microsoft Azure and Google Cloud Platform solutions. We are passionate about what we do. The novelty and the excitement of helping our customers accomplish their goals drives us to become excellent at what we do. Job Description Culture at CloudifyOps : Working at CloudifyOps is a rewarding experience! Great people, a work environment that thrives on creativity, and the opportunity to take on roles beyond a defined job description are just some of the reasons you should work with us. About the Role: We are seeking a proactive and technically skilled AI/ML Engineer with 2–3 years of experience to join our growing technology team. The ideal candidate will have hands-on expertise in AWS-based machine learning, Agentic AI, and Generative AI tools, especially within the Amazon AI ecosystem. You will play a key role in building intelligent, scalable solutions that address complex business challenges. Key Responsibilities: 1. AWS-Based Machine Learning Develop, train, and fine-tune ML models on AWS SageMaker, Bedrock, and EC2. Implement serverless ML workflows using Lambda, Step Functions, and EventBridge. Optimize models for cost/performance using AWS Inferentia/Trainium. 2. MLOps & Productionization Build CI/CD pipelines for ML using AWS SageMaker Pipelines, MLflow, or Kubeflow. Containerize models with Docker and deploy via AWS EKS/ECS/Fargate. Monitor models in production using AWS CloudWatch, SageMaker Model Monitor. 3. Agentic AI Development Design autonomous agent systems (e.g., AutoGPT, BabyAGI) for task automation. Integrate multi-agent frameworks (LangChain, AutoGen) with AWS services. Implement RAG (Retrieval-Augmented Generation) for agent knowledge enhancement. 4. Generative AI & LLMs Fine-tune and deploy LLMs (GPT-4, Claude, Llama 2/3) using LoRA/QLoRA. Build Generative AI apps (chatbots, content generators) with LangChain, LlamaIndex. Optimize prompts and evaluate LLM performance using AWS Bedrock/Amazon Titan. 5. Collaboration & Innovation Work with cross-functional teams to translate business needs into AI solutions. Collaborate with DevOps and Cloud Engineering teams to develop scalable, production-ready AI systems. Stay updated with cutting-edge AI research (arXiv, NeurIPS, ICML). 5. Governance & Documentation Implement model governance frameworks to ensure ethical AI/ML deployments. Design reproducible ML pipelines following MLOps best practices (versioning, testing, monitoring). Maintain detailed documentation for models, APIs, and workflows (Markdown, Sphinx, ReadTheDocs). Create runbooks for model deployment, troubleshooting, and scaling. Technical Skills Programming: Python (PyTorch, TensorFlow, Hugging Face Transformers). AWS: SageMaker, Lambda, ECS/EKS, Bedrock, S3, IAM. MLOps: MLflow, Kubeflow, Docker, GitHub Actions/GitLab CI. Generative AI: Prompt engineering, LLM fine-tuning, RAG, LangChain. Agentic AI: AutoGPT, BabyAGI, multi-agent orchestration. Data Engineering: SQL, PySpark, AWS Glue/EMR. 
Soft Skills
Strong problem-solving and analytical thinking.
Ability to explain complex AI concepts to non-technical stakeholders.

What We’re Looking For
Bachelor’s/Master’s in CS, AI, Data Science, or a related field.
2-3 years of industry experience in AI/ML engineering.
Portfolio of deployed ML/AI projects (GitHub, blog, case studies).
AWS Certified Machine Learning - Specialty certification (good to have).

Why Join Us?
Innovative Projects: Work on cutting-edge AI applications that push the boundaries of technology.
Collaborative Environment: Join a team of passionate engineers and researchers committed to excellence.
Career Growth: Opportunities for professional development and advancement in the rapidly evolving field of AI.

Equal Opportunity Employer
CloudifyOps is proud to be an equal opportunity employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, color, sex, religion, national origin, disability, pregnancy, marital status, sexual orientation, gender reassignment, veteran status, or other protected category.
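Closing out this posting, the serverless-workflow responsibility above pairs naturally with a small example. Below is a hedged sketch of an AWS Lambda handler that forwards an API Gateway request to a SageMaker inference endpoint via boto3; the endpoint name, environment variable, and JSON payload contract are assumptions made for illustration.

```python
# Sketch: Lambda handler that proxies a prediction request to a SageMaker
# endpoint. Endpoint name and payload shape are illustrative assumptions.
import json
import os
import boto3

runtime = boto3.client("sagemaker-runtime")
ENDPOINT = os.environ.get("ENDPOINT_NAME", "my-model-endpoint")  # hypothetical

def handler(event, context):
    # Expect an API Gateway proxy event whose body carries the model features.
    payload = event.get("body") or "{}"
    resp = runtime.invoke_endpoint(
        EndpointName=ENDPOINT,
        ContentType="application/json",
        Body=payload,
    )
    prediction = json.loads(resp["Body"].read())
    return {"statusCode": 200, "body": json.dumps({"prediction": prediction})}
```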

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bangalore, Karnataka, India | Job ID 764972

Join our Team
Ericsson’s R&D Data team is seeking a visionary and technically exceptional Principal Machine Learning Engineer to lead the design, development, and deployment of advanced ML systems at scale. This role sits at the strategic intersection of machine learning, data engineering, and cloud-native architecture, shaping the next generation of AI-driven services at Ericsson.

As a senior technical leader, you will architect and guide the end-to-end ML lifecycle, from data strategy and engineering to large-scale model deployment and continuous optimization. You’ll partner closely with data engineers, software developers, and product stakeholders, while mentoring a high-performing team of ML engineers. Your work will help scale intelligent systems that power mission-critical R&D and network solutions.

Key Responsibilities:
Architect and implement scalable ML solutions, deeply integrated with robust and reliable data pipelines.
Own the complete ML lifecycle: data ingestion, preprocessing, feature engineering, model design, training, evaluation, deployment, and monitoring.
Design and optimize data architectures supporting both batch and streaming ML use cases.
Collaborate with data engineering teams to build real-time and batch pipelines using managed streaming platforms such as Kafka or equivalent technologies (a minimal consumer sketch follows the qualifications below).
Guide the development and automation of ML workflows using modern MLOps and CI/CD practices.
Mentor and lead ML engineers, establishing engineering best practices and fostering a high-performance, collaborative culture.
Align ML efforts with business objectives by working cross-functionally with data scientists, engineers, and product managers.
Stay current with the latest ML and data engineering advancements, integrating emerging tools and frameworks into scalable, production-ready systems.
Champion responsible AI practices, including model governance, explainability, fairness, and compliance.

Required Qualifications:
8+ years of experience in machine learning, applied AI, or data science with a proven record of delivering ML systems at scale.
3+ years of experience in data engineering or building ML-supportive data infrastructure and pipelines.
Advanced degree (MS or PhD preferred) in Computer Science, Data Engineering, Machine Learning, or a related technical field.
Proficient in Python (preferred), with experience in Java or C++ for backend or performance-critical tasks.
Deep expertise with ML frameworks (TensorFlow, PyTorch, JAX) and cloud platforms, especially AWS (SageMaker, Lambda, Step Functions, S3, etc.).
Experience with managed streaming data platforms such as Amazon MSK or similar technologies for real-time ML data pipelines.
Experience with distributed systems and data processing tools such as Spark, Airflow, and AWS Glue.
Fluency in MLOps best practices, including CI/CD, model versioning, observability, and automated retraining pipelines.
Strong leadership skills with experience mentoring engineers and influencing technical direction across teams.
Excellent collaboration and communication skills, with the ability to align ML strategy with product and business needs.
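As a companion to the streaming-pipeline requirement above, here is a hedged sketch of a real-time feature consumer built on the kafka-python client. The topic name, broker address, and event schema are assumptions made for illustration; an Amazon MSK deployment would supply its own bootstrap servers and authentication.

```python
# Sketch: consume telemetry events from Kafka and maintain a rolling feature.
# Broker address, topic, and event schema are illustrative assumptions.
import json
from collections import deque
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer(
    "network-telemetry",                 # hypothetical topic name
    bootstrap_servers="localhost:9092",  # MSK would provide real brokers
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
)

window = deque(maxlen=100)  # rolling window of the last 100 latency readings

for message in consumer:
    event = message.value  # e.g. {"cell_id": "...", "latency_ms": 12.5}
    window.append(event["latency_ms"])
    rolling_mean = sum(window) / len(window)
    # In a real pipeline this feature would be written to a feature store
    # (e.g. SageMaker Feature Store) rather than printed.
    print(f"cell={event['cell_id']} rolling_latency_ms={rolling_mean:.1f}")
```

The same consumer loop generalizes to batch backfills by replaying from an earlier offset, which is one reason streaming platforms suit both batch and real-time ML use cases.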

Posted 1 month ago

Apply

7.0 years

0 Lacs

Greater Chennai Area

Remote

Do you want to make a global impact on patient health? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives.

The Data Science Industrialization team leads engineering efforts to advance AI and data science applications from POCs and prototypes to full production. As a Senior Manager, AI and Analytics Data Engineer, you will be part of a global team responsible for designing, developing, and implementing robust data layers that support data scientists and key advanced analytics/AI/ML business solutions. You will partner with cross-functional data scientists and Digital leaders to ensure efficient and reliable data flow across the organization, and you will lead development of data solutions that support our data science community and drive data-centric decision-making. Join our diverse team in making an impact on patient health through the application of cutting-edge technology and collaboration.

Role Responsibilities
Lead development of data engineering processes to support data scientists and analytics/AI solutions, ensuring data quality, reliability, and efficiency.
As a data engineering tech lead, enforce best practices, standards, and documentation to ensure consistency and scalability, and facilitate related trainings.
Provide strategic and technical input on the AI ecosystem, including platform evolution, vendor scans, and new capability development.
Act as a subject matter expert for data engineering on cross-functional teams in bespoke organizational initiatives by providing thought leadership and execution support for data engineering needs.
Train and guide junior developers on concepts such as data modeling, database architecture, data pipeline management, DataOps and automation, tools, and best practices.
Stay updated with the latest advancements in data engineering technologies and tools, and evaluate their applicability for improving our data engineering capabilities.
Direct data engineering research to advance design and development capabilities.
Collaborate with stakeholders to understand data requirements and address them with data solutions (see the pipeline sketch after the qualifications below).
Partner with the AIDA Data and Platforms teams to enforce best practices for data engineering and data solutions.
Demonstrate a proactive approach to identifying and resolving potential system issues.
Communicate the value of reusable data components to end-user functions (e.g., Commercial, Research and Development, and Global Supply) and promote innovative, scalable data engineering approaches to accelerate data science and AI work.

Basic Qualifications
Bachelor’s degree in computer science, information technology, software engineering, or a related field (Data Science, Computer Engineering, Information Systems, Engineering, or a related discipline).
7+ years of hands-on experience working with SQL, Python, and object-oriented languages (e.g., Java, C++) to build data pipelines and processes.
Proficiency in SQL programming, including the ability to create and debug stored procedures, functions, and views.
Recognized by peers as an expert in data engineering, with deep expertise in data modeling, data governance, and data pipeline management principles.
In-depth knowledge of modern data engineering frameworks and tools such as Snowflake, Redshift, Spark, Airflow, Hadoop, Kafka, and related technologies.
Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.).
Familiarity with machine learning and AI technologies and their integration with data engineering pipelines.
Demonstrated experience interfacing with internal and external teams to develop innovative data solutions.
Strong understanding of the Software Development Life Cycle (SDLC) and the data science development lifecycle (CRISP-DM).
Highly self-motivated to deliver both independently and with strong team collaboration.
Ability to creatively take on new challenges and work outside one’s comfort zone.
Strong English communication skills (written and verbal).

Preferred Qualifications
Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline (preferred, but not required).
Experience in software/product engineering.
Experience with data-science-enabling technology such as Dataiku Data Science Studio, AWS SageMaker, or other data science platforms.
Familiarity with containerization technologies like Docker and orchestration platforms like Kubernetes.
Experience working effectively in a distributed remote team environment.
Hands-on experience working in Agile teams, processes, and practices.
Expertise in cloud platforms such as AWS, Azure, or GCP.
Proficiency in using version control systems like Git.
Pharma & Life Science commercial functional knowledge.
Pharma & Life Science commercial data literacy.
Ability to work non-traditional hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia).

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
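To ground the pipeline-focused responsibilities in this posting, here is a minimal, hedged sketch of a daily data-refresh DAG in Apache Airflow, one of the orchestration tools the qualifications name. The DAG id, task names, extract/load helpers, and schedule are assumptions for illustration only; the `schedule` argument shown is Airflow 2.4+ style.

```python
# Sketch: a daily Airflow DAG that stages raw data and refreshes a curated
# analytics table. Helper functions and all names are illustrative assumptions.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_s3(**context):
    # Hypothetical: pull yesterday's source extract and land it in S3.
    print("extracting source data to s3://raw-bucket/...")

def load_curated_table(**context):
    # Hypothetical: transform staged data and upsert the curated table
    # (e.g. a Snowflake or Redshift MERGE in a real pipeline).
    print("refreshing curated.analytics_table")

with DAG(
    dag_id="daily_analytics_refresh",  # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                 # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract = PythonOperator(task_id="extract_to_s3",
                             python_callable=extract_to_s3)
    load = PythonOperator(task_id="load_curated_table",
                          python_callable=load_curated_table)
    extract >> load  # load runs only after extraction succeeds
```

Keeping each task idempotent, as the print-only stubs suggest, is what makes `catchup` and retry semantics safe in production DAGs.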

Posted 1 month ago

Apply