
243 AWS SageMaker Jobs - Page 7

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

11 - 16 Lacs

Hyderabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 12. Lead Agentic AI Developer. Location: Gurgaon, Hyderabad and Bangalore.
A Lead Agentic AI Developer will drive the design, development, and deployment of autonomous AI systems that enable intelligent, self-directed decision-making. Their day-to-day work focuses on advancing AI capabilities, leading teams, and ensuring ethical, scalable implementations.
Responsibilities:
AI System Design and Development: Architect and build autonomous AI systems that integrate with enterprise workflows, cloud platforms, and LLM frameworks. Develop APIs, agents, and pipelines to enable dynamic, context-aware AI decision-making.
Team Leadership and Mentorship: Lead cross-functional teams of AI engineers, data scientists, and developers. Mentor junior staff in agentic AI principles, reinforcement learning, and ethical AI governance.
Customization and Advancement: Optimize autonomous AI models for domain-specific tasks (e.g., real-time analytics, adaptive automation). Fine-tune LLMs, multi-agent frameworks, and feedback loops to align with business goals.
Ethical AI Governance: Monitor AI behavior, audit decision-making processes, and implement safeguards to ensure transparency, fairness, and compliance with regulatory standards.
Innovation and Research: Spearhead R&D initiatives to advance agentic AI capabilities. Experiment with emerging frameworks (e.g., AutoGen, AutoGPT, LangChain), neuro-symbolic architectures, and self-improving AI systems.
Documentation and Thought Leadership: Publish technical white papers, case studies, and best practices for autonomous AI. Share insights at conferences and contribute to open-source AI communities.
System Validation: Oversee rigorous testing of AI agents, including stress testing, adversarial scenario simulations, and bias mitigation. Validate alignment with ethical and performance benchmarks.
Stakeholder Leadership: Collaborate with executives, product teams, and compliance officers to align AI initiatives with strategic objectives. Advocate for AI-driven innovation across the organization.
What We're Looking For - Required Skills/Qualifications:
Technical Expertise: 8+ years as a Senior AI Engineer, ML Architect, or AI Solutions Lead, with 5+ years focused on autonomous/agentic AI systems (e.g., multi-agent frameworks, self-optimizing systems, or LLM-driven decision engines). Expertise in Python (mandatory) and familiarity with Node.js. Hands-on experience with autonomous AI tools: LangChain, AutoGen, CrewAI, or custom agentic frameworks. Proficiency in cloud platforms: AWS SageMaker (most preferred), Azure ML, or Google Cloud Vertex AI. Experience with MLOps pipelines (e.g., Kubeflow, MLflow) and scalable deployment of AI agents.
Leadership: Proven track record of leading AI/ML teams, managing complex projects, and mentoring technical staff.
Ethical AI: Familiarity with AI governance frameworks (e.g., EU AI Act, NIST AI RMF) and bias mitigation techniques.
Communication: Exceptional ability to translate technical AI concepts for non-technical stakeholders.
Nice to have: Contributions to AI research (published papers, patents) or open-source AI projects (e.g., TensorFlow Agents, AutoGen). Experience with DevOps/MLOps tools: Kubeflow, MLflow, Docker, or Terraform. Expertise in NLP, computer vision, or graph-based AI systems. Familiarity with quantum computing or neuromorphic architectures for AI.
What's In It For You
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People, Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.
Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries
Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
-----------------------------------------------------------
Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf
-----------------------------------------------------------
10 - Officials or Managers (EEO-2 Job Categories - United States of America), IFTECH103.2 - Middle Management Tier II (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)

Posted 1 month ago

Apply

5.0 - 7.0 years

5 - 7 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Job Summary: Greetings from Sight Spectrum Technologies! We would like to confirm that you are interested in this position. Experience: 5+ years. Location: Chennai, Bangalore, Hyderabad, Coimbatore. Description: Strong experience in ETL development, data modeling, and managing data in large-scale environments. Proficient in AWS services including SageMaker, S3, Glue, Lambda, and CloudFormation/Terraform. Hands-on expertise with MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines. Proficiency in Python and SQL; experience with Java is a plus for streaming jobs. Deep understanding of cloud infrastructure automation using Terraform or similar IaC tools. Excellent problem-solving skills with the ability to troubleshoot data processing and deployment issues. Experience in fast-paced, agile development environments with frequent delivery cycles. Strong communication and collaboration skills to work effectively across cross-functional teams.

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Description: Strong experience in ETL development, data modeling, and managing data in large-scale environments. Proficient in AWS services including SageMaker, S3, Glue, Lambda, and CloudFormation/Terraform. Hands-on expertise with MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines. Proficiency in Python and SQL; experience with Java is a plus for streaming jobs. Deep understanding of cloud infrastructure automation using Terraform or similar IaC tools. Excellent problem-solving skills with the ability to troubleshoot data processing and deployment issues. Experience in fast-paced, agile development environments with frequent delivery cycles. Strong communication and collaboration skills to work effectively across cross-functional teams.

Posted 1 month ago

Apply

5.0 - 7.0 years

0 - 0 Lacs

Kolkata, Mumbai (All Areas)

Work from Office

Key Responsibilities: End-to-End Model Development: Design, train, and deploy machine learning models (supervised/unsupervised learning, NLP, time-series forecasting) using Python (scikit-learn, TensorFlow, PyTorch) and R for specialized statistical analysis. Optimize models for computational efficiency via parallelization, quantization, and cloud-native architectures (AWS SageMaker, GCP Vertex AI). Data Engineering & Pipeline Automation: Build robust ETL/ELT pipelines for structured/unstructured data using SQL, Spark, and Apache Airflow. Implement feature engineering workflows and ensure data quality for large-scale datasets (1M+ records). Cross-Functional Collaboration: Partner with product, risk, and engineering teams to operationalize models into business applications (e.g., credit risk systems, customer segmentation tools). Mentor junior engineers and lead code reviews to maintain best practices in MLOps workflows. Model Governance & Compliance: Develop monitoring frameworks for model drift, bias detection, and performance degradation. Ensure compliance with regulatory standards and internal governance policies. Innovation & Research: Stay ahead of industry trends (e.g., LLMs, generative AI) and prototype solutions using no-code platforms. Publish internal white papers on novel methodologies and present findings to stakeholders. Technical Requirements: Core Skills: Python (expert), R (intermediate), SQL, Apache Spark, Apache Arrow, Pydantic. Machine Learning: Regression, classification, clustering, deep learning (CNNs, RNNs). Frameworks: scikit-learn, PyTorch, TensorFlow, Hugging Face, MLflow. Cloud: AWS/GCP/Azure or any cloud platform, Docker, Kubernetes. Domain Expertise (Preferred): Financial services (PD/LGD modelling, ECL calculations, risk analytics). Experience with no-code platforms is a bonus. Knowledge of regulatory frameworks (Basel III, IFRS 9). Soft Skills & Culture Fit: Ownership Mindset: Take end-to-end responsibility for projects, from ideation to deployment. Collaboration: Excel in cross-functional teams with clear communication to technical and non-technical stakeholders. Innovation: Continuously explore emerging tools (e.g., LLMs, AutoML) to solve business challenges.
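The model-drift monitoring responsibility above lends itself to a short illustration. Below is a minimal sketch using a two-sample Kolmogorov-Smirnov test from SciPy; the synthetic feature data and the 0.05 significance threshold are assumptions for demonstration, not values prescribed by the posting.

```python
# Minimal feature-drift check via a two-sample Kolmogorov-Smirnov test.
# The threshold and synthetic data are illustrative assumptions.
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, current: np.ndarray, alpha: float = 0.05) -> bool:
    """Return True if the live feature distribution differs significantly
    from the reference (training-time) distribution."""
    _statistic, p_value = ks_2samp(reference, current)
    return p_value < alpha

rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)  # training distribution
live_feature = rng.normal(loc=0.3, scale=1.0, size=2_000)    # shifted production data

print("drift detected:", detect_drift(train_feature, live_feature))
```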

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Must have: 5+ years of experience in designing, developing, and deploying AI/ML solutions, with at least 3+ years focused on AWS AI/ML services. Deep hands-on experience with Amazon SageMaker for building, training, tuning, and deploying ML models. Proven ability to work with AWS data services such as Amazon S3 (data storage), AWS Glue or AWS Data Wrangler (data processing), and Amazon Athena or Redshift (querying/analytics). Familiarity with AWS AI services such as Amazon Rekognition (computer vision), Amazon Comprehend (NLP), Amazon Transcribe/Polly (speech), and Amazon Lex (chatbots). Experience building end-to-end ML pipelines using AWS-native tools or integrating with tools like Step Functions, Lambda, and CloudWatch for automation and monitoring. Solid understanding of model versioning, deployment strategies (real-time, batch, A/B testing), and model monitoring on AWS. Proficiency in Python for ML model development and deployment. Good to have: Hands-on experience with MLOps practices using AWS tools (e.g., SageMaker Pipelines, Model Registry, CodePipeline, CloudFormation). Familiarity with data lake architecture and tools like AWS Lake Formation. AWS certifications (e.g., AWS Certified Machine Learning - Specialty, Solutions Architect - Associate/Professional). Experience with application performance tuning.
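For readers unfamiliar with the SageMaker build-train-deploy workflow mentioned above, here is a minimal sketch using the SageMaker Python SDK. The role ARN, S3 prefix, training script, and framework version are hypothetical placeholders and would need to match a real AWS environment.

```python
# Minimal sketch: train a scikit-learn model on SageMaker and deploy an endpoint.
# All names (role ARN, bucket, train.py) are assumed placeholders.
import sagemaker
from sagemaker.sklearn.estimator import SKLearn

session = sagemaker.Session()
role = "arn:aws:iam::111122223333:role/SageMakerExecutionRole"  # hypothetical role

estimator = SKLearn(
    entry_point="train.py",            # hypothetical training script
    role=role,
    instance_type="ml.m5.large",
    instance_count=1,
    framework_version="1.2-1",
    py_version="py3",
    sagemaker_session=session,
)
estimator.fit({"train": "s3://example-bucket/train/"})  # hypothetical S3 prefix

# Real-time endpoint for online inference (batch transform is also an option).
predictor = estimator.deploy(initial_instance_count=1, instance_type="ml.m5.large")
print(predictor.predict([[5.1, 3.5, 1.4, 0.2]]))
predictor.delete_endpoint()  # clean up to avoid idle endpoint charges
```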

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Strong understanding of Python, ML concepts and frameworks, FastAPI, and GraphQL. Experience in developing scalable APIs. Knowledge of AWS; preferred services are storage, EC2, and Kubernetes. Exposure to ML best practices, documentation, and unit testing. MLflow, Airflow, ML pipeline creation, drift monitoring and control. Experience in developing and deploying machine learning models in a production environment using CI/CD. Communicate with clients to understand requirements and ask the right questions. Knowledge of Django and database design will be an added advantage. Strong analytical and problem-solving skills. Standards: Model Deployment Standards: Use standardized APIs (e.g., RESTful) to interface with models; implement model versioning and proper naming conventions. Monitoring and Maintenance: Schedule routine model retraining and monitoring. Code Quality Standards: Follow style guides (e.g., PEP 8 in Python); write comprehensive tests and debugging code.
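The posting asks for scalable, versioned REST APIs in front of ML models using FastAPI. A minimal sketch of that pattern follows; the model artifact path, feature schema, and version label are invented for illustration.

```python
# Minimal sketch of a versioned FastAPI endpoint serving a scikit-learn model.
# "model-v1.joblib" and the feature schema are hypothetical.
from typing import List

import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving", version="1.0.0")
model = joblib.load("model-v1.joblib")  # hypothetical artifact from the training pipeline

class PredictRequest(BaseModel):
    features: List[float]

class PredictResponse(BaseModel):
    prediction: float
    model_version: str = "v1"

@app.post("/v1/predict", response_model=PredictResponse)
def predict(req: PredictRequest) -> PredictResponse:
    # Single-row inference; a production service would batch and validate shapes.
    y = model.predict([req.features])[0]
    return PredictResponse(prediction=float(y))
```

Run locally with `uvicorn main:app --reload` and POST a JSON body such as `{"features": [0.1, 0.2, 0.3]}` to `/v1/predict`.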

Posted 1 month ago

Apply

6.0 - 11.0 years

9 - 18 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

Project Role: AI/ML Engineer. Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; may also include deep learning, neural networks, chatbots, and image processing. Must have skills: Google Cloud Machine Learning Services. Good to have skills: NA. Minimum 5+ year(s) of experience is required. Educational Qualification: BE. Summary: As an AI/ML Engineer, you will be responsible for developing applications and systems that utilize AI tools and Cloud AI services. Your typical day will involve applying GenAI models, developing cloud or on-prem application pipelines, and ensuring production-ready quality. You will also work with deep learning, neural networks, chatbots, and image processing. Key Responsibilities: A: Demonstrate various Google-specific designs using effective POCs, as per client requirements. B: Identify the applicability of Google Cloud AI services to use cases, with the ability to project both business and technical benefits. C: Design, build, and productionize ML models to solve business challenges using Google Cloud technologies. D: Lead and guide a team of data scientists. E: Coordinate and collaborate with cross-functional teams. Technical Experience: A: Minimum 3+ years of experience with GCP/AWS ML. B: Exposure to Google Gen AI services. C: Exposure to GCP and its Compute, Storage, Data, Network, and Security services. D: Expert programming skills in any one of Java, Python, or Spark. E: Knowledge of GKE and Kubeflow on GCP would be good to have. F: Experience in Vertex AI for building and managing ML models. G: Experience in implementing MLOps. Additional Information: The candidate should have a minimum of 5+ years of experience in Google Cloud Machine Learning Services.

Posted 1 month ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Chennai, Bengaluru

Work from Office

Job Summary: We are seeking a visionary and hands-on AI Architect with 8+ years of experience in designing and deploying AI-driven systems, including Generative AI (GenAI), LLM integration, computer vision, and agentic workflows. This role demands deep technical expertise in transformer models, PyTorch, and tools such as Stable Diffusion, ControlNet, and LangChain, as well as a strong grasp of dataset strategy, bias mitigation, and scalable AI system architecture. Experience with e-commerce or fashion datasets is a strong plus. Key Responsibilities: Architect and lead the development of AI/ML systems, with a focus on Generative AI, including diffusion models, GANs, and LLMs. Implement and optimize models such as Stable Diffusion, ControlNet, and OpenPose for real-world use cases in visual content generation. Design LLM-integrated solutions using LangChain, agentic workflows, and multimodal AI systems. Collaborate with cross-functional teams to define dataset strategy, ensuring data relevance, diversity, and quality. Integrate Hugging Face Transformers and PyTorch-based architectures into production environments. Define and implement best practices around bias and fairness, model explainability, and ethical AI design. Lead architectural decisions for scalable AI platforms, focusing on performance, reliability, and continuous learning capabilities. Leverage domain knowledge in e-commerce/fashion for visual understanding, recommendation, or personalization solutions. Qualifications: 8+ years of experience in AI/ML system architecture, including 3+ years specifically in Generative AI and LLM ecosystems. Proficiency in PyTorch, Transformers, Hugging Face, and LangChain. Deep understanding of diffusion models, GANs, ControlNet, and OpenPose. Experience in designing agentic AI workflows, real-time inference systems, and production-grade AI infrastructure. Proven track record in dataset creation/curation, bias analysis, and fairness optimization. Strong communication skills and the ability to translate complex AI concepts into business impact. Preferred Qualifications: Prior experience with e-commerce, fashion technology, or retail-based AI datasets. Contributions to open-source AI frameworks or publications in relevant research areas. Familiarity with cloud AI services (e.g., AWS SageMaker, Azure AI, or GCP Vertex AI). Understanding of AI observability, versioning, and model governance. Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad.

Posted 1 month ago

Apply

8.0 - 10.0 years

11 - 18 Lacs

Kanpur

Work from Office

Role Summary: We are seeking a highly skilled Senior Data Science Consultant with 8+ years of experience to lead an internal optimization initiative. The ideal candidate should have a strong background in data science, operations research, and mathematical optimization, with a proven track record of applying these skills to solve complex business problems. This role requires a blend of technical depth, business acumen, and collaborative communication. A background in internal efficiency/operations improvement or cost/resource optimization projects is highly desirable. Key Responsibilities: - Lead and contribute to internal optimization-focused data science projects from design to deployment. - Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making. - Use techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms. - Collaborate with business stakeholders to gather requirements and translate them into data science use cases. - Build robust data pipelines and use statistical and machine learning methods to drive insights. - Communicate complex technical findings in a clear, concise manner to both technical and non-technical audiences. - Mentor junior team members and contribute to knowledge sharing and best practices within the team. Required Skills And Qualifications: - Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or related fields. - Minimum 8 years of relevant experience in data science, with a strong focus on optimization. - Expertise in Python (NumPy, Pandas, SciPy, Scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX. - Experience with the end-to-end lifecycle of internal optimization projects. - Strong analytical and problem-solving skills. - Excellent communication and stakeholder management abilities. Preferred Qualifications: - Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction. - Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker. - Familiarity with dashboards and visualization tools like Power BI or Tableau. - Prior experience in consulting or internal centers of excellence (CoE) is a plus.
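As an illustration of the optimization tooling listed above, here is a small resource-allocation linear program in PuLP; the products, profits, and capacity limits are invented numbers, not data from any project.

```python
# Toy resource-allocation LP with PuLP; all figures are invented for illustration.
from pulp import LpMaximize, LpProblem, LpVariable, lpSum, value

products = ["A", "B"]
profit = {"A": 30, "B": 45}        # profit per unit (assumed)
machine_hours = {"A": 2, "B": 4}   # machine hours per unit (assumed)
labour_hours = {"A": 3, "B": 2}    # labour hours per unit (assumed)

model = LpProblem("resource_allocation", LpMaximize)
x = {p: LpVariable(f"units_{p}", lowBound=0) for p in products}

model += lpSum(profit[p] * x[p] for p in products)                              # objective
model += lpSum(machine_hours[p] * x[p] for p in products) <= 100, "machine_cap"  # constraint
model += lpSum(labour_hours[p] * x[p] for p in products) <= 90, "labour_cap"     # constraint

model.solve()
print({p: x[p].value() for p in products}, "profit =", value(model.objective))
```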

Posted 1 month ago

Apply

8.0 - 10.0 years

11 - 18 Lacs

Hyderabad

Work from Office

Role Summary: We are seeking a highly skilled Senior Data Science Consultant with 8+ years of experience to lead an internal optimization initiative. The ideal candidate should have a strong background in data science, operations research, and mathematical optimization, with a proven track record of applying these skills to solve complex business problems. This role requires a blend of technical depth, business acumen, and collaborative communication. A background in internal efficiency/operations improvement or cost/resource optimization projects is highly desirable. Key Responsibilities: - Lead and contribute to internal optimization-focused data science projects from design to deployment. - Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making. - Use techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms. - Collaborate with business stakeholders to gather requirements and translate them into data science use cases. - Build robust data pipelines and use statistical and machine learning methods to drive insights. - Communicate complex technical findings in a clear, concise manner to both technical and non-technical audiences. - Mentor junior team members and contribute to knowledge sharing and best practices within the team. Required Skills And Qualifications: - Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or related fields. - Minimum 8 years of relevant experience in data science, with a strong focus on optimization. - Expertise in Python (NumPy, Pandas, SciPy, Scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX. - Experience with the end-to-end lifecycle of internal optimization projects. - Strong analytical and problem-solving skills. - Excellent communication and stakeholder management abilities. Preferred Qualifications: - Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction. - Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker. - Familiarity with dashboards and visualization tools like Power BI or Tableau. - Prior experience in consulting or internal centers of excellence (CoE) is a plus.

Posted 1 month ago

Apply

15.0 - 20.0 years

9 - 14 Lacs

Hyderabad

Work from Office

Project Role: AI/ML Engineer. Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; may also include deep learning, neural networks, chatbots, and image processing. Must have skills: Large Language Models. Good to have skills: NA. Minimum 12 year(s) of experience is required. Educational Qualification: 15 years full-time education. Summary: As an AI/ML Engineer, you will develop applications and systems utilizing AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. You will apply GenAI models as part of the solution, including deep learning, neural networks, chatbots, and image processing. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Expected to provide solutions to problems that apply across multiple teams. - Lead the implementation of AI/ML models. - Conduct research on emerging AI technologies. - Optimize AI algorithms for performance and scalability. Professional & Technical Skills: - Must-Have Skills: Proficiency in Large Language Models. - Strong understanding of natural language processing techniques. - Experience with cloud AI services like AWS SageMaker or Google AI Platform. - Knowledge of deep learning frameworks such as TensorFlow or PyTorch. - Hands-on experience in developing AI applications. - Familiarity with deploying AI models in production environments. Additional Information: - The candidate should have a minimum of 12 years of experience in Large Language Models. - This position is based at our Hyderabad office. - A 15 years full-time education is required.

Posted 1 month ago

Apply

7.0 - 9.0 years

19 - 25 Lacs

Bengaluru

Work from Office

Job Title: Industry & Function AI Decision Science Manager + S&C GN. Management Level: 07 - Manager. Location: Primary - Bengaluru; Secondary - Gurugram. Must-Have Skills: Consumer Goods & Services domain expertise; AI & ML; proficiency in Python, R, PySpark, SQL; experience in cloud platforms (Azure, AWS, GCP); expertise in Revenue Growth Management, Pricing Analytics, Promotion Analytics, PPA/Portfolio Optimization, Trade Investment Optimization. Good-to-Have Skills: Experience with Large Language Models (LLMs) like ChatGPT, Llama 2, or Claude 2; familiarity with optimization methods, advanced visualization tools (Power BI, Tableau), and Time Series Forecasting. Job Summary: As a Decision Science Manager, you will lead the design and delivery of AI solutions in the Consumer Goods & Services domain. This role involves working closely with clients to provide advanced analytics and AI-driven strategies that deliver measurable business outcomes. Your expertise in analytics, problem-solving, and team leadership will help drive innovation and value for the organization. Roles & Responsibilities: Analyze extensive datasets and derive actionable insights from Consumer Goods data sources (e.g., Nielsen, IRI, EPOS, TPM). Evaluate AI and analytics maturity in the Consumer Goods sector and develop data-driven solutions. Design and implement AI-based strategies to deliver significant client benefits. Employ structured problem-solving methodologies to address complex business challenges. Lead data science initiatives, mentor team members, and contribute to thought leadership. Foster strong client relationships and act as a key liaison for project delivery. Build and deploy advanced analytics solutions using Accenture's platforms and tools. Apply technical proficiency in Python, PySpark, R, SQL, and cloud technologies for solution deployment. Develop compelling data-driven narratives for stakeholder engagement. Collaborate with internal teams to innovate, drive sales, and build new capabilities. Drive insights in critical Consumer Goods domains such as Revenue Growth Management, Pricing Analytics and Pricing Optimization, Promotion Analytics and Promotion Optimization, SKU Rationalization/Portfolio Optimization, Price Pack Architecture, Decomposition Models, and Time Series Forecasting. Professional & Technical Skills: Proficiency in AI and analytics solutions (descriptive, diagnostic, predictive, prescriptive, generative). Expertise in delivering large-scale projects/programs for Consumer Goods clients on Revenue Growth Management: Pricing Analytics, Promotion Analytics, Portfolio Optimization, etc. Deep and clear understanding of typical data sources used in RGM programs: POS, syndicated, shipment, finance, promotion calendar, etc. Strong programming skills in Python, R, PySpark, and SQL; experience with cloud platforms (Azure, AWS, GCP) and proficiency in services like Databricks and SageMaker. Deep knowledge of traditional and advanced machine learning techniques, including deep learning. Experience with optimization techniques (linear, nonlinear, evolutionary methods). Familiarity with visualization tools like Power BI and Tableau. Experience with Large Language Models (LLMs) like ChatGPT and Llama 2. Certifications in Data Science or related fields. Additional Information: The ideal candidate has a strong educational background in data science and a proven track record in delivering impactful AI solutions in the Consumer Goods sector. This position offers opportunities to lead innovative projects and collaborate with global teams. Join Accenture to leverage cutting-edge technologies and deliver transformative business outcomes. About Our Company | Accenture. Qualification: Experience: Minimum 7-9 years of experience in data science, particularly in the Consumer Goods sector. Educational Qualification: Bachelor's or Master's degree in Statistics, Economics, Mathematics, Computer Science, or MBA (Data Science specialization preferred).

Posted 1 month ago

Apply

8.0 - 10.0 years

11 - 18 Lacs

Pune

Work from Office

Role Summary: We are seeking a highly skilled Senior Data Science Consultant with 8+ years of experience to lead an internal optimization initiative. The ideal candidate should have a strong background in data science, operations research, and mathematical optimization, with a proven track record of applying these skills to solve complex business problems. This role requires a blend of technical depth, business acumen, and collaborative communication. A background in internal efficiency/operations improvement or cost/resource optimization projects is highly desirable. Key Responsibilities: - Lead and contribute to internal optimization-focused data science projects from design to deployment. - Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making. - Use techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms. - Collaborate with business stakeholders to gather requirements and translate them into data science use cases. - Build robust data pipelines and use statistical and machine learning methods to drive insights. - Communicate complex technical findings in a clear, concise manner to both technical and non-technical audiences. - Mentor junior team members and contribute to knowledge sharing and best practices within the team. Required Skills And Qualifications: - Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or related fields. - Minimum 8 years of relevant experience in data science, with a strong focus on optimization. - Expertise in Python (NumPy, Pandas, SciPy, Scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX. - Experience with the end-to-end lifecycle of internal optimization projects. - Strong analytical and problem-solving skills. - Excellent communication and stakeholder management abilities. Preferred Qualifications: - Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction. - Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker. - Familiarity with dashboards and visualization tools like Power BI or Tableau. - Prior experience in consulting or internal centers of excellence (CoE) is a plus.

Posted 1 month ago

Apply

8.0 - 10.0 years

11 - 18 Lacs

Mumbai

Work from Office

Role Summary: We are seeking a highly skilled Senior Data Science Consultant with 8+ years of experience to lead an internal optimization initiative. The ideal candidate should have a strong background in data science, operations research, and mathematical optimization, with a proven track record of applying these skills to solve complex business problems. This role requires a blend of technical depth, business acumen, and collaborative communication. A background in internal efficiency/operations improvement or cost/resource optimization projects is highly desirable. Key Responsibilities: - Lead and contribute to internal optimization-focused data science projects from design to deployment. - Develop and implement mathematical models to optimize resource allocation, process performance, and decision-making. - Use techniques such as linear programming, mixed-integer programming, and heuristic and metaheuristic algorithms. - Collaborate with business stakeholders to gather requirements and translate them into data science use cases. - Build robust data pipelines and use statistical and machine learning methods to drive insights. - Communicate complex technical findings in a clear, concise manner to both technical and non-technical audiences. - Mentor junior team members and contribute to knowledge sharing and best practices within the team. Required Skills And Qualifications: - Master's or PhD in Data Science, Computer Science, Operations Research, Applied Mathematics, or related fields. - Minimum 8 years of relevant experience in data science, with a strong focus on optimization. - Expertise in Python (NumPy, Pandas, SciPy, Scikit-learn), SQL, and optimization libraries such as PuLP, Pyomo, Gurobi, or CPLEX. - Experience with the end-to-end lifecycle of internal optimization projects. - Strong analytical and problem-solving skills. - Excellent communication and stakeholder management abilities. Preferred Qualifications: - Experience working on internal company projects focused on logistics, resource planning, workforce optimization, or cost reduction. - Exposure to tools/platforms like Databricks, Azure ML, or AWS SageMaker. - Familiarity with dashboards and visualization tools like Power BI or Tableau. - Prior experience in consulting or internal centers of excellence (CoE) is a plus.

Posted 1 month ago

Apply

8.0 - 10.0 years

15 - 18 Lacs

Vadodara

Work from Office

Location: Vadodara (onsite only). Job Summary: We are looking for a highly skilled and experienced AI/ML Engineer with 8 to 10 years of relevant experience in developing end-to-end machine learning solutions. The ideal candidate will be well-versed in deep learning, natural language processing (NLP), computer vision, and large language models (LLMs). You will be responsible for designing, developing, and deploying scalable AI models tailored to real-world business use cases. Key Responsibilities: Design, train, evaluate, and deploy machine learning and deep learning models aligned with business goals. Work with various ML tasks such as classification, regression, and clustering. Handle NLP, LLM, computer vision, and speech-to-text use cases. Implement and fine-tune models such as LLaMA, Falcon, BERT, T5, and Hugging Face models. Use libraries such as NumPy, Pandas, Matplotlib, spaCy, Scikit-learn, TensorFlow, and PyTorch. Write clean, efficient Python code for data processing and model development. Utilize tools like Google Colab, Jupyter Notebook, and AWS SageMaker for experimentation and deployment. Monitor data drift and automate model retraining pipelines as required. Deploy AI models on cloud infrastructure (preferably AWS) using services like SageMaker, EC2, ECS, and Kubernetes. Collaborate with cross-functional teams to translate complex problems into AI solutions. Good to Have: Experience working with LangChain or LlamaIndex. Familiarity with RAG-based (Retrieval-Augmented Generation) application development. Hands-on experience with Google Cloud Platform (GCP) for model deployment.
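To illustrate the Hugging Face workflow referenced above, the sketch below loads a public text-classification checkpoint with the transformers pipeline API; the model name is a generic public checkpoint, not one specified by the employer.

```python
# Purely illustrative: run inference with a public Hugging Face checkpoint
# via the transformers pipeline API.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="distilbert-base-uncased-finetuned-sst-2-english",  # public demo checkpoint
)

print(classifier("The deployment pipeline failed twice before succeeding."))
# Expected shape of output: [{'label': 'NEGATIVE', 'score': 0.99...}]
```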

Posted 1 month ago

Apply

2.0 - 7.0 years

15 - 20 Lacs

Hyderabad

Work from Office

Job Area: Engineering Group > Software Engineering. General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field. 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc. Preferred Qualifications: 3+ years of experience as a Data Engineer or in a similar role. Experience with data modeling, data warehousing, and building ETL pipelines. Solid working experience with Python and AWS analytical technologies and related resources (Glue, Athena, QuickSight, SageMaker, etc.). Experience with Big Data tools, platforms, and architecture, with solid working experience with SQL. Experience working in a very large data warehousing environment and distributed systems. Solid understanding of various data exchange formats and complexities. Industry experience in software development, data engineering, business intelligence, data science, or a related field, with a track record of manipulating, processing, and extracting value from large datasets. Strong data visualization skills. Basic understanding of Machine Learning; prior experience in ML Engineering a plus. Ability to manage on-premises data and make it inter-operate with AWS-based pipelines. Ability to interface with Wireless Systems/SW engineers and understand the Wireless ML domain; prior experience in the Wireless (5G) domain a plus. Education: Bachelor's degree in computer science, engineering, mathematics, or a related technical discipline. Preferred Qualification: Master's in CS/ECE with a Data Science/ML specialization. Minimum Qualifications: Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field; OR PhD in Engineering, Information Systems, Computer Science, or related field. 3+ years of experience with a programming language such as C, C++, Java, Python, etc. Develops, creates, and modifies general computer applications software or specialized utility programs. Analyzes user needs and develops software solutions. Designs software or customizes software for client use with the aim of optimizing operational efficiency. May analyze and design databases within an application area, working individually or coordinating database development as part of a team. Modifies existing software to correct errors, allow it to adapt to new hardware, or to improve its performance.
Analyzes user needs and software requirements to determine feasibility of design within time and cost constraints. Confers with systems analysts, engineers, programmers and others to design system and to obtain information on project limitations and capabilities, performance requirements and interfaces. Stores, retrieves, and manipulates data for analysis of system capabilities and requirements. Designs, develops, and modifies software systems, using scientific analysis and mathematical models to predict and measure outcome and consequences of design. Principal Duties and Responsibilities: Completes assigned coding tasks to specifications on time without significant errors or bugs. Adapts to changes and setbacks in order to manage pressure and meet deadlines. Collaborates with others inside project team to accomplish project objectives. Communicates with project lead to provide status and information about impending obstacles. Quickly resolves complex software issues and bugs. Gathers, integrates, and interprets information specific to a module or sub-block of code from a variety of sources in order to troubleshoot issues and find solutions. Seeks others' opinions and shares own opinions with others about ways in which a problem can be addressed differently. Participates in technical conversations with tech leads/managers. Anticipates and communicates issues with project team to maintain open communication. Makes decisions based on incomplete or changing specifications and obtains adequate resources needed to complete assigned tasks. Prioritizes project deadlines and deliverables with minimal supervision. Resolves straightforward technical issues and escalates more complex technical issues to an appropriate party (e.g., project lead, colleagues). Writes readable code for large features or significant bug fixes to support collaboration with other engineers. Determines which work tasks are most important for self and junior engineers, stays focused, and deals with setbacks in a timely manner. Unit tests own code to verify the stability and functionality of a feature.

Posted 1 month ago

Apply

10.0 - 15.0 years

40 - 45 Lacs

Bengaluru

Work from Office

AI/ML Architect. Location: Bangalore (Hybrid). Experience: 10-15 years. Key Requirements: Experience & Education: 10+ years in total, 8+ years in AI/ML development, 3+ years in AI/ML architecture. Bachelor's/Master's in CS, AI/ML, Engineering, or similar. Core Technical Skills: Strong in TensorFlow and PyTorch. Expertise in time series modeling and computer vision (object detection, facial/intrusion/anomaly detection). Hands-on with MLOps tools: MLflow, TFX, Kubeflow, SageMaker. Experience in cloud (AWS, Azure, GCP) and edge AI (Jetson, Coral, OpenVINO). Proficient with Docker, Kubernetes, and CI/CD pipelines. Architecture & Integration: Designed hybrid edge-cloud AI systems. Integrated ML models into IIoT platforms. Implemented model fusion with sensor, visual, and third-party data. Leadership & Collaboration: Mentored cross-functional teams (ML, Data, DevOps). Ensured security, compliance, and production readiness of AI models. Translated AI strategy into business-aligned solutions.

Posted 1 month ago

Apply

2.0 - 5.0 years

6 - 15 Lacs

Pune

Remote

Location: Any / Remote / Work From Home. Employment Type: Full-time. Job Summary: We are seeking skilled and innovative AI Engineers with a minimum of 2-4 years of experience in developing and deploying AI solutions. The ideal candidate should have practical knowledge of AWS, Amazon Bedrock, LLMs, Claude AI, and custom AI development. You will contribute to the design and implementation of cutting-edge Generative AI (GenAI) and mainstream AI applications across diverse business domains. This is an excellent opportunity to work with emerging technologies in an agile, fast-paced environment with full ownership of AI engineering pipelines. Key Responsibilities: Develop custom machine learning models and fine-tune foundation models for enterprise use. Design and implement LLM-based and Generative AI solutions tailored to client use cases. Work with OpenAI, Anthropic, and Hugging Face models and their APIs to build secure, scalable AI systems. Perform prompt engineering and experiment with parameter tuning and context optimization. Build APIs and backend services to integrate AI models into product workflows. Collaborate with product, data, and DevOps teams to deliver end-to-end AI-powered solutions. Ensure best practices for AI model governance, security, performance, and ethical use. Stay current with advancements in LLMs, vector databases (e.g., Pinecone, FAISS), and GenAI tooling. Document architecture, experiments, and processes clearly and effectively. Participate in peer reviews, client meetings, and cross-functional planning discussions. Required Qualifications: Minimum of 2-4 years of professional experience in AI/ML, with a specific focus on LLMs and GenAI. Strong experience working with AWS Bedrock, Lambda, and related services. Proficiency in Python, including experience with libraries like Hugging Face and LangChain. Familiarity with Anthropic, OpenAI, or other foundation models in production settings. Experience with custom AI model development and model fine-tuning. Good understanding of cloud-native AI deployments and API integrations. Ability to apply prompt engineering techniques to optimize LLM responses. Exposure to vector databases and RAG pipelines. Strong problem-solving, communication, and team collaboration skills. Preferred Qualifications (strong plus): Hands-on experience with LangChain, RAG pipelines, or agent frameworks. Familiarity with AWS SageMaker, DynamoDB, Lambda, or other AI tooling in AWS. Knowledge of Docker or Kubernetes for containerizing AI workloads. Understanding of MLOps principles and tools (MLflow, SageMaker Pipelines, etc.). Contributions to open-source AI projects or published GenAI applications. AI/ML certifications. Familiarity with enterprise use cases in biotech, healthcare, or manufacturing. What We Offer: Remote work option. Competitive salary package. Exciting projects in the field of LLMs, AI, and cloud-native development. Continuous learning opportunities in Generative AI and cloud AI tools. A collaborative and fast-paced environment with a forward-thinking team. Access to state-of-the-art tools and flexible project ownership. How to Apply: Please submit your resume and cover letter outlining your relevant experience and why you'd be a great fit for Arocom. We differentiate candidates based on professionalism and appreciate new-age formats such as a video resume; if you have one, please share the URL along with your resume. Your privacy is important to us, so we will not be contacting candidates by phone.
If your application is selected, we will email you a link to schedule your interview at a time convenient for you. Please check your email regularly, including your SPAM folder. Arocom encourages work from home and has a Bring Your Own Device (BYOD) policy. Employees and consultants working from home are expected to use their personal laptops/desktops for work-related tasks.
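As a toy illustration of the RAG and vector-search skills listed in this posting, the sketch below embeds a few documents, retrieves the one closest to a query, and builds a prompt. The sentence-transformers checkpoint is a public model chosen for demonstration; a production system would use a managed vector database such as Pinecone or FAISS rather than in-memory NumPy.

```python
# Toy retrieval step of a RAG pipeline: embed documents, embed a query,
# select the closest chunk, and stuff it into a prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Amazon Bedrock exposes foundation models through a managed API.",
    "Vector databases store embeddings for similarity search.",
    "Prompt engineering shapes model behaviour without retraining.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")          # public demo checkpoint
doc_vecs = encoder.encode(docs, normalize_embeddings=True)

query = "How do I search documents by semantic similarity?"
query_vec = encoder.encode([query], normalize_embeddings=True)[0]

scores = doc_vecs @ query_vec          # cosine similarity (vectors are normalized)
best = int(np.argmax(scores))
prompt = f"Answer using this context:\n{docs[best]}\n\nQuestion: {query}"
print(prompt)                          # this prompt would then go to an LLM endpoint
```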

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 18 Lacs

Kolkata

Remote

Key Responsibilities: Design, develop, and implement AI-driven chatbots and IVAs to streamline customer interactions. Work on conversational AI platforms to create a seamless customer experience, with a focus on natural language processing (NLP), intent recognition, and sentiment analysis. Collaborate with cross-functional teams, including product managers and customer support, to translate business requirements into technical solutions. Build, train, and fine-tune machine learning models to enhance IVA capabilities and ensure high accuracy in responses. Continuously optimize models based on user feedback and data-driven insights to improve performance. Integrate IVA/chat solutions with internal systems such as CRM and backend databases. Ensure scalability, robustness, and security of IVA/chat solutions in compliance with industry standards. Participate in code reviews, testing, and deployment of AI solutions to ensure high quality and reliability. Required Skills and Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. 3+ years of experience in developing IVAs/chatbots, conversational AI, or similar AI-driven systems using AWS services. Expert in using Amazon Lex, Amazon Polly, AWS Lambda, and Amazon Connect. AWS Bedrock experience along with SageMaker will be an added advantage. Solid understanding of API integration and experience working with RESTful services. Strong problem-solving skills, attention to detail, and the ability to work independently and in a team. Excellent communication skills in English, both written and verbal. Preferred Qualifications: Experience in financial services or fintech projects. Knowledge of data security best practices and compliance requirements in the financial sector.
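A hedged sketch of the Amazon Lex integration described above: a Lambda fulfillment handler that reads a slot from a Lex V2 event and returns a closing message. The intent name, slot, and reply text are hypothetical.

```python
# Minimal Lambda fulfillment handler for an Amazon Lex V2 bot.
# "AccountType" is a hypothetical slot; the response follows the Lex V2 format.
def lambda_handler(event, context):
    intent = event["sessionState"]["intent"]
    account_type = (intent.get("slots") or {}).get("AccountType")

    reply = "Your request has been recorded."
    if account_type and account_type.get("value"):
        value = account_type["value"]["interpretedValue"]
        reply = f"Fetching the balance for your {value} account."

    return {
        "sessionState": {
            "dialogAction": {"type": "Close"},
            "intent": {"name": intent["name"], "state": "Fulfilled"},
        },
        "messages": [{"contentType": "PlainText", "content": reply}],
    }
```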

Posted 1 month ago

Apply

10.0 - 14.0 years

15 - 20 Lacs

Chennai, Delhi / NCR, Bengaluru

Hybrid

Your day at NTT DATA: We are seeking an experienced Solution Architect/Business Development Manager with expertise in AI/ML to drive business growth and deliver innovative solutions. The successful candidate will be responsible for assessing client business requirements, designing technical solutions, recommending AI/ML approaches, and collaborating with delivery organizations to implement end-to-end solutions. What you'll be doing - Key Responsibilities: Business Requirement Analysis: Assess the client's business requirements and convert them into technical specifications that meet business outcomes. AI/ML Solution Design: Recommend the right AI/ML approaches to meet business requirements and design solutions that drive business value. Opportunity Sizing: Size the opportunity and develop business cases to secure new projects and grow existing relationships. Solution Delivery: Collaborate with delivery organizations to design end-to-end AI/ML solutions, ensuring timely and within-budget delivery. Costing and Pricing: Develop costing and pricing strategies for AI/ML solutions, ensuring competitiveness and profitability. Client Relationship Management: Build and maintain strong relationships with clients, understanding their business needs and identifying new opportunities. Technical Leadership: Provide technical leadership and guidance to delivery teams, ensuring solutions meet technical and business requirements. Knowledge Sharing: Share knowledge and expertise with the team, contributing to the development of best practices and staying up to date with industry trends. Collaboration: Work closely with cross-functional teams, including data science, engineering, and product management, to ensure successful project delivery. Requirements: Education: Master's degree in Computer Science, Engineering, or a related field. Experience: 10+ years of experience in AI/ML solution architecture, business development, or a related field. Technical Skills: Strong technical expertise in AI/ML, including machine learning algorithms, deep learning, and natural language processing; solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity; hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms. Hyperscaler: Experience with cloud-based AI/ML platforms and tools (e.g., AWS SageMaker, Azure Machine Learning, Google Cloud AI Platform). Soft Skills: Excellent business acumen and understanding of business requirements and outcomes; strong communication and interpersonal skills, with the ability to work with clients and delivery teams. Business Acumen: Experience with solution costing and pricing strategies, plus strong analytical and problem-solving skills with the ability to think creatively and drive innovation. Nice to Have: Experience with Agile development methodologies. Knowledge of industry-specific AI/ML applications (e.g., healthcare, finance, retail). Certification in AI/ML or a related field (e.g., AWS Certified Machine Learning - Specialty).
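To make the data-munging and classical-ML skills above concrete, here is an illustrative sketch: basic cleaning and normalization in pandas followed by a logistic regression. The CSV path and column names are invented.

```python
# Illustrative cleaning + logistic regression; "customers.csv" and its columns
# are hypothetical placeholders, not data from this role.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

df = pd.read_csv("customers.csv")                                     # hypothetical input file
df = df.dropna(subset=["age", "income", "churned"])                   # basic cleaning
df["income"] = df["income"].clip(upper=df["income"].quantile(0.99))   # cap extreme outliers

X = df[["age", "income"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

scaler = StandardScaler()
X_train = scaler.fit_transform(X_train)   # normalization fitted on training data only
X_test = scaler.transform(X_test)

clf = LogisticRegression().fit(X_train, y_train)
print("hold-out accuracy:", clf.score(X_test, y_test))
```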

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

New Delhi, Chennai, Bengaluru

Work from Office

Your day at NTT DATA - Cloud AI/GenAI Engineer (ServiceNow): We are seeking a talented AI/GenAI Engineer to join our team in delivering cutting-edge AI solutions to clients. The successful candidate will be responsible for implementing, developing, and deploying AI/GenAI models and solutions on cloud platforms. This role requires knowledge of ServiceNow modules such as CSM and of virtual agent development. The candidate should have strong technical aptitude, problem-solving skills, and the ability to work effectively with clients and internal teams. What you'll be doing - Key Responsibilities: Cloud AI Implementation: Implement and deploy AI/GenAI models and solutions using various cloud platforms (e.g., AWS SageMaker, Azure ML, Google Vertex AI) and frameworks (e.g., TensorFlow, PyTorch, LangChain, Vellum). Build Virtual Agents in ServiceNow: Design, develop, and deploy virtual agents using the ServiceNow agent builder. Integrate ServiceNow: Design and develop seamless integration of ServiceNow with other external AI systems. Agentic AI: Assist in developing agentic AI systems on cloud platforms, enabling autonomous decision-making and action-taking capabilities in AI solutions. Cloud-Based Vector Databases: Implement cloud-native vector databases (e.g., Pinecone, Weaviate, Milvus) or cloud-managed services for efficient similarity search and retrieval in AI applications. Model Evaluation and Fine-tuning: Evaluate and optimize cloud-deployed generative models using metrics like perplexity, BLEU score, and ROUGE score, and fine-tune models using techniques like prompt engineering, instruction tuning, and transfer learning. Security for Cloud LLMs: Apply security practices for cloud-based LLMs, including data encryption, IAM policies, and network security configurations. Client Support: Support client engagements by implementing AI requirements and contributing to solution delivery. Cloud Solution Implementation: Build scalable and efficient cloud-based AI/GenAI solutions according to architectural guidelines. Cloud Model Development: Develop and fine-tune AI/GenAI models using cloud services for specific use cases, such as natural language processing, computer vision, or predictive analytics. Testing and Validation: Conduct testing and validation of cloud-deployed AI/GenAI models, including performance evaluation and bias detection. Deployment and Maintenance: Deploy AI/GenAI models in production environments, ensuring seamless integration with existing systems and infrastructure. Cloud Deployment: Deploy AI/GenAI models in cloud production environments and integrate with existing systems. Education: Bachelor's/Master's in Computer Science, AI, ML, or related fields. Experience: 3-5 years of experience in engineering solutions, with a track record of delivering cloud AI solutions, and at least 2 years of experience with ServiceNow and the ServiceNow agent builder. Technical Skills: Proficiency in cloud AI/GenAI services and technologies across major cloud providers (AWS, Azure, GCP); experience with cloud-native vector databases and managed similarity search services; experience with ServiceNow modules like CSM and the virtual agent builder; experience with security measures for cloud-based LLMs, including data encryption, access controls, and compliance requirements. Programming Skills: Strong programming skills in languages like Python or R. Cloud Platform Knowledge: Strong understanding of cloud platforms, their AI services, and best practices for deploying ML models in the cloud. Communication: Excellent communication and interpersonal skills, with the ability to work effectively with clients and internal teams. Problem-Solving: Strong problem-solving skills, with the ability to analyse complex problems and develop creative solutions. Nice to have: Experience with serverless architectures for AI workloads; experience with ReactJS for rapid prototyping of cloud AI solution frontends. Location: Delhi or Bangalore (with remote work options).

Posted 1 month ago

Apply

5.0 - 10.0 years

1 - 6 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Description: Strong experience in ETL development, data modeling, and managing data in large-scale environments. Proficient in AWS services including SageMaker, S3, Glue, Lambda, and CloudFormation/Terraform. Hands-on expertise with MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines. Proficiency in Python and SQL; experience with Java is a plus for streaming jobs. Deep understanding of cloud infrastructure automation using Terraform or similar IaC tools. Excellent problem-solving skills with the ability to troubleshoot data processing and deployment issues. Experience in fast-paced, agile development environments with frequent delivery cycles. Strong communication and collaboration skills to work effectively across cross-functional teams.

Posted 1 month ago

Apply

5.0 - 10.0 years

0 - 1 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Greetings from Sight Spectrum Technologies! We would like to confirm that you are interested in this position. Company: Sight Spectrum Technologies (https://sightspectrum.com/). Experience: 5+ years. Location: Chennai, Bangalore, Hyderabad, Coimbatore. Description: Strong experience in ETL development, data modeling, and managing data in large-scale environments. Proficient in AWS services including SageMaker, S3, Glue, Lambda, and CloudFormation/Terraform. Hands-on expertise with MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines. Proficiency in Python and SQL; experience with Java is a plus for streaming jobs. Deep understanding of cloud infrastructure automation using Terraform or similar IaC tools. Excellent problem-solving skills with the ability to troubleshoot data processing and deployment issues. Experience in fast-paced, agile development environments with frequent delivery cycles. Strong communication and collaboration skills to work effectively across cross-functional teams. Please fill in the details below for reference: Total Experience: Relevant Experience: Current CTC: Expected CTC: Notice Period (LWD): Current Location: Preferred Location: Payroll Company: Reason for change: Client Company: Offer Details: PF (Yes/No): UAN No: LinkedIn ID: If interested, kindly share your resume with roopavahini@sightspectrum.in.

Posted 1 month ago

Apply

3.0 - 5.0 years

25 - 30 Lacs

Bengaluru

Work from Office

Requirements: 3 to 5 years of hands-on experience with Machine Learning. Proficiency in TensorFlow, PyTorch, or Scikit-learn. Strong Python skills and familiarity with JavaScript or TypeScript. Experience with Pandas, NumPy, SQL, NLP, and reinforcement learning. Knowledge of crypto markets, trading strategies, and technical analysis. Familiarity with cloud AI services like AWS SageMaker or Google AI. Strong analytical and problem-solving skills. Responsibilities: Develop AI-driven trading bots for trend prediction and risk management. Implement predictive analytics and personalized trading recommendations. Train, optimize, and deploy machine learning models. Work with developers to integrate AI into the trading platform. Job Details: Location: Bangalore. Employment Type: Full-Time, Onsite. Interview Process: Initial screening with HR, Technical Interview, System Design Interview, Final discussion with leadership.
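As a toy illustration of the trend-prediction responsibility above, the sketch below engineers a few simple features from a synthetic price series and trains a scikit-learn classifier to predict the next move; real systems would use actual market data feeds and far more rigorous validation.

```python
# Toy trend-prediction model on a synthetic price series (illustrative only).
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
prices = pd.Series(100 + np.cumsum(rng.normal(0, 1, 1_000)))  # synthetic price walk

features = pd.DataFrame({
    "return_1d": prices.pct_change(),
    "ma_5": prices.rolling(5).mean() / prices - 1,
    "ma_20": prices.rolling(20).mean() / prices - 1,
})
target = (prices.shift(-1) > prices).astype(int)  # 1 if the next close is higher

data = pd.concat([features, target.rename("up")], axis=1).dropna()
X_train, X_test, y_train, y_test = train_test_split(
    data[features.columns], data["up"], shuffle=False, test_size=0.2  # time-ordered split
)

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)
print("directional accuracy:", model.score(X_test, y_test))
```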

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Chennai

Work from Office

Experience in CI/CD pipelines and scripting languages, and a deep understanding of version control systems (e.g., Git), containerization (e.g., Docker), and continuous integration/deployment tools (e.g., Jenkins); third-party integration experience is a plus, as is work with cloud computing platforms (e.g., AWS, GCP, Azure), Kubernetes, and Kafka. 4+ years of experience building production-grade ML pipelines. Proficient in Python and frameworks like TensorFlow, Keras, or PyTorch. Experience with cloud build, deployment, and orchestration tools. Experience with MLOps tools such as MLflow, Kubeflow, Weights & Biases, AWS SageMaker, Vertex AI, DVC, Airflow, Prefect, etc. Experience in statistical modeling, machine learning, data mining, and unstructured data analytics. Understanding of the ML lifecycle and MLOps, with hands-on experience productionizing ML models. Detail-oriented, with the ability to work both independently and collaboratively. Ability to work successfully with multi-functional teams, principals, and architects across organizational boundaries and geographies. Equal comfort driving low-level technical implementation and high-level architecture evolution. Experience working with data engineering pipelines.
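For illustration of the MLOps tooling named above, here is a minimal sketch of tracking a training run with MLflow; the experiment name, model, and metric are assumptions for demonstration only.

```python
# Minimal MLflow experiment-tracking sketch; "demo-iris" is a hypothetical
# experiment name and the model/metric are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-iris")  # hypothetical experiment name
with mlflow.start_run():
    n_estimators = 100
    model = RandomForestClassifier(n_estimators=n_estimators).fit(X_train, y_train)

    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # logs a versioned model artifact
```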

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
