
643 Sagemaker Jobs - Page 26

JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

2 - 6 years

25 - 40 Lacs

Bengaluru

Hybrid

Naukri logo

About Position
As a crucial member of our team, you'll play a pivotal role across the entire machine learning lifecycle, contributing to our conversational AI bots, RAG system, and traditional ML problem solving for our observability platform. Your tasks will encompass both operational and engineering aspects, including building production-ready inference pipelines, deploying and versioning models, and implementing continuous validation processes. On the LLM side, you'll fine-tune generative AI models, design agentic language chains, and prototype recommender system experiments.
Role: AI/ML Engineer
Location: Bengaluru, Hyderabad
Experience: 2-6 years
What You'll Do
Fine-tuning generative AI models to enhance performance. Designing AI agents for conversational AI applications. Experimenting with new techniques to develop models for observability use cases. Building and maintaining inference pipelines for efficient model deployment. Managing deployment and model versioning pipelines for seamless updates. Developing tooling to continuously validate models in production environments.
What we're looking for
2-6 years of experience. Demonstrated proficiency in software engineering design practices. Bachelor's or advanced degree in Computer Science, Engineering, Mathematics, or a related field; advanced degree (Master's or Ph.D.) preferred. Experience working with transformer models and text embeddings. Proven track record of deploying and managing ML models in production environments. Familiarity with common ML/NLP libraries such as PyTorch, TensorFlow, Hugging Face Transformers, and spaCy. Experience developing production-grade applications in Python preferred. Proficiency in Kubernetes and containers. Familiarity with concepts/libraries such as scikit-learn, Kubeflow, Argo, and Seldon. Expertise in Python, C++, Kotlin, or similar programming languages. Experience designing, developing, and testing scalable distributed systems.
Familiarity with message broker systems (e.g., Kafka, RabbitMQ). Knowledge of application instrumentation and monitoring practices. Experience with ML workflow management tools such as Airflow, SageMaker, etc. Familiarity with the AWS ecosystem. Past projects involving the construction of agentic language chains.
Benefits
Competitive salary and benefits package. Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.
Note: We prefer candidates from premium institutes and product firms.
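The "deployment and model versioning pipelines" and "tooling to continuously validate models" tasks above can be illustrated with a minimal, framework-free sketch. All names here are hypothetical; a real system would build on a registry such as MLflow or the SageMaker Model Registry rather than an in-memory class.

```python
# Minimal sketch of model versioning with a continuous-validation gate.
# Hypothetical names; illustrative only.
from typing import Callable, Dict, List, Optional, Tuple

class ModelRegistry:
    """Stores versioned models; promotes a version to production only
    if it clears an accuracy gate on held-out data."""

    def __init__(self, accuracy_threshold: float = 0.8) -> None:
        self.versions: Dict[str, Callable[[float], int]] = {}
        self.production: Optional[str] = None
        self.threshold = accuracy_threshold

    def register(self, version: str, model: Callable[[float], int]) -> None:
        self.versions[version] = model

    def validate(self, version: str, holdout: List[Tuple[float, int]]) -> float:
        model = self.versions[version]
        correct = sum(1 for x, y in holdout if model(x) == y)
        return correct / len(holdout)

    def promote(self, version: str, holdout: List[Tuple[float, int]]) -> bool:
        if self.validate(version, holdout) >= self.threshold:
            self.production = version
            return True
        return False

# Toy "models": single-feature threshold classifiers.
holdout = [(0.1, 0), (0.2, 0), (0.7, 1), (0.9, 1)]
registry = ModelRegistry()
registry.register("v1", lambda x: 1 if x > 0.95 else 0)  # misses both positives
registry.register("v2", lambda x: 1 if x > 0.5 else 0)   # correct on all four

registry.promote("v1", holdout)  # accuracy 0.5, the gate rejects it
registry.promote("v2", holdout)  # accuracy 1.0, becomes production
print(registry.production)       # v2
```

The same gate shape applies when the validation runs continuously against live traffic rather than a fixed holdout set.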

Posted 2 months ago

Apply

2 - 6 years

25 - 40 Lacs

Hyderabad

Hybrid

Naukri logo

About Position
As a crucial member of our team, you'll play a pivotal role across the entire machine learning lifecycle, contributing to our conversational AI bots, RAG system, and traditional ML problem solving for our observability platform. Your tasks will encompass both operational and engineering aspects, including building production-ready inference pipelines, deploying and versioning models, and implementing continuous validation processes. On the LLM side, you'll fine-tune generative AI models, design agentic language chains, and prototype recommender system experiments.
Role: AI/ML Engineer
Location: Bengaluru, Hyderabad
Experience: 2-6 years
What You'll Do
Fine-tuning generative AI models to enhance performance. Designing AI agents for conversational AI applications. Experimenting with new techniques to develop models for observability use cases. Building and maintaining inference pipelines for efficient model deployment. Managing deployment and model versioning pipelines for seamless updates. Developing tooling to continuously validate models in production environments.
What we're looking for
2-6 years of experience. Demonstrated proficiency in software engineering design practices. Bachelor's or advanced degree in Computer Science, Engineering, Mathematics, or a related field; advanced degree (Master's or Ph.D.) preferred. Experience working with transformer models and text embeddings. Proven track record of deploying and managing ML models in production environments. Familiarity with common ML/NLP libraries such as PyTorch, TensorFlow, Hugging Face Transformers, and spaCy. Experience developing production-grade applications in Python preferred. Proficiency in Kubernetes and containers. Familiarity with concepts/libraries such as scikit-learn, Kubeflow, Argo, and Seldon. Expertise in Python, C++, Kotlin, or similar programming languages. Experience designing, developing, and testing scalable distributed systems.
Familiarity with message broker systems (e.g., Kafka, RabbitMQ). Knowledge of application instrumentation and monitoring practices. Experience with ML workflow management tools such as Airflow, SageMaker, etc. Familiarity with the AWS ecosystem. Past projects involving the construction of agentic language chains.
Benefits
Competitive salary and benefits package. Culture focused on talent development, with quarterly promotion cycles and company-sponsored higher education and certifications. Opportunity to work with cutting-edge technologies. Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards. Annual health check-ups. Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents.
Note: We prefer candidates from premium institutes and product firms.

Posted 2 months ago

Apply

2 - 3 years

6 - 10 Lacs

Bengaluru

Work from Office

Naukri logo

We are looking for an enthusiastic RPA & Intelligent Automation professional to join the ranks of our newly founded CoE. Help us meet increasing demand from the business, support our rapidly growing portfolio of automation, and make an impact across every business area at Booking.com. We look at our team as a service provider for the entire company, operating with a large degree of autonomy and entrepreneurship.
B.Responsible
Naturally oriented towards improving efficiencies. Seeking accountability from themselves and others. Compassionate collaborator with a deep sense of camaraderie. Willingness to be cross-functional, pick up new skills and cover new ground with/for the team. Striving for continuous improvement and high quality in their work. Strong work ethic and high spirit. Keen to understand and solve real-world problems through technology.
B.Skilled
2-3 years of experience developing in Blue Prism. CS, Engineering or similar university background is a MUST HAVE. Blue Prism certification is a MUST HAVE. Knowledge of Blue Prism's architectural/infrastructure components. Proficiency in core Python libraries like pandas, NumPy, etc. Exposure to AI/ML frameworks like TensorFlow, PyTorch, scikit-learn, etc. Understanding of NLP techniques like text summarization, sentiment analysis, and named entity recognition is good to have. In-depth understanding of AWS components (RDS, EC2, S3, IAM, CloudWatch, Lambda, SageMaker, VPC) is good to have. Experience with VAULT, PASSPORT, GitLab for UAM/Config Management. Exposure to Terraform code for deploying AWS services is good to have. Professional experience with SQL, .NET, C#, HTTP APIs and Web Services. Experience designing, developing, deploying and maintaining software. Experience working in a scrum/agile environment. Excellent communication skills in English.
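One of the NLP techniques named above, text summarization, can be sketched without any framework. The following is a naive frequency-based extractive summarizer, illustrative only; production systems would use transformer models rather than raw word counts, and the sample text is made up.

```python
# Naive extractive summarizer: score each sentence by the corpus-wide
# frequency of its words, then keep the top-scoring sentences.
import re
from collections import Counter

def summarize(text: str, max_sentences: int = 1) -> str:
    sentences = [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]
    freq = Counter(re.findall(r"[a-z']+", text.lower()))

    def score(sentence: str) -> int:
        return sum(freq[w] for w in re.findall(r"[a-z']+", sentence.lower()))

    top = set(sorted(sentences, key=score, reverse=True)[:max_sentences])
    # Preserve the original order of the selected sentences.
    return " ".join(s for s in sentences if s in top)

text = ("Automation reduces manual effort. "
        "Automation also reduces errors in manual processes. "
        "Lunch is at noon.")
print(summarize(text))  # the sentence sharing the most frequent words wins
```

The middle sentence scores highest because it repeats the document's most frequent words (automation, reduces, manual), which is the entire heuristic.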

Posted 2 months ago

Apply

5 - 7 years

9 - 12 Lacs

Bengaluru

Work from Office

Naukri logo

We are looking for an enthusiastic RPA & Intelligent Automation professional to join the ranks of our newly founded CoE. Help us meet increasing demand from the business, support our rapidly growing portfolio of automation, and make an impact across every business area at Booking.com. We look at our team as a service provider for the entire company, operating with a large degree of autonomy and entrepreneurship.
B.Responsible
Naturally oriented towards improving efficiencies. Seeking accountability from themselves and others. Compassionate collaborator with a deep sense of camaraderie. Willingness to be cross-functional, pick up new skills and cover new ground with/for the team. Striving for continuous improvement and high quality in their work. Strong work ethic and high spirit. Keen to understand and solve real-world problems through technology.
B.Skilled
5+ years of experience developing in Blue Prism. CS, Engineering or similar university background is a MUST HAVE. Blue Prism certification is a MUST HAVE. Knowledge of Blue Prism's architectural/infrastructure components. Proficiency in core Python libraries like pandas, NumPy, etc. Exposure to AI/ML frameworks like TensorFlow, PyTorch, scikit-learn, etc. Understanding of NLP techniques like text summarization, sentiment analysis, and named entity recognition is good to have. In-depth understanding of AWS components (RDS, EC2, S3, IAM, CloudWatch, Lambda, SageMaker, VPC) is good to have. Experience with VAULT, PASSPORT, GitLab for UAM/Config Management. Exposure to Terraform code for deploying AWS services is good to have. Professional experience with SQL, .NET, C#, HTTP APIs and Web Services. Experience designing, developing, deploying and maintaining software. Experience working in a scrum/agile environment. Excellent communication skills in English.
In return, we'll provide: Being part of a fast-paced environment and performance-driven culture; various opportunities to grow technically and personally via side projects, hackathons, conferences and your involvement in the community; contributing to a high-scale, complex, world-renowned product and seeing the real-time impact of your work on millions of travelers worldwide; the opportunity to utilize technical expertise, leadership capabilities and entrepreneurial spirit; technical, behavioral and interpersonal competence advancement via on-the-job opportunities and experimental projects.

Posted 2 months ago

Apply

5 - 10 years

50 - 55 Lacs

Bengaluru

Hybrid

Naukri logo

WHAT YOU'LL DO
As a member of the team, you will be responsible for developing, testing, and deploying data-driven software products in AWS Cloud. You will work with a team consisting of a data scientist, an enterprise architect, data engineers and business users to enhance the products' feature sets.
Qualification & Skills
Mandatory: Knowledge and experience in audit, expense and firm-level data elements. Knowledge and experience in audit and compliance products/processes. End-to-end product development lifecycle knowledge/exposure. Strong in AWS Cloud and associated services like Elastic Beanstalk, SageMaker, EFS, S3, IAM, Glue, Lambda, SQS, SNS, KMS, Encryption, Secrets Manager. Strong experience in Snowflake database operations. Strong in SQL and the Python programming language. Strong experience in a web development framework (Django). Strong experience in React and associated frameworks (Next.js, Tailwind, etc.). Experience in CI/CD pipelines and DevOps methodology. Experience in SonarQube integration and best practices. Implementation of security best practices for web applications and cloud infrastructure. Knowledge of Wiz.io for security protocols related to the AWS Cloud Platform.
Nice to Have: Knowledge of data architecture, data modeling, best practices and security policies in the Data Management space. Basic data science knowledge preferred. Experience in KNIME/Tableau/Power BI.
Experience & Education: Between 5 to 15 years of IT experience. Bachelor's/Master's degree from an accredited college/university in a business-related or technology-related field.

Posted 2 months ago

Apply

9 - 12 years

11 - 14 Lacs

Bengaluru

Work from Office

Naukri logo

About us: Working at Target means helping all families discover the joy of everyday life. We bring that vision to life through our values and culture. We are building a Machine Learning Platform to enable MLOps capabilities that help data scientists and ML engineers at Target implement ML solutions at scale. It encompasses building the feature store, model ops, experimentation, iteration, monitoring, explainability, and continuous improvement of the machine learning lifecycle. You will be part of a team building scalable applications by leveraging the latest technologies. Connect with us if you want to join us on this exciting journey.
Roles and responsibilities: Build and maintain machine learning infrastructure that is scalable, reliable and efficient. Be familiar with Google Cloud infrastructure and MLOps. Write highly scalable APIs. Deploy and maintain machine learning models, pipelines and workflows in a production environment. Collaborate with data scientists and software engineers to design and implement machine learning workflows. Implement monitoring and logging tools to ensure that machine learning models are performing optimally. Continuously improve the performance, scalability and reliability of machine learning systems. Work with teams to deploy and manage infrastructure for machine learning services. Create and maintain technical documentation for machine learning infrastructure and workflows. Stay up to date with the latest developments in technologies.
Tech stack: GCP cloud skills, machine learning engineering skills, Python, microservices, API development, Cassandra, Elasticsearch, Postgres, Kafka, Docker, CI/CD; add-on: Java + Spring Boot.
Required Skills: Bachelor's or Master's degree in computer science, engineering or a related field. 9+ years of experience in software development and machine learning engineering. Deep experience with Python, API development and microservices. Good-to-have skills in implementing end-to-end engineering applications using JVM languages; experience with Java and Spring Boot preferred. Expert in building high-performance APIs. Experience with DevOps practices and tools such as Kubernetes, Docker, Jenkins, Git. Experience with GCP MLOps is required. Good to have some understanding of machine learning concepts and frameworks, deep learning, LLMs, etc. Good to have some experience with MLOps platforms such as Kubeflow, MLflow, SageMaker, etc. Good to have experience deploying machine learning models in a production environment. Good to have experience with data streaming technologies such as Kafka, Kinesis, etc. Strong analytical and problem-solving skills.
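The "monitoring and logging tools to ensure that machine learning models are performing optimally" responsibility can be sketched as a rolling latency monitor of the kind an ML-platform team might run beside a model server. All names and the 250 ms budget are hypothetical; a real deployment would export these numbers to Prometheus or a similar system rather than compute them inline.

```python
# Rolling p95 latency monitor over a fixed-size window (nearest-rank
# percentile). Hypothetical sketch, not a production metrics pipeline.
import math
from collections import deque

class LatencyMonitor:
    def __init__(self, window: int = 100, p95_budget_ms: float = 250.0) -> None:
        self.samples = deque(maxlen=window)  # oldest samples fall off
        self.budget = p95_budget_ms

    def record(self, latency_ms: float) -> None:
        self.samples.append(latency_ms)

    def p95(self) -> float:
        ordered = sorted(self.samples)
        # Nearest-rank method: ceil(0.95 * n) gives a 1-based rank.
        rank = max(0, math.ceil(0.95 * len(ordered)) - 1)
        return ordered[rank]

    def healthy(self) -> bool:
        return self.p95() <= self.budget

monitor = LatencyMonitor(p95_budget_ms=250.0)
for ms in [120, 130, 110, 500, 140]:
    monitor.record(ms)
print(monitor.p95(), monitor.healthy())  # one 500 ms outlier breaks the budget
```

Tail percentiles rather than averages are the conventional signal here, since a handful of slow inferences can violate an SLO while the mean still looks fine.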

Posted 2 months ago

Apply

4 - 6 years

25 - 30 Lacs

Bengaluru

Work from Office

Naukri logo

3+ years of work experience in Python programming for AI/ML, deep learning, and generative AI model development. Proficiency in TensorFlow/PyTorch, Hugging Face Transformers and LangChain libraries. Hands-on experience with NLP, LLM prompt design and fine-tuning, embeddings, vector databases and agentic frameworks. Strong understanding of ML algorithms, probability and optimization techniques. 6+ years of experience in deploying models with Docker, Kubernetes, and cloud services (AWS Bedrock, SageMaker, GCP Vertex AI) through APIs, and using MLOps and CI/CD pipelines. Familiarity with retrieval-augmented generation (RAG), cache-augmented generation (CAG), retrieval-integrated generation (RIG), and low-rank adaptation (LoRA) fine-tuning. Ability to write scalable, production-ready ML code and optimized model inference. Experience developing ML pipelines for text classification, summarization and chat agents. Prior experience with SQL and NoSQL databases, and Snowflake/Databricks.
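The retrieval step of the retrieval-augmented generation (RAG) pattern mentioned above can be shown in miniature. This toy ranks documents by cosine similarity of bag-of-words vectors; real systems use dense embeddings and a vector database, and the sample documents are invented for illustration.

```python
# Toy RAG retrieval: pick the document most similar to the query,
# which would then be passed to an LLM as context. Illustrative only.
import math
import re
from collections import Counter
from typing import List

def vectorize(text: str) -> Counter:
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: List[str], k: int = 1) -> List[str]:
    qv = vectorize(query)
    return sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)[:k]

docs = [
    "SageMaker endpoints serve model predictions over HTTPS.",
    "Vector databases store embeddings for similarity search.",
    "Kubernetes schedules containers across a cluster.",
]
context = retrieve("how are embeddings used for similarity search", docs)
print(context[0])  # the embeddings/similarity document ranks first
```

Swapping `vectorize` for a dense embedding model and the `sorted` call for an approximate nearest-neighbor index is what separates this sketch from a production RAG retriever; the retrieve-then-generate shape stays the same.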

Posted 2 months ago

Apply

5 - 7 years

8 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

Position Overview: The Data Platform and Analytics Services (DPaAS) team in Finance IT is looking for a DevOps Senior Analyst to contribute to and guide the shared cloud infrastructure/data platform for Corporate Applications and US Market Solutions. This is a key growth area for the Finance organization and will lay a strong foundation for the cloud-based data platform to enable accurate and timely insights for stakeholders to make informed and strategic decisions. The DevOps Analyst will be responsible for building a shared framework of cloud services as well as related tools and processes. This will enable the build-out of new data platforms or the enhancement of existing data platforms, leveraging AWS and/or Oracle cloud. The ideal candidate will have experience in the design, development and automation of scalable cloud infrastructure supporting data workloads. The Senior DevOps Analyst must possess a combination of systems, technology and architecture experience, optimizing cost, reliability, security, performance, and operational efficiency in order to drive innovation in both DevOps technology and processes.
Responsibilities: Collaborate with the Solution/Data Architect to provision and automate cloud infrastructure for hosting data applications using Terraform. Create, maintain, and enhance pipelines for Continuous Integration (CI) and Continuous Deployment (CD) of infrastructure and application code. Implement Cigna's security standards and controls governing cloud-based systems by partnering with the information protection/security team. Adhere to Cigna's cloud compliance requirements for AWS accounts. Monitor and log important network, system and application activity utilizing industry-standard tools. Troubleshoot issues based on alerts and logs. Administer Linux- and Windows-based systems. Develop KPIs that provide in-depth visibility into system health. Establish interfaces with SAML/SSO providers in Cigna. Ensure high availability, scalability, and security of production systems. Maintain awareness of industry best practices in the area of DevOps and evaluate their application to the data platform.
Qualifications:
Required Skills: The ideal candidate must have a broad and deep technical understanding of the technologies in this field, including but not limited to IaC, RaaS, and PaaC. Experience creating integration and deployment pipelines for data platforms is required. Experience in AWS compute (EC2, Lambda), networking (VPC, subnets, firewalls, etc.), storage (S3, EBS, EFS), security (IAM), encryption (KMS, TLS), data and analytics (Redshift, Glue, RDS), AI/ML (SageMaker), and containers (ECS, EKS) is needed for this role. Experience with open-source tools/technologies such as Airflow, Jenkins, Git, GitHub Actions, Terraform, Ansible, Prometheus, Grafana, Python, etc. AWS certifications (for example, AWS Certified DevOps Engineer) preferred.
Desired Skills: Excellent written and verbal communication skills to effectively communicate across teams and roles. Excellent analytical/troubleshooting skills and willingness to learn and apply innovative technologies. Ability to work collaboratively in a fast-paced, agile environment. Demonstrable ability to deliver projects on time, with high quality, and within budget.
Required Experience & Education: Bachelor of Science in Computer Science, Software Engineering, IT or a related technical discipline, or an equivalent combination of training and experience. 5+ years of hands-on technical expertise in DevOps using AWS cloud.
Work Shift: 1 PM to 10 PM IST, with door-to-door pick-up and drop.
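The CI/CD responsibility described above reduces to a simple control-flow shape: ordered stages where the first failure halts the pipeline, as Jenkins or GitHub Actions would. The stage names below (lint, terraform-plan, etc.) are hypothetical placeholders, not a real configuration.

```python
# Sketch of a fail-fast CI/CD stage runner. Each stage is a callable
# returning True on success; the pipeline stops at the first failure.
from typing import Callable, List, Tuple

def run_pipeline(stages: List[Tuple[str, Callable[[], bool]]]) -> List[str]:
    completed = []
    for name, stage in stages:
        if not stage():
            break          # fail fast: later stages never run
        completed.append(name)
    return completed

stages = [
    ("lint", lambda: True),
    ("unit-tests", lambda: True),
    ("terraform-plan", lambda: False),  # simulated failure
    ("terraform-apply", lambda: True),  # never reached
]
print(run_pipeline(stages))
```

The fail-fast ordering is why infrastructure pipelines put `terraform plan` before `terraform apply`: a failed plan must prevent any change from reaching the environment.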

Posted 2 months ago

Apply

2 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Do you want to make a global impact on patient health? Do you thrive in a fast-paced environment that integrates scientific, clinical, and commercial domains through engineering, data science, and AI? Join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA) to leverage cutting-edge technology for critical business decisions and enhance customer experiences for colleagues, patients, and physicians. Our team of engineering, data science, and AI professionals is at the forefront of Pfizer’s transformation into a digitally driven organization, using data science and AI to change patients’ lives. The Data Science Industrialization team is a key driver of Pfizer’s digital transformation, leading process and engineering innovations to advance AI and data science applications from prototypes and MVPs to full production. As a Sr. Associate, AI and Data Science Full Stack Engineer, you will join the Data Science Industrialization team. Your responsibilities will include implementing AI solutions at scale for Pfizer business. You will iteratively develop and continuously improve data science workflows, AI-based software solutions and AI components.
Role Responsibilities
Contribute to the end-to-end build and deployment of data science and analytics products and AI modules. Develop server-side logic using back-end technologies such as Python. Contribute to the implementation of data ETL pipelines using Python and SQL. Build web applications with JavaScript frameworks. Build data visualizations and data applications to enable data exploration and insights generation (e.g., Tableau, Power BI, Dash, Shiny, Streamlit). Maintain infrastructure and tools for software development and deployment using IaC tools. Automate processes for continuous integration, delivery, and deployment (CI/CD pipeline) to ensure smooth software delivery. Implement logging and monitoring tools to gain insights into system behavior. Collaborate with data scientists, engineers, and colleagues from across Pfizer to integrate AI and data science models into production solutions. Stay up to date with emerging technologies and trends in your field.
Basic Qualifications
Bachelor's or Master's degree in Computer Science or a related field (or equivalent experience). 2+ years of experience in software engineering, data science, or related technical fields. Experience in programming languages such as Python or R. Familiarity with back-end technologies, databases (SQL and NoSQL), and RESTful APIs. Familiarity with data manipulation and preprocessing techniques, including data cleaning, data wrangling and feature engineering. Highly self-motivated, capable of delivering both independently and through strong team collaboration. Ability to creatively tackle new challenges and step outside your comfort zone. Strong English communication skills (written and verbal).
Preferred Qualifications
Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems or a related discipline. Experience in CI/CD integration (e.g., GitHub, GitHub Actions) and containers (e.g., Docker). Understanding of statistical modeling, machine learning algorithms, and data mining techniques. Experience with data-science-enabling technology, such as Dataiku Data Science Studio, AWS SageMaker or other data science platforms. Knowledge of BI back-end concepts like star schema and snowflake schema. Familiarity with low-code dashboard tools like Tableau, Power BI, Dash and Streamlit. Experience developing dynamic and interactive web applications; familiarity with React, AngularJS, Vue. Experience with Infrastructure as Code (IaC) tools such as Terraform, Ansible, or CloudFormation. Familiarity with cloud-based analytics ecosystems (e.g., AWS, Snowflake).
Hands-on experience working in Agile teams, processes, and practices. Ability to work non-traditional work hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia). Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech
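The "data ETL pipelines using Python and SQL" responsibility follows a fixed extract-transform-load shape that a minimal sketch can show end to end. The table, column names, and sample rows below are invented for illustration; a real pipeline would read from source systems and load into a warehouse such as Snowflake rather than in-memory SQLite.

```python
# Minimal ETL sketch: extract raw records, clean and type them,
# load into SQL, then verify with a query. Hypothetical schema.
import sqlite3

# Extract: raw records as they might arrive from a source system.
raw = [
    {"patient_id": "P1", "visits": "3"},
    {"patient_id": "P2", "visits": ""},   # missing value: row is dropped
    {"patient_id": "P3", "visits": "5"},
]

# Transform: drop incomplete rows and cast strings to integers.
clean = [(r["patient_id"], int(r["visits"])) for r in raw if r["visits"]]

# Load: write the cleaned rows into a SQL table and query them back.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE visits (patient_id TEXT, n_visits INTEGER)")
conn.executemany("INSERT INTO visits VALUES (?, ?)", clean)
total = conn.execute("SELECT SUM(n_visits) FROM visits").fetchone()[0]
print(total)  # 8: the incomplete P2 row never reaches the table
```

Keeping the transform step as pure Python over plain tuples is what makes pipelines like this easy to unit-test before any database is involved.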

Posted 3 months ago

Apply

5 - 8 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

Do you want to make an impact on patient health around the world? Do you thrive in a fast-paced environment that brings together scientific, clinical, and commercial domains through engineering, data science, and AI? Then join Pfizer Digital’s Artificial Intelligence, Data, and Advanced Analytics organization (AIDA), where you can leverage cutting-edge technology to inform critical business decisions and improve customer experiences for our colleagues, patients and physicians. Our collection of engineering, data science, and AI professionals is at the forefront of Pfizer’s transformation into a digitally driven organization that leverages data science and AI to change patients’ lives. The Data Science Industrialization team within Data Science Solutions and Initiatives is a critical driver and enabler of Pfizer’s digital transformation, leading the process and engineering innovation to rapidly progress early AI and data science applications from prototypes and MVPs to full production. As a Senior Manager, AI and Data Science Solution Engineer, you will be a technical expert within the Data Science Industrialization team charged with architecting and implementing AI solutions and reusable AI components. You will identify, design, iteratively develop, and continuously improve reusable components for AI that accelerate use case delivery. You will implement best practices and maintain standards for AI application and API development, data engineering and data pipelining, data science and ML engineering, and prompt engineering to enable understanding and re-use, drive scalability, and optimize performance. In addition, you will be responsible for providing critical input into the AI ecosystem and platform strategy to promote self-service, drive productization and collaboration, and foster innovation.
Role Responsibilities
Develop scalable, reliable AI solutions and reusable software components. As a tech lead, enforce coding standards, best practices, and thorough testing (unit, integration, etc.) to ensure reliability and maintainability. Define and implement robust API and integration strategies to seamlessly connect reusable AI components with broader systems. Define and implement robust technical strategies in areas such as API integration to connect reusable AI components with broader systems, industrialized AI accelerators, and the delivery of scalable AI solutions. Demonstrate a proactive approach to identifying and resolving potential system issues. Train and guide junior developers on concepts such as data analytics, machine learning, AI, and software development principles, tools, and best practices. Foster a collaborative learning environment within the team by sharing knowledge and expertise. Act as a subject matter expert for solution engineering on cross-functional teams in bespoke organizational initiatives by providing thought leadership and execution support for software development needs. Direct research in areas such as data science, software development, data engineering and data pipelines, and prompt engineering, and contribute to the broader talent-building framework by facilitating related trainings. Communicate the value delivered through reusable AI components to end-user functions (e.g., Chief Marketing Office, PBG Commercial and Medical Affairs) and evangelize innovative ideas of reusable and scalable development approaches/frameworks/methodologies to enable new ways of developing AI solutions. Provide strategic and technical input to the AI ecosystem, including platform evolution, vendor scans, and new capability development. Partner with AI use case development teams to ensure successful integration of reusable components into production AI solutions. Partner with the AIDA Platforms team on end-to-end capability integration between enterprise platforms and internally developed reusable component accelerators (API registry, ML library/workflow management, enterprise connectors). Partner with the AIDA Platforms team to define best practices for reusable component architecture and engineering principles to identify and mitigate potential risks related to component performance, security, responsible AI, and resource utilization.
Basic Qualifications
Bachelor’s degree in an AI, data science, or computer engineering related area (Data Science, Computer Engineering, Computer Science, Information Systems, Engineering or a related discipline). 7+ years of work experience in data science, analytics, or solution engineering, with a track record of building and deploying complex software systems. Recognized by peers as an expert in data science, AI, or software engineering, with deep expertise in data science or back-end solution architecture and hands-on development. Expert knowledge of back-end technologies; familiarity with containerization technologies like Docker; understanding of API design principles; experience with distributed systems and databases; proficiency in writing clean, efficient, and maintainable code. Strong understanding of the Software Development Life Cycle (SDLC) and the data science development lifecycle (CRISP). Demonstrated experience interfacing with internal and external teams to develop innovative AI and data science solutions. Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.). Highly self-motivated to deliver both independently and with strong team collaboration. Ability to creatively take on new challenges and work outside your comfort zone. Strong English communication skills (written & verbal).
Preferred Qualifications
Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems or a related discipline. Experience in solution architecture and design. Experience in software/product engineering. Strong hands-on skills in ML engineering and data science (e.g., Python, R, SQL, industrialized ETL software). Experience with data-science-enabling technology, such as Dataiku Data Science Studio, AWS SageMaker or other data science platforms. Experience in CI/CD integration (e.g., GitHub, GitHub Actions or Jenkins). Deep understanding of MLOps principles and tech stack (e.g., MLflow). Experience with Dataiku Data Science Studio. Hands-on experience working in Agile teams, processes, and practices. Ability to work non-traditional work hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia).
Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates. Information & Business Tech

Posted 3 months ago

Apply

2 - 7 years

4 - 9 Lacs

Mumbai

Work from Office

Naukri logo

- Hands-on project experience in AWS cloud services
- Good knowledge of SQL and experience working with databases like Oracle, MS SQL, etc.
- Experience with AWS services such as S3, RDS, EMR, Redshift, Glue, SageMaker, DynamoDB, Lambda,

Posted 3 months ago

Apply

7 - 12 years

9 - 14 Lacs

Chennai

Work from Office

Naukri logo

AWS Cloud Expertise: Solid experience with AWS cloud infrastructure, specifically using CDK or CloudFormation templates.
Infrastructure Management: Responsible for planning, implementing, and scaling AWS cloud infrastructure.
CI/CD Pipelines: Implement and maintain continuous integration/continuous delivery (CI/CD) pipelines for automated infrastructure provisioning.
Collaboration: Work closely with architecture and engineering teams to design and implement scalable software services.
Data & AI Platforms: Experience designing and building data and AI platform environments on AWS, utilizing services like EMR, EKS, EC2, ELB, RDS, Lambda, API Gateway, Kinesis, S3, DynamoDB, ECS, OpenSearch, and SageMaker.
Cloud-Native Applications: Experience building and maintaining cloud-native applications.
Automation Skills: Strong automation skills, particularly with Python.
DevOps Tools: Proficiency with DevOps tools such as Docker, GitHub, GitHub Actions, Kubernetes, and SonarQube.
Monitoring: Experience with monitoring solutions like CloudWatch, the ELK stack, and Prometheus.
Infrastructure as Code (IaC): Understanding and experience writing Infrastructure-as-Code using tools like CloudFormation or Terraform.
Scripting: Proficient in script development and various scripting languages.
Troubleshooting: Experience troubleshooting distributed systems.
Communication: Excellent communication and collaboration skills.
Key Skills: AWS Services, GitHub, GitHub Actions, Docker, Groovy Script, Python, and TypeScript

Posted 3 months ago

Apply

15 - 20 years

40 - 55 Lacs

Bengaluru

Work from Office


Location: Pan India. Grade: E1. The Opportunity: Capgemini is seeking a Director/Senior Director level Executive for AWS Practice Lead. This person should have: 15+ years of experience with at least 10 in the Data and Analytics domain, of which a minimum of 3 years on big data and Cloud. Multi-skilled professional with strong experience in Architecture and Advisory, Offer and Asset creation, People hiring and training. Experience on at least 3 sizeable AWS engagements spanning over 18 months as a Managing Architect/Advisor, preferably both migration and cloud-native implementations. Hands-on experience on at least 4 native services like EMR, S3, Glue, Lambda, RDS, Redshift, SageMaker, QuickSight, Athena, Kinesis. Client facing with strong communication and articulation skills; should be able to engage with CXO-level audiences. Must be hands-on in writing solutions and doing estimations in support of RFPs. Strong in Data Architecture and management: DW, Data Lake, Data Governance, MDM. Able to translate business and technical requirements into architectural components. Nice to have: Multi-skilled professional with strong experience in deal solutioning, creating GTM strategy, delivery handholding. Must be aware of relevant leading tools and concepts in the industry. Must be flexible for short-term travel up to 3 months across countries. Must be able to define new service offerings and support GTM strategy. Must have exposure to initial setup of activities, including infrastructure setup like connectivity, security policies, configuration management, DevOps etc. Architecture certification preferred: TOGAF or other industry-acknowledged certifications. Experience with replication, high availability, archiving, backup & restore, and disaster recovery/business continuity data best practices. Our Ideal Candidate: Strong behavioral & collaboration skills. Excellent verbal and written communication skills. Should be a good listener, logical and composed in explaining points of view.
Ability to work in collaborative, cross-functional, and multi-cultural teams. Excellent leadership skills, with the ability to generate stakeholder buy-in and lead through influence at a senior management level. Should have strong negotiation skills and the ability to handle conflict situations.

Posted 3 months ago

Apply

4 - 7 years

16 - 22 Lacs

Pune, Coimbatore, Hyderabad

Work from Office


Job Title: SSE - AWS MLOps Engineer. Primary Skills: SageMaker, Lambda, Python. Location: Hyderabad/Pune/Coimbatore. Mode of work: 5 days work from office. Experience: 4 to 7 years. Job Summary: As an MLOps Engineer, you will play a crucial role in developing and implementing robust MLOps infrastructures, ensuring data integrity, optimizing ML workflows, and enabling advanced analytics capabilities. You will work closely with cross-functional teams to understand business requirements, design CI/CD solutions, and collaborate with the DS team across the organization. Responsibilities: Model Deployment: Deploy, monitor, and manage machine learning models in AWS environments (SageMaker, EC2, Lambda). Automation: Develop and maintain CI/CD pipelines for ML workflows using tools like GitLab, AWS CodePipeline, CodeBuild, and Jenkins. Infrastructure Management: Design and manage scalable, reliable, and cost-effective AWS infrastructure for ML workloads (S3, RDS, DynamoDB, etc.). Monitoring and Logging: Implement monitoring and logging solutions to ensure models are performing as expected (CloudWatch, SageMaker Model Monitor). Collaboration: Work closely with Data Scientists and DevOps teams to integrate ML models into production environments. Requirements. Must have: 4+ years of experience in MLOps, DevOps, or related fields. Should be able to drive requirements, follow-up, and collaboration with cross-functional teams. Hands-on experience with AWS services like SageMaker, EC2, Lambda, S3, and RDS. Proficiency in Python and experience with ML frameworks like TensorFlow, PyTorch, or Scikit-Learn. Experience with CI/CD tools and best practices. Familiarity with Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation. Knowledge of data engineering tools and practices. Knowledge of Kubernetes or Docker. Effective communication and collaboration skills, with the ability to interact with stakeholders at all levels.
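Deploying a model behind a SageMaker endpoint, as the responsibilities above describe, revolves around three API calls: CreateModel, CreateEndpointConfig, and CreateEndpoint. A minimal sketch of the middle step, building the request that would be passed to boto3's `create_endpoint_config` (the model and config names are hypothetical, and no AWS call is made here, so the request stays reviewable and unit-testable):

```python
def endpoint_config_request(config_name: str, model_name: str,
                            instance_type: str = "ml.m5.large",
                            instance_count: int = 1) -> dict:
    """Assemble the parameters for SageMaker's CreateEndpointConfig call.

    In production these keyword arguments would be passed to
    boto3.client("sagemaker").create_endpoint_config(**request);
    here we only construct the request dict.
    """
    return {
        "EndpointConfigName": config_name,
        "ProductionVariants": [
            {
                "VariantName": "AllTraffic",
                "ModelName": model_name,
                "InstanceType": instance_type,
                "InitialInstanceCount": instance_count,
            }
        ],
    }


request = endpoint_config_request("churn-model-config", "churn-model-v3")
```

Separating request construction from the API call is a common MLOps pattern: the same builder can be exercised in CI without AWS credentials.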

Posted 3 months ago

Apply

5 - 10 years

19 - 34 Lacs

Pune, Gurgaon, Noida

Work from Office


Hi, Iris Software is hiring an ML Ops Engineer for Noida, Gurugram and Pune locations. Required Skills: SageMaker, PySpark, AWS Services, MLOps. Shift time: 1:00 pm to 10:00 pm. JD: 5+ years of experience in ML Ops engineering or a related field. Strong proficiency in Python, with experience in machine learning libraries such as TensorFlow, PyTorch, scikit-learn, etc. Extensive experience with ML Ops frameworks like Kubeflow, MLflow, TensorFlow Extended (TFX), Kubeflow Pipelines, or similar. Strong experience in deploying and managing machine learning models in cloud environments (AWS, GCP, Azure). Proficiency in managing CI/CD pipelines for ML workflows using tools such as Jenkins, GitLab, CircleCI, etc. Hands-on experience with containerization (Docker) and orchestration (Kubernetes) technologies for model deployment. Interested candidates please share your resume at anu.c@irissoftware.com with the details below: Current Company - Total Experience - Relevant Experience in Python - Relevant Experience in Machine Learning - Relevant Experience in React - Current CTC - Expected CTC - Notice Period (if serving, please share LWD) - Current Location - Open for shift time 1:00 pm to 10:00 pm (Yes/No) - Reason for Job Change - Are you open to location Noida, Gurugram or Pune (Yes/No). Regards, Anu

Posted 3 months ago

Apply

4 years

0 Lacs

Andhra Pradesh, India


A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge. As part of our Analytics and Insights Consumption team, you'll analyze data to drive useful insights for clients to address core business issues or to drive strategic outcomes. You'll use visualization, statistical and analytics models, AI/ML techniques, ModelOps and other techniques to develop these insights. Years of Experience: Candidates with 4+ years of hands-on experience. Must Have: Internal & external stakeholder management. Familiarity with the CCaaS domain, CCaaS application development, contact center solution design & presales consulting. In-depth knowledge of CCaaS platforms like MS DCCP, Amazon Connect, NICE CXone, Genesys Cloud, Cisco Webex CC, Cisco HCS, UCCE/PCCE etc., including their architecture, functionalities, application development, and integration capabilities. Governance & communication skills. Hands-on configuration of Gen AI/LLM solutions built on top of CCaaS platforms (MS DCCP, Amazon Connect, Genesys Cloud/NICE CX), including: Develop and implement generative AI models to enhance customer interactions, including chatbots, virtual agents, and automated response systems. Speech science and conversational fine-tuning (grammar & pattern analysis). Collaborate with stakeholders to identify business needs and define AI-driven solutions that improve customer experiences. Analyze existing customer service processes and
workflows to identify areas for AI integration and optimization. Create and maintain documentation for AI solutions, including design specifications and user guides. Monitor and evaluate the performance of AI models, making adjustments as necessary to improve accuracy and effectiveness. Stay updated on the latest advancements in AI technologies and their applications in customer service and contact centers. Conduct training sessions for team members and stakeholders on the use and benefits of AI technologies in the contact center. Understanding of the fundamental ingredients of enterprise integration, including interface definitions and contracts; REST APIs or SOAP web services; SQL, MySQL, Oracle, PostgreSQL, DynamoDB, S3, RDS. Provide effective real-time demonstrations of CCaaS & AI (bot) platforms. High proficiency in defining top-notch customer-facing slides/presentations. Gen AI/LLM platform must-have technologies include Copilot, Copilot Studio, Amazon Bedrock, Amazon Titan, SageMaker, Azure OpenAI, Azure AI Services, Google Vertex AI, Gemini AI. Proficiency in data visualization tools like Tableau, Power BI, QuickSight and others. Nice To Have: Experience in CPaaS platforms (Twilio, Infobip) for synergies between Communication Platform as a Service & Contact Center as a Service. Understanding of cloud platforms (e.g., AWS, Azure, Google Cloud) and their services for scalable data storage, processing, and analytics. Work on high-velocity presales solution consulting engagements (RFP, RFI, RFQ). Define industry-specific use cases (BFS&I, Telecom, Retail, Manlog etc.). Work on high-volume presales consulting engagements including solution design document definition and commercial construct (CCaaS). Defining business cases.

Posted 4 months ago

Apply

0.0 - 4.0 years

0 Lacs

Indore, Madhya Pradesh

On-site


Experience: 2-5 years. Location: Indore. Type: On-site. Skills/Requirements: Python, AWS, Spark, GCP, ETL Pipeline, Hadoop, Kafka, Apache Airflow, Scala. Job Description: Job Title: Data Engineer (2-4 Years Experience). Location: Indore, Madhya Pradesh. Company: Golden Eagle IT Technologies Pvt. Ltd. Job Description: Golden Eagle IT Technologies Pvt. Ltd. is seeking a skilled Data Engineer with 2 to 4 years of experience to join our team in Indore. The ideal candidate will have a strong background in data engineering, big data technologies, and cloud platforms. You will work on designing, building, and maintaining efficient, scalable, and reliable data pipelines. Key Responsibilities: Develop and maintain ETL pipelines using tools like Apache Airflow, Spark, and Hadoop. Design and implement data solutions on AWS, including services such as DynamoDB, Athena, Glue Data Catalog, and SageMaker. Work with messaging systems like Kafka to manage data streaming and real-time data processing. Utilize Python and Scala for data processing, transformation, and automation. Ensure data quality and integrity across multiple sources and formats. Collaborate with data scientists, analysts, and other stakeholders to understand data needs and deliver solutions. Optimize and tune data systems for performance and scalability. Implement best practices for data security and compliance. Preferred Skills (Plus Points): Experience with infrastructure-as-code tools like Pulumi. Familiarity with GraphQL for API development. Experience with machine learning and data science workflows, especially using SageMaker. Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 2-4 years of experience in data engineering or a similar role. Proficiency in AWS cloud services and big data technologies. Strong programming skills in Python and Scala. Knowledge of data warehousing concepts and tools. Excellent problem-solving and communication skills.
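An ETL pipeline of the kind this role describes, whether orchestrated by Airflow or run standalone, is at heart three stages chained together. A toy, pure-Python sketch of that shape (the sample records and the in-memory "warehouse" are invented for illustration):

```python
def extract(rows):
    """Extract: in a real pipeline this would read from S3, Kafka, or a DB."""
    return list(rows)


def transform(rows):
    """Transform: drop incomplete records and normalise the city field."""
    return [
        {**r, "city": r["city"].strip().title()}
        for r in rows
        if r.get("city")
    ]


def load(rows, sink):
    """Load: append to an in-memory sink standing in for a warehouse table."""
    sink.extend(rows)
    return len(rows)


raw = [{"id": 1, "city": " indore "}, {"id": 2, "city": ""}]
warehouse = []
loaded = load(transform(extract(raw)), warehouse)
# loaded == 1; the blank-city record is dropped and " indore " becomes "Indore"
```

In Airflow, each of these functions would typically become its own task so failures can be retried stage by stage.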

Posted 5 months ago

Apply

5 - 8 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in. To start a career that is out of the ordinary, please apply... Job Details KANTAR is the world's leading insights, consulting, and analytics company. We understand how people think, feel, shop, share, vote, and view more than anybody else. With over 25,000 people, we combine the best of human understanding with advanced technologies to help the world's leading organizations succeed and grow. (For more details, visit www.kantar.com) The Global Data Science and Innovation (DSI) unit of KANTAR, nested within its Analytics Practice (https://www.kantar.com/expertise/analytics), is a fast-growing team of niche, elite data scientists responsible for all data science-led innovation within KANTAR. The unit has a strong internal and external reputation with global stakeholders and clients for handling sophisticated, cutting-edge analytics, using state-of-the-art techniques and deep mathematical/statistical rigor. The unit is responsible for most AI and Gen AI related initiatives within KANTAR (https://www.kantar.com/campaigns/artificial-intelligence), including bringing in the latest developments in the field of Machine Learning, Generative AI, Deep Learning, Computer Vision, NLP, Optimization, etc. to solve complex business problems in marketing analytics and consulting and build products that empower our colleagues around the world. Job profile: We are looking for a Senior AI Engineer to be part of our Global Data Science and Innovation team. As part of a high-profile team, the position offers a unique opportunity to work first-hand on some of the most exciting and challenging AI-led projects within the organization, and be part of a fast-paced, entrepreneurial environment.
As a senior member of the team, you will be responsible for working with your leadership team to build a world-class portfolio of AI-led solutions within KANTAR, leveraging the latest developments in AI/ML. You will be part of several initiatives to productionize multiple R&D PoCs and pilots that leverage a variety of AI/ML algorithms and technologies, particularly using (but not restricted to) Generative AI. As an experienced AI engineer, you will hold yourself accountable for the entire process of developing, scaling, and commercializing these enterprise-grade products and solutions. You will work hands-on as well as with a highly talented cross-functional, geography-agnostic team of data scientists and AI engineers. As part of the global data science and innovation team, you will be a representative and ambassador for data science/AI/ML-led solutions with internal and external stakeholders. Job Description: The candidate will be responsible for the following: Develop and maintain scalable and efficient AI pipelines and infrastructure for deployment. Deploy AI models and solutions into production environments, ensuring stability and performance. Monitor and maintain deployed AI systems to ensure they operate effectively and efficiently. Troubleshoot and resolve issues related to AI deployment, including performance bottlenecks and system failures. Optimize deployment processes to reduce latency and improve the scalability of AI solutions. Implement robust version control and model management practices to track AI model changes and updates. Ensure the security and compliance of deployed AI systems with industry standards and regulations. Provide technical support and guidance for deployment-related queries and issues.
Qualification, Experience, And Skills: Advanced degree from top-tier technical institutes in a relevant discipline. 4 to 10 years' experience, with at least the past few years working in Generative AI. Prior firsthand work experience in building and deploying applications on cloud platforms like Azure/AWS/Google Cloud using serverless architecture. Proficiency in tools such as Azure Machine Learning service, Amazon SageMaker, Google Cloud AI. Prior experience with containerization tools (e.g., Docker, Kubernetes), databases (e.g., MySQL, MongoDB), deployment tools (e.g., Azure DevOps), and big data tools (e.g., Spark). Ability to develop and integrate APIs; experience with RESTful services. Experience with continuous integration/continuous deployment (CI/CD) pipelines. Knowledge of Agile working methodologies for product development. Knowledge of (and potentially working experience with) LLMs and foundation models from OpenAI, Google, Anthropic and others. Hands-on coding experience in Python. Desired Skills That Would Be a Distinct Advantage: Preference given to past experience in developing/maintaining live deployments. Comfortable working in global set-ups with diverse cross-geography teams and cultures. Energetic, self-driven, curious, and entrepreneurial. Excellent (English) communication skills to address both technical audiences and business stakeholders. Meticulous and deep attention to detail. Able to straddle 'big picture' and 'details' with ease. Location: Chennai, Teynampet, Anna Salai, India. Kantar Rewards Statement: At Kantar we have an integrated way of rewarding our people based around a simple, clear and consistent set of principles. Our approach helps to ensure we are market competitive and also supports a pay-for-performance culture, where your reward and career progression opportunities are linked to what you deliver. We go beyond the obvious, using intelligence, passion and creativity to inspire new thinking and shape the world we live in.
Apply for a career that’s out of the ordinary and join us. We want to create equality of opportunity in a fair and supportive working environment where people feel included, accepted and are allowed to flourish in a space where their mental health and well-being is taken into consideration. We want to create a more diverse community to expand our talent pool, be locally representative, drive diversity of thinking and better commercial outcomes. Kantar is the world’s leading data, insights and consulting company. We understand more about how people think, feel, shop, share, vote and view than anyone else. Combining our expertise in human understanding with advanced technologies, Kantar’s 30,000 people help the world’s leading organisations succeed and grow.
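The model-versioning practice this role's responsibilities mention is usually delegated to a registry such as MLflow's: each registered model accumulates immutable versions, so a deployment can pin an exact one and roll back if a newer one misbehaves. A toy in-memory sketch of the idea (the class and method names are invented, not any real registry's API):

```python
class ModelRegistry:
    """Minimal in-memory stand-in for a model registry.

    Tracks immutable versions per model name so deployments can pin an
    exact version and roll back if a newer one misbehaves.
    """

    def __init__(self):
        self._versions = {}  # model name -> list of version metadata dicts

    def register(self, name: str, artifact_uri: str) -> int:
        """Append a new version for the model and return its version number."""
        versions = self._versions.setdefault(name, [])
        versions.append({"version": len(versions) + 1, "uri": artifact_uri})
        return versions[-1]["version"]

    def latest(self, name: str) -> dict:
        """Return metadata for the most recently registered version."""
        return self._versions[name][-1]


registry = ModelRegistry()
registry.register("summariser", "s3://models/summariser/1")
v2 = registry.register("summariser", "s3://models/summariser/2")
```

A production registry would also record lineage (training data, code commit, metrics) alongside each version; the sketch keeps only the version-to-artifact mapping.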

Posted 9 months ago

Apply