Home
Jobs
Companies
Resume

36 AWS CodeDeploy Jobs

Filter
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

0.0 - 5.0 years

2 - 7 Lacs

Bengaluru

Work from Office

Ensemble Energy is an exciting startup in the industrial IoT space focused on energy. Our mission is to accelerate the clean energy revolution by making it more competitive using the power of data. Ensemble's AI-enabled SaaS platform provides prescriptive analytics to power plant operators by combining the power of machine learning, big data, and deep domain expertise. As a Full Stack/IoT Intern, you will participate in developing and deploying frontend/backend applications, creating visualization dashboards, and developing ways to integrate high-frequency data from devices onto our platform.

Required Skills & Experience: React/Redux, HTML5, CSS3, JavaScript, Python, Django, and REST APIs. BS or MS in Computer Science or a related field. Strong foundation in Computer Science, with deep knowledge of data structures, algorithms, and software design. Experience with Git, CI/CD tools, Sentry, Atlassian software, and AWS CodeDeploy a plus.

Responsibilities: Contribute ideas to the overall product strategy and roadmap. Improve the codebase with continuous refactoring. Take ownership of platform engineering and application development as a self-starter. Work on multiple projects simultaneously and get things done. Take products from prototype to production. Collaborate with the team in Sunnyvale, CA to lead 24x7 product development.

Bonus: If you have worked on one or more of the below, highlight those projects when applying: Experience with time-series databases - M3DB, Prometheus, InfluxDB, OpenTSDB, ELK Stack. Experience with visualization tools like Tableau, KeplerGL, etc. Experience with MQTT or other IoT communication protocols a plus.
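
The listing's bonus skills mention MQTT for integrating high-frequency device data. As a hedged illustration only (not Ensemble's actual pipeline), the sketch below shows a minimal MQTT subscriber in Python, assuming the paho-mqtt 1.x callback API and a hypothetical broker host and topic:

```python
# Minimal sketch of device telemetry ingestion over MQTT.
# Assumes paho-mqtt 1.x; broker host and topic are hypothetical.
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "broker.example.com"   # hypothetical broker
TOPIC = "plant/+/telemetry"          # hypothetical topic pattern

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection is acknowledged.
    client.subscribe(TOPIC)

def on_message(client, userdata, msg):
    # Decode each reading; a real pipeline would batch these
    # into a time-series store rather than print them.
    reading = json.loads(msg.payload)
    print(msg.topic, reading)

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()
```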

Posted recently

Apply

8.0 - 12.0 years

35 - 50 Lacs

Bengaluru

Work from Office

Role: MLOps Engineer
Location: PAN India
Notice: Immediate to 60 days

Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI

Responsibilities: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift and model drift); experiment tracking; MLOps architecture; REST API publishing.

Job Responsibilities: Research and implement MLOps tools, frameworks, and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile, and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python for both ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience implementing CI/CD/CT pipelines. Experience with cloud platforms - preferably AWS - would be an advantage.
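
Since the listing names MLflow for experiment tracking, here is a minimal, hedged sketch of what logging a run looks like; the experiment name, parameters, and metrics are illustrative only:

```python
# Minimal sketch of MLflow experiment tracking.
# Experiment name, params, and metrics are hypothetical.
import mlflow

mlflow.set_experiment("churn-model")  # hypothetical experiment

with mlflow.start_run():
    mlflow.log_param("max_depth", 6)           # hyperparameter under test
    mlflow.log_metric("auc", 0.91)             # evaluation result
    mlflow.log_metric("data_drift_psi", 0.08)  # drift statistic per run
```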

Posted recently

Apply

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

We are seeking an experienced DevOps Engineer to join our team, focusing on automating and optimizing our deployment processes. The candidate will be responsible for setting up GitHub Actions and AWS CodePipeline to build and deploy Kore Chatbot and Product Training applications in the AWS environment. This position plays a crucial role in ensuring efficient, error-free deployments and upholding strong continuous integration and continuous delivery (CI/CD) practices.

About the Role:

CI/CD Pipeline Management: Design, implement, and maintain CI/CD pipelines using tools such as GitHub Actions and AWS CodePipeline. Debug and resolve issues within the CI/CD pipelines to ensure smooth and efficient operations. Develop build and deployment scripts for multiple CI/CD pipelines to support various applications and services. Build and deploy various technology stacks such as Java, Angular, NodeJS, React, etc.

Infrastructure as Code (IaC): Utilize IaC tools like CloudFormation and Terraform to automate cloud infrastructure management. Manage and optimize cloud resources on platforms such as AWS (ECS, Lambda, EC2, API Gateway, IAM, S3, CloudFront, RDS) and Google Cloud Platform (GCP).

Containerization and Orchestration: Create, manage, and deploy Docker containers and Kubernetes services to support scalable application deployment. Write and maintain Dockerfiles for containerized applications.

Collaboration and Support: Collaborate with development teams to understand build and deployment requirements for various technology stacks, including AWS, Angular, React, and others. Ensure all build, deployment, and automation needs are met for platforms such as Optimus, Kore, Docebo, Kaltura, and others.

Process Improvement: Continuously improve and streamline existing processes for enhanced efficiency and reliability. Utilize scripting tools like Python and Bash to streamline deployments.

Monitoring and Performance: Implement and manage Datadog monitoring to ensure system performance and reliability. Set up metrics, dashboards, and alarms in Datadog.

Team members are tasked with deploying packages to Development, QA, and Production environments; creating build and deployment scripts for multiple CI/CD pipelines; troubleshooting and resolving issues within CI/CD pipelines; maintaining version-based pipelines and performing bug fixes as needed; creating a base image pipeline as a one-time task; integrating UI services with Datadog for continuous and detailed monitoring; setting up metrics, dashboards, and alarms in Datadog for UI services; conducting a proof of concept for the LMS API Gateway; implementing AWS secrets management through the Infrastructure as Code pipeline; and automating the rotation of JFrog keys using AWS Lambda.

About You: A bachelor's degree in computer science, engineering, or a related discipline. Over 5 years of experience in DevOps roles, with a focus on CI/CD pipelines and cloud services. Proven experience in a DevOps or similar role. Strong expertise in CI/CD tools such as GitHub Actions and AWS CodePipeline. Proficient in using IaC tools like CloudFormation and Terraform. Experience with cloud platforms, particularly AWS and GCP. Solid understanding of Docker and Kubernetes. Strong scripting skills in Python and Bash. Excellent problem-solving and communication skills. Ability to work collaboratively in a team environment.
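
For a sense of the AWS CodePipeline work described above, this is a minimal hedged sketch of starting and polling a pipeline run with boto3; the pipeline name is hypothetical:

```python
# Minimal sketch: trigger an AWS CodePipeline release with boto3.
# The pipeline name is a placeholder.
import boto3

cp = boto3.client("codepipeline")

resp = cp.start_pipeline_execution(name="kore-chatbot-pipeline")
execution_id = resp["pipelineExecutionId"]

# Check the run's status (in practice, poll or subscribe to events).
status = cp.get_pipeline_execution(
    pipelineName="kore-chatbot-pipeline",
    pipelineExecutionId=execution_id,
)["pipelineExecution"]["status"]
print(status)  # e.g. "InProgress" or "Succeeded"
```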
#LI-AD2

What's in it For You:

Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.

Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.

Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals.
To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted recently

Apply

5.0 - 6.0 years

55 - 60 Lacs

Pune

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Big Table, GCP BigQuery, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fuelled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
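
The skills list above includes Airflow; as a hedged sketch of the kind of orchestration involved, here is a minimal two-task DAG, assuming Airflow 2.4+ (pipeline and task names are placeholders):

```python
# Minimal sketch of an Airflow DAG; names and task bodies are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from the landing zone")

def transform():
    print("clean and model the data")

with DAG(
    dag_id="daily_sales_etl",        # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task   # run transform after extract
```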

Posted 22 hours ago

Apply

7.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office

Data Engineer Skills and Qualifications:
SQL - Mandatory
Strong knowledge of AWS services (e.g., S3, Glue, Redshift, Lambda). - Mandatory
Experience working with DBT - Nice to have
Proficiency in PySpark or Python for big data processing. - Mandatory
Experience with orchestration tools like Apache Airflow and AWS CodePipeline. - Mandatory
Familiarity with CI/CD tools and DevOps practices.
Expertise in data modeling, ETL processes, and data warehousing.
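
Given the mandatory PySpark skill above, a minimal, hedged sketch of a batch job follows; bucket paths and column names are hypothetical:

```python
# Minimal sketch of a PySpark batch aggregation; paths/columns are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Read raw order events (placeholder S3 path).
orders = spark.read.parquet("s3://raw-bucket/orders/")

# Aggregate daily revenue per region, then write curated output.
daily = (
    orders
    .groupBy("region", F.to_date("order_ts").alias("order_date"))
    .agg(F.sum("amount").alias("revenue"))
)
daily.write.mode("overwrite").parquet("s3://curated-bucket/daily_revenue/")
```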

Posted 5 days ago

Apply

0.0 - 1.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Client - Cisco
Experience - 0-1 year
Location - Bangalore only (WFO all 5 days)

Job Summary: We are looking for an AWS Engineer with approximately 6 months of experience to join our growing team. The ideal candidate will have hands-on experience in designing, deploying, and managing AWS cloud infrastructure. You will work closely with development and operations teams to ensure the reliability, scalability, and security of our cloud-based applications.

Responsibilities: Design, implement, and maintain AWS cloud infrastructure using best practices. Deploy and manage applications on AWS services such as EC2, S3, RDS, VPC, and Lambda. Implement and maintain CI/CD pipelines for automated deployments. Monitor and troubleshoot AWS infrastructure and applications to ensure high availability and performance. Implement security best practices and ensure compliance with security policies. Automate infrastructure tasks using infrastructure-as-code tools (e.g., CloudFormation, Terraform). Collaborate with development and operations teams to resolve technical issues. Document infrastructure configurations and operational procedures. Participate in on-call rotations as needed. Optimize AWS costs and resource utilization.

Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 6 months of hands-on experience with AWS cloud services. Proficiency in AWS services such as EC2, S3, RDS, VPC, IAM, and Lambda. Experience with infrastructure-as-code tools (e.g., CloudFormation, Terraform). Experience with CI/CD pipelines and tools (e.g., Jenkins, AWS CodePipeline). Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Desire to learn new technologies.
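
As a hedged illustration of the hands-on AWS automation this role expects, the sketch below creates a versioned, encrypted S3 bucket with boto3; the bucket name and region are hypothetical:

```python
# Minimal sketch: provision a versioned, encrypted S3 bucket with boto3.
# Bucket name and region are placeholders.
import boto3

s3 = boto3.client("s3", region_name="ap-south-1")

s3.create_bucket(
    Bucket="example-app-artifacts",
    CreateBucketConfiguration={"LocationConstraint": "ap-south-1"},
)
s3.put_bucket_versioning(
    Bucket="example-app-artifacts",
    VersioningConfiguration={"Status": "Enabled"},
)
s3.put_bucket_encryption(
    Bucket="example-app-artifacts",
    ServerSideEncryptionConfiguration={
        "Rules": [{"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}]
    },
)
```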

Posted 1 week ago

Apply

3.0 - 7.0 years

2 - 6 Lacs

Hyderabad, Pune, Gurugram

Work from Office

Location: Pune, Hyderabad, Gurgaon, Bangalore [Hybrid]

Skills: Python, PySpark, SQL; AWS services - AWS Glue, S3, IAM, Athena, AWS CloudFormation, AWS CodePipeline, AWS Lambda, Transfer Family, AWS Lake Formation, and CloudWatch; CI/CD automation of AWS CloudFormation stacks.

Posted 1 week ago

Apply

8.0 - 11.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Role: MLOps Engineer
Location: Coimbatore
Mode of Interview: In Person

Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI

Responsibilities: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift and model drift); experiment tracking; MLOps architecture; REST API publishing.

Job Responsibilities: Research and implement MLOps tools, frameworks, and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile, and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python for both ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience implementing CI/CD/CT pipelines. Experience with cloud platforms - preferably AWS - would be an advantage.

Posted 1 week ago

Apply

8.0 - 10.0 years

27 - 42 Lacs

Chennai

Work from Office

Job Summary: We are seeking a Senior Cloud Engineer with 8 to 10 years of experience to join our team. The ideal candidate will have extensive experience with various AWS services and a strong background in cloud infrastructure. This role involves designing, implementing, and maintaining cloud solutions to support our business objectives. The work model is hybrid and the shift is day. No travel is required.

Responsibilities: Lead the design and implementation of scalable cloud solutions using AWS services. Oversee the deployment and management of applications on AWS ECS and AWS EKS. Provide expertise in Amazon DynamoDB for database management and optimization. Implement serverless architectures using AWS Lambda. Manage data storage solutions with Amazon S3 Glacier. Ensure secure and reliable connections using AWS Direct Connect. Monitor and troubleshoot applications with AWS X-Ray. Automate CI/CD pipelines using AWS CodePipeline, CodeDeploy, CodeCommit, and CodeBuild. Integrate event-driven architectures with Amazon EventBridge. Administer AWS Organizations and AWS License Manager for account and license management. Manage virtual desktops with AWS WorkSpaces. Utilize Amazon DocumentDB for document database solutions. Implement and manage AWS Systems Manager for operational insights and automation. Ensure secure communications with AWS Certificate Manager. Use AWS OpsWorks for configuration management. Provide recommendations using AWS Trusted Advisor and the AWS Well-Architected Framework. Manage single sign-on with AWS SSO. Collaborate with cross-functional teams to ensure cloud solutions align with business goals. Maintain documentation and provide training to team members on cloud best practices. Stay updated with the latest AWS services and features to continuously improve our cloud infrastructure.

Qualifications: Must have experience with Amazon DynamoDB, AWS ECS, AWS Lambda, Amazon S3 Glacier, AWS Direct Connect, AWS X-Ray, AWS CodePipeline, AWS CodeDeploy, AWS CodeCommit, AWS CodeBuild, Amazon EventBridge, AWS Organizations, AWS License Manager, AWS WorkSpaces, Amazon DocumentDB, AWS Systems Manager, AWS EKS, AWS Certificate Manager, AWS OpsWorks, AWS Trusted Advisor, the AWS Well-Architected Framework, and AWS SSO. Nice to have experience in the Devices domain. Strong problem-solving skills and ability to work in a hybrid work model. Excellent communication and collaboration skills. Ability to work independently and manage multiple tasks effectively. Strong understanding of cloud security best practices. Experience with infrastructure as code and automation tools. Proven track record of delivering high-quality cloud solutions.

Certifications Required: AWS Certified Solutions Architect, AWS Certified DevOps Engineer.
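
To illustrate the serverless, event-driven pattern named above (Lambda plus EventBridge), here is a minimal hedged sketch of a handler; the event shape and follow-up actions are illustrative only:

```python
# Minimal sketch of a Lambda handler invoked by an EventBridge rule.
# The event contents and follow-up actions are illustrative only.
import json

def handler(event, context):
    # EventBridge delivers the matched event as a JSON document.
    detail = event.get("detail", {})
    print("received event:", json.dumps(detail))
    # A real handler would act on the event, e.g. start a deployment
    # or write an audit record to DynamoDB.
    return {"statusCode": 200}
```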

Posted 2 weeks ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: Kochi
Mode of Interview: In Person
Date: 14th June 2025 (Saturday)

Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI

Responsibilities: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift and model drift); experiment tracking; MLOps architecture; REST API publishing.

Job Responsibilities: Research and implement MLOps tools, frameworks, and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile, and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python for both ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience implementing CI/CD/CT pipelines. Experience with cloud platforms - preferably AWS - would be an advantage.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

13 - 22 Lacs

Chennai

Work from Office

We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000 experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That is where you come in!

REQUIREMENTS: Total experience of 3+ years. Hands-on experience with Java, JavaScript, Angular, Spring Boot, and REST APIs. Experience with Selenium, REST Assured, and Figma for design collaboration. Familiarity with PostgreSQL and MySQL; strong SQL skills. Proficiency with CI/CD tools such as Maven and Jenkins. Experience using version control systems like Git or SVN. Experience in developing and supporting web-based applications. Strong understanding of UML and design patterns. Proficiency in Service-Oriented Architecture (SOA) and Web Services (Apache CXF, JAX-WS, JAX-RS, SOAP, REST). Hands-on experience with multithreading and cloud development. Strong working experience in Data Structures and Algorithms, Unit Testing, and Object-Oriented Programming (OOP) principles. Exposure to cloud platforms like AWS or Azure (AWS CodeDeploy preferred). Should have good communication and customer handling skills.

RESPONSIBILITIES: Writing and reviewing great quality code. Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements. Mapping decisions with requirements and translating them to developers. Identifying different solutions and narrowing down the best option that meets the clients' requirements. Defining guidelines and benchmarks for NFR considerations during project implementation. Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers. Reviewing architecture and design on various aspects like extensibility, scalability, security, design patterns, user experience, NFRs, etc., and ensuring that all relevant best practices are followed. Developing and designing the overall solution for defined functional and non-functional requirements, and defining technologies, patterns, and frameworks to materialize it. Understanding and relating technology integration scenarios and applying these learnings in projects. Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken. Carrying out POCs to make sure that suggested designs/technologies meet the requirements.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Chennai

Work from Office

Data Engineer Skills and Qualifications:
SQL - Mandatory
Strong knowledge of AWS services (e.g., S3, Glue, Redshift, Lambda). - Mandatory
Experience working with DBT - Nice to have
Proficiency in PySpark or Python for big data processing. - Mandatory
Experience with orchestration tools like Apache Airflow and AWS CodePipeline. - Mandatory

Job Summary: We are seeking a skilled Developer with 3 to 6 years of experience to join our team. The ideal candidate will have expertise in AWS DevOps, Python, and SQL. This role involves working in a hybrid model with day shifts and no travel requirements. The candidate will contribute to the company's purpose by developing and maintaining high-quality software solutions.

Responsibilities: Develop and maintain software applications using AWS DevOps, Python, and SQL. Collaborate with cross-functional teams to design and implement new features. Ensure the scalability and reliability of applications through effective coding practices. Monitor and optimize application performance to meet user needs. Provide technical support and troubleshooting for software issues. Implement security best practices to protect data and applications. Participate in code reviews to maintain code quality and consistency. Create and maintain documentation for software applications and processes. Stay updated with the latest industry trends and technologies to enhance skills. Work in a hybrid model, balancing remote and in-office work as needed. Communicate effectively with team members and stakeholders to ensure project success. Contribute to the continuous improvement of development processes and methodologies. Ensure timely delivery of projects while maintaining high-quality standards.

Qualifications: Possess a strong understanding of AWS DevOps, including experience with deployment and management of applications on AWS. Demonstrate proficiency in Python programming, with the ability to write clean and efficient code. Have experience with SQL for database management and querying. Show excellent problem-solving skills and attention to detail. Exhibit strong communication and collaboration skills. Be adaptable to a hybrid work model and able to manage time effectively.

Posted 3 weeks ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Hyderabad

Work from Office

Title: Kubernetes Modernization Engineer
Experience: 5-7 years
Location: Hyderabad/Bangalore

Key Responsibilities:
- Lead Automation: Design and implement an automation framework to migrate workloads to Kubernetes platforms such as AWS EKS, Azure AKS, Google GKE, Oracle OKE, and OpenShift.
- Develop Cloud-Native Automation Tools: Build automation tools using Go (Golang) for workload discovery, planning, and transformation into Kubernetes artifacts.
- Migrate Kubernetes Across Cloud Providers: Plan and execute seamless migrations of Kubernetes workloads from one cloud provider to another (AWS to Azure, GCP to OCI, etc.) with minimal disruption.
- Leverage Open-Source Technologies: Utilize Helm, Kustomize, ArgoCD, and other popular open-source frameworks to streamline cloud-native adoption.
- CI/CD & DevOps Integration: Architect and implement CI/CD pipelines using Jenkins (including Jenkinsfile generation) and cloud-native tools like AWS CodePipeline, Azure DevOps, and GCP Cloud Build to support diverse Kubernetes deployments.
- Security & Compliance: Define and enforce security best practices, implement zero-trust principles, and proactively address vulnerabilities in automation workflows.
- Technical Leadership & Mentorship: Lead and mentor a team of developers, fostering expertise in Golang development, Kubernetes, and DevOps best practices.
- Stakeholder Collaboration: Work closely with engineering, security, and cloud teams to align modernization and migration efforts with business goals and project timelines.
- Performance & Scalability: Ensure high performance, scalability, and security across automation frameworks and multi-cloud Kubernetes deployments.
- Continuous Innovation: Stay ahead of industry trends, integrating emerging tools and methodologies to enhance automation and Kubernetes portability.

Qualifications & Experience:
- Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- Experience: 8+ years in software development, DevOps, or cloud engineering, with 3+ years in a leadership role.
- Programming Expertise: Strong proficiency in Go (Golang) and Python for building automation frameworks and tools.
- Kubernetes & Containers: Deep knowledge of Kubernetes (K8s), OpenShift, Docker, and container orchestration.
- Cloud & DevOps: Hands-on experience with AWS, Azure, GCP, OCI, self-managed Kubernetes, OpenShift, and DevOps practices.
- CI/CD & Infrastructure-as-Code: Strong background in CI/CD tools (Jenkins, Git, AWS CodePipeline, Azure DevOps, GCP Cloud Build) and Infrastructure-as-Code (IaC) with Terraform, Helm, or similar tools.
- Kubernetes Migration Experience: Proven track record in migrating Kubernetes workloads between cloud providers, addressing networking, security, and data consistency challenges.
- Security & Observability: Expertise in cloud-native security best practices, vulnerability remediation, and observability solutions.
- Leadership & Communication: Proven ability to lead teams, manage projects, and collaborate with stakeholders across multiple domains.

Preferred Skills & Certifications:
- Experience in self-managed Kubernetes provisioning (e.g., kubeadm, Kubespray) and OpenShift customization (e.g., Operators).
- Industry certifications - CKA, CKAD, or cloud-specific credentials (e.g., AWS Certified DevOps Engineer).
- Exposure to multi-cloud and hybrid cloud migration projects.
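
The role centers on workload discovery ahead of migration; as a hedged sketch (the listing targets Go, but Python is used here for brevity and consistency), this inventories deployment images with the official Kubernetes Python client:

```python
# Minimal sketch of pre-migration workload discovery with the official
# Kubernetes Python client. The role itself targets Go; Python is for brevity.
from kubernetes import client, config

config.load_kube_config()  # uses the local kubeconfig context

apps = client.AppsV1Api()
for dep in apps.list_deployment_for_all_namespaces().items:
    # Record each deployment's container images ahead of a migration.
    images = [c.image for c in dep.spec.template.spec.containers]
    print(dep.metadata.namespace, dep.metadata.name, images)
```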

Posted 3 weeks ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: Chennai (CKC)
Mode of Interview: In Person
Date: 7th June 2025 (Saturday)

Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI

Responsibilities: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift and model drift); experiment tracking; MLOps architecture; REST API publishing.

Job Responsibilities: Research and implement MLOps tools, frameworks, and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile, and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python for both ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience implementing CI/CD/CT pipelines. Experience with cloud platforms - preferably AWS - would be an advantage.

Posted 3 weeks ago

Apply

11 - 14 years

35 - 50 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: PAN India

Keywords/Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI

Responsibilities: Model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift and model drift); experiment tracking; MLOps architecture; REST API publishing.

Job Responsibilities: Research and implement MLOps tools, frameworks, and platforms for our Data Science projects. Work on a backlog of activities to raise MLOps maturity in the organization. Proactively introduce a modern, agile, and automated approach to Data Science. Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications: Wide experience with Kubernetes. Experience in operationalizing Data Science projects (MLOps) using at least one of the popular frameworks or platforms (e.g. Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube). Good understanding of ML and AI concepts. Hands-on experience in ML model development. Proficiency in Python for both ML and automation tasks. Good knowledge of Bash and the Unix command-line toolkit. Experience implementing CI/CD/CT pipelines. Experience with cloud platforms - preferably AWS - would be an advantage.

Posted 1 month ago

Apply

1 - 6 years

2 - 7 Lacs

Noida, Gurugram

Work from Office

About the Role: As a DevOps Engineer at EaseMyTrip.com, you will be pivotal in optimizing and maintaining our IT infrastructure and deployment processes. Your role involves managing cloud environments, implementing automation, and ensuring seamless deployment of applications across various platforms. You will collaborate closely with development teams to enhance system reliability, security, and efficiency, supporting our mission to provide exceptional travel experiences through robust technological solutions. This position is critical for maintaining high operational standards and driving continuous innovation.

Role & responsibilities: Cloud Computing Mastery: Expert in managing Amazon Web Services (AWS) environments, with skills in GCP and Azure for comprehensive cloud solutions and automation. Windows Server Expertise: Profound knowledge of configuring and maintaining Windows Server systems and Internet Information Services (IIS). Deployment of .NET Applications: Experienced in deploying diverse .NET applications such as ASP.NET, MVC, Web API, and WCF using Jenkins. Proficiency in Version Control: Skilled in utilizing GitLab or GitHub for effective version control and collaboration. Linux Server Management: Capable of administering Linux servers with a focus on security and performance optimizations. Scripting and Automation: Ability to write and maintain scripts for automation of routine tasks to improve efficiency and reliability. Monitoring and Optimization: Implement monitoring tools to ensure high availability and performance of applications and infrastructure. Security Best Practices: Knowledge of security protocols and best practices to safeguard systems and data. Continuous Integration/Continuous Deployment (CI/CD): Develop and maintain CI/CD pipelines to streamline software updates and deployments. Collaboration and Support: Work closely with development teams to troubleshoot deployment issues and enhance overall operational efficiency.

Preferred candidate profile: Migration Project Leadership: Experienced in leading significant migration projects from planning through to execution. Database Expertise: Strong foundation in both SQL and NoSQL database technologies. Experience with Diverse Tech Stacks: Managed projects involving various technologies, including 2-tier, 3-tier, and microservices architectures. Proficiency in Automation Tools: Hands-on experience with automation and deployment tools such as Jenkins, Bamboo, and CodeDeploy. Advanced Code Management: Highly skilled in managing code revisions and maintaining code integrity across multiple platforms. Strategic DevOps Experience: Proven track record in developing and implementing DevOps strategies at an enterprise level. Configuration Management Skills: Proficient in using tools like Ansible, Chef, or Puppet for configuration management. Technology Versatility: Experience working with a range of programming languages and frameworks, including .NET, MVC, LAMP, Python, and NodeJS. Problem Solving and Innovation: Ability to solve complex technical issues and innovate new solutions to enhance system reliability and performance. Effective Communication: Strong communication skills to collaborate with cross-functional teams and articulate technical challenges and solutions clearly.

Posted 1 month ago

Apply

7 - 12 years

9 - 14 Lacs

Ahmedabad

Work from Office

Project Role: Technology Architect
Project Role Description: Design and deliver technology architecture for a platform, product, or engagement. Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must have skills: Amazon Web Services (AWS)
Good to have skills: Java Full Stack Development
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Key Responsibilities: 1. Experience designing multiple cloud-native application architectures. 2. Experience developing and deploying cloud-native applications, including serverless environments like Lambda. 3. Optimize applications for the AWS environment. 4. Design, build, and configure applications on the AWS environment to meet business process and application requirements. 5. Understanding of security, performance, and cost optimizations for AWS. 6. Understanding of AWS Well-Architected best practices.

Technical Experience: 1. 8-15 years of experience in the industry, with at least 5 years in AWS. 2. Strong development background with exposure to the majority of services in AWS. 3. AWS Certified Developer Professional and/or AWS specialty-level certification (DevOps/Security). 4. Application development skills on the AWS platform with the Java SDK, Python SDK, or ReactJS. 5. Strong in coding using any of the programming languages like Python/Node.js/Java/.NET; understanding of AWS architectures across containerization, microservices, and serverless on AWS. 6. Preferred knowledge of Cost Explorer, budgeting, and tagging in AWS. 7. Experience with DevOps tools, including AWS-native DevOps tools like CodeDeploy.

Professional Attributes: a) Ability to harvest solutions and promote reusability across implementations. b) Self-motivated experts who can work under their own direction with the right set of design-thinking expertise. c) Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.

Additional Info: 1. Application developer skills on the AWS platform with the Java SDK, Python SDK, Node.js, or ReactJS. 2. AWS services: Lambda, AWS Amplify, AWS App Runner, AWS CodePipeline, AWS Cloud9, EBS, Fargate. Additional comments: Only Bangalore, no location flex and no level flex.

Qualifications: 15 years of full-time education.

Posted 1 month ago

Apply

7 - 12 years

9 - 14 Lacs

Ahmedabad

Work from Office

Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must have skills: Amazon Web Services (AWS)
Good to have skills: Java Full Stack Development
Minimum 7.5 years of experience is required.
Educational Qualification: 15 years of full-time education.

Key Responsibilities: 1. Experience designing multiple cloud-native application architectures. 2. Experience developing and deploying cloud-native applications, including serverless environments like Lambda. 3. Optimize applications for the AWS environment. 4. Design, build, and configure applications on the AWS environment to meet business process and application requirements. 5. Understanding of security, performance, and cost optimizations for AWS. 6. Understanding of AWS Well-Architected best practices.

Technical Experience: 1. 8-15 years of experience in the industry, with at least 5 years in AWS. 2. Strong development background with exposure to the majority of services in AWS. 3. AWS Certified Developer Professional and/or AWS specialty-level certification (DevOps/Security). 4. Application development skills on the AWS platform with the Java SDK, Python SDK, or ReactJS. 5. Strong in coding using any of the programming languages like Python/Node.js/Java/.NET; understanding of AWS architectures across containerization, microservices, and serverless on AWS. 6. Preferred knowledge of Cost Explorer, budgeting, and tagging in AWS. 7. Experience with DevOps tools, including AWS-native DevOps tools like CodeDeploy.

Professional Attributes: a) Ability to harvest solutions and promote reusability across implementations. b) Self-motivated experts who can work under their own direction with the right set of design-thinking expertise. c) Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.

Additional Info: 1. Application developer skills on the AWS platform with the Java SDK, Python SDK, Node.js, or ReactJS. 2. AWS services: Lambda, AWS Amplify, AWS App Runner, AWS CodePipeline, AWS Cloud9, EBS, Fargate. Additional comments: Only Bangalore, no location flex and no level flex.

Qualifications: 15 years of full-time education.

Posted 1 month ago

Apply

3 - 5 years

15 - 19 Lacs

Bengaluru

Work from Office

Immediate Joiners

Job Summary: We are seeking an experienced DevOps Engineer to join our team and help us build and maintain scalable, secure, and efficient infrastructure on Amazon Web Services (AWS). The ideal candidate will have a strong background in DevOps practices, AWS services, and scripting languages.

Key Responsibilities: Design and Implement Infrastructure: Design and implement scalable, secure, and efficient infrastructure on AWS using services such as EC2, S3, RDS, and VPC. Automate Deployment Processes: Automate deployment processes using tools such as AWS CodePipeline, AWS CodeBuild, and AWS CodeDeploy. Implement Continuous Integration and Continuous Deployment (CI/CD): Implement CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, and CircleCI. Monitor and Troubleshoot Infrastructure: Monitor and troubleshoot infrastructure issues using tools such as Amazon CloudWatch, AWS X-Ray, and AWS CloudTrail. Collaborate with Development Teams: Collaborate with development teams to ensure smooth deployment of applications and infrastructure. Implement Security Best Practices: Implement security best practices and ensure compliance with organizational security policies. Optimize Infrastructure for Cost and Performance: Optimize infrastructure for cost and performance using tools such as AWS Cost Explorer and AWS Trusted Advisor.

Requirements: Education: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience: Minimum 3 years of experience in DevOps engineering, with a focus on AWS services. Technical Skills: AWS services such as EC2, S3, RDS, VPC, and Lambda. Scripting languages such as Python, Ruby, or PowerShell. CI/CD tools such as Jenkins, GitLab CI/CD, and CircleCI. Monitoring and troubleshooting tools such as Amazon CloudWatch, AWS X-Ray, and AWS CloudTrail. Soft Skills: Excellent communication and interpersonal skills. Strong problem-solving and analytical skills. Ability to work in a team environment and collaborate with development teams.

Nice to Have: Certifications: AWS certifications such as AWS Certified DevOps Engineer or AWS Certified Solutions Architect. Experience with Containerization: Experience with containerization using Docker or Kubernetes. Experience with Serverless Computing: Experience with serverless computing using AWS Lambda or Azure Functions.
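
As a hedged example of the CloudWatch monitoring work above, the sketch below creates a CPU alarm with boto3; the alarm name, dimensions, threshold, and SNS topic are all illustrative:

```python
# Minimal sketch: create a CloudWatch CPU alarm with boto3.
# Names, dimensions, threshold, and the SNS topic ARN are placeholders.
import boto3

cw = boto3.client("cloudwatch")

cw.put_metric_alarm(
    AlarmName="high-cpu-web-tier",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,               # 5-minute windows
    EvaluationPeriods=3,      # alarm after 15 minutes above threshold
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```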

Posted 1 month ago

Apply

2 - 6 years

8 - 12 Lacs

Hyderabad

Work from Office

At F5, we strive to bring a better digital world to life. Our teams empower organizations across the globe to create, secure, and run applications that enhance how we experience our evolving digital world. We are passionate about cybersecurity, from protecting consumers from fraud to enabling companies to focus on innovation. Everything we do centers around people. That means we obsess over how to make the lives of our customers, and their customers, better. And it means we prioritize a diverse F5 community where each individual can thrive.

Position Summary: The F5 NGINX Business Unit is seeking a DevOps Software Engineer III based in India. As a DevOps engineer, you will be an integral part of a development team delivering high-quality features for exciting next-generation NGINX SaaS products. In this position, you will play a key role in building automation, standardization, operations support, and tools to implement and support world-class products; design, build, and maintain infrastructure, services, and tools used by our developers, testers, and CI/CD pipelines. You will champion efforts to improve reliability and efficiency in these environments and explore and lead efforts towards new strategies and architectures for pipeline services, infrastructure, and tooling. When necessary, you are comfortable wearing a developer hat to build a solution. You are passionate about automation and tools. You'll be expected to handle most development tasks independently, with minimal direct supervision.

Primary Responsibilities: Collaborate with a globally distributed team to design, build, and maintain tools, services, and infrastructure that support product development, testing, and CI/CD pipelines for SaaS applications hosted on public cloud platforms. Ensure DevOps infrastructure and services maintain the required level of availability, reliability, scalability, and performance. Diagnose and resolve complex operational challenges involving network, security, and web technologies, including troubleshooting problems with HTTP load balancers, API gateways (e.g., NGINX proxies), and related systems. Take part in product support, bug triaging, and bug-fixing activities on a rotating schedule to ensure the SaaS service meets its SLA commitments. Consistently apply forward-thinking concepts relating to automation and CI/CD processes.

Skills: Experience with deploying infrastructure and services in one or more cloud environments such as AWS, Azure, or Google Cloud. Experience with configuration management and deployment automation tools such as Terraform, Ansible, and Packer. Experience with observability platforms like Grafana, Elastic Stack, etc. Experience with source control and CI/CD tools like Git, GitLab CI, GitHub Actions, AWS CodePipeline, etc. Proficiency in scripting languages such as Python and Bash. Solid understanding of Unix OS. Familiarity or experience with container orchestration technologies such as Docker and Kubernetes. Good understanding of computer networking (e.g., DNS, DHCP, TCP, IPv4/v6). Experience with network service technologies (e.g., HTTP, gRPC, TLS, REST APIs, OpenTelemetry).

Qualifications: Bachelor's or advanced degree, and/or equivalent work experience. 5+ years of experience in relevant roles.

The About The Role section is intended to be a general representation of the responsibilities and requirements of the job. However, the description may not be all-inclusive, and responsibilities and requirements are subject to change.
Please note that F5 only contacts candidates through F5 email addresses (ending with @f5.com) or auto email notifications from Workday (ending with f5.com or @myworkday.com).

Equal Employment Opportunity: It is the policy of F5 to provide equal employment opportunities to all employees and employment applicants without regard to unlawful considerations of race, religion, color, national origin, sex, sexual orientation, gender identity or expression, age, sensory, physical, or mental disability, marital status, veteran or military status, genetic information, or any other classification protected by applicable local, state, or federal laws. This policy applies to all aspects of employment, including, but not limited to, hiring, job assignment, compensation, promotion, benefits, training, discipline, and termination. F5 offers a variety of reasonable accommodations for candidates. Requesting an accommodation is completely voluntary. F5 will assess the need for accommodations in the application process separately from those that may be needed to perform the job. Request by contacting accommodations@f5.com.

Posted 1 month ago

Apply

10 - 15 years

17 - 22 Lacs

Mumbai, Hyderabad, Bengaluru

Work from Office

Job roles and responsibilities: The AWS DevOps Engineer is responsible for automating, optimizing, and managing CI/CD pipelines, cloud infrastructure, and deployment processes on AWS. This role ensures smooth software delivery while maintaining high availability, security, and scalability. Design and implement scalable and secure cloud infrastructure on AWS, utilizing services such as EC2, EKS, ECS, S3, RDS, and VPC. Automate the provisioning and management of AWS resources using Infrastructure as Code tools (Terraform/CloudFormation/Ansible) and YAML. Implement and maintain continuous integration and continuous deployment (CI/CD) pipelines using tools like Jenkins, GitLab, or AWS CodePipeline. Advocate for a No-Ops model, striving for console-less experiences and self-healing systems. Experience with containerization technologies: Docker and Kubernetes.

Mandatory Skills: Overall experience of 5-8 years in the AWS DevOps specialization (AWS CodePipeline, AWS CodeBuild, AWS CodeDeploy, AWS CodeCommit). Work experience with AWS DevOps and IAM. Expertise in coding tools - Terraform, Ansible, or CloudFormation, and YAML. Good deployment experience - CI/CD pipelining. Manage containerized workloads using Docker, Kubernetes (EKS), or AWS ECS, and Helm charts. Experience with database migration. Proficiency in scripting languages (Python AND (Bash OR PowerShell)). Develop and maintain CI/CD pipelines using (AWS CodePipeline OR Jenkins OR GitHub Actions OR GitLab CI/CD). Experience with monitoring and logging tools (CloudWatch OR ELK Stack OR Prometheus OR Grafana).

Career Level - IC4
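
Since the mandatory skills center on the AWS code services, here is a minimal hedged sketch of triggering a CodeDeploy deployment with boto3; the application, deployment group, bucket, and key are hypothetical:

```python
# Minimal sketch: start an AWS CodeDeploy deployment from an S3 revision.
# Application, deployment group, bucket, and key are placeholders.
import boto3

cd = boto3.client("codedeploy")

resp = cd.create_deployment(
    applicationName="example-app",
    deploymentGroupName="production",
    revision={
        "revisionType": "S3",
        "s3Location": {
            "bucket": "example-artifacts",
            "key": "releases/app-1.2.3.zip",
            "bundleType": "zip",
        },
    },
)
print(resp["deploymentId"])
```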

Posted 1 month ago

Apply

8 - 12 years

10 - 14 Lacs

Bengaluru

Work from Office

Job Summary: We are seeking a skilled and motivated AWS DevOps Engineer to join our team. The ideal candidate will have deep expertise in cloud infrastructure, automation, CI/CD pipelines, and configuration management using AWS services. You will play a key role in building scalable, reliable, and secure infrastructure while collaborating with software developers and IT teams to streamline deployment processes.

Key Responsibilities: Design, implement, and maintain scalable, secure, and highly available AWS cloud environments. Develop and manage Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation. Build, manage, and optimize CI/CD pipelines for automated testing and deployment (e.g., Jenkins, GitLab CI, AWS CodePipeline). Automate routine tasks and improve system efficiency through scripting (Python, Bash, etc.). Monitor system performance, ensure availability, and conduct root cause analysis for incidents. Implement robust security practices in accordance with AWS best practices (IAM policies, security groups, encryption). Manage containerized applications using Docker and orchestration platforms like Kubernetes or Amazon ECS/EKS. Maintain and improve configuration management systems (Ansible, Chef, Puppet, or AWS Systems Manager). Ensure best practices in backup and disaster recovery plans. Collaborate with development teams to ensure smooth application releases and infrastructure changes. Stay current with industry trends and AWS services, recommending improvements and innovations.

Required Skills & Qualifications: 8+ years of hands-on experience in AWS DevOps engineering. Strong understanding of AWS services: EC2, S3, RDS, Lambda, VPC, CloudWatch, CloudTrail, etc. Proficiency in Infrastructure as Code (IaC) tools like Terraform or CloudFormation. Experience building CI/CD pipelines with tools like Jenkins, GitLab CI, AWS CodePipeline. Proficient in scripting languages (Python, Bash, etc.). Knowledge of containerization and orchestration (Docker, Kubernetes, ECS, or EKS). Experience with version control systems (Git). Familiarity with monitoring and logging tools (CloudWatch, ELK stack, Prometheus, Grafana). Understanding of security best practices in cloud environments. Excellent problem-solving, troubleshooting, and communication skills.

Preferred Qualifications: AWS Certified DevOps Engineer or other AWS certifications. Experience with serverless technologies (AWS Lambda, API Gateway). Familiarity with agile methodologies and DevOps culture. Background in networking, DNS, VPNs, and firewalls.
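
To make the IaC responsibility above concrete, a hedged sketch of creating a CloudFormation stack via boto3 follows; the inline template and stack name are placeholders:

```python
# Minimal sketch: create a CloudFormation stack with boto3 and wait for it.
# The template body and stack name are placeholders.
import boto3

TEMPLATE = """
AWSTemplateFormatVersion: '2010-09-09'
Resources:
  ArtifactBucket:
    Type: AWS::S3::Bucket
    Properties:
      VersioningConfiguration:
        Status: Enabled
"""

cfn = boto3.client("cloudformation")
cfn.create_stack(StackName="example-artifacts-stack", TemplateBody=TEMPLATE)

# Block until the stack finishes creating.
cfn.get_waiter("stack_create_complete").wait(StackName="example-artifacts-stack")
```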

Posted 2 months ago

Apply

3 - 6 years

10 - 13 Lacs

Bengaluru, Bangalore Rural

Work from Office

3+ years of experience in AWS Cloud services and CI/CD setup. Experienced in AWS S3, EC2, Lambda, Glue, EBS, Auto Scaling, RDS (PostgreSQL), ALB, networking, IAM, CloudFormation, and SageMaker. Worked on Windows and Linux OS (Ubuntu/Amazon Linux) and shell scripting.

Required Candidate Profile: Should have experience in CI/CD setup using AWS CodeCommit and Docker. Must have good troubleshooting skills across AWS services. Understanding of database operations with PostgreSQL.

Perks and benefits: To be disclosed post interview.

Posted 2 months ago

Apply

6 - 11 years

8 - 14 Lacs

Pune

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

About The Role: Data engineers are responsible for building reliable and scalable data infrastructure that enables organizations to derive meaningful insights, make data-driven decisions, and unlock the value of their data assets.

About The Role - Grade Specific: The role supports the team in building and maintaining data infrastructure and systems within an organization.

Skills (competencies): Ab Initio, Agile (Software Development Framework), Apache Hadoop, AWS Airflow, AWS Athena, AWS CodePipeline, AWS EFS, AWS EMR, AWS Redshift, AWS S3, Azure ADLS Gen2, Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Event Hub, Azure Stream Analytics, Azure Synapse, Bitbucket, Change Management, Client Centricity, Collaboration, Continuous Integration and Continuous Delivery (CI/CD), Data Architecture Patterns, Data Format Analysis, Data Governance, Data Modeling, Data Validation, Data Vault Modeling, Database Schema Design, Decision-Making, DevOps, Dimensional Modeling, GCP Big Table, GCP BigQuery, GCP Cloud Storage, GCP DataFlow, GCP DataProc, Git, Greenplum, HQL, IBM DataStage, IBM DB2, Industry Standard Data Modeling (FSLDM), Industry Standard Data Modeling (IBM FSDM), Influencing, Informatica IICS, Inmon methodology, JavaScript, Jenkins, Kimball, Linux - Red Hat, Negotiation, Netezza, NewSQL, Oracle Exadata, Performance Tuning, Perl, Platform Update Management, Project Management, PySpark, Python, R, RDD Optimization, CentOS, SAS, Scala, Spark, Shell Script, Snowflake, Spark Code Optimization, SQL, Stakeholder Management, Sun Solaris, Synapse, Talend, Teradata, Time Management, Ubuntu, Vendor Management.

Posted 2 months ago

Apply

2 - 7 years

4 - 9 Lacs

Maharashtra

Work from Office

Job Summary: As an AWS Cloud Administrator, you will play a pivotal role in executing the migration project of AWS tenant(s) for our organization. Your expertise in AWS services, infrastructure design, and migration strategies will be crucial in ensuring a seamless and efficient transition of our systems and applications from one tenant to another tenant of a different OU/Account. Minimum 10-11 years of hands-on experience in implementing and administering AWS.

Required Technical and Professional Expertise: Experience with multiple AWS services like EC2, S3, Lambda, SQS, SNS, CloudFront, VPC, AWS Direct Connect, etc. Cloud migration experience using the lift-and-shift method. Code collaboration and version control using tools such as Git, Bitbucket, and Artifactory. Container and machine deployment using tools such as Docker and Kubernetes. Experience in configuration management and deployment automation using tools such as AWS CodeDeploy, Ansible, Puppet, and Chef. Experience in Infrastructure-as-Code (IaC) methods and best practices using tools such as Terraform and CloudFormation. Creation and modification of CloudFormation templates. Knowledge of Groovy, YAML, and JSON. Planning, tracking, and managing Agile software development projects using tools such as JIRA. Managing and maintaining Linux servers and troubleshooting OS-related issues. Managing server firewall rules and security. Patching Linux servers and mitigating vulnerabilities per security reports to keep the servers compliant. Scripting languages such as PowerShell/Shell/Bash.

Good to have: Knowledge of network, server, database, and container architecture. Windows OS knowledge. Other scripting skills like Python or JavaScript.
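
As a hedged sketch of the tenant-to-tenant migration work described above, this copies objects between buckets server-side with boto3 (bucket names and prefix are hypothetical; real cross-account copies also need the right bucket policies or assumed roles):

```python
# Minimal sketch: server-side copy of objects between S3 buckets with boto3.
# Bucket names and prefix are placeholders; cross-account access is assumed
# to be granted via bucket policy or an assumed role.
import boto3

s3 = boto3.resource("s3")
src_bucket = s3.Bucket("legacy-tenant-bucket")
dst = "new-tenant-bucket"

for obj in src_bucket.objects.filter(Prefix="app-data/"):
    # copy() performs the transfer inside AWS; data never leaves S3.
    s3.Object(dst, obj.key).copy({"Bucket": obj.bucket_name, "Key": obj.key})
```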

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies