Jobs
Interviews

329 Container Orchestration Jobs - Page 8

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

8.0 - 13.0 years

10 - 15 Lacs

Hyderabad

Work from Office

Key Responsibilities:
- Design and implement end-to-end ML pipelines using tools like MLflow, Kubeflow, DVC, and Airflow.
- Develop and maintain CI/CD workflows using Jenkins, CircleCI, Bamboo, or DataKitchen.
- Containerize applications using Docker and orchestrate them with Kubernetes or OpenShift.
- Collaborate with data scientists to productionize ML models and ensure reproducibility and scalability.
- Manage data versioning and lineage using Pachyderm or DVC.
- Monitor model performance and system health using Grafana and other observability tools.
- Write robust and maintainable code in Python, Go, R, or Julia.
- Automate workflows using shell scripts.
- Integrate with cloud storage solutions like Amazon S3 and manage data in an RDBMS such as Oracle, SQL Server, or open-source alternatives.
- Ensure compliance with data governance and security best practices.

Required Skills and Qualifications:
- Strong programming skills in Python and familiarity with ML/DL libraries (e.g., TensorFlow, PyTorch, scikit-learn).
- Experience with MLOps tools such as MLflow, Kubeflow, DVC, Airflow, or Pachyderm.
- Proficiency in Docker and container orchestration using Kubernetes or OpenShift.
- Experience with CI/CD tools like Jenkins, CircleCI, Bamboo, or DataKitchen.
- Familiarity with cloud storage and data management practices.
- Knowledge of SQL and experience with an RDBMS (Oracle, SQL Server, or open-source).
- Experience with monitoring tools like Grafana.
- Strong understanding of DevOps and software engineering best practices.
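The data-versioning responsibility above (DVC, Pachyderm) comes down to content-addressing inputs so pipeline stages rerun only when the data itself changes, not when timestamps do. A stdlib-only sketch of the idea (the function name is illustrative, not a DVC API):

```python
import hashlib
from pathlib import Path

def dataset_fingerprint(paths):
    """Hash file names and contents (not timestamps) so a pipeline
    step can detect whether its input data actually changed -- the
    core idea behind DVC-style data versioning."""
    digest = hashlib.sha256()
    for p in sorted(paths):  # sort so argument order doesn't matter
        digest.update(Path(p).name.encode())
        digest.update(Path(p).read_bytes())
    return digest.hexdigest()
```

A pipeline runner would store the fingerprint alongside each stage's outputs and skip the stage when the fingerprint is unchanged.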

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 11 Lacs

Bengaluru

Work from Office

As a DevOps Engineer for the Developer Platform, you will develop features and services aimed at enhancing developer experience, productivity, and satisfaction within the CIO organization. This role requires strong automation skills, deep expertise in CI/CD pipelines, and proficiency in infrastructure management.

Key Responsibilities:
- Design, develop, and deploy DevOps solutions to improve developer workflow efficiency.
- Build and manage CI/CD pipelines using tools such as Jenkins, Travis, and Git to enable automation.
- Develop, maintain, and optimize containerized applications using Docker, Kubernetes, and OpenShift.
- Write automation scripts in Python or shell to enhance development operations.
- Oversee Linux system administration, with a preference for Debian and RHEL distributions.
- Implement Infrastructure as Code (IaC) principles using tools such as Ansible.
- Collaborate on highly scalable platforms and distributed applications to drive system efficiency.
- Apply methodologies that improve quality standards and automation within the development ecosystem.
- Troubleshoot and resolve complex DevOps challenges with strong analytical and problem-solving skills.

Required education: Bachelor's degree

Required technical and professional expertise:
- 5+ years of experience in development, automation, and DevOps implementation.
- Strong problem-solving and troubleshooting abilities, especially in high-availability systems.
- Hands-on experience with CI/CD pipeline development using Jenkins, Travis, and Git.
- Expertise in containerization technologies, including Docker, Kubernetes, and OpenShift.
- Proficiency in Python or shell scripting for automation and configuration management.
- Solid Linux administration skills, with a focus on Debian and RHEL environments.
- Familiarity with Infrastructure as Code (IaC) and automation tools like Ansible.

Preferred technical and professional experience:
- Experience working with highly scalable platforms and distributed application concepts.
- Knowledge of advanced automation methodologies for improving quality standards.
- Ability to design and maintain secure and optimized cloud infrastructure.
- Strong collaboration skills with cross-functional engineering teams.

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 6 Lacs

Bengaluru

Work from Office

Project Role: Operations Engineer
Project Role Description: Support the operations and/or manage delivery for production systems and services based on operational requirements and service agreements.
Must-have skills: Kubernetes
Good-to-have skills: Linux, Ansible on Microsoft Azure
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As an Operations Engineer, you will support the operations of, and/or manage delivery for, production systems and services based on operational requirements and service agreements. Your day will involve ensuring seamless operations and timely service delivery, contributing to system enhancements, and collaborating with cross-functional teams to meet service-level agreements.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute solutions to work-related problems.
- Ensure smooth operations and timely service delivery.
- Collaborate with cross-functional teams to enhance system performance.
- Implement best practices for system maintenance and optimization.
- Troubleshoot and resolve operational issues efficiently.
- Contribute to the development and implementation of operational strategies.

Professional & Technical Skills:
- Must-have: Proficiency in Kubernetes.
- Strong understanding of containerization technologies.
- Experience with cloud platforms like Microsoft Azure.
- Hands-on experience with Linux operating systems.
- Knowledge of automation tools like Ansible.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Kubernetes.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
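The Kubernetes proficiency asked for above usually starts with being able to read and write Deployment manifests. A minimal example, expressed here as the equivalent Python structure (what `yaml.safe_load` would return for the YAML file) so the fields are easy to inspect; the name `ops-api` and the image are placeholders:

```python
# Equivalent of a small Kubernetes Deployment manifest.
deployment = {
    "apiVersion": "apps/v1",
    "kind": "Deployment",
    "metadata": {"name": "ops-api", "labels": {"app": "ops-api"}},
    "spec": {
        "replicas": 3,  # run three pods for availability
        "selector": {"matchLabels": {"app": "ops-api"}},
        "template": {
            "metadata": {"labels": {"app": "ops-api"}},
            "spec": {
                "containers": [{
                    "name": "ops-api",
                    "image": "registry.example.com/ops-api:1.4.2",
                    "ports": [{"containerPort": 8080}],
                    # kubelet restarts the container if this check fails
                    "livenessProbe": {
                        "httpGet": {"path": "/healthz", "port": 8080},
                        "periodSeconds": 10,
                    },
                }],
            },
        },
    },
}

def selector_matches_template(d):
    """The selector must match the pod template labels, or the
    API server rejects the Deployment."""
    sel = d["spec"]["selector"]["matchLabels"]
    labels = d["spec"]["template"]["metadata"]["labels"]
    return all(labels.get(k) == v for k, v in sel.items())
```

The selector/label invariant checked at the end is a common source of rejected manifests in day-to-day operations work.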

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Red Hat OpenShift
Good-to-have skills: Laboratory Information and Execution Systems
Minimum experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with organizational goals, addressing challenges that arise during the development process, and providing guidance to team members to foster a productive work environment. You will also engage in strategic discussions to enhance application performance and user experience, ensuring that the applications meet the needs of stakeholders effectively.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must-have: Proficiency in Red Hat OpenShift.
- Good to have: Experience with Laboratory Information and Execution Systems.
- Strong understanding of container orchestration and management.
- Experience with application deployment and scaling in cloud environments.
- Familiarity with CI/CD pipelines and DevOps practices.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Red Hat OpenShift.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Project Role: DevOps Engineer
Project Role Description: Responsible for building and setting up new development tools and infrastructure, utilizing knowledge of continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats.
Must-have skills: Python (Programming Language)
Good-to-have skills: GitHub
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves applying your expertise in continuous integration, delivery, and deployment, with a focus on cloud technologies, container orchestration, and security. You will build and test end-to-end CI/CD pipelines, ensure that systems are secure against potential threats, and collaborate with various teams to enhance operational efficiency.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and optimize the performance of CI/CD pipelines.

Professional & Technical Skills:
- Experience and strong skills in Python and scripting (e.g., Bash).
- Strong data structures, design, algorithms, and coding skills, with strong analytical and problem-solving abilities.
- Experience with cloud-native services is a must.
- Experience developing solutions using public cloud APIs is a must.
- Experience working on the Linux platform.
- Familiarity with storage, filesystems, and object storage is a huge plus.
- Knowledge of any one of the following databases is a plus: HANA, Sybase ASE, MaxDB, DB2, MSSQL.
- Ability to drive tasks to completion and take ownership of projects.
- Ability to work in a fast-paced, agile development environment.
- Comfortable using tools such as JIRA and GitHub.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Bengaluru office (flexible).
- 15 years of full-time education is required.
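A recurring pattern behind "developing solutions using public cloud APIs" is retrying transient failures with exponential backoff and jitter. A hedged stdlib sketch (the error type and delays are illustrative; real cloud SDKs raise their own throttling exceptions):

```python
import random
import time

def call_with_backoff(fn, retries=5, base=0.5, sleep=time.sleep):
    """Retry a flaky API call with exponential backoff and jitter.
    `sleep` is injectable so tests don't actually wait."""
    for attempt in range(retries):
        try:
            return fn()
        except ConnectionError:
            if attempt == retries - 1:
                raise  # out of attempts: surface the error
            # 0.5s, 1s, 2s, ... plus jitter to avoid thundering herds
            sleep(base * 2 ** attempt * (1 + random.random()))
```

Jitter matters in practice: without it, many clients that failed together retry together and overload the service again.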

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Project Role: DevOps Engineer
Project Role Description: Responsible for building and setting up new development tools and infrastructure, utilizing knowledge of continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats.
Must-have skills: Microsoft Power Business Intelligence (BI)
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves applying your expertise in continuous integration, delivery, and deployment, with a focus on cloud technologies, container orchestration, and security. You will build and test end-to-end CI/CD pipelines, ensure that systems are secure against potential threats, and collaborate with various teams to enhance operational efficiency.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior professionals to enhance their skills and knowledge.
- Continuously evaluate and improve existing processes to optimize performance.

Professional & Technical Skills:
- Must-have: Proficiency in Microsoft Power Business Intelligence (BI).
- Strong understanding of CI/CD practices and tools.
- Experience with cloud technologies and container orchestration platforms.
- Knowledge of security best practices in software development.
- Familiarity with scripting languages for automation.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Power Business Intelligence (BI).
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Project Role: DevOps Engineer
Project Role Description: Responsible for building and setting up new development tools and infrastructure, utilizing knowledge of continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats.
Must-have skills: Python (Programming Language)
Good-to-have skills: GitHub
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves applying your expertise in continuous integration, delivery, and deployment, with a focus on cloud technologies, container orchestration, and security. You will build and test end-to-end CI/CD pipelines, ensure that systems are secure against potential threats, and collaborate with various teams to enhance operational efficiency.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor and optimize the performance of CI/CD pipelines.

Professional & Technical Skills:
- Must-have: Proficiency in Python (Programming Language).
- Good to have: Experience with GitHub.
- Experience and strong skills in Python and scripting (e.g., Bash).
- Strong data structures, design, algorithms, and coding skills, with strong analytical and problem-solving abilities.
- Experience with cloud-native services is a must.
- Experience developing solutions using public cloud APIs is a must.
- Experience working on the Linux platform.
- Familiarity with storage, filesystems, and object storage is a huge plus.
- Knowledge of any one of the following databases is a plus: HANA, Sybase ASE, MaxDB, DB2, MSSQL.
- Ability to drive tasks to completion and take ownership of projects.
- Ability to work in a fast-paced, agile development environment.
- Comfortable using tools such as JIRA and GitHub.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Python (Programming Language).
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5.0 - 8.0 years

14 - 18 Lacs

Noida

Work from Office

Role overview: As a Cloud & DevOps Engineer, you will implement and manage AWS infrastructure for applications using AWS CDK and build GitHub Actions pipelines to deploy containerized applications on ECS/EKS.

Must-Have:
- Hands-on experience with AWS CDK for provisioning infrastructure
- Solid understanding of key AWS services: ECS (Fargate), API Gateway, ALB/NLB, IAM, S3, KMS, Security Groups
- Strong experience with cloud platform engineering and DevOps
- Proficiency in building GitHub Actions workflows for build, containerization, and deployment
- Strong knowledge of Docker, the container lifecycle, and CI/CD practices
- Understanding of basic networking (VPC, subnets, security groups)
- Familiarity with artifact and image management (ECR, GitHub Packages)
- Comfortable working in Agile or DevOps-centric environments

Good-to-Have:
- Experience with CDK Pipelines and multi-stage deployments
- Exposure to GitHub Actions secrets and OIDC-based role assumption
- Scripting skills in Python, Bash, or shell for automation tasks
- Familiarity with AWS CodeBuild or CodePipeline as alternatives
- Knowledge of container orchestration in AWS (ECS and EKS) for future migration planning
- Understanding of compliance/security frameworks and audit requirements

Mandatory Competencies: Cloud - AWS; Data on Cloud - AWS S3; Cloud - ECS; DevOps - CI/CD; DevOps - Docker; Beh - Communication
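The artifact and image management point above (ECR, GitHub Packages) largely means handling image references consistently across pipelines. A small sketch of the `registry/repository:tag` format using Docker's heuristic that the first path component is a registry only if it contains `.` or `:` (the registry host below is a made-up example):

```python
def parse_image_ref(ref):
    """Split a container image reference into (registry, repository, tag).
    Tag defaults to 'latest' when omitted."""
    registry, rest = "", ref
    head, _, tail = ref.partition("/")
    # a registry host contains a dot (domain) or colon (port)
    if tail and ("." in head or ":" in head):
        registry, rest = head, tail
    repo, _, tag = rest.partition(":")
    return registry, repo, tag or "latest"
```

A deploy pipeline can use this, for instance, to verify that images come from the expected private registry before rolling them out.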

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Project Role: DevOps Engineer
Project Role Description: Responsible for building and setting up new development tools and infrastructure, utilizing knowledge of continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats.
Must-have skills: Kubernetes
Good-to-have skills: Docker Kubernetes Administration, DevOps for SAP
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves applying your expertise in continuous integration, delivery, and deployment, as well as cloud technologies and container orchestration. You will ensure that systems are secure against potential threats while building and testing end-to-end CI/CD pipelines. Your role will require collaboration with various teams to enhance the development process and improve overall system performance.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior professionals to enhance their skills and knowledge.
- Continuously evaluate and improve existing processes to increase efficiency.

Professional & Technical Skills:
- Must-have: Proficiency in Kubernetes.
- Good to have: Experience with Docker Kubernetes Administration and DevOps for SAP.
- Strong understanding of continuous integration and continuous deployment methodologies.
- Experience with cloud service providers such as AWS, Azure, or Google Cloud.
- Familiarity with containerization technologies and orchestration tools.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Kubernetes.
- This position is based in Bengaluru.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 10 Lacs

Pune

Work from Office

Project Role: DevOps Engineer
Project Role Description: Responsible for building and setting up new development tools and infrastructure, utilizing knowledge of continuous integration, delivery, and deployment (CI/CD), cloud technologies, container orchestration, and security. Build and test end-to-end CI/CD pipelines, ensuring that systems are safe against security threats.
Must-have skills: Kubernetes
Good-to-have skills: Docker Kubernetes Administration, DevOps for SAP
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a DevOps Engineer, you will be responsible for building and setting up new development tools and infrastructure. A typical day involves applying your knowledge of continuous integration, delivery, and deployment, as well as cloud technologies and container orchestration. You will ensure that systems are secure against potential threats while building and testing end-to-end CI/CD pipelines. Your role will require collaboration with various teams to enhance the development process and improve overall system efficiency.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior professionals to enhance their skills and knowledge of DevOps practices.
- Continuously evaluate and implement new tools and technologies to improve the development and deployment processes.

Professional & Technical Skills:
- Must-have: Proficiency in Kubernetes.
- Good to have: Experience with Docker Kubernetes Administration and DevOps for SAP.
- Strong understanding of continuous integration and continuous deployment methodologies.
- Experience with cloud service providers such as AWS, Azure, or Google Cloud.
- Familiarity with security best practices in software development and deployment.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Kubernetes.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

15.0 - 20.0 years

3 - 7 Lacs

Gurugram

Work from Office

Project Role: Application Support Engineer
Project Role Description: Act as software detectives, providing a dynamic service that identifies and solves issues within multiple components of critical business systems.
Must-have skills: Google Kubernetes Engine
Good-to-have skills: Kubernetes, Google Cloud Compute Services
Minimum experience: 2 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Support Engineer, you will act as a software detective, providing a dynamic service that identifies and resolves issues within various components of critical business systems. Your typical day will involve collaborating with team members to troubleshoot problems, analyzing system performance, and ensuring the smooth operation of applications. You will engage with stakeholders to understand their needs and provide timely solutions, all while maintaining a focus on enhancing system reliability and user satisfaction.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Actively participate in and contribute to team discussions.
- Contribute solutions to work-related problems.
- Assist in the development and implementation of best practices for application support.
- Monitor system performance and proactively identify areas for improvement.

Professional & Technical Skills:
- Must-have: Proficiency in Google Kubernetes Engine.
- Good to have: Experience with Kubernetes and Google Cloud Compute Services.
- Strong understanding of container orchestration and management.
- Familiarity with cloud infrastructure and deployment strategies.
- Experience in troubleshooting and resolving application issues.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Google Kubernetes Engine.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Qualifications:
- Red Hat Certified / L2 Linux Administrator experience of 5 to 10 years
- L1 Red Hat OpenShift Administrator experience of 2 to 5 years
- Hands-on experience with VMware virtualization, Windows, Linux, networking, NSX-T, vRA, and OpenShift
- Experience administering Red Hat OpenShift and virtualization technologies
- Strong understanding of Kubernetes and container orchestration
- Proficiency in virtualization platforms such as Red Hat Virtualization (RHV), VMware, or KVM
- Experience with automation/configuration management using Ansible, Puppet, Chef, or similar
- Familiarity with CI/CD tools and version control systems like Git
- Knowledge of Linux/Unix administration and scripting (e.g., Bash, Python)
- Experience with monitoring solutions like Prometheus, Grafana, or Nagios
- Strong problem-solving skills and the ability to work under pressure
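Monitoring with Prometheus, listed above, centers on the plain-text exposition format that each target serves at its `/metrics` endpoint. A minimal, stdlib-only parser sketch (real setups use the client libraries; this skips optional timestamps and histogram buckets):

```python
def parse_prometheus_metrics(text):
    """Parse Prometheus text exposition format into {metric: value}.
    Keys keep their label set (e.g. 'up{job="node"}') verbatim."""
    samples = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):  # skip HELP/TYPE comments
            continue
        name, _, value = line.rpartition(" ")  # value is the last token
        samples[name] = float(value)
    return samples
```

An L1/L2 admin might use something like this in an ad-hoc script to check a scrape target's health before Grafana dashboards are wired up.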

Posted 1 month ago

Apply

4.0 - 8.0 years

17 - 22 Lacs

Bengaluru

Work from Office

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity
As a Software Engineer with a strong DevOps mindset, you will join a world-class DevOps Engineering Enablement team working to empower the product and engineering organization to deliver innovative solutions at scale. This hybrid role blends software engineering excellence with DevOps automation, infrastructure as code, and cloud-native development practices to create and optimize the Engineering System that powers FICO's platform. You will collaborate with product engineers, automation and quality SMEs, infrastructure and security teams, and business stakeholders to design, build, and operate high-performing, resilient, and scalable systems that accelerate product delivery and operational excellence. DevOps Engineering Enablement - Director.

What You'll Contribute
- Design, develop, and maintain software and infrastructure components that drive automation, deployment, and observability across FICO's platform.
- Work closely with DevOps engineering leads to define and evolve engineering system architecture and delivery patterns.
- Apply modern software engineering practices to continuously improve CI/CD, test automation, and production deployment strategies.
- Build resilient infrastructure using infrastructure-as-code (Terraform, CloudFormation) and platform engineering tools (ArgoCD, Crossplane, Backstage).
- Partner with DevOps enablement SMEs (Security, Quality, Automation) to drive adoption of best practices across product teams.
- Implement monitoring, alerting, and reliability engineering principles into every solution, ensuring production-grade systems with measurable SLAs.
- Analyze data to identify bottlenecks or improvement areas and make recommendations to drive developer productivity and platform reliability.
- Create clear documentation, developer self-service tools, and platform features to reduce operational overhead and increase autonomy.
- Act as an advocate for DevOps culture, encouraging collaboration, experimentation, and continual learning within the team and the broader organization.
- Provide production support for infrastructure and deployment pipelines, ensuring quick resolution of incidents and root-cause analysis.
- Collaborate across global teams to define and enforce standards that scale with the business, reduce tech debt, and improve team velocity.

What We're Seeking
- Combined experience in software development and DevOps engineering.
- Strong programming skills in Python, Golang, or Java, with a solid understanding of software engineering principles.
- Practical experience with CI/CD tools such as Jenkins, GitHub Actions, and Bitbucket Pipelines, and with infrastructure automation.
- Hands-on expertise with Docker, Kubernetes, and Helm for container orchestration and microservices architecture.
- Cloud proficiency with AWS, Azure, or GCP, including designing and managing cloud-native services.
- Familiarity with platform engineering tools such as ArgoCD, Terraform, Backstage, and Crossplane is highly preferred.
- Strong problem-solving, debugging, and troubleshooting skills, particularly in production environments.
- Passion for enabling others through automation, tooling, and standardized systems.
- Excellent communication skills (written and verbal), with the ability to work in a globally distributed, asynchronous environment.
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree a plus).

Our Offer to You
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO
At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:
- Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders.
- Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems.
- Lending: 3/4 of US mortgages are approved using the FICO Score.
Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers, and other firms reach a new level of success. Our success depends on really talented people, just like you, who thrive on the collaboration and innovation nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at www.fico.com/Careers

FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer, and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique, and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications, we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy

Posted 1 month ago

Apply

6.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

FICO (NYSEFICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity As a Software Engineer with a strong DevOps mindset, you will join a world-class DevOps Engineering Enablement team working to empower the product and engineering organization to deliver innovative solutions at scale. This hybrid role blends software engineering excellence with DevOps automation, infrastructure-as-code, and cloud-native development practices to create and optimize the Engineering System that powers FICOs platform. You will collaborate with product engineers, automation and quality SMEs, infrastructure and security teams, and business stakeholders to design, build, and operate high-performing, resilient, and scalable systems that accelerate product delivery and operational excellence. DevOps Engineering Enablement-Director What Youll Contribute Design, develop, and maintain software and infrastructure components that drive automation, deployment, and observability across FICOs platform. Work closely with DevOps engineering leads to define and evolve engineering system architecture and delivery patterns. Apply modern software engineering practices to continuously improve CI/CD, test automation, and production deployment strategies. Build resilient infrastructure using infrastructure-as-code (Terraform, CloudFormation) and platform engineering tools (ArgoCD, Crossplane, Backstage). Partner with DevOps enablement SMEs (Security, Quality, Automation) to drive adoption of best practices across product teams. Implement monitoring, alerting, and reliability engineering principles into every solution, ensuring production-grade systems with measurable SLAs. Analyze data to identify bottlenecks or improvement areas and make recommendations to drive developer productivity and platform reliability. 
Create clear documentation, developer self-service tools, and platform features to reduce operational overhead and increase autonomy. Act as an advocate for DevOps culture, encouraging collaboration, experimentation, and continual learning within the team and broader org. Provide production support for infrastructure and deployment pipelines, ensuring quick resolution of incidents and root cause analysis. Collaborate across global teams to define and enforce standards that scale with the business, reduce tech debt, and improve team velocity. What We're Seeking: Combined experience in software development and DevOps engineering. Strong programming skills in Python, Golang, or Java with a solid understanding of software engineering principles. Practical experience with CI/CD tools such as Jenkins, GitHub Actions, and Bitbucket Pipelines, and with infrastructure automation. Hands-on expertise with Docker, Kubernetes, and Helm for container orchestration and microservices architecture. Cloud proficiency with AWS, Azure, or GCP, including designing and managing cloud-native services. Familiarity with platform engineering tools such as ArgoCD, Terraform, Backstage, and Crossplane is highly preferred. Strong problem-solving, debugging, and troubleshooting skills, particularly in production environments. Passion for enabling others through automation, tooling, and standardized systems. Excellent communication skills (written and verbal) with the ability to work in a globally distributed, asynchronous environment. Bachelor's degree in Computer Science, Engineering, or a related field (Master's degree a plus). Our Offer to You: An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. 
Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Why Make a Move to FICO: At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide. Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems. Lending: 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers, and other firms reach a new level of success. Our success is dependent on really talented people just like you who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfill your potential at www.fico.com/Careers. FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. 
We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy
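The role above emphasizes monitoring, alerting, and production-grade systems with measurable SLAs. As a flavour of what that means in code, here is a minimal sliding-window error-rate alert check; the class name, window size, and threshold are illustrative choices, not FICO's actual tooling:

```python
from collections import deque

class ErrorRateAlert:
    """Sliding-window error-rate check of the kind used in monitoring and
    alerting pipelines. Window size and threshold are illustrative."""

    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.window = deque(maxlen=window)  # 1 = failed request, 0 = ok
        self.threshold = threshold

    def record(self, ok: bool) -> bool:
        """Record one request outcome; return True if the alert should fire."""
        self.window.append(0 if ok else 1)
        error_rate = sum(self.window) / len(self.window)
        return error_rate > self.threshold
```

In a real pipeline the same decision would typically live in a Prometheus alerting rule rather than application code; the sketch only shows the arithmetic behind the threshold.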

Posted 1 month ago

Apply

3.0 - 8.0 years

20 - 25 Lacs

Noida

Work from Office

Company: Mercer Description: Application Management Services' (AMS) mission is to maximize the contributions of MMC Technology as a business-driven, future-ready, and competitive function by reducing the time and cost spent managing applications. AMS, a business unit of Marsh McLennan, is seeking candidates for the following position based in the Gurgaon/Noida office: Principal Engineer - Kubernetes Platform Engineer. Position overview: We are seeking a skilled Kubernetes Platform Engineer with a strong background in cloud technologies (AWS, Azure) to manage, configure, and support Kubernetes infrastructure in a dynamic, high-availability environment. The Engineer collaborates with Development, DevOps, and other technology teams to ensure that the Kubernetes platform ecosystem is reliable, scalable, and efficient. The ideal candidate must possess hands-on experience in Kubernetes cluster operations management and container orchestration, along with strong problem-solving skills. Experience in infrastructure platform management is required. Responsibilities: Implement and maintain platform services in Kubernetes infrastructure. Perform upgrades and patch management for Kubernetes and its associated components (not limited to the API management system). Monitor and optimize Kubernetes resources, such as pods, nodes, and namespaces. Implement and enforce Kubernetes security best practices, including RBAC, network policies, and secrets management. Work with the security team to ensure container and cluster compliance with organizational policies. Troubleshoot and resolve issues related to Kubernetes infrastructure in a timely manner. Provide technical guidance and support to developers and DevOps teams. Maintain detailed documentation of Kubernetes configurations and operational processes. Maintenance and support of CI/CD pipelines are not part of the support scope of this position. 
Preferred skills and experience: At least 3 years of experience in managing and supporting Kubernetes clusters at the platform operations layer, and its ecosystem. At least 2 years of infrastructure management and support, not limited to SSL certificates and virtual IPs. Proficiency in managing Kubernetes clusters using tools such as `kubectl`, Helm, or Kustomize. In-depth knowledge and experience of container technologies, including Docker. Experience with cloud platforms (AWS, GCP, Azure) and Kubernetes services (EKS, GKE, AKS). Understanding of infrastructure-as-code (IaC) tools such as Terraform or CloudFormation. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of centralized logging systems like Fluentd, Logstash, or Loki. Proficiency in scripting languages (e.g., Bash, Python, or Go). Experience in supporting public cloud or hybrid cloud environments. Marsh McLennan (NYSE: MMC) is the world's leading professional services firm in the areas of risk, strategy and people. The Company's 85,000 colleagues advise clients in 130 countries. With annual revenue of over $20 billion, Marsh McLennan helps clients navigate an increasingly dynamic and complex environment through four market-leading businesses. Marsh advises individual and commercial clients of all sizes on insurance broking and innovative risk management solutions. Guy Carpenter develops advanced risk, reinsurance and capital strategies that help clients grow profitably and pursue emerging opportunities. Mercer delivers advice and technology-driven solutions that help organizations redefine the world of work, reshape retirement and investment outcomes, and unlock health and wellbeing for a changing workforce. Oliver Wyman serves as a critical strategic, economic and brand advisor to private sector and governmental clients. 
For more information, visit marshmclennan.com, or follow us on LinkedIn and Twitter. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people regardless of their sex/gender, marital or parental status, ethnic origin, nationality, age, background, disability, sexual orientation, caste, gender identity or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person. Marsh McLennan (NYSE: MMC) is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit marshmclennan.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. 
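The posting above asks for hands-on monitoring and optimization of namespace resources. A minimal sketch of the arithmetic behind that work: parsing Kubernetes CPU quantities (the `500m` millicore notation) and checking a new request against a namespace quota. The quota model here is deliberately simplified and the function names are invented for illustration:

```python
def parse_cpu(quantity: str) -> int:
    """Convert a Kubernetes CPU quantity ('500m' or '2') to millicores."""
    if quantity.endswith("m"):
        return int(quantity[:-1])
    return int(float(quantity) * 1000)

def fits_quota(requested: str, quota: str, used: str) -> bool:
    """True if a new pod's CPU request fits within the namespace quota."""
    return parse_cpu(used) + parse_cpu(requested) <= parse_cpu(quota)
```

In practice the API server enforces ResourceQuota objects itself; a platform engineer would more often query these figures via `kubectl describe quota` than reimplement them.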

Posted 1 month ago

Apply

8.0 - 13.0 years

9 - 14 Lacs

Bengaluru

Work from Office

The Applied R&D Engineer conducts target-oriented research to directly apply findings to the specification, design, further development, and incremental improvement of products, services, systems, tools, processes, etc. Integrates, verifies, tests, and modifies SW / HW / system components and capitalises on innovative solutions to meet particular requirements and specifications. You have: Bachelor's or Master's degree in Electronics, Computer Science, Electrical Engineering, or a related field with 8+ years of work experience. Experience in cloud-native architecture, cloud security, and cloud design patterns. Expertise in container orchestration using Kubernetes, Helm, and OpenShift. Experience with API Gateway, Kafka messaging, and component life cycle management. Expertise in the Linux platform, including Linux containers, namespaces, and cgroups. Experience in scripting languages (Perl/Python) and CI/CD tools (Jenkins, Git, Helm, and Ansible). It would be nice if you also had: Familiarity with open-source PaaS environments like OpenShift. Knowledge of Elastic Stack, Keycloak authentication, and security best practices. You will design and develop software components based on cloud-native principles and leading PaaS platforms. You will implement scalable, secure, and resilient microservices and cloud-based applications. You will develop APIs and integrate with API gateways, message brokers (Kafka), and containerized environments. You will apply design patterns, domain-driven design (DDD), component-based architecture, and evolutionary architecture principles. You will lead the end-to-end development of features and EPICs, ensuring high performance and scalability. You will define and implement container management strategies, leveraging Kubernetes, OpenShift, and Helm.

Posted 1 month ago

Apply

3.0 - 6.0 years

6 - 9 Lacs

Hyderabad

Work from Office

Spark & Delta Lake: Understanding of Spark core concepts like RDDs, DataFrames, Datasets, Spark SQL and Spark Streaming. Experience with Spark optimization techniques. Deep knowledge of Delta Lake features like time travel, schema evolution, and data partitioning. Ability to design and implement data pipelines using Spark and Delta Lake as the data storage layer. Proficiency in Python/Scala/Java for Spark development and integration with ETL processes. Knowledge of data ingestion techniques from various sources (flat files, CSV, APIs, databases). Understanding of data quality best practices and data validation techniques. Other Skills: Understanding of data warehouse concepts and data modelling techniques. Expertise in Git for code management. Familiarity with CI/CD pipelines and containerization technologies. Nice to have: experience using data integration tools like DataStage/Prophecy/Informatica/Ab Initio.
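The schema evolution requirement above can be illustrated with a small pure-Python sketch: columns new to the incoming batch are merged in, while type changes on existing columns are rejected. This mimics the idea behind Delta Lake's `mergeSchema` write option, not its actual implementation; schemas are modeled here as plain column-to-type dicts:

```python
def merge_schema(current: dict, incoming: dict) -> dict:
    """Mimic Delta Lake-style schema evolution: new columns from the
    incoming batch are appended; existing columns must keep their type."""
    merged = dict(current)
    for col, dtype in incoming.items():
        if col in merged and merged[col] != dtype:
            # A type change is not an evolution; reject it, as Delta would
            # without an explicit overwrite of the schema.
            raise TypeError(f"type change for column {col!r}: "
                            f"{merged[col]} -> {dtype}")
        merged[col] = dtype
    return merged
```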

Posted 1 month ago

Apply

7.0 - 12.0 years

6 - 9 Lacs

Hyderabad

Work from Office

Understanding of Spark core concepts like RDDs, DataFrames, Datasets, Spark SQL and Spark Streaming. Experience with Spark optimization techniques. Deep knowledge of Delta Lake features like time travel, schema evolution, and data partitioning. Ability to design and implement data pipelines using Spark and Delta Lake as the data storage layer. Proficiency in Python/Scala/Java for Spark development and integration with ETL processes. Knowledge of data ingestion techniques from various sources (flat files, CSV, APIs, databases). Understanding of data quality best practices and data validation techniques. Other Skills: Understanding of data warehouse concepts and data modelling techniques. Expertise in Git for code management. Familiarity with CI/CD pipelines and containerization technologies. Nice to have: experience using data integration tools like DataStage/Prophecy/Informatica/Ab Initio.

Posted 1 month ago

Apply

7.0 - 12.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Job Title: Cloud Platform Engineer. Location: Bengaluru, India. Experience Level: 7+ years. Employment Type: Full-time. About the Role: We are seeking a highly skilled and automation-driven Cloud Platform Engineer to design, implement, and manage scalable cloud landing zones and infrastructure solutions across Azure and GCP. You will play a key role in building reusable modules, enforcing governance through policy-as-code, and enabling CI/CD pipelines to support diverse workloads including software applications, data platforms, and AI solutions. Key Responsibilities: Design and implement cloud landing zones using Azure CAF and GCP Project Factory. Develop reusable Terraform and Bicep modules for infrastructure provisioning. Build and maintain CI/CD pipelines using Azure DevOps and GitHub Actions. Enforce guardrails and governance using policy-as-code frameworks. Collaborate with application and data teams to support workload onboarding. Automate infrastructure tasks using Python and Bash scripting. Ensure security, scalability, and compliance across cloud environments. Perform code reviews, maintain documentation, and contribute to best practices. Required Skills: Strong hands-on experience with Terraform (must) and Bicep. Proficiency in CI/CD tools: Azure DevOps, GitHub Actions. Experience with Azure Landing Zones and GCP Project Factory. Solid scripting skills in Python and Bash. Familiarity with policy-as-code tools (e.g., Azure Policy, Sentinel, OPA). Understanding of cloud architecture, networking, and security principles. Automation mindset with strong problem-solving and critical thinking skills. Preferred Qualifications: Certifications: Azure Solutions Architect, Google Cloud Architect, Terraform Associate. Experience with container orchestration (e.g., Kubernetes). Exposure to multi-cloud environments and hybrid cloud setups.
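Policy-as-code guardrails like those mentioned above (Azure Policy, Sentinel, OPA) often start with a "required tags" rule. A toy Python version of such a check; the tag set and resource shape are invented for illustration, and a real deployment would express this in Rego or Azure Policy JSON rather than application code:

```python
# Illustrative guardrail: every resource must carry these tags.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def violations(resource: dict) -> list:
    """Return the missing required tags for a resource, sorted,
    in the spirit of an Azure Policy / OPA 'required tags' rule."""
    tags = resource.get("tags", {})
    return sorted(REQUIRED_TAGS - set(tags))
```

An empty result means the resource passes the guardrail; a non-empty list is what a CI pipeline would surface as a policy failure before `terraform apply`.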

Posted 1 month ago

Apply

4.0 - 9.0 years

13 - 18 Lacs

Pune

Work from Office

Job Title: Lead Apache Hadoop Engineer. Location: Pune, India. Corporate Title: Vice President. Role Description: Technology serves as the foundation of our entire organization. Our Technology, Data, and Innovation (TDI) strategy aims to enhance engineering capabilities, implement an agile delivery framework, and modernize the bank's IT infrastructure. We are committed to investing in and developing a team of forward-thinking technology professionals, offering them the training, autonomy, and opportunities necessary to engage in groundbreaking work. As a Lead Engineer, you will oversee the comprehensive delivery of engineering solutions to meet business objectives. You will have deep expertise in Hadoop ecosystem administration, scripting, and managing complex, large-scale migration projects. You will provide engineering leadership across various teams, mentor junior engineers, and promote ongoing enhancements in delivery methodologies. You will have the opportunity to collaborate closely with clients while being part of a larger, creative, and innovative team dedicated to making a significant impact. In RiskFinder, you will join a local team, collaborating with other teams both in your area and in different geographical locations. You will have the chance to architect, design, and implement an open-source big data platform, enabling clients to produce, access, and analyze extensive datasets through our custom components and applications. What we'll offer you: 100% reimbursement under childcare assistance benefit (gender neutral). Sponsorship for industry-relevant certifications and education. Accident and term life insurance. Your key responsibilities: Develop robust architectures and designs for the big data platform and applications within the Apache Hadoop ecosystem. Implement and deploy the big data platform and solutions on-premises and in hybrid cloud environments. Develop and execute migration scripts, workflows and automation tools to streamline the transition process. 
Read, understand, and modify open-source code to implement bug fixes and perform upgrades. Security Architecture: Ensure all solutions adhere to security best practices and compliance requirements. Mitigate risks by identifying potential challenges (e.g., data loss, compatibility issues) and implementing contingency plans. Work directly with the Lead Architect and support cross-functional teams globally. Strategy Development: Contribute to defining the end-to-end platform engineering and application migration strategy. Your skills and experience: Proven experience in architecting, designing, building, and deploying big data platforms and applications using the Apache Hadoop ecosystem in hybrid cloud and private cloud scenarios. Experience with hybrid cloud big data platform designs and deployments, especially in AWS, Azure, or GCP. Experience in large-scale data platform builds and application migrations. Expert knowledge of the Apache Hadoop ecosystem and associated Apache projects (e.g., HDFS, Hive, HBase, Spark, Kafka, YARN, etc.). Proficiency in Kubernetes for container orchestration. Ability to read and modify open-source code. Experience with version upgrades of technology stacks. Excellent problem-solving and analytical skills. Strong communication and collaboration skills to work effectively with global teams. Ability to work independently and take initiative. Contributions to open-source projects. How we'll support you
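Hive-style data partitioning, which the Hadoop stack above (HDFS, Hive, Spark) relies on for partition pruning, comes down to encoding partition keys into the directory path. A minimal sketch of that layout; the path convention (`key=value` segments) is the standard Hive one, while the helper itself is illustrative:

```python
def partition_path(table_root: str, **keys) -> str:
    """Build a Hive-style partition directory path, e.g.
    /data/trades/dt=2024-05-01/region=eu. Query engines prune whole
    directories by matching these key=value segments against filters."""
    segments = "/".join(f"{k}={v}" for k, v in keys.items())
    return f"{table_root}/{segments}"
```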

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Chennai

Work from Office

Qualifications & Skills: 4+ years of experience in React JS. Proficiency in React.js with strong knowledge of JavaScript, TypeScript, and modern front-end frameworks. Experience in micro-frontends, state management libraries (Redux, Context API, etc.) and their implementation. Hands-on experience in backend development, preferably using Node.js, .NET, or any microservices-based architecture. Expertise in CI/CD tools and containerization technologies (Docker, Kubernetes, AKS). Experience in cloud platforms such as Azure, AWS, or Google Cloud. Strong debugging skills with a structured approach to resolving complex bugs. Database expertise, including working with SQL, NoSQL, and performance optimization techniques. Solid understanding of testing frameworks such as Jest, Cypress, Jasmine, or Protractor for unit and integration testing. Ability to collaborate effectively in a fast-paced environment and communicate technical solutions clearly. Mandatory Skills: Fullstack MERN.

Posted 1 month ago

Apply

3.0 - 4.0 years

17 - 20 Lacs

Chennai

Work from Office

Overview Position Overview Annalect is currently seeking a back-end developer to join our Technology team. In this role, you will help grow our microservices and API layer which sit atop our Big Data infrastructure. We are passionate about building distributed back-end systems in a modular and reusable way. We're looking for people who have a shared passion for data and desire to build cool, maintainable and high-quality applications to use this data. In this role you will participate in shaping our technical architecture, design and development of software products, collaborate with back-end developers from other tracks, research and evaluation of new technical solutions, and helping to elevate the skills of more junior developers. Responsibilities Key Responsibilities: Designing, building, testing and deploying scalable, reusable and maintainable applications that handle large amounts of data Growing our API layer: author, update, and debug API microservices; contribute to API design and architecture Perform code reviews and provide leadership and guidance to junior developers Ability to learn and teach new technologies Qualifications Required Skills 5+ years of solid coding experience working in Java Demonstrated proficiency with RESTful APIs (data caching, JWT auth, API load testing, RAML), and production use of a Spring / Springboot API framework Fluency with Linux/Unix Systems and in bash Excellent grasp of microservices, containers (Docker), container orchestration (Kubernetes), serverless computing (AWS Lambda) and distributed/scalable systems Proven history of mentoring junior developers to improve overall team effectiveness Passion for writing good documentation and creating architecture diagrams Experience processing and analyzing large data sets. 
Extensive history working with data and databases (SQL, MySQL, PostgreSQL, Amazon Aurora, Redis, Amazon Redshift, Google BigQuery). Rigorous approach to testing (unit testing, functional testing, integration testing). Understanding of critical API security best practices. Ability to profile, identify, debug, and fix performance bottlenecks in application and database layers with modern tooling. Strong proficiency in conducting PR reviews and helping to maintain a high-quality code base. Knowledge of git, with understanding of branching, how to manage conflicts, and pull requests. Additional Skills: Experience with JavaScript or other languages (Go, C/C++) is welcome, but not required. This position requires strong backend development skills, but familiarity with contemporary web-application design (Web Components, React.js) is a plus. Knowledge of AWS or other cloud environments (GCP/Azure) is a plus. Passion for data-driven software. All our tools are built on top of data and require work with data.
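The API requirements above call out data caching behind RESTful services. A minimal TTL (time-to-live) cache sketch; the posting's stack is Java/Spring, so this is a language-neutral illustration rather than its actual code, with an injectable clock so the expiry behaviour can be exercised without sleeping:

```python
import time

class TTLCache:
    """Minimal TTL cache of the kind placed in front of expensive API
    responses. The clock is injectable purely to make expiry testable."""

    def __init__(self, ttl_seconds: float, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self._store = {}  # key -> (value, expiry timestamp)

    def set(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)

    def get(self, key, default=None):
        entry = self._store.get(key)
        if entry is None:
            return default
        value, expires = entry
        if self.clock() >= expires:
            del self._store[key]  # lazily evict stale entries on read
            return default
        return value
```

Production services would usually reach for Redis with a per-key TTL instead; the sketch only shows the expiry logic such a layer provides.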

Posted 1 month ago

Apply

12.0 - 16.0 years

3 - 15 Lacs

Hyderabad, Telangana, India

On-site

Seasoned client-facing senior consultant who has advised IT executives on cloud operating models, e.g. IT processes and tools. Experienced, with the knowledge to reduce the day-to-day noise and toil of IT support and improve the availability of the client's application suite via new support methods, scripting automation, and advanced new tooling. Experience working closely with production operations, application developers, system, network, middleware and database administrators to streamline development, operations and support processes. Adept at analysing and problem solving, and preferably with a blend of platform, middleware, network and software development skills. Experience with consulting methodologies, knowledge management and service offering development (to assist in building cloud practice offerings from sales through delivery). Apply consulting and engineering skills to solve operations problems by: Defining and driving initiatives to increase the client's overall application availability. Building tooling needed to improve observability of performance and operations efficiency. Enhancing monitoring and management tooling to better detect, diagnose, and correct problems. Resolution of problems in code for an incident, when applicable. Documenting defects to communicate back to the Service Owner(s). Participating with application developers to develop new features and automation to solve operational challenges. Driving the transformation of delivery methods into the operational teams such as network, database, system administrators, and incident management. Enabling an AIOps strategy and roadmap to drive more predictive and automated response. Investigating RCA resolution to get to, and correct, the source of issues and outages. Essential functions: Ideally a former developer who knows how to troubleshoot application transactions end to end and identify critical points of failure or bottlenecks. 
DevOps/GitOps mindset with a vision for AIOps (how AI can automate analysis, assignments, decisions and actions to support and operate a platform and application). Cloud-native dashboarding & alerting (minimally familiar with AWS, GCP and Azure, with depth in at least one). Experience with scalable architectures and performance tuning. Enjoy solving difficult engineering problems, approach troubleshooting systematically, and be comfortable getting hands-on to guide engineers and operators. Great communication and planning experience, ideally with a large-consultancy background. Ability to own all or part of an assessment to develop recommendations and a roadmap. Solid understanding of ITSM and ITIL principles with focus on Event, Incident, Problem, Change and Configuration Management, and ability to lead assessments of maturity. Nice to have: software engineering skills, ideally with experience in Python, Go and/or Java. Understanding of large-scale complex systems from a reliability perspective. Passion for resolving reliability issues and identifying strategies to mitigate them going forward. Implemented high availability & disaster recovery infrastructure in the cloud. Experience with self-healing infrastructure. Adhering infrastructure to business SLAs and SLOs and managing error budgets. Qualifications: MUST HAVE (hands-on with at least one): Dynatrace, Big Panda, Datadog or New Relic. Highly desired hands-on: Grafana, ELK Stack, Prometheus, Splunk, and cloud-native tools for alerting and logging. Knowledge required, and some hands-on preferred: Kubernetes, Terraform, Python, GCP/AWS/Azure.
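Managing error budgets, as the SRE-flavoured requirements above describe, reduces to simple arithmetic over an SLO target. A sketch for a request-based SLO; the function name and the returned dict shape are illustrative conventions, not a standard API:

```python
def error_budget(slo_target: float, total_requests: int, failed_requests: int) -> dict:
    """Compute the remaining error budget for a request-based SLO.
    slo_target is e.g. 0.999 for 'three nines' of successful requests."""
    allowed_failures = total_requests * (1 - slo_target)
    remaining = allowed_failures - failed_requests
    return {
        "allowed_failures": allowed_failures,
        "remaining": remaining,
        "exhausted": remaining < 0,  # budget blown: freeze risky releases
    }
```

Teams typically gate deployments on this: while `exhausted` is true, velocity yields to reliability work until the budget window rolls over.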

Posted 1 month ago

Apply

5.0 - 10.0 years

30 - 35 Lacs

Mumbai

Work from Office

About the Job: Be the expert customers turn to when they need to build strategic, scalable systems. Red Hat Services is looking for a well-rounded Architect to join our team in Mumbai covering Asia Pacific. In this role, you will design and implement modern platforms, onboard and build cloud-native applications, and lead architecture engagements using the latest open source technologies. You'll be part of a team of consultants who are leaders in open hybrid cloud, platform modernisation, automation, and emerging practices, including foundational AI integration. Working in agile teams alongside our customers, you'll build, test, and iterate on innovative prototypes that drive real business outcomes. This role is ideal for architects who can work across application, infrastructure, and modern AI-enabling platforms like Red Hat OpenShift AI. If you're passionate about open source, building solutions that scale, and shaping the future of how enterprises innovate, this is your opportunity. What will you do: Design and implement modern platform architectures with a strong understanding of Red Hat OpenShift, container orchestration, and automation at scale. Strong experience in managing Day-2 operations of Kubernetes container platforms by collaborating with infrastructure teams in defining practices for platform deployment, platform hardening, platform observability, monitoring and alerting, capacity management, scalability, resiliency, and security operations. Lead the discovery, architecture, and delivery of modern platforms and cloud-native applications, using technologies such as containers, APIs, microservices, and DevSecOps patterns. Collaborate with customer teams to co-create AI-ready platforms, enabling future use cases with foundational knowledge of AI/ML workloads. Remain hands-on with development and implementation, especially in prototyping, MVP creation, and agile iterative delivery. 
Present strategic roadmaps and architectural visions to customer stakeholders, from engineers to executives. Support technical presales efforts, workshops, and proofs of concept, bringing in business context and value-first thinking. Create reusable reference architectures, best practices, and delivery models, and mentor others in applying them. Contribute to the development of standard consulting offerings, frameworks, and capability playbooks. What will you bring Strong experience with Kubernetes, Docker, and Red Hat OpenShift or equivalent platforms In-depth expertise in managing multiple Kubernetes clusters across multi-cloud environments. Proven expertise in operationalisation of Kubernetes container platform through the adoption of Service Mesh, GitOps principles, and Serverless frameworks Migrating from XKS to OpenShift Proven leadership of modern software and platform transformation projects Hands-on coding experience in multiple languages (e.g., Java, Python, Go) Experience with infrastructure as code, automation tools, and CI/CD pipelines Practical understanding of microservices, API design, and DevOps practices Applied experience with agile, scrum, and cross-functional team collaboration Ability to advise customers on platform and application modernisation, with awareness of how platforms support emerging AI use cases. 
Excellent communication and facilitation skills with both technical and business audiences. Willingness to travel up to 40% of the time. Nice to Have: Experience with Red Hat OpenShift AI, Open Data Hub, or similar MLOps platforms. Foundational understanding of AI/ML, including containerized AI workloads, model deployment, and open source AI frameworks. Familiarity with AI architectures (e.g., RAG, model inference, GPU-aware scheduling). Engagement in open source communities or contributor background. About Red Hat: Red Hat is the world's leading provider of enterprise open source software solutions, using a community-powered approach to deliver high-performing Linux, cloud, container, and Kubernetes technologies. Spread across 40+ countries, our associates work flexibly across work environments, from in-office, to office-flex, to fully remote, depending on the requirements of their role. Red Hatters are encouraged to bring their best ideas, no matter their title or tenure. We're a leader in open source because of our open and inclusive environment. We hire creative, passionate people ready to contribute their ideas, help solve complex problems, and make an impact. Inclusion at Red Hat: Red Hat's culture is built on the open source principles of transparency, collaboration, and inclusion, where the best ideas can come from anywhere and anyone. When this is realized, it empowers people from different backgrounds, perspectives, and experiences to come together to share ideas, challenge the status quo, and drive innovation. Our aspiration is that everyone experiences this culture with equal opportunity and access, and that all voices are not only heard but also celebrated. We hope you will join our celebration, and we welcome and encourage applicants from all the beautiful dimensions that compose our global village. Equal Opportunity Policy (EEO): Red Hat is proud to be an equal opportunity workplace and an affirmative action employer. 
We review applications for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, ancestry, citizenship, age, veteran status, genetic information, physical or mental disability, medical condition, marital status, or any other basis prohibited by law. Red Hat supports individuals with disabilities and provides reasonable accommodations to job applicants. If you need assistance completing our online job application, email application-assistance@redhat.com . General inquiries, such as those regarding the status of a job application, will not receive a reply.
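The GitOps principles called out in this listing can be illustrated with a minimal reconciliation loop: desired state is read from a declarative source (in practice, a Git repository), and a controller computes the changes needed to converge the live cluster toward it. This is a hedged sketch of the idea only, not any real controller's implementation; all names (`reconcile`, `desired`, `live`) are illustrative.

```python
# GitOps-style reconciliation sketch: compute the actions needed to
# converge live state toward the declared (Git-sourced) desired state.
# Illustrative only; not tied to any specific tool such as Argo CD or Flux.

def reconcile(desired: dict, live: dict) -> list:
    """Return (action, resource) pairs that converge live toward desired."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))        # declared but missing
        elif live[name] != spec:
            actions.append(("update", name))        # drifted from declaration
    for name in live:
        if name not in desired:
            actions.append(("delete", name))        # pruning: not declared in Git
    return actions

desired_state = {"web": {"replicas": 3}, "api": {"replicas": 2}}
live_state = {"web": {"replicas": 1}, "worker": {"replicas": 1}}
print(reconcile(desired_state, live_state))
# → [('update', 'web'), ('create', 'api'), ('delete', 'worker')]
```

The key GitOps property shown here is that the loop is idempotent: re-running it once live state matches the declaration produces no actions.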

Posted 1 month ago

Apply

5.0 - 8.0 years

20 - 30 Lacs

Pune

Work from Office

Overview
We are seeking a Sr. DevOps Engineer to join the Critical Start Technologies Private Ltd. team, operating under the Critical Start umbrella, for our India operations, with 5+ years of deep, hands-on experience in designing and managing scalable, secure, and high-performing infrastructure using modern DevOps practices. The ideal candidate is a thought leader who combines strong technical expertise in AWS, Terraform, CI/CD, and container orchestration with a passion for automation, reliability, and mentoring others. You thrive in fast-paced environments, proactively drive improvements, and bring a strategic mindset to infrastructure design and operations. You've helped scale systems across multiple teams or business units and are trusted to set standards and influence technical direction across the engineering organization.

Responsibilities
As a Senior DevOps Engineer, you will lead the design, implementation, and optimization of cloud infrastructure and automation strategies. You'll collaborate cross-functionally to guide the delivery of production-ready systems that are reliable, secure, and scalable. Your responsibilities include:
Architecting complex infrastructure using Terraform (modular, reusable, and tested), and implementing advanced IaC principles (DRY, parameterization, GitOps).
Defining and driving DevOps strategy across environments, including staging, production, blue/green deployments, auto-scaling, and rollback mechanisms.
Designing scalable CI/CD pipelines using GitHub Actions, CodePipeline, or other tools for multiple teams and services.
Leading incident management and root cause analysis with thorough documentation and remediation planning.
Collaborating with engineering, security, and product teams to integrate DevSecOps best practices.
Monitoring infrastructure performance and proactively identifying areas for cost optimization, resilience, and scalability.
Coaching and mentoring junior and mid-level DevOps engineers, conducting code and design reviews, and establishing best practices.
Staying current with the latest industry trends and tools, and recommending adoption where relevant.

Qualifications
Required Qualifications:
6+ years of relevant experience in DevOps, SRE, or Cloud Infrastructure roles.
Proven expertise in AWS (multi-account architecture, IAM strategy, ECS/Fargate, EC2, VPC, ALB/NLB, RDS, S3, CloudWatch, Secrets Manager, etc.).
Advanced proficiency in Terraform, including custom providers, modules, remote state, and secure secret management.
Strong experience in building multi-stage CI/CD pipelines, release workflows, and integration with testing and security gates.
Proficiency in containerization (Docker) and orchestration (ECS, EKS, or Kubernetes).
Excellent scripting and automation skills using Python, Bash, or Go.
Knowledge of observability tooling (CloudWatch, Prometheus, New Relic, Grafana, Sentry.io) and proactive alerting mechanisms.
Strong grasp of networking fundamentals, security best practices, IAM policies, and cloud compliance standards.
Experience with infrastructure testing and compliance-as-code using tools like Terratest, InSpec, or tfsec.
Familiarity with Zero Trust, least privilege, and modern cloud security models.
Exceptional troubleshooting skills across systems, applications, and networks.
Ability to document architecture, decisions, and playbooks clearly and concisely.
Experience mentoring and technically leading other DevOps team members.
Bachelor's or Master's degree in Computer Science or a related field.

Preferred Qualifications:
Additional scripting experience is a strong plus.
AWS certifications (e.g., Solutions Architect, DevOps Engineer – Professional).
Experience in a regulated environment (e.g., SOC2, HIPAA, ISO 27001).
Knowledge of service mesh, API gateways, or hybrid/multi-cloud deployments.
Contributions to open-source DevOps/IaC tools or internal reusable libraries.
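The blue/green deployment and rollback mechanisms this role covers can be sketched as a simple promotion gate: traffic moves to the newly deployed "green" environment only while its health checks stay within an error budget, and otherwise falls back to the proven "blue" environment. This is a minimal illustrative sketch under assumed inputs; the function name, probe format, and 1% threshold are hypothetical, not any specific platform's API.

```python
# Blue/green cutover gate sketch: promote green only if its post-deploy
# health probes stay within an error budget; otherwise keep (or roll back
# to) blue. All names and the default threshold are illustrative.

def choose_active(probe_results: list, error_budget: float = 0.01) -> str:
    """Decide which environment receives traffic after a green deploy.

    probe_results: health-check outcomes for green (True = passed).
    error_budget: maximum tolerated failure rate before rolling back.
    """
    if not probe_results:
        return "blue"                       # no data: fail safe, keep blue
    failures = probe_results.count(False)
    error_rate = failures / len(probe_results)
    return "green" if error_rate <= error_budget else "blue"

print(choose_active([True] * 200))                  # healthy green → "green"
print(choose_active([True] * 95 + [False] * 5))     # 5% errors → "blue"
```

In a real pipeline this decision step would sit between the deploy and traffic-switch stages, fed by metrics from an observability stack such as the CloudWatch or Prometheus tooling the listing mentions.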

Posted 1 month ago

Apply