
638 EKS Jobs - Page 16

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking an Apigee Administrator to join our team in Bangalore, Karnataka (IN-KA), India (IN).

Role: Apigee Administrator

Responsibilities:
1. Design and develop API proxies, implement security policies (e.g., OAuth, JWT), and create API product bundles.
2. Support users and administer Apigee OPDK; integrate APIs with various systems and backend services.
3. Participate in and contribute to the migration to Apigee X; plan and execute API migrations between different Apigee environments.
4. Automate platform processes.
5. Implement security measures such as authentication, authorization, and threat mitigation, and manage traffic and performance optimization.
6. Provide on-call support: identify and resolve API-related issues, support developers and consumers, and ensure high availability.
7. Implement architecture, including tests, CI/CD, monitoring, alerting, resilience, SLAs, and documentation.
8. Collaborate with development teams, product owners, and other stakeholders to ensure seamless API integration and adoption.

Requirements:
1. Bachelor's degree (Computer Science/Information Technology/Electronics & Communication/Information Science/Telecommunications).
2. 7+ years of work experience in the IT industry and strong knowledge of implementing/designing solutions using software application technologies.
3. Good knowledge and experience of the Apigee OPDK platform and its troubleshooting.
4. Experience in AWS administration (EC2, Route 53, CloudTrail, AWS WAF, CloudWatch, EKS, AWS Systems Manager).
5. Good hands-on experience in Red Hat Linux administration and shell scripting.
6. Strong understanding of API design principles and best practices.
7. Kubernetes administration, GitHub, Cassandra administration, Google Cloud.
8. Familiarity with managing Dynatrace.

Desirable: Jenkins; API proxy development; Kafka administration based on SaaS (Confluent); knowledge of Azure; ELK.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at .

NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click .
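For a concrete flavor of the AWS administration duties listed above, here is a minimal boto3 sketch that reports the status of each EKS cluster in an account. The region is an assumption, and credentials come from the standard AWS configuration; this is an illustration, not part of the posting.

```python
# Minimal sketch: report EKS cluster status with boto3.
# Assumes AWS credentials are configured; the region is a placeholder.
import boto3

eks = boto3.client("eks", region_name="ap-south-1")

for name in eks.list_clusters()["clusters"]:
    cluster = eks.describe_cluster(name=name)["cluster"]
    print(f"{name}: status={cluster['status']}, version={cluster['version']}")
```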

Posted 1 month ago

Apply

10.0 - 12.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity - VP of Engineering.

What You'll Contribute:
- Secure the design of the next-generation FICO Platform, its capabilities, and services.
- Provide full-stack security architecture design, from cloud infrastructure to application features, for FICO customers.
- Work closely with product managers, architects, and developers on implementing security controls within products.
- Develop and maintain Kyverno policies for enforcing security controls in Kubernetes environments.
- Collaborate with platform, DevOps, and application teams to define and implement policy-as-code best practices.
- Contribute to automation efforts for policy deployment, validation, and reporting.
- Stay current with emerging threats, Kubernetes security features, and cloud-native security tools.
- Define required controls and capabilities for the protection of FICO products and environments.
- Build and validate declarative threat models in a continuous and automated manner.
- Prepare the product for compliance attestations and ensure adherence to security best practices.

What We're Seeking:
- 10+ years of experience in architecture, security reviews, and requirement definition for complex product environments.
- Strong knowledge of and hands-on experience with Kyverno; OPA/Gatekeeper experience is optional but a plus.
- Familiarity with industry regulations, frameworks, and practices (e.g., PCI, ISO 27001, NIST).
- Experience in threat modeling, code reviews, security testing, vulnerability detection, attacker exploit techniques, and methods for their remediation.
- Hands-on experience with programming languages such as Java, Python, etc.
- Experience in deploying services and securing cloud environments, preferably AWS.
- Experience deploying and securing containers, container orchestration, and mesh technologies (such as EKS, K8s, Istio).
- Experience with Crossplane to manage cloud infrastructure declaratively via Kubernetes.
- Certifications in Kubernetes or cloud security (e.g., CKA, CKAD, CISSP) are desirable.
- Proficiency with CI/CD tools (e.g., GitHub Actions, GitLab CI, Jenkins, Crossplane).
- Ability to independently drive transformational security projects across teams and organizations.
- Experience with securing event streaming platforms like Kafka or Pulsar.
- Experience with ML/AI model security and adversarial techniques within the analytics domains.
- Hands-on experience with IaC (such as Terraform, CloudFormation, Helm) and with CI/CD pipelines (such as GitHub, Jenkins, JFrog).

Our Offer to You:
- An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others.
- The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
- Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
- An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO: At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today - Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:
- Credit Scoring - FICO Scores are used by 90 of the top 100 US lenders.
- Fraud Detection and Security - 4 billion payment cards globally are protected by FICO fraud systems.
- Lending - 3/4 of US mortgages are approved using the FICO Score.

Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people - just like you - who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfill your potential at .

FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique, and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at .
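To illustrate the Kyverno policy-as-code requirement above, here is a minimal sketch that emits a Kyverno ClusterPolicy as YAML from Python. The "require-team-label" rule is a hypothetical example for illustration, not FICO's actual control set.

```python
# Minimal policy-as-code sketch: emit a Kyverno ClusterPolicy as YAML.
# The "require-team-label" rule is illustrative only.
import yaml  # pip install pyyaml

policy = {
    "apiVersion": "kyverno.io/v1",
    "kind": "ClusterPolicy",
    "metadata": {"name": "require-team-label"},
    "spec": {
        "validationFailureAction": "Enforce",
        "rules": [{
            "name": "check-team-label",
            "match": {"any": [{"resources": {"kinds": ["Pod"]}}]},
            "validate": {
                "message": "Pods must carry a 'team' label.",
                # "?*" requires the label to exist with a non-empty value.
                "pattern": {"metadata": {"labels": {"team": "?*"}}},
            },
        }],
    },
}

print(yaml.safe_dump(policy, sort_keys=False))
```

Generating policies this way lets the same definitions be validated and version-controlled alongside application code before being applied to a cluster.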

Posted 1 month ago

Apply

0.0 - 1.0 years

10 - 14 Lacs

Pune

Work from Office

Role: AWS Cloud Engineer

We are looking for an AWS Cloud Engineer with a strong DevOps and scripting background who can support and optimize cloud infrastructure, automate deployments, and collaborate cross-functionally in a fast-paced fintech environment.

Core Responsibilities:
- Design and implement secure, scalable network and infrastructure solutions using AWS.
- Deploy applications using EC2, Lambda, Fargate, ECS, and ECR.
- Automate infrastructure using CloudFormation, scripting (Python/Bash), and the AWS SDK.
- Manage and optimize relational (PostgreSQL, SQL) and NoSQL (MongoDB) databases.
- Set up and maintain monitoring using Grafana, Prometheus, and AWS CloudWatch.
- Perform cost optimization across the AWS infrastructure and execute savings strategies.
- Maintain high availability, security, and disaster recovery (DR) mechanisms.
- Implement Kubernetes (EKS), containers, and CI/CD pipelines.
- Proactively monitor system performance and troubleshoot production issues.
- Coordinate across product and development teams for deployments and DevOps planning.

Must-Have Skills:
- 4+ years of hands-on experience with the AWS cloud platform.
- Strong proficiency in Linux/Unix systems, Nginx, Apache, and Tomcat.
- Proficiency in Python and Shell/Bash scripting for automation.
- Strong knowledge of SQL, PostgreSQL, and MongoDB.
- Familiarity with CloudFormation, IAM, VPC, S3, and ECS/EKS/ECR.
- Monitoring experience with Prometheus, Grafana, and CloudWatch.
- Previous exposure to AWS cost optimization strategies.
- Excellent communication skills, self-driven, and a proactive attitude.

Nice-to-Have Skills:
- Experience with Google Cloud Platform (GCP).
- Experience in container orchestration with Kubernetes (EKS preferred).
- Background in working with startups or fast-growing product environments.
- Knowledge of disaster recovery strategies and high availability setups.

(ref:hirist.tech)
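As an illustration of the monitoring side of this role, here is a minimal boto3 sketch that pulls 24 hours of average EC2 CPU utilization from CloudWatch. The region and instance ID are placeholders, not values from the posting.

```python
# Minimal monitoring sketch: fetch 24h of average EC2 CPU from CloudWatch.
# The region and instance ID are placeholders; swap in real ones.
from datetime import datetime, timedelta, timezone
import boto3

cw = boto3.client("cloudwatch", region_name="ap-south-1")
now = datetime.now(timezone.utc)

stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=now - timedelta(hours=24),
    EndTime=now,
    Period=3600,          # one datapoint per hour
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```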

Posted 1 month ago

Apply

2.0 - 7.0 years

13 - 17 Lacs

Chennai

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 1+ year of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field.
- 2+ years of academic or work experience with a programming language such as C, C++, Java, Python, etc.

Job Title: MLOps Engineer - ML Platform
Hiring Title: flexible based on candidate experience (Staff Engineer level preferred)

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and in AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML & Data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure, and your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
- Architect, develop, and maintain the ML platform to support training and inference of ML models.
- Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and in AWS Cloud.
- Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
- Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
- Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform.
- Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
- Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform.
- Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
- Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
- Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What we are looking for:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
- Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
- Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
- In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
- Familiarity with containerization technologies such as Docker and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
- Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS.
- Experience with AWS logging and monitoring tools.
- Strong problem-solving skills and the ability to troubleshoot complex technical issues.
- Excellent communication and collaboration skills to work effectively within a cross-functional team.

We would love to see:
- Experience with training and deploying models.
- Knowledge of ML model optimization techniques and memory management on GPUs.
- Familiarity with ML-specific data storage and retrieval systems.
- Understanding of security and compliance requirements in ML infrastructure.
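To make the Prometheus/Grafana monitoring responsibility concrete, here is a minimal sketch that exposes a custom GPU-utilization gauge for Prometheus to scrape. The metric name and the nvidia-smi polling approach are illustrative; production DGX setups typically rely on NVIDIA's DCGM exporter instead.

```python
# Minimal sketch: expose a GPU-utilization gauge on :9100/metrics.
# Gauge name and polling approach are illustrative only.
import subprocess
import time

from prometheus_client import Gauge, start_http_server

GPU_UTIL = Gauge("dgx_gpu_utilization_percent", "GPU utilization", ["gpu"])

def poll() -> None:
    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=index,utilization.gpu",
         "--format=csv,noheader,nounits"], text=True)
    for line in out.strip().splitlines():
        index, util = (field.strip() for field in line.split(","))
        GPU_UTIL.labels(gpu=index).set(float(util))

if __name__ == "__main__":
    start_http_server(9100)   # Prometheus scrapes this endpoint
    while True:
        poll()
        time.sleep(15)
```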

Posted 1 month ago

Apply

10.0 - 15.0 years

14 - 19 Lacs

Hyderabad

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: Qualcomm is seeking a seasoned Staff Engineer, DevOps to join our central software engineering team. In this role, you will lead the design, development, and deployment of scalable cloud-native and hybrid infrastructure solutions, modernize legacy systems, and drive DevOps best practices across products. This is a hands-on architectural role, ideal for someone who thrives in a fast-paced, innovation-driven environment and is passionate about building resilient, secure, and efficient platforms.

Key Responsibilities:
- Architect and implement enterprise-grade AWS cloud solutions for Qualcomm's software platforms.
- Design and implement CI/CD pipelines using Jenkins, GitHub Actions, and Terraform to enable rapid and reliable software delivery.
- Develop reusable Terraform modules and automation scripts to support scalable infrastructure provisioning.
- Drive observability initiatives using Prometheus, Grafana, Fluentd, OpenTelemetry, and Splunk to ensure system reliability and performance.
- Collaborate with software development teams to embed DevOps practices into the SDLC and ensure seamless deployment and operations.
- Provide mentorship and technical leadership to junior engineers and cross-functional teams.
- Manage hybrid environments, including on-prem infrastructure and Kubernetes workloads supporting both Linux and Windows.
- Lead incident response, root cause analysis, and continuous improvement of SLIs for mission-critical systems.
- Drive toil reduction and automation using scripting or programming languages such as PowerShell, Bash, Python, or Go.
- Independently drive and implement DevOps/cloud initiatives in collaboration with key stakeholders.
- Understand software development designs and compilation/deployment flows for .NET, Angular, and Java-based applications to align infrastructure and CI/CD strategies with application architecture.

Required Qualifications:
- 10+ years of experience in IT or software development, with at least 5 years in cloud architecture and DevOps roles.
- Strong foundational knowledge of infrastructure components such as networking, servers, operating systems, DNS, Active Directory, and LDAP.
- Deep expertise in AWS services including EKS, RDS, MSK, CloudFront, S3, and OpenSearch.
- Hands-on experience with Kubernetes, Docker, containerd, and microservices orchestration.
- Proficiency in Infrastructure as Code using Terraform and configuration management tools like Ansible and Chef.
- Experience with observability tools and telemetry pipelines (Grafana, Prometheus, Fluentd, OpenTelemetry, Splunk).
- Experience with agent-based monitoring tools such as SCOM and Datadog.
- Solid scripting skills in Python, Bash, and PowerShell.
- Familiarity with enterprise-grade web services (IIS, Apache, Nginx) and load balancing solutions.
- Excellent communication and leadership skills, with experience mentoring and collaborating across teams.

Preferred Qualifications:
- Experience with API gateway solutions for API security and management.
- Knowledge of RDBMS (preferably MSSQL/PostgreSQL) is good to have.
- Proficiency in SRE principles including SLIs, SLOs, SLAs, error budgets, chaos engineering, and toil reduction.
- Experience in core software development (e.g., Java, .NET).
- Exposure to Azure cloud and hybrid cloud strategies.
- Bachelor's degree in Computer Science or a related field.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience.
- 2+ years of work experience with a programming language such as C, C++, Java, Python, etc.
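As a small example of the toil-reduction scripting this role calls for, here is a hedged boto3 sketch that finds EC2 instances missing an "owner" tag and tags them for follow-up. The tag convention and region are assumptions for illustration.

```python
# Toil-reduction sketch: tag EC2 instances that lack an "owner" tag.
# The tag key/value convention and region are hypothetical.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

untagged = []
for page in ec2.get_paginator("describe_instances").paginate():
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"] for t in inst.get("Tags", [])}
            if "owner" not in tags:
                untagged.append(inst["InstanceId"])

if untagged:
    ec2.create_tags(Resources=untagged,
                    Tags=[{"Key": "owner", "Value": "unassigned"}])
    print(f"Tagged {len(untagged)} instance(s) for follow-up.")
```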

Posted 1 month ago

Apply

1.0 - 5.0 years

12 - 16 Lacs

Chennai

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field.

Job Title: MLOps Engineer - ML Platform
Hiring Title: flexible based on candidate experience (Staff Engineer level preferred)

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and in AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML & Data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure, and your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
- Architect, develop, and maintain the ML platform to support training and inference of ML models.
- Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and in AWS Cloud.
- Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
- Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
- Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform.
- Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
- Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform.
- Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
- Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
- Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What we are looking for:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
- Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
- Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
- In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
- Familiarity with containerization technologies such as Docker and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
- Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS.
- Experience with AWS logging and monitoring tools.
- Strong problem-solving skills and the ability to troubleshoot complex technical issues.
- Excellent communication and collaboration skills to work effectively within a cross-functional team.

We would love to see:
- Experience with training and deploying models.
- Knowledge of ML model optimization techniques and memory management on GPUs.
- Familiarity with ML-specific data storage and retrieval systems.
- Understanding of security and compliance requirements in ML infrastructure.
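To ground the Kubernetes-platform requirement, here is a minimal sketch using the official Kubernetes Python client to flag pods that are not in the Running phase. The "ml-training" namespace is a hypothetical example.

```python
# Minimal sketch: flag non-Running pods in a training namespace.
# The namespace name is hypothetical; install with: pip install kubernetes
from kubernetes import client, config

config.load_kube_config()              # or load_incluster_config() in-cluster
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod(namespace="ml-training").items:
    if pod.status.phase != "Running":
        print(f"{pod.metadata.name}: {pod.status.phase}")
```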

Posted 1 month ago

Apply

4.0 - 9.0 years

12 - 17 Lacs

Chennai

Work from Office

Job Area: Engineering Group, Engineering Group > Software Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.

Minimum Qualifications:
- Bachelor's degree in Engineering, Information Systems, Computer Science, or related field and 4+ years of Software Engineering or related work experience; OR Master's degree in Engineering, Information Systems, Computer Science, or related field and 3+ years of Software Engineering or related work experience; OR PhD in Engineering, Information Systems, Computer Science, or related field and 2+ years of Software Engineering or related work experience.
- 2+ years of work experience with a programming language such as C, C++, Java, Python, etc.

Job Title: MLOps Engineer - ML Platform
Hiring Title: flexible based on candidate experience (Staff Engineer level preferred)

We are seeking a highly skilled and experienced MLOps Engineer to join our team and contribute to the development and maintenance of our ML platform, both on premises and in AWS Cloud. As an MLOps Engineer, you will be responsible for architecting, deploying, and optimizing the ML & Data platform that supports training of machine learning models using NVIDIA DGX clusters and the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana. You will work closely with cross-functional teams, including data scientists, software engineers, and infrastructure specialists. Your expertise in AWS services such as EKS, EC2, VPC, IAM, S3, and EFS will be crucial in ensuring the smooth operation and scalability of our ML infrastructure, and your expertise in MLOps, DevOps, and GPU clusters will be vital in enabling efficient training and deployment of ML models.

Responsibilities will include:
- Architect, develop, and maintain the ML platform to support training and inference of ML models.
- Design and implement scalable and reliable infrastructure solutions for NVIDIA clusters, both on premises and in AWS Cloud.
- Collaborate with data scientists and software engineers to define requirements and ensure seamless integration of ML and data workflows into the platform.
- Optimize the platform's performance and scalability, considering factors such as GPU resource utilization, data ingestion, model training, and deployment.
- Monitor and troubleshoot system performance, identifying and resolving issues to ensure the availability and reliability of the ML platform.
- Implement and maintain CI/CD pipelines for automated model training, evaluation, and deployment using technologies like ArgoCD and Argo Workflows.
- Implement and maintain a monitoring stack using Prometheus and Grafana to ensure the health and performance of the platform.
- Manage AWS services including EKS, EC2, VPC, IAM, S3, and EFS to support the platform.
- Implement logging and monitoring solutions using AWS CloudWatch and other relevant tools.
- Stay updated with the latest advancements in MLOps, distributed computing, and GPU acceleration technologies, and proactively propose improvements to enhance the ML platform.

What we are looking for:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Proven experience as an MLOps Engineer or in a similar role, with a focus on large-scale ML and/or data infrastructure and GPU clusters.
- Strong expertise in configuring and optimizing NVIDIA DGX clusters for deep learning workloads.
- Proficiency with the Kubernetes platform, including technologies like Helm, ArgoCD, Argo Workflows, Prometheus, and Grafana.
- Solid programming skills in languages like Python and Go, and experience with relevant ML frameworks (e.g., TensorFlow, PyTorch).
- In-depth understanding of distributed computing, parallel computing, and GPU acceleration techniques.
- Familiarity with containerization technologies such as Docker and orchestration tools.
- Experience with CI/CD pipelines and automation tools for ML workflows (e.g., Jenkins, GitHub, ArgoCD).
- Experience with AWS services such as EKS, EC2, VPC, IAM, S3, and EFS.
- Experience with AWS logging and monitoring tools.
- Strong problem-solving skills and the ability to troubleshoot complex technical issues.
- Excellent communication and collaboration skills to work effectively within a cross-functional team.

We would love to see:
- Experience with training and deploying models.
- Knowledge of ML model optimization techniques and memory management on GPUs.
- Familiarity with ML-specific data storage and retrieval systems.
- Understanding of security and compliance requirements in ML infrastructure.

Posted 1 month ago

Apply

4.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on the Digital Platform. To help us build and run the platform for digital applications, we are now looking for an experienced Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/Service Mesh/Tetrate, Terraform, Helm Charts, Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you.

Objectives of this Role:
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Testing and examining code written by others and analyzing results
- Identifying technical problems and developing software updates and fixes
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Monitoring the systems and setting up required tools

Daily and Monthly Responsibilities:
- Deploy updates and fixes
- Provide Level 3 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance

Skills and Qualifications:
- B.Tech in Computer Science, Engineering, or a relevant field
- Minimum 7-10 years of experience as a DevOps Engineer or in a similar software engineering role
- Proficient with Git and Git workflows
- Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS
- Problem-solving attitude
- Collaborative team spirit
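For a taste of the Terraform and CI/CD automation mentioned above, here is a minimal Python wrapper around `terraform plan`, suitable as a pipeline gate. The working directory path is a placeholder.

```python
# Deployment-automation sketch: run a guarded `terraform plan` from Python,
# surfacing drift as a distinct result. The workdir path is a placeholder.
import subprocess
import sys

def terraform_plan(workdir: str) -> int:
    subprocess.run(["terraform", "init", "-input=false"],
                   cwd=workdir, check=True)
    # -detailed-exitcode: 0 = no changes, 2 = changes present, 1 = error
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode"],
        cwd=workdir)
    return result.returncode

if __name__ == "__main__":
    code = terraform_plan("infra/eks")
    if code == 2:
        print("Drift detected: review the plan before applying.")
    sys.exit(0 if code in (0, 2) else 1)
```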

Posted 1 month ago

Apply

5.0 - 8.0 years

1 - 5 Lacs

Bengaluru

Work from Office

At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on the Digital Platform. To help us build and run the platform for digital applications, we are now looking for an experienced Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/Service Mesh/Tetrate, Terraform, Helm Charts, Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you.

Objectives of this Role:
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Testing and examining code written by others and analyzing results
- Identifying technical problems and developing software updates and fixes
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Monitoring the systems and setting up required tools

Daily and Monthly Responsibilities:
- Deploy updates and fixes
- Provide Level 3 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance

Skills and Qualifications:
- B.Tech in Computer Science, Engineering, or a relevant field
- Minimum 5-8 years of experience as a DevOps Engineer or in a similar software engineering role
- Proficient with Git and Git workflows
- Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS
- Problem-solving attitude
- Collaborative team spirit

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on the Digital Platform. To help us build and run the platform for digital applications, we are now looking for an experienced Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/Service Mesh/Tetrate, Terraform, Helm Charts, Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you.

Objectives of this Role:
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Testing and examining code written by others and analyzing results
- Identifying technical problems and developing software updates and fixes
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Monitoring the systems and setting up required tools

Daily and Monthly Responsibilities:
- Deploy updates and fixes
- Provide Level 3 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance

Skills and Qualifications:
- BSc in Computer Science, Engineering, or a relevant field
- Minimum 5 years of experience as a DevOps Engineer or in a similar software engineering role
- Proficient with Git and Git workflows
- Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS
- Problem-solving attitude
- Collaborative team spirit

Posted 1 month ago

Apply

5.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Work from Office

At Kotak Mahindra Bank, customer experience is at the forefront of everything we do on the Digital Platform. To help us build and run the platform for digital applications, we are now looking for an experienced Sr. DevOps Engineer. They will be responsible for deploying product updates, identifying production issues, and implementing integrations that meet our customers' needs. If you have a solid background in software engineering and are familiar with AWS EKS, Istio/Service Mesh/Tetrate, Terraform, Helm Charts, Kong API Gateway, Azure DevOps, Spring Boot, Ansible, and Kafka/MongoDB, we'd love to speak with you.

Objectives of this Role:
- Building and setting up new development tools and infrastructure
- Understanding the needs of stakeholders and conveying this to developers
- Working on ways to automate and improve development and release processes
- Testing and examining code written by others and analyzing results
- Identifying technical problems and developing software updates and fixes
- Working with software developers and software engineers to ensure that development follows established processes and works as intended
- Monitoring the systems and setting up required tools

Daily and Monthly Responsibilities:
- Deploy updates and fixes
- Provide Level 3 technical support
- Build tools to reduce occurrences of errors and improve customer experience
- Develop software to integrate with internal back-end systems
- Perform root cause analysis for production errors
- Investigate and resolve technical issues
- Develop scripts to automate visualization
- Design procedures for system troubleshooting and maintenance

Skills and Qualifications:
- B.Tech in Computer Science, Engineering, or a relevant field
- Minimum 5-8 years of experience as a DevOps Engineer or in a similar software engineering role
- Proficient with Git and Git workflows
- Good knowledge of Kubernetes (EKS), Terraform, CI/CD, and AWS
- Problem-solving attitude
- Collaborative team spirit

Posted 1 month ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microservices and Lightweight Architecture
Good-to-have skills: NA
Minimum 15 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Modernization Lead:
- Lead modernization initiatives by re-architecting legacy systems using Java, applying modern software design principles and AWS-based architecture patterns.
- Drive end-to-end modernization efforts, including re-architecture, refactoring of legacy systems, and cloud migration strategies.
- Provide architectural guidance and mentorship to engineering teams, fostering best practices in code quality, design, testing, and deployment.
- Apply Domain-Driven Design (DDD) principles to structure systems aligned with core business domains, ensuring modular and maintainable solutions.
- Design and implement scalable, decoupled services leveraging AWS services such as EKS, Lambda, API Gateway, SQS/SNS, and Oracle/RDS.
- Drive system decomposition, refactoring, and migration planning with a clear understanding of system interdependencies and data flows.
- Promote infrastructure-as-code, CI/CD automation, and observability practices to ensure system reliability, performance, and operational readiness.
- Proficient in architecting applications with the Java and AWS technology stack, microservices, and containers.

Qualification: 15 years of full-time education

Posted 1 month ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Mumbai

Work from Office

We are looking for an experienced DevOps Engineer (Level 2 & 3) to design, automate, and optimize cloud infrastructure. You will play a key role in CI/CD automation, cloud management, observability, and security, ensuring scalable and reliable systems.

Key Responsibilities:
- Design and manage AWS environments using Terraform/Ansible.
- Build and optimize deployment pipelines (Jenkins, ArgoCD, AWS CodePipeline).
- Deploy and maintain EKS and ECS clusters.
- Implement OpenTelemetry, Prometheus, and Grafana for logs, metrics, and tracing.
- Manage and scale cloud-native microservices efficiently.

Required Skills:
- Proven experience in DevOps, system administration, or software development.
- Strong knowledge of AWS.
- Programming languages: Python, Go, and Bash are good to have.
- Experience with IaC tools like Terraform and Ansible.
- Solid understanding of CI/CD tools (Jenkins, ArgoCD, AWS CodePipeline).
- Experience with containers and orchestration tools like Kubernetes (EKS).
- Understanding of the OpenTelemetry observability stack (logs, metrics, traces).

Good to Have:
- Experience with container orchestration platforms (e.g., EKS, ECS).
- Familiarity with serverless architecture and tools (e.g., AWS Lambda).
- Experience using monitoring tools like Datadog/New Relic, CloudWatch, and Prometheus/Grafana.
- Experience managing more than 20 cloud-native microservices.
- Previous experience working in a startup.

Education & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience).
- Years of relevant experience in DevOps or a similar role.

About Kissht: Kissht, a Great Place to Work certified organization, is a consumer-first credit app that is transforming the landscape of consumer credit. As one of the fastest-growing and most respected FinTech companies, Kissht is a pioneer in data and machine-based lending. With over 15 million customers, including 40% from tier 2 cities and beyond, we offer both short and long-term loans for personal consumption, business needs, and recurring expenses. Founded by Ranvir and Krishnan, alumni of IIT and IIM, and backed by renowned investors like Endiya Partners, the Brunei Investment Authority, and the Singapore Government, Kissht is synonymous with excellence in the industry. Join us and be a part of a dynamic, innovative company that is changing the future of financial technology.
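To illustrate the OpenTelemetry observability stack named above, here is a minimal tracing sketch that emits one span to the console. The service and span names are illustrative, and a real deployment would swap the console exporter for an OTLP exporter.

```python
# Minimal OpenTelemetry tracing sketch: emit one span to the console.
# Service/span names are illustrative; pip install opentelemetry-sdk
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import (BatchSpanProcessor,
                                            ConsoleSpanExporter)

provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("payments-service")  # hypothetical service name

with tracer.start_as_current_span("process-order") as span:
    span.set_attribute("order.id", "demo-123")
```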

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Hyderabad

Work from Office

We are seeking a Senior DevOps Engineer with 3-5 years of hands-on experience in cloud infrastructure and DevOps practices. The role involves designing, implementing, and maintaining AWS cloud infrastructure, managing containerized applications using Amazon EKS and Kubernetes, and developing CI/CD pipelines with Jenkins, Azure DevOps, and Argo CD. The ideal candidate will have expertise in Infrastructure as Code (IaC) tools such as Terraform or CloudFormation, strong scripting skills (e.g., Python, Bash), and a deep understanding of AWS services like EC2, S3, and RDS. Candidates with experience in the Financial Services Industry (FSI) or regulated environments are preferred. This is a full-time, 6-month on-site role in Bengaluru, with a requirement for immediate joiners or a notice period of 15 days.

Posted 1 month ago

Apply

4.0 - 5.0 years

10 - 20 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled and experienced DevOps Engineer to join our dynamic team for a 6-month contract. The ideal candidate will focus on infrastructure enhancement, containerization, and collaborative deployment while ensuring system health and mentoring team members. Responsibilities include leading the development and troubleshooting of infrastructure, driving containerization with Kubernetes, EKS, and GKE, monitoring system health, and deploying infrastructure on private cloud platforms. Expertise in Linux systems, CI/CD pipelines, and scripting/programming is essential, along with proficiency in tools like Terraform, Ansible, and Splunk/ELK. A passion for continuous learning, excellent communication skills, and the ability to work in a collaborative environment are critical.

Posted 1 month ago

Apply

7.0 - 10.0 years

9 - 12 Lacs

Hyderabad

Work from Office

We are seeking a Lead Software Engineer specializing in Java and Spring frameworks. The successful candidate will develop and maintain Java-based applications, manage GitLab CI/CD pipelines, handle AWS (EKS) deployments, perform code reviews, mentor junior engineers, and devise architectural designs for smaller features.

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 11 Lacs

Chennai

Work from Office

Cloud Migration Specialist - Chennai
Rates including mark-up: 170K/
No. of positions: 3
Mandatory skills: AWS and Azure cloud migrations; on-prem applications to AWS/Azure
Experience: 5-8 years

Position Overview: We are seeking a skilled Cloud Engineer with expertise in AWS and Azure cloud migrations. The ideal candidate will lead the migration of on-premises applications to AWS/Azure, optimize cloud infrastructure, and ensure seamless transitions.

Key Responsibilities:
- Plan and execute migrations of on-prem applications to AWS/Azure.
- Utilize or develop migration tools for large-scale application migrations.
- Design and implement automated application migrations.
- Collaborate with cross-functional teams to troubleshoot and resolve migration issues.

Qualifications:
- 5+ years of AWS/Azure cloud migration experience.
- Proficiency in cloud compute (EC2, EKS, Azure VM, AKS) and storage (S3, EBS, EFS, Azure Blob, Azure Managed Disks, Azure Files).
- Strong knowledge of AWS and Azure cloud services and migration tools.
- Expert in Terraform.
- AWS/Azure certification preferred.

Team Management - Resourcing:
- Forecast talent requirements as per current and future business needs.
- Hire adequate and right resources for the team.
- Train direct reportees to make right recruitment and selection decisions.

Talent Management:
- Ensure 100% compliance with Wipro's standards of adequate onboarding and training for team members to enhance capability and effectiveness.
- Build an internal talent pool of high-potential employees (HiPos) and ensure their career progression within the organization.
- Promote diversity in leadership positions.

Performance Management:
- Set goals for direct reportees, conduct timely performance reviews and appraisals, and give constructive feedback to direct reports.
- Ensure that organizational programs like Performance Nxt are well understood and that the team takes the opportunities presented by such programs, for themselves and the levels below them.

Employee Satisfaction and Engagement:
- Lead and drive engagement initiatives for the team.
- Track team satisfaction scores and identify initiatives to build engagement within the team.
- Proactively challenge the team with larger and enriching projects/initiatives for the organization or team.
- Exercise employee recognition and appreciation.

Deliver - performance parameters and measures:
1. Operations of the tower: SLA adherence; knowledge management; CSAT/customer experience; identification of risk issues and mitigation plans.
2. New projects: timely delivery; no unauthorised changes; no formal escalations.

Mandatory Skills: Cloud Azure Admin. Experience: 5-8 years.
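As a sketch of the assessment step that usually precedes such migrations, here is a minimal boto3 script that inventories S3 buckets and EBS volumes for sizing. This covers the AWS side only, and the region is a placeholder.

```python
# Migration-assessment sketch: inventory S3 buckets and EBS volumes so
# migration wave planning has current sizing data. Region is a placeholder.
import boto3

s3 = boto3.client("s3")
ec2 = boto3.client("ec2", region_name="ap-south-1")

print("S3 buckets:")
for bucket in s3.list_buckets()["Buckets"]:
    print(f"  {bucket['Name']} (created {bucket['CreationDate']:%Y-%m-%d})")

print("EBS volumes:")
for page in ec2.get_paginator("describe_volumes").paginate():
    for vol in page["Volumes"]:
        print(f"  {vol['VolumeId']}: {vol['Size']} GiB, state={vol['State']}")
```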

Posted 1 month ago

Apply

12.0 - 17.0 years

10 - 14 Lacs

Pune

Work from Office

BMC is looking for a Java Tech Lead, an innovator at heart, to join a team of highly skilled software developers responsible for BMC's Helix Capacity Optimization product. Here is how, through this exciting role, YOU will contribute to BMC's and your own success:

- Play a vital role in project design to ensure scalability, reliability, and performance are met.
- Design and develop new features, and maintain existing features by adding improvements and fixing defects in complex areas (using Java).
- Design and maintain robust AI/ML pipelines using Python and industry-standard ML frameworks.
- Collaborate closely with AI researchers and data scientists to implement, test, and deploy machine learning models in production.
- Leverage prompt engineering techniques and implement LLM optimization strategies to enhance response quality and performance.
- Assist in troubleshooting complex technical problems in development and production.
- Implement methodologies, processes, and tools.
- Initiate projects and ideas to improve the team's results.
- On-board and mentor new employees.

To ensure you're set up for success, you will bring the following skillset and experience:
- Backend development: FastAPI, RESTful APIs, Python
- Cloud infrastructure: AWS, EKS, Docker, Kubernetes
- AI/ML frameworks: LangChain, Scikit-learn, Bedrock, Hugging Face
- ML pipelines: Python, Pandas, NumPy, joblib
- DevOps & CI/CD: Git, Terraform (optional), Helm, GitHub Actions
- LLM expertise: prompt engineering, RAG (Retrieval-Augmented Generation), vector databases (e.g., FAISS, Pinecone)
- You have 12+ years of experience in Java backend development
- You have experience as a Backend Tech Lead
- You have experience with Spring, Swagger, and REST APIs
- You have worked with Spring Boot, Docker, and Kubernetes
- You are a self-learner who's passionate about problem solving and technology
- You are a team player with good communication skills in English (verbal and written)

Whilst these are nice to have, our team can help you develop the following skills:
- Public cloud (AWS, Azure, GCP)
- Python, Node.js, C/C++
- Automation frameworks such as Robot Framework
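To make the FastAPI backend requirement concrete, here is a minimal sketch with a health route and a prediction stub. The endpoint shapes and the naive forecast logic are illustrative only, not BMC's actual API.

```python
# Minimal FastAPI sketch: health route plus a prediction stub.
# Run with: uvicorn app:app --reload
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="capacity-forecast-demo")  # hypothetical service

class Sample(BaseModel):
    cpu_history: list[float]

@app.get("/healthz")
def healthz() -> dict:
    return {"status": "ok"}

@app.post("/predict")
def predict(sample: Sample) -> dict:
    # Stand-in for a real model: naive mean-based forecast.
    history = sample.cpu_history or [0.0]
    return {"forecast": sum(history) / len(history)}
```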

Posted 1 month ago

Apply

3.0 - 9.0 years

3 - 9 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities:
- Automate repetitive IT tasks; collaborate with cross-functional teams to gather requirements and build automation solutions for infrastructure provisioning, configuration management, and software deployment.
- Configuration management: design, implement, and maintain code, including Ansible playbooks, roles, and inventories, for automating system configurations and deployments and ensuring consistency.
- Ensure the scalability, reliability, and security of automated solutions.
- Troubleshoot and resolve issues related to automation scripts, infrastructure, and deployments.
- Perform infrastructure automation assessments and implementations, providing solutions to increase efficiency, repeatability, and consistency.
- DevOps: facilitate continuous integration and deployment (CI/CD).
- Orchestration: coordinate multiple automated tasks across systems.
- Develop and maintain clear, reusable, and version-controlled playbooks and scripts.
- Manage and optimize cloud infrastructure using Ansible and Terraform automation (AWS, Azure, GCP, etc.).
- Continuously improve automation workflows and practices to enhance speed, quality, and reliability.
- Ensure that infrastructure automation adheres to best practices, security standards, and regulatory requirements.
- Document and maintain processes, configurations, and changes in the automation infrastructure.
- Participate in design reviews, client requirements sessions, and development teams to deliver features and capabilities supporting automation initiatives.
- Collaborate with product owners, partners, testers, and other developers to understand, estimate, prioritize, and implement solutions.
- Design, code, debug, document, deploy, and maintain solutions in a highly efficient and effective manner.
- Participate in problem analysis, code review, and system design.
- Remain current on new technology and apply innovation to improve functionality.
- Collaborate closely with partners and team members to configure, improve, and maintain current applications.
- Work directly with users to resolve support issues within product team responsibilities.
- Monitor the health, performance, and usage of developed solutions.

What we expect of you: We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
- Bachelor's degree and 3 to 5 years of computer science, IT, or related field experience, OR a diploma and 7 to 9 years of computer science, IT, or related field experience.
- Deep hands-on experience with Ansible, including playbooks, roles, and modules.
- Proven experience as an Ansible Engineer or in a similar automation role.
- Scripting skills in Python, Bash, or other programming languages.
- Expertise in Terraform and CloudFormation for AWS infrastructure automation.
- Experience with other configuration management tools (e.g., Puppet, Chef).
- Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (GitHub Actions, CodePipeline, etc.).
- Familiarity with monitoring tools (e.g., Dynatrace, Prometheus, Nagios).
- Experience working in an Agile (SAFe, Scrum, Kanban) environment.

Preferred Qualifications:
- Experience with Kubernetes (EKS) and service mesh architectures.
- Knowledge of AWS Lambda and event-driven architectures.
- Familiarity with AWS CDK, Ansible, or Packer for cloud automation.
- Exposure to multi-cloud environments (Azure, GCP).
- Experience operating within a validated systems environment (FDA, European Agency for the Evaluation of Medicinal Products, Ministry of Health, etc.).

Professional Certifications (preferred):
- Red Hat Certified Specialist in Developing with Ansible Automation Platform
- Red Hat Certified Specialist in Managing Automation with Ansible Automation Platform
- Red Hat Certified System Administrator
- AWS Certified Solutions Architect - Associate or Professional
- AWS Certified DevOps Engineer - Professional
- Terraform Associate Certification

Soft Skills:
- Strong analytical and problem-solving skills.
- Effective communication and collaboration with cross-functional teams.
- Ability to work in a fast-paced, cloud-first environment.
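As an example of driving Ansible from code, here is a minimal sketch using the ansible-runner library. The private_data_dir layout and playbook name are assumptions for illustration.

```python
# Minimal sketch: run an Ansible playbook from Python with ansible-runner.
# pip install ansible-runner; paths below are hypothetical.
import ansible_runner

result = ansible_runner.run(
    private_data_dir="automation",   # expects project/ and inventory/ inside
    playbook="site.yml",
)

print(f"status={result.status} rc={result.rc}")
for event in result.events:
    if event.get("event") == "runner_on_failed":
        print("failed task:", event["event_data"].get("task"))
```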

Posted 1 month ago

Apply

8.0 - 12.0 years

30 - 45 Lacs

Hyderabad

Work from Office

Responsibilities:
- Design, implement, and maintain scalable cloud infrastructure, primarily on AWS, with some exposure to Azure.
- Manage and optimize CI/CD pipelines using Jenkins and Git-based version control systems (GitHub/GitLab).
- Build and maintain containerized applications using Docker, Kubernetes (including AWS EKS), and Helm.
- Automate infrastructure provisioning and configuration using Terraform and Ansible.
- Implement GitOps-style deployment processes using ArgoCD and similar tools.
- Ensure observability through monitoring and logging with Prometheus, Grafana, Datadog, Splunk, and Kibana.
- Develop automation scripts using Python, Shell, and Go.
- Implement and enforce security best practices in CI/CD pipelines and container orchestration environments using tools like Trivy, OWASP, SonarQube, Aqua Security, Cosign, and HashiCorp Vault.
- Support blue/green deployments and other advanced deployment strategies.

Required Qualifications:
- 8-12 years of professional experience in a DevOps, SRE, or related role.
- Strong hands-on experience with AWS (EC2, S3, IAM, EKS, RDS, Lambda, Secrets Manager).
- Solid experience with CI/CD tools (Jenkins, GitHub/GitLab, Maven).
- Proficiency with containerization and orchestration tools: Docker, Kubernetes, Helm.
- Experience with Infrastructure as Code tools: Terraform and Ansible.
- Proficiency in scripting languages: Python, Shell, and Go.
- Strong understanding of observability, monitoring, and logging frameworks.
- Familiarity with security practices and tools integrated into DevOps workflows.
- Excellent problem-solving and troubleshooting skills.

Certifications (good to have):
- AWS Certified DevOps Engineer
- Certified Kubernetes Administrator (CKA)
- Azure Administrator/Developer certifications
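To illustrate the HashiCorp Vault integration mentioned above, here is a minimal hvac sketch that reads a pipeline secret from the KV v2 engine. The Vault address, token source, and secret path are assumptions for illustration.

```python
# Minimal sketch: pull a pipeline secret from HashiCorp Vault (KV v2).
# pip install hvac; VAULT_ADDR/VAULT_TOKEN and the path are assumptions.
import os
import hvac

client = hvac.Client(url=os.environ["VAULT_ADDR"],
                     token=os.environ["VAULT_TOKEN"])
assert client.is_authenticated()

secret = client.secrets.kv.v2.read_secret_version(path="ci/registry")
creds = secret["data"]["data"]          # KV v2 nests payload under data.data
print("registry user:", creds.get("username"))
```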

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 15 Lacs

Bengaluru

Hybrid

Job Description: We are seeking a skilled and proactive AWS DevOps Engineer to join our growing team. You will be responsible for managing scalable infrastructure, automating deployments, monitoring environments, and ensuring optimal performance and security across cloud-based systems. If you're passionate about automation, cloud technologies, and system reliability, we'd love to hear from you!

Key Responsibilities:
- Design, manage, and optimize AWS infrastructure components (EC2, S3, RDS, IAM, VPC, Lambda, etc.).
- Develop and maintain automation scripts using Bash, Python, or PowerShell for operations, deployments, and monitoring.
- Implement monitoring and alerting systems using CloudWatch, Datadog, Prometheus, or similar tools.
- Automate infrastructure provisioning through Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or AWS CDK.
- Enforce security best practices (IAM policies, encryption, logging, patch management).
- Manage incident response, conduct root cause analysis, and resolve production issues efficiently.
- Support and enhance CI/CD pipelines using tools like Jenkins, AWS CodePipeline, and GitHub Actions.
- Monitor and optimize cost, performance, and resource utilization across environments.
- Ensure robust backup and disaster recovery strategies for cloud workloads.
- Participate in on-call rotations and respond to high-priority alerts when necessary.

Nice to Have:
- AWS certifications: AWS Certified SysOps Administrator or Solutions Architect.
- Experience with Kubernetes, ECS, or EKS.
- Familiarity with Ansible, Chef, or other configuration management tools.
- Exposure to multi-cloud or hybrid-cloud environments.
- Experience working in regulated environments (e.g., healthcare, finance, government).

Why Join Us?
- Opportunity to work with a high-performing, collaborative DevOps team.
- Exposure to cutting-edge cloud technologies.
- Dynamic work culture with a strong emphasis on innovation and continuous learning.

Interested candidates can apply here or send your resume to srinivas.appana@relevancelab.com.
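As a sketch of the monitoring-and-alerting automation described above, here is a boto3 call that creates a CloudWatch alarm on sustained high EC2 CPU. The instance ID, SNS topic ARN, region, and thresholds are placeholders.

```python
# Alerting sketch: create a CloudWatch alarm on high EC2 CPU with boto3.
# Instance ID, topic ARN, region, and thresholds are placeholders.
import boto3

cw = boto3.client("cloudwatch", region_name="ap-south-1")

cw.put_metric_alarm(
    AlarmName="demo-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=3,        # alarm after 15 minutes sustained
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```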

Posted 1 month ago

Apply

3.0 - 5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About VOIS India
In 2009, VOIS started operating in India and now has established global delivery centers in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, VOIS India supports global markets and group functions of Vodafone and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations, HR Operations and more.
About VOIS
VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VOIS has evolved into a global, multi-functional organization, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone.
Job Description
Job Title: Kubernetes Cloud DevOps Engineer
Role Purpose
To develop, build, implement and operate 24x7 public cloud infrastructure services, mainly on GCP, and technology solutions for internal Vodafone applications and customers. To design, plan and implement a growing set of public cloud platforms and solutions used to provide mission-critical infrastructure services to Vodafone internal customers. To constantly analyze, optimize, migrate and transform the global Vodafone legacy IT infrastructure environment into cloud-ready and cloud-native solutions, and to provide software-related operations support, including managing level-two and level-three incident and problem management.
Roles and Responsibilities
Manage project-driven integration and day-to-day administration of cloud solutions.
Understand the Helm chart provided by the vendor or customer.
Modify the Helm chart based on the requirement and the environment (dev, test, and prod); a small values-patching sketch follows this listing.
Deploy applications on Kubernetes using Helm charts or manifest files, along with CD tools such as ArgoCD, GoCD or Jenkins.
Troubleshoot application deployment issues.
Build Dockerfiles and container images for frontend and backend applications.
Provide solutioning in the DevOps domain.
Core Competencies, Knowledge, and Experience
Profound cloud technology, network, security, and platform expertise (AWS).
Excellent working experience with DevOps tools: Git, Terraform, EKS (Elastic Kubernetes Service).
Excellent knowledge of Helm charts and application deployment using Helm charts and manifests.
Excellent knowledge of Docker, Dockerfiles, Source-to-Image, and continuous integration using CI tools.
Good understanding of AWS cloud services such as VPC, EC2, ECS, S3, EBS, Glacier, ELB and Elastic IPs.
Must-Have Qualifications
Adept in ITIL, SOX and security regulations: proficiency in ITIL (Information Technology Infrastructure Library), SOX (Sarbanes-Oxley Act) compliance, and various security regulations is essential.
Three to five years of work experience: demonstrated experience in programming and/or systems analysis with practical application of agile frameworks.
Experience with web applications and web hosting: proven skills in developing and hosting web applications.
DevOps in a cloud environment: familiarity with DevOps concepts in a cloud-based setting.
Managing critical environments: hands-on experience in managing environments that are highly critical to business operations.
GCP Cloud Engineer / GCP Professional Cloud Architect certification: preferred certification along with relevant experience.
Telecommunications industry experience: professional experience and knowledge in the telecommunications sector is preferred.
VOIS Equal Opportunity Employer Commitment India
VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees' growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM, and 10th Overall Best Workplace in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills!
Apply now, and we'll be in touch!
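The Helm-modification duties in this listing usually boil down to overlaying environment-specific values onto a vendor-supplied values.yaml. The sketch below shows one minimal way to do that in Python with PyYAML before running helm upgrade; the file names, keys and override values are all assumptions for illustration.

```python
# Hypothetical helper: overlay per-environment values onto a base Helm
# values.yaml before `helm upgrade`. Paths and keys are illustrative only.
import yaml  # PyYAML

ENV_OVERRIDES = {
    "dev":  {"replicaCount": 1, "resources": {"requests": {"cpu": "100m"}}},
    "test": {"replicaCount": 2, "resources": {"requests": {"cpu": "250m"}}},
    "prod": {"replicaCount": 3, "resources": {"requests": {"cpu": "500m"}}},
}

def render_values(base_path: str, env: str, out_path: str) -> None:
    with open(base_path) as f:
        values = yaml.safe_load(f) or {}
    values.update(ENV_OVERRIDES[env])  # shallow merge; enough for this sketch
    with open(out_path, "w") as f:
        yaml.safe_dump(values, f, sort_keys=False)

render_values("values.yaml", "prod", "values-prod.yaml")
# then, for example: helm upgrade --install myapp ./chart -f values-prod.yaml
```

In practice teams often keep one values-&lt;env&gt;.yaml per environment in Git and let a CD tool such as ArgoCD (mentioned in the listing) reconcile the rendered state; the script above is just the smallest self-contained illustration of the idea.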

Posted 1 month ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work?
At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onwards, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Lead Consultant - Cloud Engineer!
In this role, you will be responsible for designing, provisioning, and securing scalable cloud infrastructure to support AI/ML and Generative AI workloads. A key focus will be ensuring high availability, cost efficiency, and performance optimization of infrastructure through best practices in architecture and automation.
Responsibilities
Design and implement secure VPC architecture, subnets, NAT gateways, and route tables.
Build and maintain IaC modules for repeatable infrastructure provisioning.
Build CI/CD pipelines that support secure, auto-scalable AI deployments using GitHub Actions, AWS CodePipeline, and Lambda triggers.
Monitor and tune infrastructure health using AWS CloudWatch, GuardDuty, and custom alerting.
Track and optimize cloud spend using AWS Cost Explorer, Trusted Advisor, and usage dashboards (see the cost-report sketch after this listing).
Deploy and manage cloud-native services including SageMaker, Lambda, ECR, API Gateway, etc.
Implement IAM policies, Secrets Manager, and KMS encryption for secure deployments.
Enable logging and monitoring using CloudWatch, and configure alerts and dashboards.
Set up and manage CloudTrail, GuardDuty, and AWS Config for audit and security compliance.
Assist with cost optimization strategies, including usage analysis and budget alerting.
Support multi-cloud or hybrid integration patterns (e.g., data exchange between AWS and Azure/GCP).
Collaborate with MLOps and Data Science teams to translate ML/GenAI requirements into production-grade, resilient AWS environments.
Maintain multi-cloud compatibility as needed (e.g., data egress readiness, common abstraction layers).
Engage in the design, development, and maintenance of data pipelines for various AI use cases.
Actively contribute to key deliverables as part of an agile development team.
Collaborate with others to source, analyse, test, and deploy data processes.
Qualifications we seek in you!
Minimum Qualifications
AWS infrastructure experience in production environments.
Degree/qualification in Computer Science or a related field, or equivalent work experience.
Proficiency in Terraform, AWS CLI, and Python or Bash scripting.
Strong knowledge of IAM, VPC, ECS/EKS, Lambda, and serverless computing.
Experience supporting AI/ML or GenAI pipelines in AWS (especially for compute and networking).
Hands-on experience with multiple AI/ML/RAG/LLM workloads and model deployment infrastructure.
Exposure to multi-cloud architecture basics (e.g., SSO, networking, blob exchange, shared VPC setups).
AWS Certified DevOps Engineer or Solutions Architect - Associate/Professional.
Experience in developing, testing, and deploying data pipelines using public cloud.
Clear and effective communication skills to interact with team members, stakeholders, and end users.
Preferred Qualifications/Skills
Experience deploying infrastructure in both AWS and another major cloud provider (Azure or GCP).
Familiarity with multi-cloud tools (e.g., HashiCorp Vault, Kubernetes with cross-cloud clusters).
Strong understanding of DevSecOps best practices and compliance requirements.
Exposure to RAG/LLM workloads and model deployment infrastructure.
Knowledge of governance and compliance policies, standards, and procedures.
Why join Genpact?
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
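To ground the cost-tracking bullet, here is a minimal Cost Explorer sketch: it pulls last month's unblended cost grouped by service. It assumes the caller's IAM identity is allowed ce:GetCostAndUsage; everything else uses only documented boto3 calls.

```python
# Sketch: last month's AWS cost per service, via the Cost Explorer API.
import boto3
from datetime import date, timedelta

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer lives in us-east-1

end = date.today().replace(day=1)                 # first day of this month
start = (end - timedelta(days=1)).replace(day=1)  # first day of last month

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": start.isoformat(), "End": end.isoformat()},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    if amount > 0:
        print(f"{service}: ${amount:,.2f}")
```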

Posted 1 month ago

Apply

2.0 - 5.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Job Description
Responsibilities:
Design, implement, and manage cloud infrastructure using AWS services, including EC2, Lambda, API Gateway, Step Functions, EKS clusters, and Glue.
Develop and maintain Infrastructure as Code (IaC) using Terraform to ensure consistent and reproducible deployments.
Set up and optimize CI/CD pipelines using tools such as Azure Pipelines and AWS pipelines to automate software delivery processes.
Containerize applications using Docker and orchestrate them with Kubernetes for efficient deployment and scaling.
Write and maintain Python scripts to automate tasks, improve system efficiency, and integrate various tools and services (a small Step Functions automation sketch follows this listing).
Develop shell scripts for system administration, automation, and troubleshooting.
Implement and manage monitoring and logging solutions to ensure system health and performance.
Collaborate with development teams to improve application deployment processes and reduce time-to-market.
Ensure high availability, scalability, and security of cloud-based systems.
Troubleshoot and resolve infrastructure and application issues in production environments.
Implement and maintain backup and disaster recovery solutions.
Stay up to date with emerging technologies and industry best practices in DevOps and cloud computing.
Document processes, configurations, and system architectures for knowledge sharing and compliance purposes.
Mentor junior team members and contribute to the overall growth of the DevOps practice within the organization.
Additional Information
At Tietoevry, we believe in the power of diversity, equity, and inclusion. We encourage applicants of all backgrounds, genders (m/f/d), and walks of life to join our team, as we believe that this fosters an inspiring workplace and fuels innovation. Our commitment to openness, trust, and diversity is at the heart of our mission to create digital futures that benefit businesses, societies, and humanity.
Diversity, equity and inclusion (tietoevry.com)
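Since the listing pairs Python automation with Step Functions, here is a hedged sketch of driving a state machine from a script; the state-machine ARN and payload are placeholders invented for illustration.

```python
# Sketch: start a Step Functions execution and poll until it finishes.
import json
import time
import boto3

sfn = boto3.client("stepfunctions", region_name="ap-south-1")

def run_state_machine(arn: str, payload: dict) -> str:
    execution = sfn.start_execution(stateMachineArn=arn, input=json.dumps(payload))
    while True:
        desc = sfn.describe_execution(executionArn=execution["executionArn"])
        if desc["status"] != "RUNNING":
            return desc["status"]  # SUCCEEDED, FAILED, TIMED_OUT, or ABORTED
        time.sleep(5)

status = run_state_machine(
    "arn:aws:states:ap-south-1:123456789012:stateMachine:etl-pipeline",  # placeholder
    {"run_date": "2024-01-01"},
)
print(f"Execution finished with status: {status}")
```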

Posted 1 month ago

Apply

2.0 - 6.0 years

10 - 15 Lacs

Bengaluru

Work from Office

About Us
At ANZ, we're applying new ways technology and data can be harnessed as we work towards a common goal: to improve the financial wellbeing and sustainability of our millions of customers.
Our community of over 5,000 engineers is key to making this happen, because technology underpins every part of our business, from delivering tools, apps and services for our customers to building a bank for the future.
About The Role
As an Integration Engineer in our Enterprise Integration Service, you'll play a key role in delivering robust, flexible and secure integration platforms and offering integration capability as a service. Our engineers design, build, test and support integration solutions across platforms, working closely with other squad members to ensure customer outcomes. We strive for continual innovation in what we deliver and how we work.
Banking is changing and we're changing with it, giving our people great opportunities to try new things, learn and grow. Whatever your role at ANZ, you'll be building your future while helping to build ours.
Role Type: Permanent
Role Location: Bengaluru
What will your day look like?
Developing new skills in software development, coding and automation.
Contributing towards solutions in deployment, change management and incident management.
Participating in the support roster to provide ongoing support for platforms as required, e.g. problem and incident management.
Building continuous delivery practices to increase delivery speed.
Participating in collaborative teams to build robust and scalable integration platforms.
Using tools and best practices to build, verify and deploy platform solutions while learning about efficiency and robustness.
Being mentored, coached and advised by senior employees about how to build, assemble, code and deploy effective solutions.
Participating in a culture within the Technology Area and the Engineering Chapter that encourages collaboration, continual learning and improvement.
Creating and maintaining technical documentation, ensuring that published specifications accurately reflect the real world.
Participating in meetings with squad members to estimate, plan and deliver integration solutions as part of strategic and bank-wide initiatives.
Learning from technical advisors.
What will you bring?
To grow and be successful in this role, you will ideally bring the following:
Background in software/platform engineering, combined with broad capabilities and the experience to innovate and adapt to the latest developments, technologies and tooling.
Writing technical documentation covering audit, disaster recovery, observability, coding standards and more.
Experience with Kubernetes, Docker, Dockerfile familiarity, Helm charts.
Kong (API Gateway) implementation, testing and design; a minimal Kong Admin API sketch follows this listing.
Hands-on coding experience with Node.js, Python, etc.
Knowledge of containerisation / Kubernetes (EKS/GKE) / OpenShift.
Good understanding of API design and security principles.
Microservice-driven architecture principles.
Hands-on experience with CI/CD automation, preferably Codefresh/GitHub Actions.
Knowledge of testing techniques, including mocking, performance testing and code coverage.
Observability frameworks (New Relic, Dynatrace implementation and configuration).
AWS/GCP cloud familiarity.
YAML, JSON familiarity.
DevOps - Infrastructure as Code mentality.
Understanding of Certificate Authorities, certificate signing requests and networking.
You're not expected to have 100% of these skills. At ANZ, a growth mindset is at the heart of our culture, so if you have most of these things in your toolbox, we'd love to hear from you.
So why join us?
ANZ is a place where big things happen as we work together to provide banking and financial services across more than 30 markets. With more than 7,500 people, our Bengaluru team is the bank's largest technology, data and operations centre outside Australia. In operation for over 33 years, the centre is critical in delivering the bank's strategy and making an impact for our millions of customers around the world. Our Bengaluru team not only drives the transformation initiatives of the bank, it also drives a culture that makes ANZ a great place to be. We're proud that people feel they can be themselves at ANZ, and 90 percent of our people feel they belong.
We know our people need different things to be great in their role, so we offer a range of flexible working options, including hybrid work (where the role allows it). Our people also enjoy a range of benefits, including access to health and wellbeing services.
We want to continue building a diverse workplace and welcome applications from everyone. Please talk to us about any adjustments you may require to our recruitment process or the role itself. If you are a candidate with a disability or access requirements, let us know how we can provide you with additional support.
To find out more about working at ANZ, visit https://www.anz.com/careers/. You can apply for this role by visiting ANZ Careers and searching for reference number 97556.
Job Posting End Date: 13/06/2025, 11:59pm (Melbourne, Australia)
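For the Kong (API Gateway) item above, the day-one task is usually registering a service and exposing it on a route via Kong's Admin API. A minimal sketch follows; it assumes a Kong Admin API on the default local port 8001, and the service name, upstream URL and path are invented for illustration.

```python
# Sketch: register a Kong service and route through the Admin API.
import requests

ADMIN = "http://localhost:8001"  # assumed Kong Admin API address

# Create a service pointing at an upstream API.
svc = requests.post(
    f"{ADMIN}/services",
    json={"name": "payments-api", "url": "http://payments.internal:8080"},
)
svc.raise_for_status()

# Expose the service on a public path.
route = requests.post(
    f"{ADMIN}/services/payments-api/routes",
    json={"paths": ["/payments"]},
)
route.raise_for_status()
print("Route created:", route.json()["id"])
```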

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies