Jobs
Interviews

638 EKS Jobs - Page 11

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 7.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Senior Data Engineer with a deep focus on data quality, validation frameworks, and reliability engineering. This role will be instrumental in ensuring the accuracy, integrity, and trustworthiness of data assets across our cloud-native infrastructure. The ideal candidate combines expert-level Python programming with practical experience in data pipeline engineering, API integration, and managing cloud-native workloads on AWS and Kubernetes.

Roles and Responsibilities:
- Design, develop, and deploy automated data validation and quality frameworks using Python.
- Build scalable, fault-tolerant data pipelines that support quality checks across data ingestion, transformation, and delivery.
- Integrate with REST APIs to validate and enrich datasets across distributed systems.
- Deploy and manage validation workflows using AWS services (EKS, EMR, EC2) and Kubernetes clusters.
- Collaborate with data engineers, analysts, and DevOps to embed quality checks into CI/CD and ETL pipelines.
- Develop monitoring and alerting systems for real-time detection of data anomalies and inconsistencies.
- Write clean, modular, and reusable Python code for automated testing, validation, and reporting.
- Lead root cause analysis for data quality incidents and design long-term solutions.
- Maintain detailed technical documentation of data validation strategies, test cases, and architecture.
- Promote data quality best practices and evangelize a culture of data reliability within the engineering teams.

Required Skills:
- Experience with data quality platforms such as Great Expectations, Collibra Data Quality, or similar tools.
- Proficiency in Docker and container lifecycle management.
- Familiarity with serverless compute environments (e.g., AWS Lambda, Azure Functions), Python, and PySpark.
- Relevant certifications in AWS, Kubernetes, or data quality technologies.
- Prior experience working in big data ecosystems and real-time data environments.
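The validation-framework work this role describes can be illustrated with a minimal hand-rolled sketch of the idea (this is not the Great Expectations API; the column names and rules are hypothetical):

```python
# Minimal data-validation sketch: run declarative rules over rows and
# collect failures -- the core idea behind frameworks like Great
# Expectations. Rules and column names here are hypothetical.

def validate(rows, rules):
    """Apply each named rule to every row; return failure messages."""
    failures = []
    for i, row in enumerate(rows):
        for name, check in rules.items():
            if not check(row):
                failures.append(f"row {i}: failed '{name}'")
    return failures

rules = {
    "order_id is present": lambda r: r.get("order_id") is not None,
    "amount is non-negative": lambda r: r.get("amount", 0) >= 0,
}

rows = [
    {"order_id": 1, "amount": 10.5},
    {"order_id": None, "amount": -3},
]

report = validate(rows, rules)
print(report)  # row 1 fails both rules
```

In a real pipeline, a report like this would feed the monitoring and alerting systems the posting mentions rather than just being printed.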

Posted 3 weeks ago

Apply

5.0 - 9.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Job Summary:
Looking for a strong MuleSoft platform admin with Amazon Kubernetes Service (Amazon EKS) and Terraform, at SA level.

Roles & Responsibilities:
- MuleSoft Anypoint Platform administration, including managing business groups, environments, user access, and integration application configurations and schedules; deploying and configuring integration applications on the Anypoint Platform.
- Experience in migrating and supporting applications on Kubernetes.
- Experience in managing applications on AWS.
- Strong experience with Runtime Fabric and the hybrid deployment model.
- Experience with cloud infrastructure and infrastructure management.
- Experience building infrastructure in the AWS cloud using Terraform workspaces.
- Understanding of distributed architecture and knowledge of high availability.
- Strong troubleshooting background in Kubernetes services, nginx, log monitoring, etc.
- Strong understanding of and experience with security implementations (e.g., SSL/mutual SSL, SAML auth).
- Deep experience with Anypoint Platform, Runtime Fabric, and API management.
- Prepare documentation where necessary, including training, process flows, system structure, and network deployment architecture.
- Design systems with the right mix of monitoring, alerting, and tracing.
- Experience with monitoring, reporting, and alerting tools; exposure to security compliance and vulnerabilities and applying fixes.

Primary skill: MuleSoft, Amazon Kubernetes Service
Secondary skill: Amazon EKS & Terraform

Certifications required: Certified MuleSoft Platform Architect, Temenos T24 Certification

Posted 3 weeks ago

Apply

4.0 - 7.0 years

15 - 25 Lacs

Hyderabad

Work from Office

Job Summary:
We are seeking a Technical Lead with 4 to 7 years of experience to join our dynamic team. The ideal candidate will have expertise in Amazon EKS, Amazon Kubernetes Services, Git, Jenkins, AWS services, Linux, Docker, and Pivotal Cloud Foundry. This hybrid role involves rotational shifts and does not require travel.

Responsibilities:
- Lead the design and implementation of scalable, reliable cloud-based solutions using Amazon EKS.
- Oversee the deployment and management of Kubernetes clusters to ensure high availability and performance.
- Provide technical guidance and support to the development team in using Git and Jenkins for continuous integration and continuous deployment (CI/CD) pipelines.
- Collaborate with cross-functional teams to ensure seamless integration of AWS services within the existing infrastructure.
- Monitor and optimize the performance of Linux-based systems to ensure efficient operation.
- Utilize Docker for containerization and management of applications, ensuring consistency across development and production environments.
- Implement and manage Pivotal Cloud Foundry (PCF) to support the deployment and scaling of cloud-native applications.
- Ensure adherence to best practices in cloud security and compliance, maintaining the integrity and confidentiality of data.
- Troubleshoot and resolve technical issues related to cloud infrastructure and services, minimizing downtime and ensuring business continuity.
- Conduct regular performance reviews and capacity planning to anticipate and address potential bottlenecks.
- Develop and maintain documentation for cloud infrastructure processes and procedures to ensure knowledge sharing and continuity.
- Stay updated with the latest trends and advancements in cloud technologies to drive innovation and continuous improvement.
- Participate in rotational shifts to provide 24/7 support for critical cloud infrastructure and services.

Qualifications:
- Strong expertise in Amazon EKS and Amazon Kubernetes Services.
- Proficiency in using Git and Jenkins for CI/CD pipelines.
- Extensive experience with AWS services and Linux-based systems.
- Proficiency in Docker for containerization and management of applications.
- Hands-on experience with Pivotal Cloud Foundry (PCF).
- Strong problem-solving skills and the ability to troubleshoot complex technical issues.
- Excellent communication and collaboration skills to work effectively with cross-functional teams.
- A commitment to continuous learning and staying updated with the latest cloud technologies.
- A strong understanding of cloud security best practices and compliance requirements.
- Ability to work in a hybrid model with rotational shifts.
- Detail-oriented and capable of maintaining comprehensive documentation.
- A proactive approach to performance monitoring and capacity planning.
- Ability to lead and mentor junior team members.

Certifications required: AWS Certified Solutions Architect, Certified Kubernetes Administrator (CKA), Docker Certified Associate

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Maharashtra

On-site

Data Scientist (5+ Years of Experience)

We are seeking a highly motivated Data Scientist with over 5 years of hands-on experience in data mining, statistical analysis, and developing high-quality machine learning models. The ideal candidate will have a passion for solving real-world problems using data-driven approaches and possess strong technical expertise across various data science domains.

Key Responsibilities:
- Apply advanced data mining techniques and statistical analysis to extract actionable insights.
- Design, develop, and deploy robust machine learning models to address complex business challenges.
- Conduct A/B and multivariate experiments to evaluate model performance and optimize outcomes.
- Monitor, analyze, and enhance the performance of machine learning models post-deployment.
- Collaborate cross-functionally to build customer cohorts for CRM campaigns and conduct market basket analysis.
- Stay updated with state-of-the-art techniques in NLP, particularly within the e-commerce domain.

Required Skills & Qualifications:
- Programming & tools: proficient in Python, PySpark, and SQL for data manipulation and analysis.
- Machine learning & AI: strong experience with ML libraries (e.g., Scikit-learn, TensorFlow, PyTorch) and expertise in NLP, computer vision, recommender systems, and optimization techniques.
- Cloud & big data: hands-on experience with AWS services, including Glue, EKS, S3, SageMaker, and Redshift.
- Model deployment: experience deploying pre-trained models from platforms like Hugging Face and AWS Bedrock.
- DevOps & MLOps: understanding of Git, Docker, CI/CD pipelines, and deploying models with frameworks such as FastAPI.
- Advanced NLP: experience in building, retraining, and optimizing NLP models for diverse use cases.

Preferred Qualifications:
- Strong research mindset with a keen interest in exploring new data science methodologies.
- Background in e-commerce analytics is a plus.

If you're passionate about leveraging data to drive impactful business decisions and thrive in a dynamic environment, we'd love to hear from you!
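The A/B-experiment work this posting mentions comes down to a significance check on two conversion rates; here is a minimal two-proportion z-test sketch using only the standard library (the conversion counts are made up for illustration):

```python
# Two-proportion z-test sketch for an A/B experiment:
# is variant B's conversion rate significantly different from A's?
import math

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z-statistic for the difference in conversion rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

z = two_proportion_z(conv_a=200, n_a=10_000, conv_b=260, n_b=10_000)
print(round(z, 2))  # |z| > 1.96 -> significant at the 5% level (two-sided)
```

In practice one would also pre-register the sample size and use a library routine (e.g. from SciPy or statsmodels) rather than hand-rolling the test.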

Posted 3 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

Hyderabad, Telangana

On-site

Position Overview:
The Platform Engineering team is looking for a Senior Backend Engineer to design, develop, and implement robust product APIs and event-driven applications for Packaged Business Capabilities (PBCs). You'll be responsible for incorporating key features like resiliency, observability, and rate limiting to ensure our APIs are secure, performant, and monitored effectively.

Responsibilities:
- Design, develop, and implement robust backend APIs using OpenAPI specs.
- Integrate rate limiting, resiliency strategies, and observability practices.
- Develop cloud-native APIs, ensuring scalability, resilience, and adherence to best practices.
- Champion event-driven architecture for efficient data flow.
- Leverage cloud engineering principles, preferably with AWS experience.
- Utilize NoSQL databases to store and manage data efficiently.
- Collaborate effectively with cross-functional teams (product, frontend, etc.).

Required Skills:
- Solid understanding of RESTful APIs and API design principles (OpenAPI).
- Experience implementing rate limiting, resiliency patterns, and observability techniques using Kubernetes/OpenShift/Docker.
- Proficiency in cloud engineering, preferably with AWS experience.
- Strong programming skills, particularly in TypeScript and/or Golang.
- Expertise in NoSQL databases (DynamoDB/MongoDB) and caching solutions (Redis/EKS).
- Familiarity with event-driven architecture concepts.

Required Experience & Education:
- 1+ years of experience in technology; a minimum of 0-1 years of experience in backend engineering.
- Excellent communication and collaboration skills.

Desired Experience:
- Exposure to AWS.
- Healthcare experience, including disease management.
- Coaching of team members.
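The rate-limiting feature this role calls for is most commonly a token bucket; a minimal sketch of the algorithm, shown in Python for brevity even though the role targets TypeScript/Golang (capacity and rate values are illustrative):

```python
# Token-bucket rate limiter sketch: a client holds up to `capacity`
# tokens, refilled at `rate` tokens/second; a request is allowed only
# if a whole token is available. Numbers here are illustrative.

class TokenBucket:
    def __init__(self, capacity, rate, now=0.0):
        self.capacity = capacity
        self.rate = rate
        self.tokens = float(capacity)
        self.last = now

    def allow(self, now):
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=2, rate=1.0)   # 2-burst, 1 req/s sustained
print([bucket.allow(t) for t in (0.0, 0.1, 0.2, 1.5)])
```

The third request is rejected because the burst allowance is spent; by t=1.5 enough tokens have refilled to admit traffic again.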

Posted 3 weeks ago

Apply

1.0 - 6.0 years

0 Lacs

Karnataka

On-site

We are hiring a Java full-stack engineer with ReactJS for a top MNC client.
Experience: 6 years
Location: Bangalore
Notice period: immediate to 10 days

JD (#Musthave: Java 8/11, React JS, Spring Boot, microservices, AWS):
Full Stack Senior Engineer
- Java 11 or higher, reactive programming, microservices, Spring, Spring Boot, Hibernate/JPA, REST APIs & web services.
- Good knowledge of frontend technologies: ReactJS, HTML, CSS, webpack, etc.
- Experience with NoSQL DBs like MongoDB.
- Experience with multi-threading and performance tuning.
- Familiarity with AWS cloud, Kubernetes, and deploying services/applications using Docker, ECS, EKS, SQS, SNS, EC2, VPC, etc.
- Experience with multi-threaded programming in high-performance, distributed environments.
- Java performance tuning, memory management, and debugging heap dumps to identify memory leaks.
- Experience with CI/CD principles and automated testing, as well as the related processes and technologies.

Interested? Please share your resume to prakruthi@adso.com

Job Types: Full-time, Permanent
Benefits: Health insurance, Provident Fund
Schedule: Day shift, fixed shift, Monday to Friday; performance bonus, yearly bonus
Application Question(s): immediate joiners
Education: Bachelor's (Preferred)
Experience: total work: 6 years (Required); Java: 3 years (Required); Spring Boot: 2 years (Required); Microservices: 2 years (Required); AWS: 1 year (Required); ReactJS: 1 year (Required)
Work Location: In person
Expected Start Date: 08/09/2024

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description:
- Ability to write Kubernetes YAML files from scratch to manage infrastructure on EKS.
- Experience writing Jenkins pipelines, whether setting up new pipelines or extending existing ones.
- Create Docker images for new applications (e.g., Java, NodeJS).
- Ability to set up backups for storage services on AWS and EKS.
- Set up Splunk log aggregation for all existing applications.
- Set up integration of our EKS, Lambda, and CloudWatch with Grafana, Splunk, etc.
- Manage and set up DevOps/SRE tools independently for the existing stack and review with the core engineering teams.
- Independently manage the work stream for new DevOps and SRE features with minimal day-to-day oversight of tasks and activities.
- Deploy and leverage existing public-domain Helm charts for repetitive tasks and orchestration, and Terraform/Pulumi creation.

Site Reliability Engineer (SRE) - Cloud Infrastructure & Data:
- Ensure reliable, scalable, and secure cloud-based data infrastructure.
- Design, implement, and maintain AWS infrastructure with a focus on data products.
- Automate infrastructure management using Pulumi, Terraform, and policy as code.
- Monitor system health, optimize performance, and manage Kubernetes (EKS) clusters.
- Implement security measures, ensure compliance, and mitigate risks.
- Collaborate with development teams on deployment and operation of data applications.
- Optimize data pipelines for efficiency and cost-effectiveness.
- Troubleshoot issues, participate in incident response, and drive continuous improvement.
- Experience with Kubernetes administration, data pipelines, and monitoring and observability tools.
- In-depth coding and debugging skills in Python and Unix scripting.
- Excellent communication and problem-solving skills.
- Self-driven, highly motivated, and able to work both independently and within a team.
- Operate optimally in a fast-paced development environment with dynamic changes, tight deadlines, and limited resources.

Key Responsibilities:
- Set up sensible permission defaults for seamless access management of cloud resources using services like AWS IAM, AWS policy management, AWS KMS, Kubernetes RBAC, etc.
- Understanding of best practices for security, access management, hybrid cloud, etc.

Technical Requirements:
- Should be able to write bash scripts that monitor existing running infrastructure and report out.
- Should be able to extend existing IaC code in Pulumi (TypeScript).
- Ability to debug and fix Kubernetes deployment failures (network connectivity, ingress, volume issues, etc.) with kubectl.
- Good knowledge of networking basics to debug connectivity issues with tools like dig, ping, curl, ssh, etc.
- Knowledge of monitoring tools like Splunk, CloudWatch, and the Kubernetes dashboard, and of creating dashboards and alerts when and where needed.
- Knowledge of AWS VPC, subnetting, ALB/NLB, and egress/ingress.
- Knowledge of disaster recovery from prepared backups for DynamoDB, Kubernetes volume storage, Keyspaces, etc. (AWS Backup, Amazon S3, Systems Manager).

Additional Responsibilities:
- Knowledge of advanced Kubernetes concepts and tools like service mesh, cluster mesh, Karpenter, Kustomize, etc.
- Templatise infrastructure (IaC) creation with Pulumi and Terraform using advanced modularisation techniques.
- Extend existing Helm charts for repetitive tasks and orchestration, and write Terraform/Pulumi creation.
- Use Ansible, Chef, etc. for complicated manual infrastructure setup.

Certifications: AWS Certified Advanced Networking - Specialty; AWS Certified DevOps Engineer - Professional (DOP-C02)

Preferred Skills: Technology -> Cloud Platform -> Amazon Web Services; DevOps -> AWS DevOps
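The "Kubernetes YAML from scratch" requirement amounts to knowing the Deployment schema cold; a minimal sketch of one built as a Python dict, mirroring the YAML you would hand to EKS (the image, names, and labels are hypothetical):

```python
# Minimal Kubernetes apps/v1 Deployment manifest built as a Python
# dict -- the same structure you would write by hand in YAML.
# Image, names, and labels are hypothetical.

def deployment(name, image, replicas=2, port=8080):
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},  # must match pod labels
            "template": {
                "metadata": {"labels": labels},
                "spec": {
                    "containers": [{
                        "name": name,
                        "image": image,
                        "ports": [{"containerPort": port}],
                    }],
                },
            },
        },
    }

manifest = deployment("orders-api", "example.com/orders-api:1.0")
print(manifest["spec"]["selector"]["matchLabels"])
```

The key detail interviewers probe is that `spec.selector.matchLabels` must match `spec.template.metadata.labels`, which the helper guarantees by construction.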

Posted 3 weeks ago

Apply

10.0 - 12.0 years

15 - 20 Lacs

Bengaluru

Work from Office

Your Impact:
The Lead Site Reliability Engineer (SRE) will be responsible for ensuring the availability, reliability, and scalability of cloud infrastructure and services. This role focuses on automation, performance optimization, incident response, and CI/CD pipeline management to support highly available and resilient applications. The ideal candidate will bring deep expertise in AWS, Kubernetes, GitLab CI/CD, and Infrastructure as Code (IaC).

What The Role Offers:
- Architect, deploy, and maintain highly available and scalable cloud environments in AWS.
- Design and manage Kubernetes clusters (EKS) and containerized applications with Docker.
- Implement auto-scaling, load balancing, and fault tolerance for cloud services.
- Develop and optimize Infrastructure as Code (IaC) using Terraform, Tofu, or Ansible.
- Design, implement, and maintain CI/CD pipelines using GitLab CI/CD and ArgoCD.
- Automate deployment workflows, infrastructure provisioning, and release management.
- Ensure secure, compliant, and automated software delivery across multiple environments.
- Implement observability and monitoring using tools like CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Analyze system performance, detect anomalies, and optimize cloud resource utilization.
- Drive incident response and root cause analysis, ensuring fast recovery (low MTTR) and minimal downtime.
- Establish Service Level Objectives (SLOs) and error budgets to maintain system health.
- Implement security best practices, including IAM policies, encryption, network security, and vulnerability scanning.
- Automate patch management and security updates for cloud infrastructure.
- Ensure compliance with industry standards and regulations (SOC 2, ISO 27001, HIPAA, etc.).
- Work closely with DevOps, security, and development teams to drive reliability best practices.
- Lead blameless postmortems and continuously improve operational processes.
- Provide mentorship and training to junior engineers on SRE principles and cloud best practices.
- Participate in on-call rotations, ensuring 24/7 reliability of production services.

What You Need To Succeed:
- Bachelor's degree in Computer Science, Engineering, or equivalent experience.
- 10-12 years of experience in Site Reliability Engineering (SRE), DevOps, or Cloud Engineering.
- Expertise in AWS: hands-on experience with EC2, VPC, RDS, S3, IAM, Lambda, and EKS.
- Strong Kubernetes knowledge: hands-on experience with EKS, Helm charts, and cluster management.
- CI/CD experience: proficiency in GitLab CI/CD and ArgoCD for automated software deployments.
- Infrastructure as Code (IaC): experience with Terraform or Tofu.
- Monitoring & logging: familiarity with CloudWatch, Prometheus, Grafana, ELK, or Datadog.
- Scripting & automation: proficiency in Python, shell scripting, or Golang.
- Incident management & reliability practices: experience with SLOs, SLIs, error budgets, and chaos engineering.
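The SLO/error-budget practice this role lists is simple arithmetic at its core; a sketch with made-up numbers (the function and figures are illustrative, not a specific vendor's formula):

```python
# Error-budget sketch: a 99.9% availability SLO over a window leaves
# 0.1% of requests as the error budget. Numbers are illustrative.

def error_budget_remaining(slo, total_requests, failed_requests):
    """Fraction of the error budget still unspent (can go negative)."""
    budget = (1 - slo) * total_requests   # allowed failures in window
    return (budget - failed_requests) / budget

remaining = error_budget_remaining(slo=0.999,
                                   total_requests=1_000_000,
                                   failed_requests=250)
print(round(remaining, 3))  # 0.75 -> a quarter of the budget is spent
```

A negative result means the budget is exhausted, which is the usual trigger for freezing risky releases until reliability recovers.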

Posted 3 weeks ago

Apply

0.0 - 3.0 years

3 - 8 Lacs

Chennai

Hybrid

Key Responsibilities

AWS Infrastructure Management:
- Design, deploy, and manage AWS infrastructure using services such as EC2, ECS, EKS, Lambda, RDS, S3, VPC, and CloudFront.
- Implement and maintain Infrastructure as Code using AWS CloudFormation, AWS CDK, or Terraform.
- Optimize AWS resource utilization and costs through rightsizing, reserved instances, and automated scaling.
- Manage multi-account AWS environments using AWS Organizations and Control Tower.
- Implement disaster recovery and backup strategies using AWS services.

CI/CD Pipeline Development:
- Build and maintain CI/CD pipelines using AWS CodePipeline, CodeBuild, CodeDeploy, and CodeCommit.
- Integrate with third-party tools like Jenkins, GitLab CI, or GitHub Actions when needed.
- Implement automated testing and security scanning within deployment pipelines.
- Manage deployment strategies, including blue-green deployments, using AWS services.
- Automate application deployments to ECS, EKS, Lambda, and EC2 environments.

Container and Serverless Management:
- Deploy and manage containerized applications using Amazon ECS and Amazon EKS.
- Implement serverless architectures using AWS Lambda, API Gateway, and Step Functions.
- Manage container registries using Amazon ECR.
- Optimize container and serverless application performance and costs.
- Implement service mesh architectures using AWS App Mesh when applicable.

Monitoring and Observability:
- Implement comprehensive monitoring using Amazon CloudWatch, AWS X-Ray, and AWS Systems Manager.
- Set up alerting and dashboards for proactive incident management.
- Configure log aggregation and analysis using CloudWatch Logs and Amazon OpenSearch.
- Implement distributed tracing for microservices architectures.
- Create and maintain operational runbooks and documentation.

Security and Compliance:
- Implement AWS security best practices using IAM, security groups, NACLs, and AWS Config.
- Manage secrets and credentials using AWS Secrets Manager and Systems Manager Parameter Store.
- Implement compliance frameworks and automated security scanning.
- Configure Amazon GuardDuty, Amazon Inspector, and AWS Security Hub for threat detection.
- Manage SSL/TLS certificates using AWS Certificate Manager.

Automation and Scripting:
- Develop automation scripts using Python, Bash, and the AWS CLI/SDK.
- Create AWS Lambda functions for operational automation.
- Implement event-driven automation using CloudWatch Events and EventBridge.
- Automate backup, patching, and maintenance tasks using AWS Systems Manager.
- Build custom tools and utilities to improve operational efficiency.

Required Qualifications

AWS Expertise:
- Strong experience with core AWS services: EC2, S3, RDS, VPC, IAM, CloudFormation.
- Experience with container services (ECS, EKS) and serverless technologies (Lambda, API Gateway).
- Proficiency with AWS networking concepts and security best practices.
- Experience with AWS monitoring and logging services (CloudWatch, X-Ray).

Technical Skills:
- Expertise in Infrastructure as Code using CloudFormation, CDK, or Terraform.
- Strong scripting skills in Python, Bash, or PowerShell.
- Experience with CI/CD tools, preferably AWS-native services and Bitbucket Pipelines.
- Knowledge of containerization with Docker and orchestration with Kubernetes.
- Understanding of microservices architecture and distributed systems.
- Experience with configuration management and automation tools.

DevOps Practices:
- Strong understanding of CI/CD best practices and GitOps workflows.
- Experience with automated testing and deployment strategies.
- Knowledge of monitoring, alerting, and incident response procedures.
- Understanding of security scanning and compliance automation.

AWS Services Experience:
- Compute & containers: Amazon EC2, ECS, EKS, Fargate, Lambda, Batch.
- Storage & database: Amazon S3, EBS, EFS, RDS, DynamoDB, ElastiCache, Redshift.
- Networking & security: VPC, Route 53, CloudFront, ALB/NLB, IAM, Secrets Manager, Certificate Manager.
- Developer tools: CodePipeline, CodeBuild, CodeDeploy, CodeCommit, CodeArtifact.
- Monitoring & management: CloudWatch, X-Ray, Systems Manager, Config, CloudTrail, Amazon OpenSearch.
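The blue-green/canary deployment strategies named in this listing follow a stepped, health-gated traffic shift; a minimal sketch of that control loop (step sizes and the health-check hook are illustrative):

```python
# Health-gated traffic-shift sketch: advance the new ("green")
# version's traffic share step by step, rolling everything back to
# "blue" if a health check fails. Step sizes are illustrative.

def rollout(steps, healthy):
    """Return green's final traffic percentage: the last step reached,
    or 0 if any health check failed (full rollback to blue)."""
    green = 0
    for green in steps:
        if not healthy(green):
            return 0          # roll back: 100% of traffic to blue
    return green              # full cutover reached

print(rollout([10, 25, 50, 100], healthy=lambda pct: True))  # 100
```

On AWS this loop is typically realized by shifting weights between two target groups on an ALB, with CloudWatch alarms serving as the health check.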

Posted 3 weeks ago

Apply

5.0 - 8.0 years

15 - 22 Lacs

Hyderabad

Hybrid

Job Title: Java with AWS
Location: Hyderabad
Experience: 5+ years
Notice Period: Immediate to 30 days

Job Description:
A Java/cloud engineer who is extremely passionate about development through the design, development, documentation, testing, modification, and maintenance of new and existing software applications supporting great web experiences. He/she will apply modern best-practice techniques, procedures, and criteria to the development life cycle, especially in an Agile methodology, to translate business objectives and client needs into effective interactive web applications. Will provide subject matter and technology expertise on assigned applications, including interfaces and interrelationships with other applications, systems, and departments.

Job responsibilities:
- Write well-designed, testable, efficient code using the best software development practices.
- Maintain knowledge of evolving industry trends, practices, techniques, and standards.
- Estimate different solutions and plan the deliverables.
- Collaborate with cross-commit teams on implementation of deliverables in all environments.
- Ensure compliance of technology solutions with architectural/security standards and participate in the full development life cycle of the delivered capability.
- Identify and solution key business and technology drivers that impact architectures, including end-user requirements, existing software distribution capabilities, the existing application environment (including legacy and packaged systems), and performance/availability requirements.
- Perform code reviews and implement best engineering practices.
- Collaborate with the QA team to identify test cases and create/mine test data to enable thorough testing of all development deliverables.
- Create configuration, build, and test scripts for Continuous Integration environments.

Mandatory skills:
- Experience in systems analysis and design, and an expert understanding of development, quality assurance, and integration methodologies.
- 5+ years of professional experience coding and/or designing microservices utilizing modern development tools, frameworks, and best practices.
- Expertise in Java, Spring Boot, and the Spring Framework.
- Strong knowledge of microservices development, with security, availability, and performance aspects covered thoroughly.
- Good understanding of the Spring framework (Spring MVC, Spring Boot, Spring Security).
- Solid knowledge of cloud infrastructures like AWS, Docker, etc.
- Good working knowledge of AWS services (developing serverless functions in AWS Lambda, CloudFormation, EKS, etc.).
- Excellent object-oriented or functional analysis and design skills.
- Ability to perform root-cause analysis and identify opportunities to improve performance, reliability, and resource consumption.
- Experience working with a variety of databases such as MySQL, Oracle, DB2, PostgreSQL, and MongoDB.
- Experience in unit testing.

Preferred qualifications:
- Experience or exposure to working on product or agile teams.
- Knowledge of DevOps software development/deployment practices (including CI and CD), source control, and pipelines as code.
- Experience managing infrastructure as code.
- Good knowledge of secure coding practices.
- Knowledge of platform monitoring, log management, and alerting solutions.

Kindly share your updated resume at Sudhanshu.sinha@Ltimindtree.com with the below details:
- Total experience
- Experience in AWS
- Notice period (immediate to 30 days preferred); if serving, kindly share your LWD
- Current CTC
- ECTC
- Current location
- Willing to relocate to Hyderabad

Posted 3 weeks ago

Apply

3.0 - 8.0 years

7 - 17 Lacs

Coimbatore

Remote

Role Overview:
As an AWS DevOps Engineer, you'll own the end-to-end infrastructure lifecycle, from design and provisioning through deployment, monitoring, and optimization. You'll collaborate closely with development teams to implement Infrastructure as Code, build robust CI/CD pipelines, enforce security and compliance guardrails, and integrate next-gen tools like Google Gemini for automated code-quality and security checks.

Summary:
DevOps Engineer with 3+ years of experience in AWS infrastructure, CI/CD, and IaC, capable of designing secure, production-grade systems with zero-downtime deployments. The ideal candidate excels in automation, observability, and compliance within a collaborative engineering environment.

Top Preferred Technologies:
- Terraform: core IaC tool for modular infrastructure design
- Amazon ECS/EKS (Fargate): container orchestration and deployment
- GitHub Actions / AWS CodePipeline + CodeBuild: modern CI/CD pipelines
- Amazon CloudWatch: observability, custom metrics, and centralized logging
- IAM, KMS & GuardDuty: access control, encryption, and threat detection
- SSM Parameter Store: secure config and secret management
- Python / Bash / Node.js: scripting, automation, and Lambda integration

Key Responsibilities:
- Infrastructure as Code (IaC): Design, build, and maintain Terraform (or CloudFormation) modules for VPCs, ECS/EKS clusters, RDS, ElastiCache, S3, IAM, KMS, and networking across multiple Availability Zones. Produce clear architecture diagrams (Mermaid or draw.io) and documentation.
- CI/CD Pipeline Development: Implement GitHub Actions or AWS CodePipeline/CodeBuild workflows to run linting, unit tests, Terraform validation, Docker builds, and automated deployments (zero-downtime rolling updates) to ECS/EKS. Integrate unit tests (Jest, pytest) and configuration-driven services (SSM Parameter Store).
- Monitoring & Alerting: Define custom CloudWatch metrics (latency, error rates), create dashboards, and centralize application logs in CloudWatch Logs with structured outputs and PII filtration. Implement CloudWatch Alarms with SNS notifications for key thresholds (CPU, replica lag, 5xx errors).
- Security & Compliance: Enable and configure GuardDuty and AWS Config rules (e.g., public-CIDR security groups, unencrypted S3 or RDS). Enforce least-privilege IAM policies, key management with KMS, and secure secret storage in SSM Parameter Store.
- Innovative Tooling Integration: Integrate Google Gemini (or similar) into the CI pipeline for automated Terraform security scans and generation of actionable "security reports" as PR comments.
- Documentation & Collaboration: Maintain clear README files, module documentation, and step-by-step deployment guides. Participate in code reviews, design discussions, and post-mortems to continuously improve our DevOps practices.

Required Qualifications:
- Experience: 3+ years in AWS DevOps or Site Reliability Engineering roles, designing and operating production-grade cloud infrastructure.
- Technical skills: Terraform (preferred) or CloudFormation for IaC; container orchestration with ECS/Fargate or EKS and zero-downtime deployments; CI/CD with GitHub Actions, AWS CodePipeline, and CodeBuild (linting, testing, Docker, Terraform); monitoring with CloudWatch dashboards, custom metrics, log centralization, and alarm configuration; security & compliance with IAM policy design, KMS, GuardDuty, AWS Config, and SSM Parameter Store; scripting in Python, Bash, or Node.js for automation and Lambda functions.
- Soft skills: strong problem-solving mindset and attention to detail; excellent written and verbal communication for documentation and cross-team collaboration; ability to own projects end-to-end and deliver under tight timelines.
- Will have to attend the Coimbatore office on request (hybrid).

Preferred Qualifications:
- Hands-on experience integrating third-party security or code-analysis APIs (e.g., Google Gemini, Prisma Cloud).
- Familiarity with monitoring and observability best practices, including custom metric creation.
- Exposure to multi-cloud environments or hybrid cloud architectures.
- Certification: AWS Certified DevOps Engineer - Professional or AWS Certified Solutions Architect - Associate.
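The PII filtration mentioned under "Monitoring & Alerting" is typically a redaction pass over each structured log record before shipping; a minimal sketch (the regex patterns are illustrative, not a complete PII policy):

```python
# PII-filtration sketch for centralized logging: mask obvious email
# addresses and long digit runs (phone/card numbers) in a structured
# log record before it is shipped. Patterns are illustrative only.
import json
import re

EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
DIGITS = re.compile(r"\b\d{10,}\b")    # 10+ consecutive digits

def redact(record):
    """Return a JSON log line with PII-looking substrings masked."""
    clean = {}
    for key, value in record.items():
        if isinstance(value, str):
            value = EMAIL.sub("[EMAIL]", value)
            value = DIGITS.sub("[NUMBER]", value)
        clean[key] = value
    return json.dumps(clean, sort_keys=True)

line = redact({"level": "info",
               "msg": "signup from jane@example.com phone 9876543210"})
print(line)
```

In a CloudWatch Logs setup this would run in the application's logging layer (or a Lambda subscription filter) so raw PII never reaches the log store.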

Posted 3 weeks ago

Apply

3.0 - 5.0 years

8 - 12 Lacs

Bengaluru

Work from Office

Job Title: IT Engineer -DevOps Location: [Bangalore] Job Description: We are looking for a motivated IT Engineer- DevOps to assist in managing and optimizing our cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate should have foundational knowledge in Kubernetes, ArgoCD, AWS (EKS, ECR, ECS), Helm Charts, Docker, Terraform, Git, Jenkins, and JFrog . This role is a great opportunity to learn from senior engineers and grow in a dynamic DevOps environment. The ideal candidate is also expected to be a motivated self-starter with a proactive approach to resolving problems and issues with minimal supervision Key Responsibilities: Support the management and deployment of Kubernetes clusters using ArgoCD . Assist with CI/CD pipeline development and automation using Jenkins . Help manage AWS services, including EKS, ECR, ECS . Develop and maintain Helm Charts for application deployments. Work with Docker for containerization and deployment workflows. Assist in writing and optimizing Terraform scripts for infrastructure automation. Utilize Git for version control and collaboration. Support JFrog Artifactory for artifact management and package distribution. Monitor system performance and assist in troubleshooting issues. Stay up to date with DevOps best practices and contribute to process improvements. Roles and Responsibilities Qualifications: Bachelor's degree in Computer Science, Information Security, or a related field (or equivalent experience). 3-5 years of experience in DevOps or a similar role, with a strong security focus. Preferred AWS Certified Cloud Practitioner certification or similar. Knowledge in cloud platforms (AWS)(Azure – Good to have) and containerization technologies (Docker, Kubernetes) with a key focus on AWS and EKS, ECS. Experience with infrastructure as code (IaC) tools such as Terraform. 
Proficiency in CI/CD tools like AWS CodePipeline, Jenkins, Azure DevOps Server. Familiarity with programming and scripting languages (e.g., Python, Bash, Go). Excellent problem-solving skills and the ability to work in a fast-paced, collaborative environment. Strong communication skills, with the ability to convey complex security concepts to technical and non-technical stakeholders. Preferred Qualifications: Strong understanding and working experience with enterprise applications and containerized application workloads. Knowledge of networking concepts. Knowledge of network security principles and technologies (e.g., firewalls, VPNs, IDS/IPS).
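Much of the Kubernetes/Helm work above benefits from small pre-deploy checks. Below is a hypothetical Python sketch that lints a Deployment manifest (handled as a plain dict) for common pitfalls; the rules are illustrative, not a kubectl or Helm feature:

```python
# Hypothetical pre-deploy check: lint a Kubernetes Deployment manifest (as a
# plain dict) for fields that commonly cause production incidents.

def lint_deployment(manifest: dict) -> list[str]:
    """Return a list of human-readable findings; empty means the manifest passes."""
    findings = []
    spec = manifest.get("spec", {})
    pod = spec.get("template", {}).get("spec", {})
    for c in pod.get("containers", []):
        image = c.get("image", "")
        if image.endswith(":latest") or ":" not in image:
            findings.append(f"container {c.get('name')}: pin an image tag, not 'latest'")
        if "resources" not in c:
            findings.append(f"container {c.get('name')}: set resource requests/limits")
    if spec.get("replicas", 1) < 2:
        findings.append("replicas < 2: no redundancy during node failure")
    return findings

manifest = {
    "kind": "Deployment",
    "spec": {
        "replicas": 1,
        "template": {"spec": {"containers": [{"name": "api", "image": "repo/api:latest"}]}},
    },
}
print(lint_deployment(manifest))
```

A check like this can run in a Jenkins stage before `kubectl apply`, so policy failures surface in the pipeline rather than in the cluster.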

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

noida, uttar pradesh

On-site

As a highly experienced and motivated Backend Solution Architect, you will be responsible for leading the design and implementation of robust, scalable, and secure backend systems. Your expertise in Node.js and exposure to Python will be crucial in architecting end-to-end backend solutions using microservices and serverless frameworks. You will play a key role in ensuring scalability, maintainability, and security, while also driving innovation through the integration of emerging technologies like AI/ML. Your primary responsibilities will include designing and optimizing backend architecture, managing AWS-based cloud solutions, integrating AI/ML components, containerizing applications, setting up CI/CD pipelines, designing and optimizing databases, implementing security best practices, developing APIs, monitoring system performance, and providing technical leadership and collaboration with cross-functional teams. To be successful in this role, you should have at least 8 years of backend development experience with a minimum of 4 years as a Solution/Technical Architect. Your expertise in Node.js, AWS services, microservices, event-driven architectures, Docker, Kubernetes, CI/CD pipelines, authentication/authorization mechanisms, and API development will be critical. Additionally, hands-on experience with AI/ML workflows, React, Next.js, Angular, and AWS Solution Architect Certification will be advantageous. At TechAhead, a global digital transformation company, you will have the opportunity to work on cutting-edge AI-first product design thinking and bespoke development solutions. By joining our team, you will contribute to shaping the future of digital innovation worldwide and driving impactful results with advanced AI tools and strategies.
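For the event-driven architectures this role covers, one recurring design concern is making consumers idempotent under at-least-once delivery. A sketch, in Python for brevity (the role itself centers on Node.js); the in-memory set is a stand-in for a real deduplication table such as DynamoDB with a conditional put:

```python
# Sketch of an idempotent event consumer: each event carries a stable unique ID,
# and duplicates delivered by the transport are detected and skipped.
# The in-memory store below is illustrative only.

processed: set[str] = set()
balances: dict[str, int] = {}

def handle_payment_event(event: dict) -> bool:
    """Apply the event exactly once; return False if it was a duplicate delivery."""
    event_id = event["id"]          # producers must attach a stable unique ID
    if event_id in processed:
        return False                # duplicate: at-least-once transport redelivered
    balances[event["account"]] = balances.get(event["account"], 0) + event["amount"]
    processed.add(event_id)
    return True

evt = {"id": "evt-1", "account": "acct-9", "amount": 250}
handle_payment_event(evt)
handle_payment_event(evt)           # redelivery must not double-apply
print(balances)
```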

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

You are a skilled Senior AWS DevOps Engineer with 5 to 8 years of experience in DevOps, cloud computing, and infrastructure engineering. You will play a crucial role in our team by leveraging your expertise in AWS cloud services, infrastructure automation, CI/CD pipelines, and security best practices to design, implement, and manage scalable, secure, and reliable cloud-based solutions. Your responsibilities will include architecting, building, and maintaining highly scalable AWS infrastructure, managing CI/CD pipelines using tools like Jenkins, Bitbucket, or AWS CodePipeline, and developing Infrastructure as Code (IaC) using Terraform, CloudFormation, or AWS CDK. You will automate deployment, monitoring, and scaling of applications and infrastructure while optimizing cloud costs and performance through effective resource management and scaling strategies. As a Senior AWS DevOps Engineer, you will also manage Kubernetes clusters (EKS) and containerized applications using Docker, monitor system performance, troubleshoot issues, and enforce security best practices such as IAM policies, network security, and compliance with industry standards. Collaboration with developers, architects, and security teams will be essential to enhance DevOps best practices and drive continuous improvement in deployment efficiency and system resilience. To excel in this role, you should possess expertise in AWS services like EC2, S3, Lambda, RDS, IAM, VPC, CloudWatch, ECS, and EKS, proficiency in IaC tools, strong knowledge of Kubernetes and container orchestration, and proficiency in scripting and automation using languages like Python, Bash, or Go. Experience with CI/CD pipelines, monitoring and logging tools, networking, security best practices, IAM policies, and configuration management tools will be beneficial. Experience in Agile/Scrum development environments and AWS certifications such as AWS Certified DevOps Engineer Professional are preferred qualifications. 
Knowledge of serverless and service mesh architectures will also be advantageous for this role. If you are a proactive problem solver with a passion for optimizing cloud performance and cost, we look forward to welcoming you to our team as our Senior AWS DevOps Engineer.
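Cost optimization "through effective resource management" often starts with tag hygiene. A hypothetical sketch of a tag-compliance audit over resource records; the required-tag set is an assumed organizational policy, and in practice the records would come from boto3 `describe_*` calls rather than an inline list:

```python
# Hypothetical governance automation: audit AWS resources for mandatory
# cost-allocation tags and report what each non-compliant resource is missing.

REQUIRED_TAGS = {"owner", "env", "cost-center"}   # assumed org policy

def audit_tags(resources: list[dict]) -> dict[str, list[str]]:
    """Map resource ID -> sorted list of missing required tags (non-compliant only)."""
    report = {}
    for r in resources:
        missing = sorted(REQUIRED_TAGS - set(r.get("tags", {})))
        if missing:
            report[r["id"]] = missing
    return report

resources = [
    {"id": "i-0abc", "tags": {"owner": "data-eng", "env": "prod", "cost-center": "42"}},
    {"id": "i-0def", "tags": {"owner": "data-eng"}},
]
print(audit_tags(resources))
```

A report like this feeds directly into cost attribution: untagged resources are the ones Cost Explorer cannot allocate to a team.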

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Hyderabad

Work from Office

Job Summary: We are seeking a skilled and experienced DevOps Engineer with 3+ years of expertise in AWS EKS to manage, optimize, and scale our containerized microservices infrastructure. You will be responsible for end-to-end deployment, monitoring, security, and automation of our EKS-based workloads, ensuring high availability, performance, and security compliance. Key Responsibilities: Design, deploy, and manage AWS EKS clusters in production environments. Experience with GitHub trunk-based development workflows. Manage Kubernetes workloads (Deployments, Services, ConfigMaps, Secrets, Ingress, etc.). Implement and manage CI/CD pipelines using GitHub Actions / Jenkins / Argo CD / CodePipeline. Deploy and maintain Helm charts for microservices, and manage service discovery and routing. Configure and maintain Kubernetes Ingress controllers (ALB/NGINX), cert-manager , and external-dns . Ensure secure secret management using AWS Secrets Manager / External Secrets . Monitor EKS clusters and services using Prometheus, Grafana, and AWS CloudWatch . Configure and optimize logging pipelines using Fluent Bit / CloudWatch Logs . Use Karpenter or Cluster Autoscaler for node scaling, and implement HPA for workload scaling. Enforce policies and security best practices using IAM, OPA Gatekeeper/Kyverno, and KMS . Handle cost optimization, alerting, and compliance using AWS Budgets, Cost Explorer, and GuardDuty . Collaborate with development teams to containerize and onboard microservices using Docker + ECR . Troubleshoot performance, network, and application-level issues across distributed environments. Required Skills & Experience: 3+ years hands-on with AWS EKS and Kubernetes ecosystem . Strong experience with Terraform, Helm, Docker, GitOps (Argo CD/Flux) . Deep understanding of AWS services: VPC, IAM, EC2, EBS, ELB, CloudWatch, Secrets Manager, S3, Route 53 . Hands-on experience with Prometheus, Grafana, Fluent Bit , and AWS monitoring tools . 
Experience with multi-environment (dev/stage/prod) setups and blue-green / canary deployments . Solid knowledge of DevSecOps practices , IAM policies, and Kubernetes RBAC. Experience with troubleshooting microservices, pods, and networking in EKS . Scripting in Bash, Python, or Go for automation and tooling. Mandatory: Strong understanding and practical experience with GitHub trunk-based development workflows , including feature flags, short-lived branches, protected main branches, and continuous integration best practices . Good to Have: Experience with service mesh (Istio, App Mesh, Linkerd). Familiarity with OpenTelemetry / Jaeger for distributed tracing. Exposure to multi-tenant EKS clusters and cost reporting per namespace/team. Certifications: AWS Certified DevOps Engineer / Solutions Architect Associate .
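The workload scaling mentioned above (HPA) boils down to a documented ratio: desiredReplicas = ceil(currentReplicas * currentMetric / targetMetric). A simplified sketch of that arithmetic, omitting the real controller's tolerance band and stabilization window:

```python
# Core of the Kubernetes Horizontal Pod Autoscaler sizing decision.
# Real HPA also applies a tolerance band (default 10%) and stabilization
# windows, which this sketch deliberately omits.

import math

def hpa_desired_replicas(current_replicas: int, current_metric: float,
                         target_metric: float, min_r: int = 2, max_r: int = 10) -> int:
    desired = math.ceil(current_replicas * current_metric / target_metric)
    return max(min_r, min(max_r, desired))   # clamp to the HPA's min/max bounds

# 4 pods averaging 90% CPU against a 60% target: scale out.
print(hpa_desired_replicas(4, 90, 60))
```

Karpenter then operates one level down, provisioning nodes so that the replicas the HPA asks for actually have somewhere to schedule.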

Posted 3 weeks ago

Apply

4.0 - 9.0 years

14 - 24 Lacs

Pune

Remote

Experience with site/log monitoring tools, specifically Datadog/Dynatrace. Experience with serverless and CloudFormation/Terraform; EKS nodes, clusters, and AWS native services. Experience with Harness/TeamCity is mandatory. Experience running production systems on AWS and in the APM space.
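The APM-style alerting this role works with (Datadog/Dynatrace) ultimately reduces to threshold checks over sliding windows. A hypothetical, self-contained sketch; the window size and threshold are made up for illustration:

```python
# Illustrative monitor logic: track the last N request outcomes and alert when
# the error rate in that window exceeds a threshold.

from collections import deque

class ErrorRateMonitor:
    def __init__(self, window: int = 100, threshold: float = 0.05):
        self.outcomes = deque(maxlen=window)   # True = request failed
        self.threshold = threshold

    def record(self, failed: bool) -> None:
        self.outcomes.append(failed)

    def alerting(self) -> bool:
        if not self.outcomes:
            return False
        rate = sum(self.outcomes) / len(self.outcomes)
        return rate > self.threshold

mon = ErrorRateMonitor(window=10, threshold=0.2)
for failed in [False] * 7 + [True] * 3:        # 30% errors in the window
    mon.record(failed)
print(mon.alerting())
```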

Posted 3 weeks ago

Apply

7.0 - 12.0 years

25 - 35 Lacs

Hyderabad, Chennai

Hybrid

Hiring for AWS DevOps Engineer; immediate joiners preferred. Preferred candidate profile:
AWS - EKS: Cluster setup, scaling, node groups, IAM roles, ingress
AWS - EC2: AMI usage, instance lifecycle, auto-scaling
AWS - EBS: Volume usage, snapshots, reattachment
AWS - S3: Lifecycle policies, usage with backups/CI-CD
AWS - RDS: Provisioning, backups, monitoring, failover
AWS - SNS: Notifications, topic-subscription configuration
CI/CD - Jenkins: Job configs, pipelines, shared libraries
CI/CD - GitHub: Repo structure, branch policy, PR reviews
CI/CD - Terraform: State management, modules, remote backend
CI/CD - Argo Rollouts: Canary, blue-green strategies, Istio integration
K8s - Helm Charts: Custom charts, chart repo usage, secrets templating
K8s - Istio: Gateway, mTLS, observability setup
Monitoring - Datadog: Dashboards, log ingestion, alerts, APM
Automation - Scripts: Provisioning, scaling, log rotation, backups
Automation - CRON jobs: Scheduled tasks for automation
Linux Admin - User Access: Access controls, patching
Linux Admin - Performance Monitoring: System metrics, troubleshooting
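The Argo Rollouts canary strategy listed above is declared as alternating setWeight/pause steps. A small generator for such a step list; the field names mirror the Argo Rollouts spec, but the generator itself is an illustrative sketch:

```python
# Build an Argo Rollouts-style canary step list: shift a weight percentage,
# pause to observe metrics, repeat until traffic is fully on the new version.

def canary_steps(weights: list[int], pause_seconds: int) -> list[dict]:
    steps = []
    for w in weights:
        steps.append({"setWeight": w})
        if w < 100:                       # no pause needed once fully shifted
            steps.append({"pause": {"duration": f"{pause_seconds}s"}})
    return steps

steps = canary_steps([10, 25, 50, 100], pause_seconds=300)
print(steps[0], steps[-1])
```

In a real Rollout manifest this list goes under `spec.strategy.canary.steps`, with Istio handling the actual traffic split.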

Posted 3 weeks ago

Apply

3.0 - 4.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

An AWS DevOps Architect designs and manages the DevOps environment for an organization. They ensure that software development and IT operations are integrated seamlessly. Responsibilities DevOps strategy: Develop and implement the DevOps strategy and roadmap. Automation: Automate the provisioning, configuration, and management of infrastructure components. Cloud architecture: Design and manage the cloud and infrastructure architecture. Security: Implement security measures and compliance controls. Collaboration: Foster collaboration between development, operations, and other cross-functional teams. Continuous improvement: Regularly review and analyze DevOps processes and practices. Reporting: Provide regular reports on infrastructure performance, costs, and security to management. Skills and experience Experience with AWS services like ECS, EKS, and Kubernetes. Knowledge of scripting languages like Python. Experience with DevOps tools and technologies like Jenkins, Terraform, and Ansible. Experience with CI/CD pipelines. Experience with cloud governance standards and best practices. PS: We need strong DevOps tool implementation experts on the AWS platform (Jenkins, Terraform, and other DevOps tools).

Posted 3 weeks ago

Apply

6.0 - 11.0 years

2 - 2 Lacs

Ahmedabad

Work from Office

Brief Description We are seeking an innovative DevOps Team Leader to join our team! This person would take over, as a technical leader, an existing team of DevOps engineers responsible for managing cloud platform automation, developing and operating CI/CD pipelines, and the performance, availability and reliability of our cloud-hosted applications. The Opportunity As a DevOps Team Lead, your primary responsibility would be guiding the team through established projects to ensure successful outcomes and timelines. The team is responsible for managing cloud platform automation, developing and operating CI/CD pipelines, and the performance, availability and reliability of our cloud-hosted applications. The ideal candidate will be accountable for managing all cloud infrastructure from the point of project builds to production deployment. Your responsibilities on these systems will include but will not be limited to engineering, deployment, provisioning, and maintenance of the server infrastructure, as well as research and development to ensure continual innovation. As a lead-level position, you will additionally be expected to offer guidance and mentoring to junior members of the team, and conduct technical performance reviews.
Responsibilities Architect infrastructure in AWS. Own and drive project development to delivery. Work closely with software engineers and tech leadership to drive projects. Build and run production environments in AWS using infrastructure-as-code methods. Orchestrate environment development using Terraform and Ansible. Build and deploy CI pipeline automation to a combination of Kubernetes/Docker and VM-based systems in AWS. Work closely with System Architects, Engineers, Product Managers and System Administrators to meet their environment setup and service automation needs. Develop processes to make DevOps part of the engineering development, service deployment and operations lifecycle. Develop scripts for deployment, troubleshooting, automation, and regular maintenance. System monitoring and analytics. Regularly review work product and progress of team members to ensure best practices are being followed. Conduct team training sessions on technology and features used by Maruti. Provide guidance and mentor junior DevOps engineers. Comply with Company Information Security policies and procedures and report any incident that is related to information security.
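The "scripts for deployment, troubleshooting, automation, and regular maintenance" this role owns usually need a retry wrapper for flaky cloud API calls. A minimal, hypothetical sketch with exponential backoff and full jitter; the helper name and limits are illustrative, and the sleep function is injectable so the logic can be tested without waiting:

```python
# Illustrative retry helper for transient cloud API failures:
# exponential backoff capped at `cap` seconds, with full jitter.

import random

def retry(op, attempts: int = 5, base: float = 0.5, cap: float = 30.0, sleep=None):
    """Call op() until it succeeds or attempts are exhausted."""
    sleep = sleep or (lambda s: None)       # default: no-op, for tests/demos
    for attempt in range(attempts):
        try:
            return op()
        except Exception:
            if attempt == attempts - 1:
                raise                       # out of attempts: surface the error
            backoff = min(cap, base * 2 ** attempt)
            sleep(random.uniform(0, backoff))   # full jitter spreads retries out

calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient API error")
    return "deployed"

print(retry(flaky))
```

Full jitter (sleeping a random amount up to the backoff) avoids retry storms when many clients fail at once, which is why it is the common choice for AWS API callers.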

Posted 3 weeks ago

Apply

7.0 - 12.0 years

9 - 14 Lacs

Bengaluru

Work from Office

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity Join our dynamic and forward-thinking Platform Engineering team at a world-class analytics company. Our mission is to accelerate innovation by delivering a cohesive internal developer platform that combines an enterprise-grade Spotify Backstage portal, Buf Schema Registry, GitOps automation, and cloud-native tooling. As a Lead Platform Engineer, you'll architect and own the services, plugins, and pipelines that power a world-class developer experience for thousands of engineers building fraud, risk, marketing, and customer-management solutions. Sr. Director, 1ES Engineering What You'll Contribute Operate and scale Backstage as the single pane of glass for developers. Design and publish custom Backstage plugins, templates, and software catalog integrations that reduce cognitive load and surface business context. Define governance & RBAC models for Backstage groups, entities, and APIs. Establish and maintain BSR as the system of record for Protobuf and gRPC APIs. Automate linting, breaking-change detection, versioning, and dependency insights in CI/CD. Integrate BSR metadata into Backstage to provide full API lineage and documentation. Collaborate with product and infrastructure teams to deliver resilient, self-service platform building blocks. Own GitHub Actions, Argo CD, Crossplane, and policy-as-code workflows that enable secure, audit-ready deployments. Continuously experiment with new ideas (hack days, proofs-of-concept, and brown-bag sessions) to push the envelope of DevEx. Champion data-driven improvements using DORA/SPACE metrics and developer feedback loops. Instrument, monitor, and tune platform components for scale (Prometheus/Grafana, Splunk, Cribl, CloudWatch). Embed security controls (SCA, SAST, OPA/Kyverno) early in the SDLC.
Guide engineers across domains, codify best practices, and foster a culture of psychological safety, creativity, and ownership. What We're Seeking Deep Backstage Expertise: Proven experience deploying, customizing, and scaling Backstage in production, including authoring plugins (React/Node), scaffolder templates, and catalog processors. Buf Schema Registry Mastery: Hands-on knowledge of managing API contracts in BSR, enforcing semantic versioning, and integrating breaking-change gates into CI/CD. Cloud-Native & GitOps Proficiency: Kubernetes (EKS/GKE/AKS), Argo CD, Crossplane, Docker, Helm; expert-level GitHub Actions workflow design. Programming Skills: Strong in TypeScript/JavaScript (for Backstage), plus one or more of Go, Python, or NodeJS for platform services. Infrastructure as Code & Automation: Terraform, Pulumi, or Ansible to codify cloud resources and policies. Observability & Incident Management: Prometheus, Grafana, Datadog, PagerDuty; ability to design SLO/SLA dashboards. Creative Problem-Solving & Growth Mindset: Demonstrated ability to think big, prototype quickly, and iterate based on data and feedback. Excellent Communication & Collaboration: Clear written and verbal skills; ability to translate technical details to diverse stakeholders. Education / Experience: Bachelor's in Computer Science or equivalent experience; 7+ years in platform, DevOps, or developer-experience roles, with 2+ years focused on Backstage and/or BSR. Our Offer to You An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so.
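The semantic-versioning and breaking-change gates mentioned under Buf Schema Registry can be sketched as a small policy check: given the kind of API change detected (e.g., by a breaking-change analyzer), decide whether a proposed version bump is large enough. The change categories here ("breaking", "feature", "fix") are illustrative assumptions, not Buf's actual output format:

```python
# Illustrative semver gate: a breaking API change must ship with a major bump,
# a new feature with at least a minor bump, a fix with at least a patch bump.

def minimum_bump(change: str) -> str:
    return {"breaking": "major", "feature": "minor", "fix": "patch"}[change]

def bump_ok(old: str, new: str, change: str) -> bool:
    o, n = [tuple(map(int, v.split("."))) for v in (old, new)]
    required = minimum_bump(change)
    if required == "major":
        return n[0] > o[0]
    if required == "minor":
        return n[:2] > o[:2]
    return n > o                      # patch: any forward movement suffices

print(bump_ok("1.4.2", "2.0.0", "breaking"))
print(bump_ok("1.4.2", "1.4.3", "breaking"))
```

Wired into CI, a check like this blocks a merge that detects a breaking Protobuf change without the corresponding major version bump.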
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie. Why Make a Move to FICO At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide: Credit Scoring - FICO Scores are used by 90 of the top 100 US lenders. Fraud Detection and Security - 4 billion payment cards globally are protected by FICO fraud systems. Lending - 3/4 of US mortgages are approved using the FICO Score. Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people just like you who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at www.fico.com/Careers FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity. We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status.
Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy

Posted 3 weeks ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Bengaluru

Work from Office

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential! The Opportunity "We are seeking an experienced DevOps Engineer to join our development team to assist in the continuing evolution of our Platform Orchestration product. You will be able to demonstrate the required potential and technical curiosity to work on software that utilizes a range of leading-edge technologies and integration frameworks. Staff training, investment and career growth form an important part of our team ethos. Consequently, you will gain exposure to different software validation techniques supported by industry-standard engineering processes that will help to grow your skills and experience." - VP, Software Engineering. What You'll Contribute Build and maintain CI/CD pipelines for multi-tenant deployments using Jenkins and GitOps practices. Manage Kubernetes infrastructure (AWS EKS), Helm charts, and service mesh configurations (Istio). Use kubectl, Lens, or other dashboards for real-time workload inspection and troubleshooting. Evaluate security, stability, compatibility, scalability, interoperability, monitorability, resilience, and performance of our software. Support development and QA teams with code merge, build, install, and deployment environments. Ensure continuous improvement of the software automation pipeline to increase build and integration efficiency. Oversee and maintain the health of software repositories and build tools, ensuring successful and continuous software builds. Verify final software release configurations, ensuring integrity against specifications, architecture, and documentation. Perform fulfillment and release activities, ensuring timely and reliable deployments. What We're Seeking A Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
8-12 years of hands-on experience in DevOps or SRE roles for cloud-native Java-based platforms. Deep knowledge of AWS Cloud Services (EKS, IAM, CloudWatch, S3, Secrets Manager), including networking and security components. Strong experience with Kubernetes, Helm, ConfigMaps, Secrets, and Kustomize. Expertise in authoring and maintaining Jenkins pipelines integrated with security and quality scanning tools. Hands-on experience with infrastructure provisioning tools such as Docker and CloudFormation. Familiarity with CI/CD pipeline tools and build systems including Jenkins and Maven. Experience administering software repositories such as Git or Bitbucket. Proficient in scripting/programming languages such as Ruby, Groovy, and Java. Proven ability to analyze and resolve issues related to performance, scalability, and reliability. Solid understanding of DNS, Load Balancing, SSL, TCP/IP, and general networking and security best practices. Our Offer to You An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers and Earn the Respect of Others. The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences. Highly competitive compensation, benefits and rewards programs that encourage you to bring your best every day and be recognized for doing so. An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.
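The release-verification duties above ("verify final software release configurations") can be illustrated with a rollout-completion check over a Deployment's status block. The field names follow the Kubernetes Deployment API (as returned by `kubectl get deploy -o json`); the pass rule is a simplification of what `kubectl rollout status` evaluates:

```python
# Simplified rollout check: the controller has seen the latest spec, and all
# desired replicas are both updated to the new template and available.

def rollout_complete(dep: dict) -> bool:
    spec_replicas = dep.get("spec", {}).get("replicas", 1)
    status = dep.get("status", {})
    return (
        status.get("observedGeneration", 0) >= dep.get("metadata", {}).get("generation", 0)
        and status.get("updatedReplicas", 0) == spec_replicas
        and status.get("availableReplicas", 0) == spec_replicas
    )

dep = {
    "metadata": {"generation": 7},
    "spec": {"replicas": 3},
    "status": {"observedGeneration": 7, "updatedReplicas": 3, "availableReplicas": 3},
}
print(rollout_complete(dep))
```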
Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy

Posted 3 weeks ago

Apply

7.0 - 12.0 years

30 - 45 Lacs

Hyderabad

Hybrid

Key Skills: Python, Jenkins, CI/CD, Bash, PowerShell, Docker, EKS, ECS, AWS, SumoLogic, Asset Management Domain, Cloud Certification, Automation, DevOps. Roles & Responsibilities: Develop and support automation scripts and solutions using Python or other object-oriented programming languages. Implement and manage CI/CD pipelines using Jenkins and other related tools. Collaborate with distributed teams to deliver high-quality, scalable, and secure software solutions. Support containerization using Docker and orchestration using EKS or ECS. Monitor and troubleshoot systems using tools like SumoLogic. Ensure alignment with DevOps best practices in cloud environments. Experience Requirement: 7-14 years of experience in Python application development or object-oriented programming. Hands-on experience with scripting languages such as Python, Bash, or PowerShell. Strong understanding of CI/CD pipelines, containerization, and cloud infrastructure. Prior experience in geographically dispersed teams and Agile environments. Experience working in or with the Asset Management domain is preferred. Education: Any Post Graduation, Any Graduation.
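Monitoring and troubleshooting "using tools like SumoLogic" often begins with summarizing error volume per service. A hypothetical sketch over structured log lines; the four-field log format (timestamp, level, service, message) is an assumption for illustration, not SumoLogic's schema:

```python
# Summarize error counts per service from structured log lines.
# Assumed line format: "<ISO time> <LEVEL> <service> <message...>"

from collections import Counter

def error_counts(lines: list[str]) -> Counter:
    counts = Counter()
    for line in lines:
        parts = line.split(" ", 3)          # time, level, service, message
        if len(parts) == 4 and parts[1] == "ERROR":
            counts[parts[2]] += 1
    return counts

logs = [
    "2024-05-01T10:00:01Z ERROR pricing timeout calling rates API",
    "2024-05-01T10:00:02Z INFO pricing request served",
    "2024-05-01T10:00:03Z ERROR orders duplicate submission rejected",
    "2024-05-01T10:00:04Z ERROR pricing timeout calling rates API",
]
print(error_counts(logs))
```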

Posted 3 weeks ago

Apply

3.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

We are seeking a Senior DevOps Engineer with 3-5 years of hands-on experience in cloud infrastructure and DevOps practices. The role involves designing, implementing, and maintaining AWS cloud infrastructure, managing containerized applications using Amazon EKS and Kubernetes, and developing CI/CD pipelines with Jenkins, Azure DevOps, and Argo CD. The ideal candidate will have expertise in Infrastructure as Code (IaC) tools such as Terraform or CloudFormation, strong scripting skills (e.g., Python, Bash), and a deep understanding of AWS services like EC2, S3, and RDS. Candidates with experience in Financial Services Industry (FSI) or regulated environments are preferred. This is a full-time, 6-month on-site role in Bengaluru, with a requirement for immediate joiners or a notice period of 15 days.
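One common guardrail for the Terraform work described here is failing CI when a plan would destroy resources. A sketch that summarizes the JSON produced by `terraform show -json plan.out`; only the small slice of the plan schema used here (`resource_changes[].change.actions`) is assumed:

```python
# Summarize a Terraform plan export and flag destructive changes before apply.

import json

def plan_summary(plan_json: str) -> dict:
    plan = json.loads(plan_json)
    summary = {"create": 0, "update": 0, "delete": 0}
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            if action in summary:
                summary[action] += 1
    return summary

plan = json.dumps({"resource_changes": [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
    {"address": "aws_instance.app", "change": {"actions": ["delete", "create"]}},  # replacement
]})
s = plan_summary(plan)
print(s)
if s["delete"]:
    print("plan would destroy resources - require manual approval")
```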

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Inviting applications for the role of a Consultant - AWS Developer! We are looking for candidates who have a passion for cloud with knowledge of different cloud environments. Ideal candidates should have technical experience in AWS Platform Services - IAM Roles & Policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. This key role demands a highly motivated individual with a strong background in Computer Science/Software Engineering. You are meticulous, thorough and possess excellent communication skills to engage with all levels of our stakeholders. A self-starter, you are up to speed with the latest developments in the tech world. Responsibilities Hands-on experience and good skills on AWS Platform Services - IAM Roles & Policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. Must have good working knowledge of Kubernetes and Docker.
Utilize AWS services such as AWS Glue, Amazon S3, AWS Lambda, and others to optimize performance, reliability, and cost-effectiveness. Develop scripts, utilities, and automation tools to facilitate the migration process and ensure compatibility with AWS services. Implement best practices for security, scalability, and fault tolerance in AWS-based solutions. Experience in AWS cost analysis and a thorough understanding of how to optimize AWS cost. Must have good working knowledge of deployment templates like Terraform/CloudFormation. Ability to multi-task and manage various project elements simultaneously. Qualifications we seek in you! Minimum Qualifications / Skills Bachelor's degree with experience in Information Technology. Must have experience in AWS Platform Services. Preferred Qualifications / Skills Very good written and presentation/verbal communication skills with experience of a customer-interfacing role. In-depth requirement understanding skills with good analytical and problem-solving ability, interpersonal efficiency, and positive attitude. Experience in ML/AI. Experience in the telecommunication industry. Experience with cloud providers (e.g., AWS, GCP). Why join Genpact? Be a transformation leader - Work at the cutting edge of AI, automation, and digital innovation. Make an impact - Drive change for global enterprises and solve business challenges that matter. Accelerate your career - Get hands-on experience, mentorship, and continuous learning opportunities. Work with the best - Join 140,000+ bold thinkers and problem-solvers who push boundaries every day. Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress. Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
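The S3 and cost-optimization responsibilities in this role can be illustrated by building a lifecycle configuration that tiers logs to cheaper storage and then expires them. The dict shape mirrors the payload boto3's `put_bucket_lifecycle_configuration` accepts, but the helper itself and its day counts are illustrative:

```python
# Build an S3 lifecycle configuration: transition objects under a prefix to
# STANDARD_IA, then GLACIER, then expire them. Shape matches the boto3
# LifecycleConfiguration payload.

def log_lifecycle(prefix: str, ia_days: int = 30, glacier_days: int = 90,
                  expire_days: int = 365) -> dict:
    assert ia_days < glacier_days < expire_days, "tiers must be ordered"
    return {"Rules": [{
        "ID": f"tier-and-expire-{prefix.strip('/')}",
        "Filter": {"Prefix": prefix},
        "Status": "Enabled",
        "Transitions": [
            {"Days": ia_days, "StorageClass": "STANDARD_IA"},
            {"Days": glacier_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_days},
    }]}

cfg = log_lifecycle("logs/")
print(cfg["Rules"][0]["ID"])
```

In practice the returned dict would be passed to `s3.put_bucket_lifecycle_configuration(Bucket=..., LifecycleConfiguration=cfg)`.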

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

We are seeking a Senior DevOps Engineer with 3-5 years of hands-on experience in cloud infrastructure and DevOps practices. The role involves designing, implementing, and maintaining AWS cloud infrastructure, managing containerized applications using Amazon EKS and Kubernetes, and developing CI/CD pipelines with Jenkins, Azure DevOps, and Argo CD. The ideal candidate will have expertise in Infrastructure as Code (IaC) tools such as Terraform or CloudFormation, strong scripting skills (e.g., Python, Bash), and a deep understanding of AWS services like EC2, S3, and RDS. Candidates with experience in Financial Services Industry (FSI) or regulated environments are preferred. This is a full-time, 6-month on-site role in Bengaluru, with a requirement for immediate joiners or a notice period of 15 days.

Posted 3 weeks ago

Apply