
638 EKS Jobs - Page 13

Set up a Job Alert
JobPe aggregates results for easy application access, but you actually apply on the job portal directly.

10.0 - 15.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Description: We are seeking an experienced DevOps Architect to lead the design, implementation, and management of scalable DevOps solutions and cloud infrastructure. The ideal candidate will have a strong background in CI/CD pipeline development, infrastructure automation, container orchestration, and AWS cloud services, with hands-on expertise in tools such as Jenkins, Ansible, Terraform, Kubernetes, and AWS EKS/ECS.

Requirements:
- AWS (VPC/ECS/EC2/CloudFormation/RDS)
- Artifactory
- Some knowledge of CircleCI/SaltStack is preferred but not required

Responsibilities:
- Manage containerized applications using Kubernetes, Docker, etc.
- Automate builds/deployments (CI/CD) and other repetitive tasks using shell/Python scripts or tools like Ansible and Jenkins
- Coordinate with development teams to fix issues and release new code
- Set up configuration management using tools like Ansible
- Implement highly available, auto-scaling, fault-tolerant, secure setups
- Implement automated jobs/tasks such as backups, cleanup, start-stop, and reports
- Configure monitoring alerts/alarms and act on any outages/incidents
- Ensure that the infrastructure is secured and can be accessed only from limited IPs and ports
- Understand client requirements, propose solutions, and ensure delivery
- Innovate and actively look for improvements in the overall infrastructure

Must Have:
- Bachelor's degree, with at least 7 years of experience in DevOps
- Experience with various DevOps tools: GitLab, Jenkins, SonarQube, Nexus, Ansible, etc.
- Experience with various AWS services: EC2, S3, RDS, CloudFront, CloudWatch, CloudTrail, Route53, ECS, ASG, etc.
- Well-versed in shell/Python scripting and Linux
- Well-versed in web servers (Apache, Tomcat, etc.)
- Well-versed in containerized applications (Docker, Docker Compose, Docker Swarm, Kubernetes)
- Experience with configuration management tools like Puppet and Ansible
- Experience in CI/CD implementation (Jenkins, Bamboo, etc.)
- Self-starter with the ability to deliver under tight timelines

Good to Have:
- Exposure to tools like New Relic, ELK, Jira, and Confluence
- Prior experience managing infrastructure for public-facing web applications
- Prior experience handling client communications
- Basic networking knowledge: VLAN, subnet, VPC, etc.
- Knowledge of databases (PostgreSQL)

Key Skills (Must Have): Jenkins, Docker, Python, Groovy, shell scripting, Artifactory, GitLab, Terraform, VMware, PostgreSQL, AWS, Kafka

What We Offer:
- Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them.
- Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment, or even abroad in one of our global centers or client facilities!
- Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays.
- Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings.
- Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
- Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidized rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can have coffee or tea with your colleagues over a game, plus discounts for popular stores and restaurants!
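The responsibilities above call for automating recurring jobs such as backups and cleanup with shell or Python scripts. A minimal Python sketch of the cleanup half (the directory layout and 30-day retention window are illustrative assumptions, not part of the posting):

```python
import os
import time

def purge_old_backups(directory, max_age_days=30):
    """Delete backup files older than max_age_days; return the names removed."""
    cutoff = time.time() - max_age_days * 86400  # 86400 seconds per day
    removed = []
    for name in sorted(os.listdir(directory)):
        path = os.path.join(directory, name)
        # Only regular files are considered; age is judged by modification time.
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

In practice a job like this would run from cron or a Jenkins schedule, with logging and a dry-run flag before anything is actually deleted.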

Posted 1 month ago

Apply

1.0 - 3.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Job Overview: We are looking for a highly skilled AppDynamics Consultant with a strong background in application performance monitoring, cloud-native technologies, and end-to-end observability. The ideal candidate will have hands-on experience with AppDynamics components, instrumentation techniques, and certified expertise. You will be responsible for enabling proactive monitoring solutions in hybrid and multi-cloud environments, working closely with cross-functional teams, including SRE, DevOps, and Engineering.

Key Responsibilities:
- Lead the end-to-end implementation of AppDynamics across enterprise applications and cloud workloads.
- Instrument and configure AppDynamics agents for Java, .NET, Node.js, PHP, Python, and database tiers.
- Design and deploy Application Flow Maps, Business Transactions, Health Rules, and Policies.
- Create and maintain custom dashboards, analytics queries, and synthetic monitoring scripts.
- Develop SLIs/SLOs and integrate them into AppDynamics Dash Studio and external observability platforms.
- Tune performance baselines, anomaly detection, and alert thresholds.
- Collaborate with Cloud Architects and SRE teams to align monitoring with cloud-native best practices.
- Provide technical workshops, knowledge transfer sessions, and documentation for internal and external stakeholders.
- Integrate AppDynamics with CI/CD pipelines, incident management tools (e.g., ServiceNow, PagerDuty), and cloud-native telemetry.

Required AppDynamics Expertise (strong hands-on experience in):
- Controller administration (SaaS or on-premises)
- Agent configuration for APM, Infrastructure Visibility, Database Monitoring, and End-User Monitoring
- Analytics and Business iQ
- Service endpoints, data collectors, and custom metrics
- AppDynamics Dash Studio and advanced dashboards
- Deep understanding of transaction snapshots, call graphs, errors, and bottleneck analysis
- Knowledge of AppDynamics APIs for automation and custom integrations
- Ability to troubleshoot agent issues, data gaps, and controller health

Mandatory Cloud & DevOps Skills:
- Hands-on experience with at least one major cloud platform: AWS (EC2, ECS/EKS, Lambda, CloudWatch, CloudFormation), Azure (App Services, AKS, Functions, Azure Monitor), or GCP (GKE, Compute Engine, Cloud Operations Suite)
- Experience in containerized environments (Kubernetes, Docker)
- Familiarity with CI/CD pipelines (Jenkins, GitLab, GitHub Actions)
- Scripting skills (Shell, Python, or PowerShell) for automation and agent deployment
- Experience with Infrastructure as Code (Terraform, CloudFormation)

Preferred Skills:
- Integration with OpenTelemetry, Grafana, Prometheus, or Splunk
- Experience with full-stack monitoring (APM + infrastructure + logs + RUM + synthetics)
- Knowledge of Site Reliability Engineering (SRE) practices and error budgets
- Familiarity with ITSM tools and alert routing mechanisms
- Understanding of business KPIs and mapping them to technical metrics

Certifications (Preferred or Required):
- AppDynamics Certified Associate / Professional / Implementation Professional
- Cloud certifications: AWS Certified Solutions Architect / DevOps Engineer; Microsoft Certified: Azure Administrator / DevOps Engineer; Google Associate Cloud Engineer / Professional Cloud DevOps Engineer
- Kubernetes (CKA/CKAD) or equivalent is a plus

Education:
- Bachelor's degree in computer science, IT, or a related field
- Master's degree (optional but preferred)
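The role asks for experience developing SLIs/SLOs and working with error budgets. As a hypothetical illustration of the underlying arithmetic (the 99.9% target and request counts below are assumptions for the example, not figures from the posting):

```python
def error_budget_remaining(slo_target, total_requests, failed_requests):
    """Return the fraction of the error budget still unspent (can go negative).

    slo_target is the availability objective, e.g. 0.999 for "three nines".
    """
    if total_requests == 0:
        return 1.0  # no traffic observed, so the budget is untouched
    allowed_failures = (1.0 - slo_target) * total_requests
    if allowed_failures == 0:
        # A 100% SLO leaves no budget at all.
        return 0.0 if failed_requests else 1.0
    return 1.0 - failed_requests / allowed_failures
```

For a 99.9% availability SLO, 100,000 requests allow roughly 100 failures; burning 50 of them leaves about half the budget, which is the kind of signal alert thresholds and release gates are tuned against.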

Posted 1 month ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Job Overview: We are looking for a highly skilled AppDynamics Consultant with a strong background in application performance monitoring, cloud-native technologies, and end-to-end observability. The ideal candidate will have hands-on experience with AppDynamics components, instrumentation techniques, and certified expertise. You will be responsible for enabling proactive monitoring solutions in hybrid and multi-cloud environments, working closely with cross-functional teams, including SRE, DevOps, and Engineering.

Key Responsibilities:
- Lead the end-to-end implementation of AppDynamics across enterprise applications and cloud workloads.
- Instrument and configure AppDynamics agents for Java, .NET, Node.js, PHP, Python, and database tiers.
- Design and deploy Application Flow Maps, Business Transactions, Health Rules, and Policies.
- Create and maintain custom dashboards, analytics queries, and synthetic monitoring scripts.
- Develop SLIs/SLOs and integrate them into AppDynamics Dash Studio and external observability platforms.
- Tune performance baselines, anomaly detection, and alert thresholds.
- Collaborate with Cloud Architects and SRE teams to align monitoring with cloud-native best practices.
- Provide technical workshops, knowledge transfer sessions, and documentation for internal and external stakeholders.
- Integrate AppDynamics with CI/CD pipelines, incident management tools (e.g., ServiceNow, PagerDuty), and cloud-native telemetry.

Required AppDynamics Expertise (strong hands-on experience in):
- Controller administration (SaaS or on-premises)
- Agent configuration for APM, Infrastructure Visibility, Database Monitoring, and End-User Monitoring
- Analytics and Business iQ
- Service endpoints, data collectors, and custom metrics
- AppDynamics Dash Studio and advanced dashboards
- Deep understanding of transaction snapshots, call graphs, errors, and bottleneck analysis
- Knowledge of AppDynamics APIs for automation and custom integrations
- Ability to troubleshoot agent issues, data gaps, and controller health

Mandatory Cloud & DevOps Skills:
- Hands-on experience with at least one major cloud platform: AWS (EC2, ECS/EKS, Lambda, CloudWatch, CloudFormation), Azure (App Services, AKS, Functions, Azure Monitor), or GCP (GKE, Compute Engine, Cloud Operations Suite)
- Experience in containerized environments (Kubernetes, Docker)
- Familiarity with CI/CD pipelines (Jenkins, GitLab, GitHub Actions)
- Scripting skills (Shell, Python, or PowerShell) for automation and agent deployment
- Experience with Infrastructure as Code (Terraform, CloudFormation)

Preferred Skills:
- Integration with OpenTelemetry, Grafana, Prometheus, or Splunk
- Experience with full-stack monitoring (APM + infrastructure + logs + RUM + synthetics)
- Knowledge of Site Reliability Engineering (SRE) practices and error budgets
- Familiarity with ITSM tools and alert routing mechanisms
- Understanding of business KPIs and mapping them to technical metrics

Certifications (Preferred or Required):
- AppDynamics Certified Associate / Professional / Implementation Professional
- Cloud certifications: AWS Certified Solutions Architect / DevOps Engineer; Microsoft Certified: Azure Administrator / DevOps Engineer; Google Associate Cloud Engineer / Professional Cloud DevOps Engineer
- Kubernetes (CKA/CKAD) or equivalent is a plus

Education:
- Bachelor's degree in computer science, IT, or a related field
- Master's degree (optional but preferred)

Posted 1 month ago

Apply

3.0 - 8.0 years

8 - 18 Lacs

Kolkata, Mumbai (All Areas)

Work from Office

Job Title: AWS Java Developer
Experience: 3 to 8 years
Job Locations: Kolkata and Mumbai
Notice Period: 30 days
Must be hands-on with AWS solutions using Java, with Oracle knowledge, for an application development project.
Essential Skills:
- Java
- AWS services: Lambda, DynamoDB, EKS, SQS, Step Functions, Linux
- Oracle DB: PL/SQL, SQL
Desired Skills:
- AWS services: RDS, Aurora, Kafka, DocumentDB, Kinesis, Elasticsearch

Posted 1 month ago

Apply

5.0 - 9.0 years

6 - 9 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Experience: 5+ years
Expected Notice Period: 30 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote (New Delhi, Bengaluru, Mumbai)

We are seeking a seasoned DevOps Architect / Senior Engineer with deep expertise in AWS, EKS, Terraform, Infrastructure as Code, and MongoDB Atlas to lead the design, implementation, and management of our cloud-native infrastructure. This is a hands-on leadership role focused on ensuring the scalability, reliability, security, and efficiency of our production-grade systems.

Key Responsibilities:
- Cloud Infrastructure Design & Management (AWS): Architect, build, and manage secure, scalable AWS infrastructure (VPC, EC2, S3, IAM, Security Groups). Implement secure cloud networking and ensure high availability. Monitor, optimize, and troubleshoot AWS environments.
- Container Orchestration (AWS EKS): Deploy and manage production-ready EKS clusters, including workload deployments, scaling (manual and via Karpenter), monitoring, and security. Maintain CI/CD pipelines for Kubernetes applications.
- Infrastructure as Code (IaC): Lead development of Terraform-based IaC modules (clean, reusable, and secure). Manage Terraform state and promote best practices (modularization, code reviews). Extend IaC to multi-cloud (Azure, GCP) and leverage CloudFormation or Bicep when needed.
- Programming, Automation & APIs: Develop automation scripts using Python, Bash, or PowerShell. Design, secure, and manage APIs (AWS API Gateway, optionally Azure API Management). Integrate systems/services via APIs and event-driven architecture. Troubleshoot and resolve infrastructure or deployment issues.
- Database Management: Administer MongoDB Atlas: setup, configuration, performance tuning, backup, and security. Implement best practices for high availability and resilience.
- DevOps Leadership & Strategy: Define and promote DevOps best practices across the organization. Automate and streamline development-to-deployment workflows. Mentor junior engineers and foster a culture of technical excellence. Stay ahead of emerging DevOps and cloud trends.

Mandatory Skills:
- Cloud Administration (AWS): VPC design (subnets, route tables, NAT/IGW, peering); IAM (users, roles, policies with least-privilege enforcement); deep AWS service knowledge and administrative experience.
- Container Orchestration (AWS EKS): production-grade cluster setup and upgrades; workload autoscaling using Karpenter; logging/monitoring via Prometheus, Grafana, and CloudWatch; secure EKS practices (RBAC, PSP/PSA, admission controllers, secret management).
- CI/CD & Kubernetes: experience with Jenkins, GitLab CI, ArgoCD, and Flux; microservices deployment and Kubernetes cluster federation knowledge.
- Infrastructure as Code: expert in Terraform (HCL, modules, backends, security); familiarity with CloudFormation and Bicep for cross-cloud support; Git-based version control and CI/CD integration; automated infrastructure provisioning.
- Programming & APIs: proficient in Python, Bash, and PowerShell; secure API design, development, and management.
- Database Management: proven MongoDB Atlas administration: scaling, backups, alerts, and performance monitoring.

Good to Have Skills:
- Infrastructure & OS: server and virtualization management (Linux/Windows); OS security hardening and automation; disaster recovery planning and implementation; Docker containerization.
- Networking & Security: advanced networking (DNS, BGP, routing); software-defined networking (SDN) and hybrid networking; Zero Trust architecture; load balancer (ALB/ELB/NLB) security and WAF management; compliance (ISO 27001, SOC 2, PCI-DSS); secrets management (Vault, AWS Secrets Manager).
- Observability & Automation: OpenTelemetry and LangTrace for observability; AI-powered automation (e.g., CrewAI); SIEM/security monitoring.
- Cloud Governance: cost optimization strategies; AWS Well-Architected Framework familiarity; incident response, governance, and compliance management.

Qualifications & Experience:
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years in DevOps / SRE / Cloud Engineering with an AWS focus.
- 5+ years of hands-on experience with EKS and Terraform.
- Proven experience with cloud-native architecture and automation.
- AWS certifications (DevOps Engineer Professional, Solutions Architect Professional) preferred.
- Agile/Scrum experience is a plus.
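The VPC design work listed under the mandatory skills (carving subnets out of a CIDR block) can be sketched with Python's standard ipaddress module; the CIDR block and prefix length below are hypothetical examples, not values from the posting:

```python
import ipaddress

def plan_subnets(vpc_cidr, new_prefix):
    """Split a VPC CIDR block into equal subnets of the given prefix length."""
    vpc = ipaddress.ip_network(vpc_cidr)
    # subnets() yields every non-overlapping child network at the new prefix.
    return [str(net) for net in vpc.subnets(new_prefix=new_prefix)]
```

Splitting a /16 VPC into /18s, for example, yields four equal subnets that could back a public/private split across availability zones; the same arithmetic underlies the subnet maps a Terraform module would encode.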

Posted 1 month ago

Apply

7.0 - 11.0 years

9 - 12 Lacs

Mumbai, Bengaluru, Delhi

Work from Office

Experience: 7+ years
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Must-have skills: DevOps, PowerShell, CLI, Amazon AWS, Java, Scala, Go (Golang), Terraform

Opportunity Summary: We are looking for an enthusiastic and dynamic individual to join Upland India as a DevOps Engineer in the Cloud Operations Team. The individual will manage and monitor our extensive set of cloud applications. The successful candidate will possess extensive experience with production systems and an excellent understanding of key SaaS technologies, and will exhibit a high degree of initiative and responsibility. The candidate will participate in technical/architectural discussions supporting Upland's products and influence decisions concerning solutions and techniques within their discipline.

What would you do?
- Be an engaged, active member of the team, contributing to driving greater efficiency and optimization across our environments.
- Automate manual tasks to improve performance and reliability.
- Build, install, and configure servers in physical and virtual environments.
- Participate in an on-call rotation to support customer-facing application environments.
- Monitor and optimize system performance, taking proactive measures to prevent issues and reactive measures to correct them.
- Participate in the Incident, Change, Problem, and Project Management programs and document details within prescribed guidelines.
- Advise technical and business teams on tactical and strategic improvements to enhance operational capabilities.
- Create and maintain documentation of enterprise infrastructure topology and system configurations.
- Serve as an escalation point for internal support staff to resolve issues.

What are we looking for?
Experience: 7-9 years of total experience in DevOps: AWS (solutioning and operations), GitHub/Bitbucket, CI/CD, Jenkins, ArgoCD, Grafana, Prometheus, etc.

Technical Skills: You should have 7-9 years of overall industry experience managing production systems, an excellent understanding of key SaaS technologies, and a high level of initiative and responsibility. The following skills are needed for this role.

Primary Skills:
- Public cloud providers (AWS): solutioning, introducing new services into existing infrastructure, and maintaining the infrastructure in a production 24x7 SaaS solution.
- Administer complex Linux-based web hosting configuration components, including load balancers, web servers, and database servers.
- Develop and maintain CI/CD pipelines using GitHub Actions, ArgoCD, and Jenkins.
- EKS/Kubernetes, ECS, and Docker administration/deployment.
- Strong knowledge of AWS networking concepts, including Route53, VPC configuration and management, DHCP, VLANs, HTTP/HTTPS, and IPSec/SSL VPNs.
- Strong knowledge of AWS security concepts: IAM accounts, KMS-managed encryption, CloudTrail, and CloudWatch monitoring/alerting.
- Automating existing manual workloads like reporting and patching/updating servers by writing scripts, Lambda functions, etc.
- Expertise in Infrastructure as Code technologies: Terraform is a must.
- Monitoring and alerting tools like Prometheus, Grafana, PagerDuty, etc.
- Expertise in Windows and Linux OS is a must.

Secondary Skills (advantageous):
- Strong knowledge of scripting/coding with Go, PowerShell, Bash, or Python.

Soft Skills:
- Strong written and verbal communication skills directed to technical and non-technical team members.
- Willingness to take ownership of problems and seek solutions.
- Ability to apply creative problem solving and manage through ambiguity.
- Ability to work under remote supervision and with a minimum of direct oversight.

Qualifications:
- Bachelor's degree in computer science, engineering, or a related field.
- Proven experience as a DevOps Engineer with a focus on AWS.
- Experience with modernizing legacy applications and improving deployment processes.
- Excellent problem-solving skills and the ability to work under remote supervision.
- Strong written and verbal communication skills, with the ability to articulate technical information to non-technical team members.
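One of the primary skills above is automating manual workloads such as reporting and patching. A deliberately simple sketch of the reporting side (the hostnames and pending-patch counts are invented for illustration; a real job would pull this data from a patch manager or SSM inventory):

```python
def patch_summary(pending_updates):
    """Summarize fleet patch status for a scheduled report.

    pending_updates maps hostname -> number of outstanding patches.
    Returns (fully_patched, needs_attention) as sorted host lists.
    """
    patched = sorted(h for h, n in pending_updates.items() if n == 0)
    pending = sorted(h for h, n in pending_updates.items() if n > 0)
    return patched, pending
```

The output of a helper like this would typically be formatted into an email or dashboard tile on a daily schedule.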

Posted 1 month ago

Apply

3.0 - 7.0 years

15 - 22 Lacs

Gurugram, Bengaluru

Work from Office

Role & Responsibilities:
- Design, implement, and maintain AWS infrastructure using Terraform (Infrastructure as Code).
- Manage CI/CD pipelines and automate operational tasks using tools like Jenkins, GitHub Actions, or CodePipeline.
- Monitor infrastructure health using CloudWatch, Prometheus, Grafana, etc., and handle alerting with PagerDuty or similar tools.
- Implement and maintain backup, disaster recovery, and high-availability strategies in AWS.
- Manage VPCs, subnets, routing, security groups, and IAM roles and policies.
- Perform cost optimization and rightsizing of AWS resources.
- Ensure security compliance and apply cloud security best practices (e.g., encryption, access control).
- Collaborate with development and security teams to support application deployment and governance.

Preferred Candidate Profile:
- 3+ years of hands-on experience in AWS Cloud (EC2, S3, IAM, RDS, Lambda, EKS/ECS, VPC, etc.).
- 2+ years of experience with Terraform and a strong understanding of IaC principles.
- Hands-on experience with Linux system administration and scripting (Bash, Python).
- Experience with DevOps tools such as Git, Docker, Jenkins, or similar.
- Proficiency in monitoring/logging tools like CloudWatch, ELK stack, Datadog, or New Relic.
- Familiarity with incident management, change management, and postmortem analysis processes.
- Knowledge of networking, DNS, TLS/SSL, firewalls, and cloud security concepts.
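The profile above mentions cost optimization and rightsizing of AWS resources. A hedged sketch of the usual first step, flagging low-utilization instances from CPU datapoints (the 20% threshold is an assumed heuristic; real rightsizing would also weigh memory, network, and burst patterns):

```python
def rightsizing_candidates(cpu_samples, threshold=20.0):
    """Flag instances whose average CPU utilization stays under the threshold.

    cpu_samples maps instance id -> list of CPU% datapoints
    (for example, hourly averages exported from a metrics store).
    """
    flagged = []
    for instance, samples in cpu_samples.items():
        if samples and sum(samples) / len(samples) < threshold:
            flagged.append(instance)
    return sorted(flagged)
```

Candidates produced this way are a starting point for review, not an automatic downsize list, since averages hide spiky workloads.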

Posted 1 month ago

Apply

8.0 - 12.0 years

25 - 27 Lacs

Bengaluru

Work from Office

About the Role: As DevOps Engineer - IV, you will design systems capable of serving as the brains of complex distributed products. In addition, you will closely mentor younger engineers on the team and contribute to team building. Overall, you will be a strong technologist at Meesho who cares about code modularity, scalability, and reusability.

What you will do:
- Develop reusable infrastructure code and testing frameworks for infrastructure.
- Develop tools and frameworks to allow Meesho engineers to provision and manage infrastructure access controls.
- Design and develop solutions for cloud security, secrets management, and key rotations.
- Design a centralized logging and metrics platform that can handle Meesho's scale.
- Take on new infrastructure requirements and develop infrastructure code.
- Work with service teams to help them onboard onto the container platform.
- Scale the Meesho platform to handle millions of requests concurrently.
- Drive solutions to reduce MTTR and MTTD, enabling high availability and disaster recovery.

What you will need:
Must have:
- Bachelor's / Master's in Computer Science
- 8-12 years of in-depth, hands-on professional experience in the DevOps / Systems Engineering domain
- Proficiency in systems fundamentals: Linux, open source, infrastructure engineering, and DevOps
- Proficiency in container platforms like Docker, Kubernetes, EKS/GKE, etc.
- Exceptional design and architectural skills
- Experience in building large-scale distributed systems
- Experience in scalable transactional (B2C) systems
- Expertise in capacity planning, design, cost and effort estimations, and cost optimization
- Ability to deliver the best operations tooling and practices, including CI/CD
- In-depth understanding of the SDLC
- Ability to write infrastructure as code for public or private clouds
- Ability to implement modern cloud integration architecture
- Knowledge of configuration and infrastructure management (Terraform) or CI tools (any)
- Knowledge of a coding language: Python or Go (proficiency in any one)
- Ability to architect and implement end-to-end monitoring of solutions in the cloud
- Ability to design for failover, high availability, MTTR, MTTD, RTO, RPO, and so on

Good to have:
- Hands-on experience with data processing frameworks (e.g., Spark, Databricks)
- Familiarity with big data technologies
- Experience with DataOps concepts and tools (e.g., Airflow, Zeppelin)
- Expertise in security hardening of cloud infrastructure and application/web servers against known/unknown vulnerabilities
- Understanding of compliance and security
- Ability to assess business needs and requirements to ensure appropriate approaches
- Ability to define and report on business and process metrics
- Ability to balance governance, ownership, and freedom against reliability
- Ability to develop and motivate individual contributors on the team
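The role calls for driving down MTTR and MTTD. A small sketch of how those two metrics fall out of incident timestamps (the epoch-second triples below are illustrative; a real pipeline would read them from an incident tracker):

```python
def incident_metrics(incidents):
    """Compute (MTTD, MTTR) in minutes from (occurred, detected, resolved) epoch triples.

    MTTD is the mean occurred->detected gap; MTTR the mean detected->resolved gap.
    """
    if not incidents:
        return (0.0, 0.0)
    n = len(incidents)
    mttd = sum(d - o for o, d, _ in incidents) / n / 60.0
    mttr = sum(r - d for _, d, r in incidents) / n / 60.0
    return (mttd, mttr)
```

Tracking these per service over time is what shows whether alerting changes (MTTD) and runbook/automation changes (MTTR) are actually paying off.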

Posted 1 month ago

Apply

6.0 - 11.0 years

20 - 22 Lacs

Hyderabad

Hybrid

Hi, we are hiring for the below requirement for one of the top MNCs, under their direct payroll. Please read the JD; interested candidates, please drop your application at payel.chowdhury@in.experis.com.

Job Title: DevOps Engineer
Location: Hyderabad only
Employment Type: Full-Time
Experience Level: 6+ years
Notice Period: Immediate joiners only

Job Summary: We are looking for a skilled and motivated DevOps Engineer to join our growing technology team. The ideal candidate will have hands-on experience with AWS cloud services, strong knowledge of CI/CD pipelines, proficiency in tools such as GitLab, Kubernetes, Terraform, and Jenkins, and scripting skills in languages like Python, Bash, or Shell. You will play a key role in automating infrastructure, improving deployment processes, and ensuring system reliability and scalability.

Key Responsibilities:
- Design, implement, and manage scalable and secure infrastructure on AWS.
- Develop and maintain CI/CD pipelines using GitLab CI, Jenkins, or similar tools.
- Automate infrastructure provisioning using Terraform and Infrastructure as Code (IaC) best practices.
- Manage and monitor Kubernetes clusters for container orchestration and workload management.
- Write and maintain automation scripts using Python, Shell, or Bash to streamline DevOps processes.
- Collaborate with development and QA teams to ensure smooth code deployments and operational efficiency.
- Implement monitoring, logging, and alerting solutions to proactively identify issues.
- Troubleshoot production issues and perform root cause analysis.
- Ensure system security through access controls, firewalls, and other policies.
- Stay current with industry trends and emerging technologies to continuously improve the DevOps toolchain.

Required Skills & Qualifications:
- 3+ years of experience as a DevOps Engineer or in a similar role.
- Strong hands-on experience with Amazon Web Services (AWS), including EC2, S3, VPC, IAM, RDS, Lambda, CloudWatch, and ECS/EKS.
- Proficiency in CI/CD tools such as GitLab CI/CD, Jenkins, or similar.
- Experience with Kubernetes for container orchestration and Docker for containerization.
- Expertise in Terraform for infrastructure automation and provisioning.
- Strong scripting skills in Python, Shell, or Bash.
- Solid understanding of Git workflows and version control best practices.
- Experience with monitoring tools like Prometheus, Grafana, ELK Stack, or CloudWatch.
- Familiarity with Agile methodologies and DevOps culture.

Preferred Qualifications:
- AWS certifications such as AWS Certified DevOps Engineer or Solutions Architect.
- Experience with other DevOps tools such as Ansible, Helm, or ArgoCD.
- Familiarity with security and compliance frameworks in cloud environments.
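The responsibilities above include implementing monitoring and alerting to proactively identify issues. A simplified, CloudWatch-style sketch of "alarm after N consecutive breaching periods" logic (the threshold and period count are assumptions for illustration, and real alarm evaluation handles missing data and statistics far more carefully):

```python
def evaluate_alarm(datapoints, threshold, periods):
    """Return "ALARM" when the last `periods` datapoints all exceed the threshold.

    Mirrors the shape of an "N consecutive breaching periods" alarm:
    too few datapoints -> INSUFFICIENT_DATA; any non-breaching recent
    datapoint -> OK.
    """
    if len(datapoints) < periods:
        return "INSUFFICIENT_DATA"
    recent = datapoints[-periods:]
    return "ALARM" if all(v > threshold for v in recent) else "OK"
```

Requiring several consecutive breaches instead of a single spike is the standard way to keep transient noise from paging the on-call engineer.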

Posted 1 month ago

Apply

4.0 - 8.0 years

6 - 10 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

About the Role: Grade Level (for internal use): 10 The Team As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will design, build, and optimize enterprisescale data extraction, automation, and ML model deployment pipelines that power data sourcing and information retrieval solutions for S&P Global. You will help define architecture standards, mentor junior engineers, and champion best practices in an AWS-based ecosystem. Youll lead by example in a highly engaging, global environment that values thoughtful risk-taking and self-initiative. Whats in it for you: Drive solutions at enterprise scale within a global organization Collaborate with and coach a hands-on, technically strong team (including junior and mid-level engineers) Solve high-complexity, high-impact problems from end to end Shape the future of our data platform-build, test, deploy, and maintain production-ready pipelines Responsibilities: Architect, develop, and operate robust data extraction and automation pipelines in production Integrate, deploy, and scale ML models within those pipelines (real-time inference and batch scoring) Lead full lifecycle delivery of complex data projects, including: Designing cloud-native ETL/ELT and ML deployment architectures on AWS (EKS/ECS, Lambda, S3, RDS/DynamoDB) Implementing and maintaining DataOps processes with Celery/Redis task queues, Airflow orchestration, and Terraform IaC Establishing and enforcing CI/CD pipelines on Azure DevOps (build, test, deploy, rollback) with automated quality gates Writing and maintaining comprehensive test suites (unit, integration, load) using pytest and coverage tools Optimize data quality, reliability, and performance through monitoring, alerting (CloudWatch, Prometheus/Grafana), and automated remediation Define-and continuously improve-platform standards, coding guidelines, and operational runbooks Conduct code reviews, pair programming sessions, and provide technical mentorship Partner with data 
scientists, ML engineers, and product teams to translate requirements into scalable solutions, meet SLAs, and ensure smooth hand-offs Technical : 4-8 years' hands-on experience in data engineering, with proven track record on critical projects Expert in Python for building extraction libraries, RESTful APIs, and automation scripts Deep AWS expertiseEKS/ECS, Lambda, S3, RDS/DynamoDB, IAM, CloudWatch, and Terraform Containerization and orchestrationDocker (mandatory) and Kubernetes (advanced) Proficient with task queues and orchestration frameworksCelery, Redis, Airflow Demonstrable experience deploying ML models at scale (SageMaker, ECS/Lambda endpoints) Strong CI/CD background on Azure DevOps; skilled in pipeline authoring, testing, and rollback strategies Advanced testing practicesunit, integration, and load testing; high coverage enforcement Solid SQL and NoSQL database skills (PostgreSQL, MongoDB) and data modeling expertise Familiarity with monitoring and observability tools (e.g., Prometheus, Grafana, ELK stack) Excellent debugging, performance-tuning, and automation capabilities Openness to evaluate and adopt emerging tools, languages, and frameworks Good to have: Master's or Bachelor's degree in Computer Science, Engineering, or a related field Prior contributions to open-source projects, GitHub repos, or technical publications Experience with infrastructure as code beyond Terraform (e.g., CloudFormation, Pulumi) Familiarity with GenAI model integration (calling LLM or embedding APIs) Whats In It For You Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technologythe right combination can unlock possibility and change the world.Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. 
At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.
Health & Wellness: Health care coverage designed for the mind and body.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.1 - Middle Professional Tier I (EEO Job Group)
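Several responsibilities in the listing above (resilient extraction pipelines, pytest test suites) revolve around retrying flaky network calls. As an illustrative sketch only, not S&P Global's code, here is a stdlib-only retry-with-backoff decorator of the kind such pipelines typically wrap around HTTP calls; the `fetch_page` function and its failure behavior are hypothetical:

```python
import time
from functools import wraps

def with_retries(max_attempts=3, base_delay=0.001):
    """Retry a flaky callable with exponential backoff between attempts."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            for attempt in range(1, max_attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == max_attempts:
                        raise                      # out of retries: surface the error
                    time.sleep(base_delay * 2 ** (attempt - 1))
        return wrapper
    return decorator

# Hypothetical flaky extraction call: fails twice, then succeeds.
calls = {"n": 0}

@with_retries(max_attempts=3)
def fetch_page():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient failure")
    return "<html>payload</html>"
```

Because the decorator is a pure function of its callable, it is easy to cover with the kind of pytest unit tests the listing asks for.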

Posted 1 month ago

Apply

4.0 - 8.0 years

13 - 17 Lacs

Bengaluru

Work from Office

FICO (NYSE: FICO) is a leading global analytics software company, helping businesses in 100+ countries make better decisions. Join our world-class team today and fulfill your career potential!

The Opportunity
"As a part of FICO's highly modern and innovative analytics and decision platform, the Cyber-Security Engineer will help shape the next generation security for FICO's Platform. You will address cutting edge security challenges in highly automated, complex, cloud & microservices driven environments, inclusive of design challenges and continuous delivery of security functionality and features to the FICO platform as well as the AI/ML capabilities used on top of the FICO platform." (VP of Engineering)

What You'll Contribute
Secure the design of the next-generation FICO Platform, its capabilities, and services.
Support full-stack security architecture design from cloud infrastructure to application features for FICO customers.
Work closely with product managers, architects, and developers on implementing the security controls within products.
Develop and maintain Kyverno policies for enforcing security controls in Kubernetes environments.
Collaborate with platform, DevOps, and application teams to define and implement policy-as-code best practices.
Contribute to automation efforts for policy deployment, validation, and reporting.
Stay current with emerging threats, Kubernetes security features, and cloud-native security tools.
Implement required controls and capabilities for the protection of FICO products and environments.
Build & validate declarative threat models in a continuous and automated manner.
Prepare the product for compliance attestations and ensure adherence to best security practices.
Provide expertise as a subject matter expert regarding edge services for public/private cloud information system controls related to infrastructure, policy, and decision-making processes.
Provide timely resolutions for security configuration or solutions in support of service availability. Work on problems of diverse scope where analysis of the situation requires evaluation and troubleshooting, including network packet analysis, Linux or Windows DNS, certificate lifecycle, logfile analysis, and related.

What We're Seeking
Strong knowledge and hands-on experience with Kyverno and OPA/Gatekeeper (optional but a plus).
Experience in threat modeling, code reviews, security testing, vulnerability detection, attacker exploit techniques, and methods for their remediation.
Hands-on experience with programming languages such as Java, Python, etc.
Experience deploying services and securing cloud environments, preferably AWS.
Experience deploying and securing containers, container orchestration, and mesh technologies (such as EKS, K8s, Istio).
Experience with Crossplane to manage cloud infrastructure declaratively via Kubernetes.
Certifications in Kubernetes or cloud security (e.g., CKA, CKAD, CISSP) are desirable.
Ability to articulate complex architectural challenges with the business leadership and product management teams.
Independently drive transformational security projects across teams and organizations.
Experience with securing event streaming platforms like Kafka or Pulsar.
Experience with ML/AI model security and adversarial techniques within the analytics domains.
Hands-on experience with IaC (such as Terraform, CloudFormation, Helm) and with CI/CD pipelines (such as GitHub, Jenkins, JFrog).
Resourceful problem-solver skilled at navigating ambiguity and change.
Customer-focused individual with strong analytical problem-solving skills and solid communication abilities.

Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO
At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more. FICO makes a real difference in the way businesses operate worldwide:
Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders.
Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems.
Lending: 3/4 of US mortgages are approved using the FICO Score.
Global trends toward digital transformation have created tremendous demand for FICO's solutions, placing us among the world's top 100 software companies by revenue. We help many of the world's largest banks, insurers, retailers, telecommunications providers and other firms reach a new level of success. Our success is dependent on really talented people just like you who thrive on the collaboration and innovation that's nurtured by a diverse and inclusive environment. We'll provide the support you need, while ensuring you have the freedom to develop your skills and grow your career. Join FICO and help change the way business thinks! Learn more about how you can fulfil your potential at www.fico.com/Careers. FICO promotes a culture of inclusion and seeks to attract a diverse set of candidates for each job opportunity.
We are an equal employment opportunity employer and we're proud to offer employment and advancement opportunities to all candidates without regard to race, color, ancestry, religion, sex, national origin, pregnancy, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. Research has shown that women and candidates from underrepresented communities may not apply for an opportunity if they don't meet all stated qualifications. While our qualifications are clearly related to role success, each candidate's profile is unique and strengths in certain skill and/or experience areas can be equally effective. If you believe you have many, but not necessarily all, of the stated qualifications we encourage you to apply. Information submitted with your application is subject to the FICO Privacy Policy at https://www.fico.com/en/privacy-policy
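The listing above centers on Kyverno policy-as-code for Kubernetes. Kyverno rules themselves are written in YAML, but the control a typical validate rule encodes can be illustrated in plain Python; this sketch mirrors one common rule (require `runAsNonRoot` on every container) and uses a hypothetical pod spec:

```python
def violates_run_as_non_root(pod_spec):
    """Return names of containers that do not enforce runAsNonRoot,
    i.e. the containers a Kyverno-style validate rule would reject."""
    bad = []
    for c in pod_spec.get("containers", []):
        sc = c.get("securityContext") or {}
        if sc.get("runAsNonRoot") is not True:
            bad.append(c["name"])
    return bad

# Hypothetical pod spec: one compliant container, one with no securityContext.
pod = {
    "containers": [
        {"name": "app", "securityContext": {"runAsNonRoot": True}},
        {"name": "sidecar"},
    ]
}
```

In a real cluster the equivalent rule would run in the admission path and block the deployment rather than returning a list; the Python form is only for illustrating the check.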

Posted 1 month ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Bengaluru

Work from Office

The Opportunity
"As a part of FICO's highly modern and innovative analytics and decision platform, the Cyber-Security Engineer will help shape the next generation security for FICO's Platform. You will address cutting edge security challenges in highly automated, complex, cloud & microservices driven environments, inclusive of design challenges and continuous delivery of security functionality and features to the FICO platform as well as the AI/ML capabilities used on top of the FICO platform." (VP, Software Engineering)

What You'll Contribute
Secure the design of the next-generation FICO Platform, its capabilities and services.
Support full-stack security architecture design from cloud infrastructure to application features for FICO customers.
Work closely with product managers, architects and developers on the implementation of the security controls within products.
Develop and maintain Kyverno policies for enforcing security controls in Kubernetes environments.
Collaborate with platform, DevOps, and application teams to define and implement policy-as-code best practices.
Contribute to automation efforts for policy deployment, validation, and reporting.
Stay current with emerging threats, Kubernetes security features, and cloud-native security tools.
Prove the security implementations within infrastructure & application deployment manifests and the CI/CD pipelines.
Implement required controls and capabilities for the protection of FICO products and environments.
Build & validate declarative threat models in a continuous and automated manner.
Prepare the product for compliance attestations and ensure adherence to best security practices.

What We're Seeking
5+ years of experience in architecture, security reviews and requirement definition for complex product environments.
Familiarity with industry regulations, frameworks, and practices (for example, PCI, ISO 27001, NIST, etc.).
Strong knowledge and hands-on experience with Kyverno and OPA/Gatekeeper (optional but a plus).
Experience in threat modeling, code reviews, security testing, vulnerability detection, attacker exploit techniques, and methods for their remediation.
Hands-on experience with programming languages such as Java, Python, etc.
Experience deploying services and securing cloud environments, preferably AWS.
Experience deploying and securing containers, container orchestration, and mesh technologies (such as EKS, K8s, Istio).
Ability to articulate complex architectural challenges with the business leadership and product management teams.
Independently drive transformational security projects across teams and organizations.
Experience with securing event streaming platforms like Kafka or Pulsar.
Experience with ML/AI model security and adversarial techniques within the analytics domains.
Hands-on experience with IaC (such as Terraform, CloudFormation, Helm) and with CI/CD pipelines (such as GitHub, Jenkins, JFrog).

Our Offer to You
An inclusive culture strongly reflecting our core values: Act Like an Owner, Delight Our Customers, and Earn the Respect of Others.
The opportunity to make an impact and develop professionally by leveraging your unique strengths and participating in valuable learning experiences.
Highly competitive compensation, benefits, and rewards programs that encourage you to bring your best every day and be recognized for doing so.
An engaging, people-first work environment offering work/life balance, employee resource groups, and social events to promote interaction and camaraderie.

Why Make a Move to FICO
At FICO, you can develop your career with a leading organization in one of the fastest-growing fields in technology today: Big Data analytics. You'll play a part in our commitment to help businesses use data to improve every choice they make, using advances in artificial intelligence, machine learning, optimization, and much more.
FICO makes a real difference in the way businesses operate worldwide:
Credit Scoring: FICO Scores are used by 90 of the top 100 US lenders.
Fraud Detection and Security: 4 billion payment cards globally are protected by FICO fraud systems.
Lending: 3/4 of US mortgages are approved using the FICO Score.
Learn more about how you can fulfil your potential at
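Another control commonly enforced with the Kyverno/OPA tooling named in this listing is image provenance: admitting only containers pulled from approved registries. A minimal Python illustration of that check; the allow-list and pod spec are hypothetical:

```python
# Hypothetical allow-list; real policies usually carry several entries.
ALLOWED_REGISTRIES = ("registry.example.com/",)

def disallowed_images(pod_spec):
    """List container images pulled from outside the approved registries,
    the condition an image-provenance admission policy would reject on."""
    return [
        c["image"]
        for c in pod_spec.get("containers", [])
        if not c["image"].startswith(ALLOWED_REGISTRIES)   # tuple prefix match
    ]

pod = {"containers": [
    {"name": "app", "image": "registry.example.com/team/app:1.2"},
    {"name": "tool", "image": "docker.io/library/busybox:latest"},
]}
```

A prefix match is the simplest form of this rule; production policies often also pin digests or verify signatures, which is beyond this sketch.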

Posted 1 month ago

Apply

4.0 - 5.0 years

6 - 7 Lacs

Hyderabad

Work from Office

Who are we?
CDK Global is the largest technical solutions provider for the automotive retail industry, setting the landscape for automotive dealers, original equipment manufacturers (OEMs) and the customers they serve. As a technology company, we have a significant focus on moving our applications to the public cloud and, in the process, are working on multiple transformation/modernization programs.

Be Part of Something Bigger
Each year, more than three percent of the U.S. gross domestic product (GDP) is attributed to the auto industry, which flows through our customer, the auto dealer. It's time you joined an evolving marketplace where research and development investment is measured in the tens of billions. It's time you were a part of something bigger. We're expanding our workforce: engineers, architects, developers and more, onboarding early adopters who can optimize, pivot and keep pace with ever-evolving development roadmaps and applications.

Join Our Team
Growth potential, flexibility and material impact on the success and quality of a next-gen, enterprise software product make CDK an excellent choice for those who thrive in challenging, fast-paced engineering environments. The possibilities for impact are endless. We have exceptional opportunities to evolve our industry by driving change through new technology. If you're ready for high-impact, you're ready for CDK.

Location: Hyderabad, India

Role:
Define/Maintain/Implement CDK's public cloud standards, including secrets management, storage, compute, networking, account management, database and operations.
Leverage tools like AWS Trusted Advisor, 3rd-party cloud cost management tools and scripting to identify and drive cost optimization. This will include working with application owners to achieve the cost savings.
Design and implement cloud security controls that create guard rails for application teams to work within, ensuring proper platform security for applications deployed within the CDK cloud environments.
Design/Develop/Implement cloud solutions. Leveraging cloud-native services, wrap the appropriate security, automation and service levels to support CDK business needs. Examples of solutions this role will be responsible for developing and supporting are Business Continuity/Backup and Recovery, Identity and Access Management, data services including long-term archival, DNS, etc.
Develop/maintain/implement cloud platform standards (user access & roles, tagging, security/compliance controls, operations management, performance management and configuration management).
Responsible for writing and eventual automation of operational run-books for operations. Assist application teams with automating their production support run-books (automate everywhere).
Assist application teams when they have issues using AWS services where they are not fully up to speed in their use.
Hands-on development of automation solutions to support application teams.
Define and maintain minimum application deployment standards (governance, cost management and tech debt).
Optimizing and tuning designs based on performance and root cause analysis.
Analysis of existing solutions' alignment to infrastructure standards, and providing feedback to both evolve and mature the product solutions and CDK public cloud standards.

Essential Duties & Skills:
This is a hands-on role where the candidate will take on technical tasks that require in-depth knowledge of usage and public cloud best practices. Some of the areas within AWS where you will be working include:
Compute: EC2, EKS, RDS, Lambda
Networking: Load Balancing (ALB/ELB), VPN, Transit Gateways, VPCs, Availability Zones/Regions
Storage: EBS, S3, Archive Services, AWS Backup
Security: AWS Config, CloudWatch, CloudTrail, Route53, GuardDuty, Detective, Inspector, Security Hub, Secrets Manager, KMS, AWS Shield, Security Groups, AWS Identity and Access Management, etc.
Cloud Cost Optimization: Cost Optimizer, Trusted Advisor, Cost Explorer, Harness Cloud Cost Management or equivalent cost management tools.

Preferred:
Experience with 3rd-party SaaS solutions like Databricks, Snowflake, Confluent Kafka
Broad understanding/experience across full-stack infrastructure technologies
Site Reliability Engineering practices
GitHub/Artifactory/Bamboo/Terraform
Database solutions (SQL/NoSQL)
Containerization solutions (Docker, Kubernetes)
DevOps processes and tooling
Message queuing, data streaming and caching solutions
Networking principles and concepts
Scripting and development; Python & Java languages preferred
Server-based operating systems (Windows/Linux) and web services (IIS, Apache)
Experience designing, optimizing and troubleshooting public cloud platforms associated with large, complex application stacks
Clear and concise communication; comfortable working with all levels in the organization
Capable of managing and prioritizing multiple projects with competing resource requirements and timelines

Years of Experience:
4-5+ yrs working in the AWS public cloud environment
AWS Solution Architect Professional certification preferred
Experience with Infrastructure as Code (CloudFormation, Terraform)
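The platform-standards work above (tagging, cost governance) usually starts with an audit of required tags across the resource inventory. A stdlib sketch of that audit; the required-tag set and resource records are hypothetical, and in practice the inventory would come from an API such as AWS Resource Groups Tagging rather than a literal list:

```python
# Hypothetical tagging standard for the organization.
REQUIRED_TAGS = {"owner", "cost-center", "environment"}

def untagged_resources(resources):
    """Map resource id -> sorted list of missing required tags.
    An empty result means the inventory is tag-compliant."""
    report = {}
    for r in resources:
        missing = REQUIRED_TAGS - set(r.get("tags", {}))
        if missing:
            report[r["id"]] = sorted(missing)
    return report

# Hypothetical inventory: one compliant instance, one under-tagged volume.
inventory = [
    {"id": "i-0abc", "tags": {"owner": "team-a", "cost-center": "42", "environment": "prod"}},
    {"id": "vol-9def", "tags": {"owner": "team-b"}},
]
```

Reports like this are typically fed back to application owners, since cost allocation breaks down as soon as the cost-center tag is missing.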

Posted 1 month ago

Apply

2.0 - 7.0 years

10 - 20 Lacs

Bengaluru

Hybrid

Senior Cloud Infra Engineer, 5+ yrs, AWS (EC2, EKS, IAM), Terraform, Jenkins, Linux Admin, Grafana, Kibana, DevOps, Kafka, MySQL. Prod ops + scaling exp needed. C2H via TE Infotech(Exotel) Convertible to Permanent, Loc: BLR @ ssankala@toppersedge.com

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Bengaluru, Karnataka, India

On-site

Key Responsibilities:

On-Premises Support:
Manage and support VMware ESXi environments using vSphere, including installation, configuration, and troubleshooting.
Install VMware ESXi hypervisors on physical servers.
Configure networking, storage, and resource pools for optimal performance.
Set up and manage vCenter Server for centralized management of ESXi hosts.
Diagnose and resolve issues related to ESXi host performance, connectivity, and VM operations.
Use VMware logs and diagnostic tools to identify problems and implement corrective actions.
Perform regular health checks and maintenance to ensure optimal performance.
Set up monitoring tools to track performance metrics of VMs and hosts, including CPU, memory, disk I/O, and network usage.
Identify bottlenecks and inefficiencies in the infrastructure and take corrective action.
Generate reports on system performance for management review.
Design and implement backup strategies using VMware vSphere Data Protection or third-party solutions (e.g., Veeam, Commvault).
Schedule regular backups of VMs and critical data to ensure data integrity and recoverability.
Test backup and restoration processes periodically to verify effectiveness.
Will be involved in L1 support on rotation.

AWS Support:
Assist in the design, deployment, and management of AWS infrastructure.
Monitor AWS resources, ensuring performance, cost-efficiency, and compliance with best practices.
Troubleshoot issues related to AWS services (EC2, S3, RDS, etc.) and provide solutions.
Collaborate with development teams to support application deployments in AWS environments.

General Responsibilities:
Document processes, configurations, and troubleshooting steps for both on-prem and AWS environments.
Provide technical support and guidance to internal teams and end-users.
Stay current with industry trends and emerging technologies related to cloud and on-prem infrastructure.
Participate in on-call rotation and respond to infrastructure-related incidents.
Qualifications:
Bachelor's degree in Computer Science or related field.
5+ years of experience in infrastructure support, specializing in VMware (ESXi), vSphere, and VxRail.
Proven expertise in Linux administration.
Proficient in memory, disk, and CPU monitoring and management.
In-depth understanding of SAN, NFS, NAS.
Thorough knowledge of AWS services (Security/IAM) and architecture.
Skilled in scripting and automation tools (PowerShell, Python, AWS CLI).
Hands-on experience with containerization concepts; Kubernetes and AWS EKS experience required.
Familiarity with networking concepts, security protocols, and best practices.
Windows administration preferred.
Red Hat VM / Nutanix virtualization preferred.
Strong problem-solving abilities and ability to work under pressure.
Excellent communication skills and collaborative mindset.

Certifications:
VMware Certified Professional (VCP) certification.
AWS Certified Solutions Architect or similar certification.
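The monitoring responsibilities above (tracking CPU, memory, disk I/O and flagging bottlenecks) ultimately reduce to comparing samples against thresholds. A minimal sketch of that evaluation step; the threshold values and the host sample are both hypothetical, and a real setup would pull samples from vCenter or CloudWatch rather than a dict:

```python
# Hypothetical alerting thresholds, in percent utilization.
THRESHOLDS = {"cpu_pct": 85.0, "mem_pct": 90.0, "disk_pct": 80.0}

def breaches(samples):
    """Return, sorted, the metric names whose latest sample exceeds
    its configured threshold; unknown metrics are ignored."""
    return sorted(
        metric for metric, value in samples.items()
        if metric in THRESHOLDS and value > THRESHOLDS[metric]
    )

# Hypothetical latest samples for one host.
host = {"cpu_pct": 91.5, "mem_pct": 42.0, "disk_pct": 83.1, "net_mbps": 120.0}
```

Each returned metric name would then drive an alert or ticket; keeping the evaluation pure makes the alert logic trivially testable.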

Posted 1 month ago

Apply

0.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's , our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to , our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant - AWS Developer!
We are looking for candidates who have a passion for cloud with knowledge of different cloud environments. Ideal candidates should have technical experience in AWS Platform Services: IAM roles & policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc. This key role demands a highly motivated individual with a strong background in Computer Science/Software Engineering. You are meticulous, thorough and possess excellent communication skills to engage with all levels of our stakeholders. A self-starter, you are up to speed with the latest developments in the tech world.

Responsibilities
Hands-on experience & good skills with AWS Platform Services: IAM roles & policies, Glue, Lambda, EC2, S3, SNS, SQS, EKS, KMS, etc.
Must have good working knowledge of Kubernetes & Docker.
Utilize AWS services such as AWS Glue, Amazon S3, AWS Lambda, and others to optimize performance, reliability, and cost-effectiveness.
Design, develop, and maintain AWS-based applications, ensuring high performance, scalability, and security.
Integrate AWS services into application architecture, leveraging tools such as Lambda, API Gateway, S3, DynamoDB, and RDS.
Collaborate with DevOps teams to automate deployment pipelines and optimize CI/CD practices.
Develop scripts and automation tools to manage cloud environments efficiently.
Monitor, troubleshoot, and resolve application performance issues.
Implement best practices for cloud security, data management, and cost optimization.
Participate in code reviews and provide technical guidance to junior developers.

Qualifications we seek in you!
Minimum Qualifications / Skills
Experience in software development with a focus on AWS technologies.
Proficiency in AWS services such as EC2, Lambda, S3, RDS, and DynamoDB.
Strong programming skills in Python, Node.js, or Java.
Experience with RESTful APIs and microservices architecture.
Familiarity with CI/CD tools like Jenkins, GitLab CI, or AWS CodePipeline.
Knowledge of infrastructure as code using CloudFormation or Terraform.
Problem-solving skills and the ability to troubleshoot application issues in a cloud environment.
Excellent teamwork and communication skills.

Preferred Qualifications / Skills
AWS Certified Developer - Associate or AWS Certified Solutions Architect - Associate.
Experience with serverless architectures and API development.
Familiarity with Agile development practices.
Knowledge of monitoring and logging solutions like CloudWatch and ELK Stack.
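For the Lambda work described above, the core contract is simply a handler that receives an event dict and returns a response dict. A minimal API Gateway proxy-style handler, invocable locally; the greeting logic is hypothetical:

```python
import json

def handler(event, context):
    """Minimal API Gateway proxy-style Lambda handler: parse the JSON
    body and return a greeting. Pure function, so it runs anywhere."""
    body = json.loads(event.get("body") or "{}")
    name = body.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }

# Local invocation with a synthetic event (the context object is unused here).
resp = handler({"body": json.dumps({"name": "dev"})}, None)
```

In AWS the same function would be wired to API Gateway; locally it is just a pure function, which keeps unit testing trivial.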
Why join Genpact?
Be a transformation leader: Work at the cutting edge of AI, automation, and digital innovation.
Make an impact: Drive change for global enterprises and solve business challenges that matter.
Accelerate your career: Get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best: Join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture: Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

5.0 - 9.0 years

3 - 7 Lacs

Noida

Work from Office

We are looking for a skilled Python Developer with 5 to 9 years of experience to design, develop, and maintain serverless applications using Python and AWS technologies. The ideal candidate will have extensive experience in building scalable, high-performance back-end systems and a deep understanding of AWS serverless services such as Lambda, DynamoDB, SNS, SQS, S3, and others. This role is based in Bangalore and Mumbai.

Roles and Responsibility
Design and implement robust, scalable, and secure back-end services using Python and AWS serverless technologies.
Build and maintain serverless applications leveraging AWS Lambda, DynamoDB, API Gateway, S3, SNS, SQS, and other AWS services.
Provide technical leadership and mentorship to a team of engineers, promoting best practices in software development, testing, and DevOps.
Collaborate closely with cross-functional teams including front-end developers, product managers, and DevOps engineers to deliver high-quality solutions that meet business needs.
Implement and manage CI/CD pipelines, automated testing, and monitoring to ensure high availability and rapid deployment of services.
Optimize back-end services for performance, scalability, and cost-effectiveness, ensuring the efficient use of AWS resources.
Ensure all solutions adhere to industry best practices for security, including data protection, access controls, and encryption.
Create and maintain comprehensive technical documentation, including architecture diagrams, API documentation, and deployment guides.
Diagnose and resolve complex technical issues in production environments, ensuring minimal downtime and disruption.
Stay updated with the latest trends and best practices in Python, AWS serverless technologies, and fintech/banking technology stacks, and apply this knowledge to improve our systems.

Job Description
Minimum 7 years of experience in back-end software development, with at least 5 years of hands-on experience in Python.
Extensive experience with AWS serverless technologies, including Lambda, DynamoDB, API Gateway, S3, SNS, SQS, ECS, EKS, and other related services.
Proven experience in leading technical teams and delivering complex, scalable cloud-based solutions in the fintech or banking sectors.
Strong proficiency in Python and related frameworks (e.g., Flask, Django).
Deep understanding of AWS serverless architecture and best practices.
Experience with infrastructure as code (IaC) tools such as AWS CloudFormation or Terraform.
Familiarity with RESTful APIs, microservices architecture, and event-driven systems.
Knowledge of DevOps practices, including CI/CD pipelines, automated testing, and monitoring using AWS services (e.g., CodePipeline, CloudWatch, X-Ray).
Demonstrated ability to lead and mentor engineering teams, fostering a culture of collaboration, innovation, and continuous improvement.
Strong analytical and problem-solving skills, with the ability to troubleshoot and resolve complex technical issues in a fast-paced environment.
Excellent verbal and written communication skills, with the ability to effectively convey technical concepts to both technical and non-technical stakeholders.
Experience with other cloud platforms (e.g., Azure, GCP) and containerization technologies like Docker and Kubernetes.
Familiarity with financial services industry regulations and compliance requirements.
Relevant certifications such as AWS Certified Solutions Architect, AWS Certified Developer, or similar.
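For the event-driven SQS + Lambda systems this listing describes, a useful pattern is the partial-batch-failure response: the handler reports only the failed message ids, so successfully processed messages in the batch are not redelivered. A local sketch of that contract (the validation rule and payloads are hypothetical):

```python
import json

def handler(event, context):
    """SQS-triggered handler using the partial-batch-failure contract:
    ids of failed records are returned under batchItemFailures so only
    those messages are retried."""
    failures = []
    for record in event.get("Records", []):
        try:
            payload = json.loads(record["body"])
            if "order_id" not in payload:          # hypothetical validation
                raise ValueError("missing order_id")
            # ... real processing would go here ...
        except Exception:
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}

# Synthetic batch: one valid message, one with an unparseable body.
event = {"Records": [
    {"messageId": "m1", "body": json.dumps({"order_id": 7})},
    {"messageId": "m2", "body": "not json"},
]}
resp = handler(event, None)
```

This only takes effect when the Lambda event source mapping has ReportBatchItemFailures enabled; without it, any raised error causes the whole batch to be retried.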

Posted 1 month ago

Apply

8.0 - 10.0 years

2 - 6 Lacs

Noida

Work from Office

We are looking for a skilled FinOps Developer with 8 to 10 years of experience in cloud cost management and FinOps to join our team. The ideal candidate will have a strong background in AWS and Azure cloud platforms.
Roles and Responsibilities:
Implement and manage FinOps practices to optimize cloud spend across AWS and Azure.
Analyze and optimize resource and compute utilization, ensuring efficient use of cloud services.
Develop and maintain cost monitoring dashboards and reports.
Collaborate with engineering and finance teams to implement cost governance and forecasting.
Identify cost anomalies, provide recommendations, and implement savings plans or reserved instances.
Automate cost management processes using scripts and tools.
Job Requirements:
Expert knowledge of AWS Cost Explorer, Azure Cost Management, and cloud billing.
Strong understanding of resource tagging, budgets, and usage analysis.
Experience with cloud monitoring tools (CloudWatch, Azure Monitor).
Proficiency in scripting (Python, Bash, PowerShell) for automation.
Familiarity with infrastructure as code (Terraform, CloudFormation, ARM Templates).
Knowledge of containerized environments (Kubernetes/EKS/AKS) and cost optimization strategies.
Hands-on experience in setting up and managing cloud governance frameworks.
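The "identify cost anomalies" responsibility above can be sketched as a small pure-Python check run over daily spend figures, for example the output of Cost Explorer's GetCostAndUsage at DAILY granularity. The window size and threshold here are illustrative assumptions, not a prescribed method:

```python
from statistics import mean

def find_cost_anomalies(daily_costs, window=7, threshold=1.5):
    """Flag days whose spend exceeds `threshold` times the mean of the
    preceding `window` days. `daily_costs` is a chronological list of
    daily spend figures. Returns (index, cost) pairs for flagged days."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        baseline = mean(daily_costs[i - window:i])
        if baseline > 0 and daily_costs[i] > threshold * baseline:
            anomalies.append((i, daily_costs[i]))
    return anomalies
```

In practice one would run this per service or per cost-allocation tag, since a spike in one account can be masked by totals.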

Posted 1 month ago

Apply

4.0 - 5.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Java Developer:
What you'll be responsible for?
Perform software design, coding, maintenance, and performance tuning.
Understand the use cases and implement them.
Develop new modules as well as support existing ones.
Interpret business plans for automation requirements.
Provide ongoing support for existing Java projects and new development.
Create technical documentation and specifications.
Ability to plan, organize, coordinate, and multitask.
Excellent communication in English (written & verbal) and interpersonal skills.
What you'd have?
4-5 years of experience in developing resilient & scalable distributed systems and microservices architecture.
Strong technical background in Core Java, Servlets, XML, and RDBMS.
Experience in developing REST APIs using Spring Boot (or similar frameworks) and webhooks for async communication.
Good understanding of async architecture using queues and message brokers like RabbitMQ, Kafka, etc.
Deep insights into Java, garbage collection systems, and multi-threading.
Experience in container platforms like Docker, Kubernetes.
Good understanding of how Kubernetes works and experience in EKS, GKE, AKS.
Significant experience with various open-source tools and frameworks like Spring, Hibernate, Apache Camel, Guava Cache, etc.
Along with RDBMS, exposure to various NoSQL databases like Mongo, Redis, ClickHouse, Cassandra.
Good analytical skills.

Posted 1 month ago

Apply

6.0 - 11.0 years

8 - 13 Lacs

Hyderabad

Work from Office

As a Senior Software Engineer I, you will be a critical member of our technology team, responsible for designing, developing, and deploying scalable software solutions. You will leverage your expertise in Java, ReactJS, AWS, and emerging AI tools to deliver innovative products and services that enhance healthcare outcomes and streamline operations.
Primary Responsibilities:
Design, develop, test, deploy, and maintain full-stack software solutions leveraging Java, ReactJS, and AWS cloud services
Collaborate closely with cross-functional teams, including Product Managers, Designers, Data Scientists, and DevOps Engineers, to translate business requirements into technical solutions
Implement responsive UI/UX designs using ReactJS, ensuring optimal performance and scalability
Develop robust backend services and APIs using Java and related frameworks (e.g., Spring Boot)
Leverage AWS cloud services (e.g., EC2, S3, Postgres/DynamoDB, ECS, EKS, CloudFormation) to build scalable, secure, and highly available solutions
Incorporate AI/ML tools and APIs (such as OpenAI, Claude, Gemini, Amazon AI services) into existing and new solutions to enhance product capabilities
Conduct code reviews and adhere to software engineering best practices to ensure quality, security, maintainability, and performance
Actively participate in agile methodologies, sprint planning, backlog grooming, retrospectives, and continuous improvement processes
Troubleshoot, debug, and resolve complex technical issues and identify root causes to ensure system reliability and performance
Document technical solutions, system designs, and code effectively for knowledge sharing and future reference
Mentor junior team members, fostering technical growth and engineering excellence
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so
Required Qualifications:
Bachelor's Degree or higher in Computer Science, Software Engineering, or related technical discipline
6+ years of hands-on software development experience across the full stack
Solid experience developing front-end applications using ReactJS, TypeScript/JavaScript, HTML5, CSS3
Familiarity with AI/ML tools and APIs (such as OpenAI, Claude, Gemini, AWS AI/ML services) and experience integrating them into software solutions
Experience with relational and NoSQL databases, along with solid SQL skills
Experience in agile development methodologies and CI/CD pipelines
Experience with monitoring tools like Splunk, Datadog, Dynatrace
Solid analytical and problem-solving skills, with the ability to troubleshoot complex technical issues independently
Solid proficiency in Java, J2EE, Spring/Spring Boot, and RESTful API design
Demonstrable experience deploying and managing applications on AWS (e.g., EC2, S3, Postgres/DynamoDB, RDS, ECS, EKS, CloudFormation)
Proven excellent written, verbal communication, and interpersonal skills
Preferred Qualifications:
Experience in the healthcare domain and understanding of healthcare data and workflows
Hands-on experience with containerization technologies (Docker, Kubernetes)
Experience with performance optimization, monitoring, and logging tools
Familiarity with DevOps practices, Infrastructure as Code, and tools like Jenkins, Terraform, Git, and GitHub Actions
Exposure to modern architectural patterns such as microservices, serverless computing, and event-driven architecture.
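Integrating rate-limited AI APIs, as this role calls for, usually needs retry logic around the client calls. A minimal sketch of exponential backoff with an injectable sleep, so it can be tested without waiting; every name here is illustrative, not from the posting or any specific SDK:

```python
import time

def call_with_backoff(fn, *, retries=3, base_delay=1.0, sleep=time.sleep):
    """Call `fn` with exponential backoff, a common defensive pattern
    when calling rate-limited AI APIs. Delays grow as base_delay * 2**n.
    `sleep` is injectable so tests can record delays instead of waiting."""
    for attempt in range(retries + 1):
        try:
            return fn()
        except Exception:
            if attempt == retries:
                raise
            sleep(base_delay * (2 ** attempt))
```

Production code would typically also add jitter and retry only on transient error types (e.g., HTTP 429/5xx) rather than on every exception.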

Posted 1 month ago

Apply

2.0 - 4.0 years

10 - 13 Lacs

Bengaluru

Work from Office

About Rippling
Rippling gives businesses one place to run HR, IT, and Finance. It brings together all of the workforce systems that are normally scattered across a company, like payroll, expenses, benefits, and computers. For the first time ever, you can manage and automate every part of the employee lifecycle in a single system. Take onboarding, for example: with Rippling, you can hire a new employee anywhere in the world and set up their payroll, corporate card, computer, benefits, and even third-party apps like Slack and Microsoft 365, all within 90 seconds. Based in San Francisco, CA, Rippling has raised $1.85B from the world's top investors, including Kleiner Perkins, Founders Fund, Sequoia, Greenoaks, and Bedrock, and was named one of America's best startup employers by Forbes. We prioritize candidate safety: please be aware that official communication will only be sent from @Rippling.com addresses.
About The Role
The compute infrastructure team takes care of running the application in our prod and non-prod environments, our CI pipeline infrastructure, and governance for product-team-specific infra needs. Our primary code base is a Python monolith. We have home-grown tools & frameworks for running background jobs, deployments, and managing cloud infra. You have an opportunity to work at the framework level on code that would be used by the developers. The team is split across the US and India. To mention a few tools: AWS EKS & ECS, Python, Datadog, and Terraform.
What You Will Do
Solve problems on deployment infrastructure, in unconventional ways at times
Conceive and build tools that make developers efficient and push high-quality code
Work on enhancements to our home-grown frameworks
Evolve the deployment infra as we refactor and split the monolith
Design scalable and robust systems, and make decisions that will keep pace with the rapid growth of Rippling
Build relationships with developers across all of our teams to understand their needs and satisfy them with projects you develop
What You Will Need
8+ years of professional work experience in cloud infrastructure
Experience in build, development, or release systems (e.g. Git, Jenkins, Chef, Docker)
Backend development experience would be a big plus; it would help you work on the homegrown frameworks written in Python
You want to be part of a team of the most talented, forward-thinking engineers in the industry

Posted 1 month ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune, Bengaluru

Hybrid

Technical Skills:
1. Expertise in Amazon EKS and its Components – Strong understanding of the Kubernetes control plane, worker nodes, add-ons, and how they integrate within EKS.
2. Deep Understanding of Persistent Volume (PV) and Persistent Volume Claim (PVC) – Hands-on experience with dynamic and static provisioning, storage classes, and volume expansion.
3. Strong Knowledge of EKS Networking and Best Practices – Proficiency in CNI (Container Network Interface), VPC CNI, network policies, security groups, and troubleshooting networking issues.
4. EKS Self-Managed AMI Configuration – Experience in custom AMI creation, node upgrades, security hardening, and managing lifecycle policies.
5. EKS Cluster Upgrades and Version Management – Knowledge of Kubernetes version upgrades, compatibility checks, workload impact assessments, and rollback strategies.
6. EKS Access Management, Encryption, Logging, and Monitoring – Expertise in IAM roles for service accounts (IRSA), fine-grained access control using RBAC, audit logging, and monitoring using CloudWatch, Prometheus, and Grafana.
7. Advanced Troubleshooting of EKS Workloads – Identifying and resolving issues related to pod failures, CrashLoopBackOff, OOMKilled, networking errors, node health, and API server issues.
8. Persistent Storage in EKS – Deep knowledge of PV, PVC, different storage classes (gp3, io1, io2), the AWS EBS CSI driver, and EFS integration.
9. EKS Core Components and Integrations – Understanding and troubleshooting of key services like IRSA, Pod Identity, External DNS, AWS Load Balancer Controller, CoreDNS, kube-proxy, and Cluster Autoscaler.
10. AWS Service Expertise – Strong familiarity with AWS services such as EC2, RDS, S3, Route 53, IAM, Lambda, DynamoDB, and security best practices.
11. Comprehensive Knowledge of AWS Architecture – Proficiency in designing scalable, high-availability architectures using AWS services and integrating with Kubernetes.
12. Incident Management and Troubleshooting – Experience in diagnosing performance bottlenecks, analysing logs, debugging application failures, and resolving complex issues efficiently.
13. Disaster Recovery and High Availability Strategies – Planning and implementing backup strategies, multi-region deployments, and failover mechanisms for EKS workloads.
14. Security Best Practices for EKS – Implementing pod security policies, OPA/Gatekeeper, KMS encryption, runtime security with Falco, and secret management using AWS Secrets Manager.
15. Scaling and Performance Optimization – Experience with the Horizontal Pod Autoscaler (HPA) and Cluster Autoscaler.
16. Deep Knowledge of ITSM Processes – Proficiency in Incident Management, Problem Management, Change Management, and following ITIL best practices. Very good understanding of ServiceNow.
17. Ability to Handle Major Incidents and Escalations – Efficiently diagnosing critical outages, coordinating with teams, and providing rapid resolutions in high-pressure environments.
Functional Skills:
1. EKS Operations Expertise – Must be ready to perform rotation shifts and on-calls; a night-shift exception can be given to the Team Lead only, considering he is an exceptional candidate.
2. Location Constraint – Candidates must be based in Pune or Bangalore only.
3. Hybrid Work Requirement – Mandatory 3 days' work from office per week.
a. Pune Office Location: Cognizant CDC, Hinjewadi Phase 3
b. Bangalore Office Location: Manyata Tech Park, Hebbal
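As one illustration of the pod-failure triage named in item 7, here is a sketch of how the common failure reasons (CrashLoopBackOff, OOMKilled, image-pull errors) might be classified from `kubectl get pod -o json` output. The triage messages and the ordering of checks are illustrative assumptions, not a standard procedure:

```python
def classify_pod_failure(pod):
    """Classify a failing pod from a subset of its Kubernetes status.
    `pod` is a dict shaped like the JSON from `kubectl get pod -o json`."""
    statuses = pod.get("status", {}).get("containerStatuses", [])
    for cs in statuses:
        waiting = cs.get("state", {}).get("waiting")
        if waiting and waiting.get("reason") == "CrashLoopBackOff":
            # Check the last termination: OOMKilled indicates a memory-limit
            # problem rather than an application crash.
            last = cs.get("lastState", {}).get("terminated", {})
            if last.get("reason") == "OOMKilled":
                return "OOMKilled: raise memory limits or fix a leak"
            return "CrashLoopBackOff: inspect container logs"
        if waiting and waiting.get("reason") in ("ErrImagePull", "ImagePullBackOff"):
            return "Image pull failure: check registry access and image tag"
    if pod.get("status", {}).get("phase") == "Pending":
        return "Pending: check scheduling (resources, taints, node health)"
    return "Unknown: inspect events and node status"
```

A real triage script would also look at pod events and node conditions, which carry the scheduling and kubelet-side detail this sketch omits.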

Posted 1 month ago

Apply

6.0 - 9.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Works in the area of Software Engineering, which encompasses the development, maintenance, and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. He/she is responsible for the development and application of software engineering practice and knowledge, in research, design, development, and maintenance.
3. His/her work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.
Job Description - Grade Specific
Is fully competent in his/her own area and has a deep understanding of related programming concepts, software design, and software development principles. Works autonomously with minimal supervision. Able to act as a key contributor in a complex environment and lead the activities of a team for software design and software development. Acts proactively to understand internal/external client needs and offers advice even when not asked. Able to assess and adapt to project issues, formulate innovative solutions, work under pressure, and drive the team to succeed against its technical and commercial goals. Aware of profitability needs and may manage costs for a specific project/work area. Explains difficult concepts to a variety of audiences to ensure meaning is understood. Motivates other team members and creates informal networks with key contacts outside own area.
Skills (competencies)
Verbal Communication

Posted 1 month ago

Apply

3.0 - 6.0 years

5 - 9 Lacs

Gurugram

Work from Office

Experience: 8-10 years.
Job Title: DevOps Engineer.
Location: Gurugram.
Job Summary
We are seeking a highly skilled and experienced Lead DevOps Engineer to drive the design, automation, and maintenance of secure and scalable cloud infrastructure. The ideal candidate will have deep technical expertise in cloud platforms (AWS/GCP), container orchestration, CI/CD pipelines, and DevSecOps practices. You will be responsible for leading infrastructure initiatives, mentoring team members, and collaborating closely with software and QA teams to enable high-quality, rapid software delivery.
Key Responsibilities
Cloud Infrastructure & Automation:
Design, deploy, and manage secure, scalable cloud environments using AWS, GCP, or similar platforms.
Develop Infrastructure-as-Code (IaC) using Terraform for consistent resource provisioning.
Implement and manage CI/CD pipelines using tools like Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, AWS CodePipeline, or Azure DevOps.
Containerization & Orchestration:
Containerize applications using Docker for seamless development and deployment.
Manage and scale Kubernetes clusters (on-premise or cloud-managed like AWS EKS).
Monitor and optimize container environments for performance, scalability, and cost-efficiency.
Security & Compliance:
Enforce cloud security best practices including IAM policies, VPC design, and secure secrets management (e.g., AWS Secrets Manager).
Conduct regular vulnerability assessments and security scans, and implement remediation plans.
Ensure infrastructure compliance with industry standards and manage incident response protocols.
Monitoring & Optimization:
Set up and maintain monitoring/observability systems (e.g., Grafana, Prometheus, AWS CloudWatch, Datadog, New Relic).
Analyze logs and metrics to troubleshoot issues and improve system performance.
Optimize resource utilization and cloud spend through continuous review of infrastructure configurations.
Scripting & Tooling:
Develop automation scripts (Shell/Python) for environment provisioning, deployments, backups, and log management.
Maintain and enhance CI/CD workflows to ensure efficient and stable deployments.
Collaboration & Leadership:
Collaborate with engineering and QA teams to ensure infrastructure aligns with development needs.
Mentor junior DevOps engineers, fostering a culture of continuous learning and improvement.
Communicate technical concepts effectively to both technical and non-technical stakeholders.
Education
Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent hands-on experience.
Certifications: AWS Certified DevOps Engineer Professional (preferred) or other relevant cloud certifications.
Experience
8+ years of experience in DevOps or Cloud Infrastructure roles, including at least 3 years in a leadership capacity.
Strong hands-on expertise in AWS (ECS, EKS, RDS, S3, Lambda, CodePipeline) or GCP equivalents.
Proven experience with CI/CD tools: Jenkins, GitLab CI/CD, GitHub Actions, Bitbucket Pipelines, Azure DevOps.
Advanced knowledge of the Docker and Kubernetes ecosystem.
Skilled in Infrastructure-as-Code (Terraform) and configuration management tools like Ansible.
Proficient in scripting (Shell, Python) for automation and tooling.
Experience implementing DevSecOps practices and advanced security configurations.
Exposure to data tools (e.g., Apache Superset, AWS Athena, Redshift) is a plus.
Soft Skills
Strong problem-solving abilities and capacity to work under pressure.
Excellent communication and team collaboration.
Organized, with attention to detail and a commitment to quality.
Preferred Skills
Experience with alternative cloud platforms (e.g., Oracle Cloud, DigitalOcean).
Familiarity with advanced observability stacks (Grafana, Prometheus, Loki, Datadog).
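The scripting responsibilities this role lists (backups, cleanup, log management) can be sketched as a small Python cleanup job of the kind typically scheduled via cron or a pipeline. The `.log` suffix and 14-day retention are illustrative assumptions:

```python
import os
import time

def prune_old_logs(log_dir, max_age_days=14, dry_run=True):
    """Delete *.log files older than `max_age_days` from `log_dir`.
    With dry_run=True it only reports what would be removed, which is a
    sensible default for a destructive automation script."""
    cutoff = time.time() - max_age_days * 86400
    removed = []
    for name in os.listdir(log_dir):
        path = os.path.join(log_dir, name)
        if (name.endswith(".log") and os.path.isfile(path)
                and os.path.getmtime(path) < cutoff):
            if not dry_run:
                os.remove(path)
            removed.append(name)
    return sorted(removed)
```

Running it with `dry_run=True` first and logging the returned list gives an audit trail before the destructive pass.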

Posted 1 month ago

Apply

6.0 - 8.0 years

0 - 0 Lacs

Visakhapatnam

Work from Office

Design end-to-end features using React, Spring Boot, and AWS. Build UIs, develop REST APIs, manage data with PostgreSQL/DynamoDB, and handle Kafka flows. Ensure secure, scalable, containerized apps with clean code, testing, and CI/CD pipelines.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies