3.0 years
1 - 10 Lacs
Hyderābād
On-site
JOB DESCRIPTION
We have an exciting and rewarding opportunity for you to take your software engineering career to the next level. As a Software Engineer III at JPMorgan Chase within the AI/ML & Data Platform team, you serve as a seasoned member of an agile team to design and deliver trusted market-leading technology products in a secure, stable, and scalable way. You are responsible for carrying out critical technology solutions across multiple technical areas within various business functions in support of the firm's business objectives.
Job responsibilities
- Executes software solutions, design, development, and technical troubleshooting with ability to think beyond routine or conventional approaches to build solutions or break down technical problems
- Creates secure and high-quality production code and maintains algorithms that run synchronously with appropriate systems
- Produces architecture and design artifacts for complex applications while being accountable for ensuring design constraints are met by software code development
- Gathers, analyzes, synthesizes, and develops visualizations and reporting from large, diverse data sets in service of continuous improvement of software applications and systems
- Proactively identifies hidden problems and patterns in data and uses these insights to drive improvements to coding hygiene and system architecture
- Contributes to software engineering communities of practice and events that explore new and emerging technologies
- Adds to team culture of diversity, equity, inclusion, and respect
Required qualifications, capabilities, and skills
- Formal training or certification on software engineering concepts and 3+ years applied experience
- Hands-on experience in Python, AWS & Terraform
- Experience in developing, debugging, and maintaining code in a large corporate environment with one or more modern programming languages and database querying languages
- Overall knowledge of the Software Development Life Cycle
- Solid understanding of agile methodologies such as CI/CD, Application Resiliency, and Security
- Demonstrated knowledge of software applications and technical processes within a technical discipline (e.g., cloud, artificial intelligence, machine learning, mobile, etc.)
Preferred qualifications, capabilities, and skills
- AWS Solutions Architect / Developer or any advanced-level certification preferred
- Proficiency across the data lifecycle
Posted 4 days ago
5.0 - 10.0 years
0 Lacs
Hyderābād
Remote
OSP India, now part of one.O
OSP India - Hyderabad Private Limited takes a significant step forward in its evolution by becoming part of Otto Group one.O, the new central, high-performance partner for strategy consulting and technology for the Otto Group. This strengthens our mission to deliver innovative IT solutions for commerce and logistics, combining experience, technology, and a global vision to lead the digital future.
OSP India's name transition
OSP India will adopt the name Otto Group one.O in the future, following our headquarters' rebranding. We want to assure you that this brand name change will not affect your role, job security, or our company culture. This transition aligns us with our global teams in Germany, Spain, and Taiwan and enhances our collaboration moving forward.
Job Overview: We are seeking a skilled and experienced Senior Python Developer with strong expertise in Google Cloud Platform (GCP), data analytics, and infrastructure automation using Terraform. The candidate will play a key role in building scalable data solutions, implementing secure cloud services, and driving actionable insights through analytics.
Requirements
- Design, develop, and maintain scalable Python applications for data processing and analytics.
- Utilize GCP services including IAM, BigQuery, Cloud Storage, Pub/Sub, and others.
- Build and maintain robust data pipelines and ETL workflows.
- Collaborate with analytics teams to transform data into valuable business insights.
- Develop and manage infrastructure using Terraform (IaC).
- Ensure data quality, integrity, and governance across all systems.
- Participate in architectural decisions, code reviews, and agile ceremonies.
Required Skills and Experience:
- 5 to 10 years of hands-on experience with Python development.
- Proficient in the Pandas library for data manipulation and analysis.
- Strong working knowledge of GCP (especially IAM, BigQuery, Cloud Functions, Cloud Storage).
- Experience in designing and deploying data analytics and ETL pipelines.
- Proficiency with Terraform and Infrastructure as Code practices.
- Strong analytical skills and ability to derive insights from large datasets.
- Understanding of best practices around data quality, security, and compliance.
Preferred Qualifications:
- Experience with MongoDB is an advantage.
- GCP Certification (e.g., Professional Data Engineer or Cloud Architect).
- Familiarity with Docker and Kubernetes.
- Exposure to CI/CD tools and DevOps practices.
- Strong collaboration and communication skills.
Benefits
- Flexible Working Hours: Support for work-life balance through adaptable scheduling.
- Comprehensive Medical Insurance: Coverage for employees and families, ensuring access to quality healthcare.
- Hybrid Work Model: Blend of in-office collaboration and remote work opportunities, with four days a week in the office.
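For illustration of the BigQuery-to-Pandas analytics workflow this posting describes, a minimal Python sketch (not part of the posting; the project, dataset, and table names are placeholders, and Application Default Credentials are assumed):

```python
# Query BigQuery and load the result into a Pandas DataFrame for analysis.
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project ID

sql = """
    SELECT order_date, SUM(order_value) AS daily_revenue
    FROM `example-project.sales.orders`              -- placeholder dataset/table
    GROUP BY order_date
    ORDER BY order_date
"""

df = client.query(sql).to_dataframe()                 # run query, fetch into Pandas
df["rolling_7d"] = df["daily_revenue"].rolling(7).mean()  # simple derived metric
print(df.tail())
```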
Posted 4 days ago
2.0 years
6 - 8 Lacs
India
On-site
About the Role
We are looking for a DevOps Engineer to build and maintain scalable, secure, and high-performance infrastructure for our next-generation healthcare platform. You will be responsible for automation, CI/CD pipelines, cloud infrastructure, and system reliability, ensuring seamless deployment and operations.
Responsibilities
1. Infrastructure & Cloud Management
- Design, deploy, and manage cloud-based infrastructure (AWS, Azure, GCP)
- Implement containerization (Docker, Kubernetes) and microservices orchestration
- Optimize infrastructure cost, scalability, and performance
2. CI/CD & Automation
- Build and maintain CI/CD pipelines for automated deployments
- Automate infrastructure provisioning using Terraform, Ansible, or CloudFormation
- Implement GitOps practices for streamlined deployments
3. Security & Compliance
- Ensure adherence to ABDM, HIPAA, GDPR, and healthcare security standards
- Implement role-based access controls, encryption, and network security best practices
- Conduct Vulnerability Assessment & Penetration Testing (VAPT) and compliance audits
4. Monitoring & Incident Management
- Set up monitoring, logging, and alerting systems (Prometheus, Grafana, ELK, Datadog, etc.)
- Optimize system reliability and automate incident response mechanisms
- Improve MTTR (Mean Time to Recovery) and system uptime KPIs
5. Collaboration & Process Improvement
- Work closely with development and QA teams to streamline deployments
- Improve DevSecOps practices and cloud security policies
- Participate in architecture discussions and performance tuning
Required Skills & Qualifications
- 2+ years of experience in DevOps, cloud infrastructure, and automation
- Hands-on experience with AWS and Kubernetes
- Proficiency in Docker and CI/CD tools (Jenkins, GitHub Actions, ArgoCD, etc.)
- Experience with Terraform, Ansible, or CloudFormation
- Strong knowledge of Linux, shell scripting, and networking
- Experience with cloud security, monitoring, and logging solutions
Nice to Have
- Experience in healthcare or other regulated industries
- Familiarity with serverless architectures and AI-driven infrastructure automation
- Knowledge of big data pipelines and analytics workflows
What You'll Gain
- Opportunity to build and scale a mission-critical healthcare infrastructure
- Work in a fast-paced startup environment with cutting-edge technologies
- Growth potential into Lead DevOps Engineer or Cloud Architect roles
Job Type: Full-time
Pay: ₹600,000.00 - ₹800,000.00 per year
Schedule: Day shift, Morning shift
Work Location: In person
Speak with the employer: +91 9575285285
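For illustration of the kind of cost and compliance hygiene automation this role involves, a small boto3 sketch (not part of the posting; AWS credentials are assumed to be configured, and the "Environment" tag is an example convention):

```python
# Report EC2 instances that are missing an "Environment" tag.
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")

untagged = []
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            if "Environment" not in tags:
                untagged.append(instance["InstanceId"])

print(f"Instances missing an Environment tag: {untagged}")
```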
Posted 4 days ago
5.0 years
0 Lacs
Gurgaon
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are seeking a highly skilled and experienced Senior Cloud Native Developer to join our team and drive the design, development, and delivery of cutting-edge cloud-based solutions on Google Cloud Platform (GCP). This role emphasizes technical expertise, best practices in cloud-native development, and a proactive approach to implementing scalable and secure cloud solutions.
Responsibilities
- Design, develop, and deploy cloud-based solutions using GCP, adhering to architecture standards and best practices
- Code and implement Java applications using GCP Native Services like GKE, CloudRun, Functions, Firestore, CloudSQL, and Pub/Sub
- Select appropriate GCP services to address functional and non-functional requirements
- Demonstrate deep expertise in GCP PaaS, Serverless, and Database services
- Ensure compliance with security and regulatory standards across all cloud solutions
- Optimize cloud-based solutions to enhance performance, scalability, and cost-efficiency
- Stay updated on emerging cloud technologies and trends in the industry
- Collaborate with cross-functional teams to architect and deliver successful cloud implementations
- Leverage foundational knowledge of GCP AI services, including Vertex AI, Code Bison, and Gemini models when applicable
Requirements
- 5+ years of extensive experience in designing, implementing, and maintaining applications on GCP
- Comprehensive expertise in using GCP services, including GKE, CloudRun, Functions, Firestore, Firebase, and Cloud SQL
- Knowledge of advanced GCP services, such as Apigee, Spanner, Memorystore, Service Mesh, Gemini Code Assist, Vertex AI, and Cloud Monitoring
- Solid understanding of cloud security best practices and expertise in implementing security controls in GCP
- Proficiency in cloud architecture principles and best practices, with a focus on scalable and reliable solutions
- Experience with automation and configuration management tools, particularly Terraform, along with a strong grasp of DevOps principles
- Familiarity with front-end technologies like Angular or React
Nice to have
- Familiarity with GCP GenAI solutions and models, including Vertex AI, Codebison, and Gemini models
- Background in working with front-end frameworks and technologies to complement back-end cloud development
- Capability to design end-to-end solutions integrating modern AI and cloud technologies
We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn learning solutions
- Possibility to relocate to any EPAM office for short and long-term projects
- Focused individual development
- Benefit package: Health benefits, Retirement benefits, Paid time off, Flexible benefits
- Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
Posted 4 days ago
10.0 years
20 - 25 Lacs
Gurgaon
On-site
DevOps Architect / Senior DevOps Engineer
Experience: 10+ Years
Location: Noida
Employment Type: Full-Time
Job Summary: We are seeking a highly skilled and experienced DevOps Architect / Senior DevOps Engineer with 10+ years of expertise in designing, implementing, and managing robust DevOps ecosystems across AWS, Azure, and GCP. The ideal candidate will possess a deep understanding of cloud infrastructure, automation, CI/CD pipelines, container orchestration, and infrastructure as code. This role is both strategic and hands-on, driving innovation, scalability, and operational excellence in cloud-native environments.
Key Responsibilities:
- Architect and manage DevOps solutions across multi-cloud platforms (AWS, Azure, GCP).
- Build and optimize CI/CD pipelines and release management processes.
- Define and enforce cloud-native best practices for scalability, reliability, and security.
- Design and implement Infrastructure as Code (IaC) using tools like Terraform, Ansible, CloudFormation, or ARM templates.
- Deploy and manage containerized applications using Docker and Kubernetes.
- Implement monitoring, logging, and alerting frameworks (e.g., ELK, Prometheus, Grafana, CloudWatch).
- Drive automation initiatives and eliminate manual processes across environments.
- Collaborate with development, QA, and operations teams to integrate DevOps culture and workflows.
- Lead cloud migration and modernization projects.
- Ensure compliance, cost optimization, and governance across environments.
Required Skills & Qualifications:
- 10+ years of experience in DevOps / Cloud / Infrastructure / SRE roles.
- Strong expertise in at least two major cloud platforms (AWS, Azure, GCP) with working knowledge of the third.
- Advanced knowledge of Docker, Kubernetes, and container orchestration.
- Deep understanding of CI/CD tools (e.g., Jenkins, GitLab CI, Azure DevOps, ArgoCD).
- Hands-on experience with IaC tools: Terraform, Ansible, Pulumi, etc.
- Proficiency in scripting languages like Python, Shell, or Go.
- Strong background in networking, cloud security, and cost optimization.
- Experience with DevSecOps and integrating security into DevOps practices.
- Bachelor's/Master's degree in Computer Science, Engineering, or related field.
- Relevant certifications preferred (e.g., AWS DevOps Engineer, Azure DevOps Expert, Google Professional DevOps Engineer).
Preferred Skills:
- Multi-cloud or hybrid cloud experience.
- Exposure to service mesh, API gateways, and serverless architectures.
- Familiarity with GitOps, policy-as-code, and site reliability engineering (SRE) principles.
- Experience in high-availability, disaster recovery, and compliance (SOC2, ISO, etc.).
- Agile/Scrum or SAFe experience in enterprise environments.
Job Type: Full-time
Pay: ₹2,000,000.00 - ₹2,500,000.00 per year
Ability to commute/relocate: Gurgaon, Haryana: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps: 10 years (Required)
Work Location: In person
Speak with the employer: +91 8580563551
Posted 4 days ago
7.0 - 12.0 years
1 - 6 Lacs
Gurgaon
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are looking for an experienced and motivated Lead Software Engineer with expertise in Python and ReactJS to oversee the development of high-quality, scalable applications and lead a team of talented developers. This role requires a strong technical background, leadership skills, and a commitment to driving innovative solutions aligned with business needs.
Responsibilities
- Collaborate with stakeholders to gather requirements, create technical designs, and align solutions with business goals
- Ensure code quality and performance benchmarks through technical reviews, including code reviews and design discussions
- Drive architecture decisions and ensure implementation of best practices across the development lifecycle
- Provide mentorship to team members by sharing expertise, insights, and professional guidance
- Develop and maintain efficient, sustainable, and scalable applications in Python and ReactJS
- Implement UI/UX designs with React, leveraging frameworks such as Material UI to create functional and visually appealing interfaces
- Oversee cloud infrastructure setup, ensuring efficient deployments and maintenance using technologies like Terraform and ArgoCD
- Facilitate pipeline automation and continuous delivery processes with tools like ADO Pipelines and GitHub Actions
- Collaborate cross-functionally with QA, product management, and DevOps teams to maintain project timelines and quality benchmarks
Requirements
- Background with 7-12 years of professional software engineering experience
- Proficiency in Python for application development and problem-solving
- Expertise in ReactJS and experience with Material UI frameworks for UI/UX development
- Competency in JavaScript frameworks and TypeScript for creating reliable and scalable solutions
- Hands-on experience with cloud infrastructure tools such as Terraform and ArgoCD
- Skills in leveraging continuous delivery tools like ADO Pipelines and GitHub Actions to streamline deployment processes
Nice to have
- Familiarity with Agile development practices and methodologies to enhance team collaboration
- Understanding of modern development trends and emerging technologies in web and cloud computing
- Capability to manage and optimize large-scale, distributed systems
We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn learning solutions
- Possibility to relocate to any EPAM office for short and long-term projects
- Focused individual development
- Benefit package: Health benefits, Retirement benefits, Paid time off, Flexible benefits
- Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
Posted 4 days ago
10.0 years
2 - 7 Lacs
Gurgaon
On-site
About the Role
We are seeking a skilled L2/L3 System Engineer with strong expertise in cloud technologies and Terraform to join our team. The ideal candidate will have deep experience in Landing Zones, AWS (preferred), Windows, and an appreciation for Azure and Linux environments, with a minimum of 10+ years of technical experience. This role is pivotal in ensuring the stability, scalability, and security of our cloud infrastructure, while supporting operational excellence.
Key Responsibilities
- Design, implement, and maintain Cloud Landing Zones for secure and scalable infrastructure.
- Develop and manage Infrastructure as Code (IaC) using Terraform.
- Provide L2/L3 technical support for cloud environments, ensuring high availability and performance.
- Troubleshoot complex infrastructure issues across AWS, Windows, and Azure/Linux systems.
- Automate cloud deployments and configuration management processes.
- Collaborate with cross-functional teams to optimize cloud solutions based on best practices.
- Monitor system performance, security, and compliance requirements.
- Support migrations and enhancements of existing cloud environments.
Required Qualifications
- Strong cloud knowledge, with a preference for AWS, though Azure and Linux expertise is also valued.
- Expertise in Terraform for infrastructure automation.
- Experience in Landing Zone architectures and best practices.
- Advanced troubleshooting skills across Windows-based systems and familiarity with Linux/Azure environments.
- Knowledge of networking, security, and cloud governance principles.
- Ability to work in a fast-paced, agile environment and collaborate effectively with technical and non-technical teams.
Preferred Qualifications
- Master's or Bachelor's degree in Computer Science, Information Technology, or a related field.
- Certifications in AWS, Azure, Terraform, or relevant cloud technologies.
If you're passionate about cloud technologies, automation, and delivering high-quality system solutions, we encourage you to apply!
Globally, our policy is to recruit individuals from wide and diverse backgrounds. However, certain positions require access to controlled goods and technologies subject to the International Traffic in Arms Regulations (ITAR) or Export Administration Regulations (EAR). Applicants for these positions may need to be "U.S. persons." "U.S. persons" are generally defined as U.S. citizens, noncitizen nationals, lawful permanent residents (or green card holders), individuals granted asylum, and individuals admitted as refugees.
MKS Instruments, Inc. and its affiliates and subsidiaries ("MKS") is an affirmative action and equal opportunity employer: diverse candidates are encouraged to apply. We win as a team and are committed to recruiting and hiring qualified applicants regardless of race, color, national origin, sex (including pregnancy and pregnancy-related conditions), religion, age, ancestry, physical or mental disability or handicap, marital status, membership in the uniformed services, veteran status, sexual orientation, gender identity or expression, genetic information, or any other category protected by applicable law. Hiring decisions are based on merit, qualifications and business needs. We conduct background checks and drug screens, in accordance with applicable law and company policies. MKS is generally only hiring candidates who reside in states where we are registered to do business. MKS is committed to working with and providing reasonable accommodations to qualified individuals with disabilities.
If you need a reasonable accommodation during the application or interview process due to a disability, please contact us at: accommodationsatMKS@mksinst.com. If applying for a specific job, please include the requisition number (ex: RXXXX), the title and location of the role.
Posted 4 days ago
0 years
3 - 9 Lacs
Gurgaon
On-site
Senior DevOps Engineer (L3)
As a Senior DevOps Engineer at Spring, you are a strategic leader and technical expert responsible for designing, building, and scaling our infrastructure platforms and delivery systems. You help set the standard for reliability, observability, cost-efficiency, and velocity across engineering, and play a critical role in enabling Spring to scale securely and sustainably.
You own cross-functional infrastructure initiatives that support new products, integrations, or compliance requirements. You contribute to the long-term evolution of Spring's platform architecture and collaborate closely with engineering, product, and business leadership to ensure infrastructure decisions support strategic objectives. You mentor other engineers, lead technical design efforts, and serve as a go-to partner for high-risk or business-critical systems.
At this level, you're expected to deeply understand how Spring's technical architecture supports our business. You know how key services interact across domains, from payment reconciliation to identity verification to customer communications. You anticipate and mitigate systemic risks, lead readiness reviews, and help shape platform roadmaps that balance scale, reliability, and speed. You also collaborate with security, IT, sysadmins, and network teams on strategic concerns like multi-region availability, compliance automation, VPN routing, SOC2 readiness, data loss prevention, and endpoint security.
What you'll do:
- Lead the design and implementation of scalable, secure, and cost-efficient cloud infrastructure.
- Own architecture and execution of high-impact platform initiatives (e.g., CD migration, zero-downtime deploys, logging architecture).
- Collaborate with Security, IT, and Infrastructure teams to define and implement org-wide access, audit, and reliability standards.
- Proactively identify technical debt, scalability concerns, and risk across multiple systems and services.
- Guide platform investments and architecture strategy in partnership with engineering and product leadership.
- Mentor engineers and set best practices in observability, incident response, and infrastructure evolution.
Requirements:
- Deep expertise in a major cloud provider, such as AWS, and Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or CDK.
- Extensive, hands-on experience with containerization and orchestration technologies, such as Docker, Kubernetes (and its variants like EKS), and Helm.
- Strong proficiency in a programming language like Python or Go for creating robust automation and tooling.
- Deep expertise in designing and managing CI/CD pipelines, with experience in modern practices like GitOps using tools such as ArgoCD or Flux.
- Expert-level knowledge of Linux/UNIX systems administration, version control (Git), and internals.
- Advanced understanding of networking (VPC design, DNS, routing, service mesh) and cloud security (IAM, threat modeling, compliance automation).
- Deep understanding of observability, monitoring, and alerting using tools such as the Prometheus stack (Prometheus, Grafana, Loki) and/or commercial platforms like Datadog.
- Proven experience leading infrastructure strategy and operating high-availability production systems.
- Strong leadership, cross-functional influence, and a business-first mindset when making technical trade-offs.
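For illustration of the Prometheus-based observability this posting emphasizes, a minimal Python sketch using the prometheus_client library (not part of the posting; the metric names and request loop are placeholders):

```python
# Expose custom application metrics for Prometheus to scrape.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():                       # record how long the "request" takes
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
    REQUESTS.labels(status="200").inc()        # count successful requests

if __name__ == "__main__":
    start_http_server(8000)                    # metrics served at /metrics on port 8000
    while True:
        handle_request()
```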
Posted 4 days ago
8.0 years
6 - 9 Lacs
Gurgaon
On-site
- 8+ years' experience with strong hands-on skills in C#, ASP.NET, Web API, WCF, ADO, ORM (Entity Framework, Dapper), and Containers, 3+ years in cloud architecture.
- Experience in migration and modernization of .NET Framework applications to .NET.
- Knowledge of various modernization strategies, such as rehosting, replatforming, and refactoring.
- Experienced in designing and developing cloud-native applications on AWS (required) and other platforms, proficient in AWS services (EC2, S3, Lambda, CloudFormation), agile development, and automated CI/CD pipelines.
- Bachelor's degree or equivalent in IT, Computer Science, Math, Physics, or related fields, with strong communication skills for diverse audiences.
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.
Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.
The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.
Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts
About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
- AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer) with knowledge of Microsoft modernization tools like Microsoft Upgrade Assistant and AWS Transform (.NET).
- Experienced with Generative AI coding assistants (Amazon Q Developer, GitHub Copilot), automation/scripting (PowerShell, Terraform, Python), and database migration/modernization.
- Experience in UI development using JavaScript/TypeScript frameworks such as Angular and React, plus knowledge of security and compliance standards (HIPAA, GDPR).
- Conducts technical workshops, training, and knowledge-sharing; contributes to blogs or open-source projects; provides technical guidance, best practices, and mentorship to teams and customers.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 4 days ago
3.0 - 5.0 years
3 - 7 Lacs
Gurgaon
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are seeking a skilled and innovative Systems Engineer with a strong focus on the Google Cloud Platform (GCP) to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining cloud-based infrastructure solutions, ensuring optimal performance and scalability for our ongoing projects.
Responsibilities
- Design, configure, and maintain the GCP environment for the data mesh architecture project
- Develop infrastructure using an Infrastructure-as-Code approach on GCP
- Create CI/CD pipelines and automation with deployment models using GitHub Actions
- Collaborate with cross-functional teams to define cloud infrastructure requirements and ensure scalability, security, and reliability
- Implement continuous integration and deployment pipelines aligned with DevOps standards
- Document all aspects of GCP infrastructure and deployment processes
- Troubleshoot and resolve technical issues or performance inefficiencies on the GCP platform
- Optimize costs and consistently evaluate GCP resources for better performance
- Ensure compliance with security policies and recommend improvements where needed
- Perform regular monitoring, maintenance, and upgrades for cloud infrastructure
Requirements
- 3-5 years of experience working with Google Cloud Platform services, including compute, storage, networking, and security
- Demonstrated background in designing and implementing scalable cloud infrastructure on GCP
- Proficiency in DevOps practices, CI/CD workflows, and automation using tools such as GitHub Actions
- Understanding of Infrastructure-as-Code frameworks such as Terraform or similar tools
- Strong analytical and problem-solving skills to address complex cloud-related challenges effectively
- Familiarity with cloud performance monitoring, security best practices, and cost optimization techniques
We offer
- Opportunity to work on technical challenges that may impact across geographies
- Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn learning solutions
- Possibility to relocate to any EPAM office for short and long-term projects
- Focused individual development
- Benefit package: Health benefits, Retirement benefits, Paid time off, Flexible benefits
- Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
Posted 4 days ago
8.0 years
4 - 8 Lacs
Gurgaon
On-site
- 8+ years' experience in Java/J2EE and 2+ years on any Cloud Platform; Bachelor's in IT, CS, Math, Physics, or related field.
- Strong skills in Java, J2EE, REST, SOAP, Web Services, and deploying on servers like WebLogic, WebSphere, Tomcat, JBoss.
- Proficient in UI development using JavaScript/TypeScript frameworks such as Angular and React.
- Experienced in building scalable business software with core AWS services and engaging with customers on best practices and project management.
The Amazon Web Services Professional Services (ProServe) team is seeking a skilled Delivery Consultant to join our team at Amazon Web Services (AWS). In this role, you'll work closely with customers to design, implement, and manage AWS solutions that meet their technical requirements and business objectives. You'll be a key player in driving customer success through their cloud journey, providing technical expertise and best practices throughout the project lifecycle.
Possessing a deep understanding of AWS products and services, as a Delivery Consultant you will be proficient in architecting complex, scalable, and secure solutions tailored to meet the specific needs of each customer. You'll work closely with stakeholders to gather requirements, assess current infrastructure, and propose effective migration strategies to AWS. As trusted advisors to our customers, providing guidance on industry trends, emerging technologies, and innovative solutions, you will be responsible for leading the implementation process, ensuring adherence to best practices, optimizing performance, and managing risks throughout the project.
The AWS Professional Services organization is a global team of experts that help customers realize their desired business outcomes when using the AWS Cloud. We work together with customer teams and the AWS Partner Network (APN) to execute enterprise cloud computing initiatives. Our team provides assistance through a collection of offerings which help customers achieve specific outcomes related to enterprise cloud adoption. We also deliver focused guidance through our global specialty practices, which cover a variety of solutions, technologies, and industries.
Key job responsibilities
As an experienced technology professional, you will be responsible for:
- Designing and implementing complex, scalable, and secure AWS solutions tailored to customer needs
- Providing technical guidance and troubleshooting support throughout project delivery
- Collaborating with stakeholders to gather requirements and propose effective migration strategies
- Acting as a trusted advisor to customers on industry trends and emerging technologies
- Sharing knowledge within the organization through mentoring, training, and creating reusable artifacts
About the team
Diverse Experiences: AWS values diverse experiences. Even if you do not meet all of the preferred qualifications and skills listed in the job below, we encourage candidates to apply. If your career is just starting, hasn't followed a traditional path, or includes alternative experiences, don't let it stop you from applying.
Why AWS? Amazon Web Services (AWS) is the world's most comprehensive and broadly adopted cloud platform. We pioneered cloud computing and never stopped innovating; that's why customers from the most successful startups to Global 500 companies trust our robust suite of products and services to power their businesses.
Inclusive Team Culture - Here at AWS, it's in our nature to learn and be curious. Our employee-led affinity groups foster a culture of inclusion that empower us to be proud of our differences. Ongoing events and learning experiences, including our Conversations on Race and Ethnicity (CORE) and AmazeCon (gender diversity) conferences, inspire us to never stop embracing our uniqueness.
Mentorship & Career Growth - We're continuously raising our performance bar as we strive to become Earth's Best Employer. That's why you'll find endless knowledge-sharing, mentorship and other career-advancing resources here to help you develop into a better-rounded professional.
Work/Life Balance - We value work-life harmony. Achieving success at work should never come at the expense of sacrifices at home, which is why we strive for flexibility as part of our working culture. When we feel supported in the workplace and at home, there's nothing we can't achieve in the cloud.
- AWS experience preferred, with proficiency in EC2, S3, RDS, Lambda, IAM, VPC, CloudFormation, and AWS Professional certifications (e.g., Solutions Architect, DevOps Engineer).
- Strong scripting and automation skills (Terraform, Python) and knowledge of security/compliance standards (HIPAA, GDPR).
- Strong communication skills, able to explain technical concepts to both technical and non-technical audiences.
- Experience in designing, developing, and deploying scalable business software using AWS services like Lambda, Elastic Beanstalk, and Kubernetes.
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.
Posted 4 days ago
2.0 years
5 - 7 Lacs
Gurgaon
On-site
Ahom Technologies Pvt. Ltd. is looking for a skilled and proactive DevOps Engineer to join our growing technology team. The ideal candidate will be responsible for ensuring seamless integration, continuous delivery, and stable deployment of applications across environments. You will collaborate closely with developers, QA, and infrastructure teams to automate processes, monitor performance, and enhance the CI/CD pipeline.
Key Responsibilities:
- Design, implement, and manage CI/CD pipelines for application deployment.
- Automate infrastructure provisioning using tools like Terraform, Ansible, or CloudFormation.
- Monitor application performance and availability using tools such as Prometheus, Grafana, Datadog, or the ELK Stack.
- Maintain and manage cloud infrastructure on AWS / Azure / GCP.
- Ensure security best practices, compliance, and system hardening.
- Develop and maintain scripts/tools to support infrastructure and deployments.
- Manage containers and orchestration using Docker and Kubernetes.
- Troubleshoot and resolve infrastructure, network, and deployment-related issues.
- Support development teams in implementing automation and DevOps best practices.
Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 2-5 years of experience as a DevOps Engineer or SRE.
- Hands-on experience with Linux systems, shell scripting, and Git.
- Expertise in tools like Jenkins, GitLab CI, Bamboo, or CircleCI.
- Proficiency with cloud services (AWS/GCP/Azure) and serverless architecture.
- Experience with Docker, Kubernetes, and container orchestration.
- Good understanding of networking, security, and load balancing.
- Excellent problem-solving, documentation, and communication skills.
Preferred Qualifications:
- Knowledge of Agile methodologies and experience working in cross-functional teams.
- Familiarity with IaC (Infrastructure as Code) and configuration management.
Why Work With Us?
- Exposure to modern DevOps tools and cloud technologies
- Collaborative and growth-driven environment
- Flexible work culture and continuous learning support
- Opportunities to work on enterprise-scale systems
*Only candidates from Delhi NCR need apply
*Immediate joiners preferred
Job Type: Full-time
Pay: ₹500,000.00 - ₹700,000.00 per year
Benefits: Provident Fund
Schedule: Day shift
Application Question(s):
- We want to fill this position urgently. Are you an immediate joiner?
- Do you have experience with Docker, Kubernetes, and container orchestration?
- Are you proficient in JavaScript deployment & networking?
- Are you proficient in Docker, Nginx & Apache?
Work Location: In person
Speak with the employer: +91 9267985735
Application Deadline: 16/06/2025
Expected Start Date: 23/06/2025
Posted 4 days ago
5.0 years
4 - 6 Lacs
Gurgaon
On-site
Key Responsibilities:
- Design, implement, and maintain scalable CI/CD pipelines.
- Manage and monitor cloud infrastructure (AWS, Azure, or GCP).
- Design and implement scalable and secure cloud infrastructure solutions.
- Manage and optimize cloud resources to ensure high availability and performance.
- Monitor cloud environments and implement proactive measures for reliability and cost-efficiency.
- Collaborate with development and operations teams to support cloud-native applications.
- Ensure compliance with security standards and implement the best practices in cloud security.
- Troubleshoot and resolve issues related to cloud infrastructure and services.
- Implement and maintain container orchestration platforms (e.g., Kubernetes, Docker Swarm).
- Collaborate with development and QA teams to streamline deployment processes.
- Ensure system reliability, availability, and performance through monitoring and alerting tools (e.g., Prometheus, Grafana, ELK Stack).
- Maintain security best practices across infrastructure and deployments.
- Troubleshoot and resolve issues in development, test, and production environments.
Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or related field.
- 5+ years of experience in DevOps or related roles.
- Proficiency in scripting languages (e.g., Bash, Python, or Go).
- Strong experience with CI/CD tools (e.g., Jenkins, GitLab CI, CircleCI).
- Hands-on experience with containerization and orchestration (Docker, Kubernetes).
- Experience with infrastructure as code (Terraform, Ansible, or similar).
- Familiarity with monitoring and logging tools.
- Excellent problem-solving and communication skills.
Preferred Qualifications:
- Certifications such as AWS Certified DevOps Engineer, CKA/CKAD, or similar.
- Cloud certifications such as AWS Certified Solutions Architect, Azure Solutions Architect Expert, or Google Cloud Professional Cloud Architect.
- Experience with serverless architectures and microservices.
- Familiarity with Agile/Scrum methodologies.
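For illustration of the scripting and monitoring side of a role like this, a minimal Python sketch of an endpoint health check (not part of the posting; the URLs are placeholders, and real setups would feed results into Prometheus, Grafana, or an alerting tool):

```python
# Check a list of HTTP endpoints and exit non-zero if any are unhealthy.
import sys

import requests

ENDPOINTS = [
    "https://example.com/healthz",       # placeholder URLs
    "https://api.example.com/status",
]

def is_healthy(url, timeout=5):
    try:
        return requests.get(url, timeout=timeout).status_code == 200
    except requests.RequestException:
        return False

failures = [url for url in ENDPOINTS if not is_healthy(url)]
if failures:
    print(f"UNHEALTHY: {failures}")
    sys.exit(1)                          # non-zero exit lets a scheduler or CI job alert
print("All endpoints healthy")
```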
Posted 4 days ago
2.0 years
4 - 8 Lacs
Mohali
On-site
The Role
As a DevOps Engineer, you will be an integral part of the product and service division, working closely with development teams to ensure seamless deployment, scalability, and reliability of our infrastructure. You'll help build and maintain CI/CD pipelines, manage cloud infrastructure, and contribute to system automation. Your work will directly impact the performance and uptime of our flagship product, BotPenguin.
What you need for this role
- Education: Bachelor's degree in Computer Science, IT, or a related field.
- Experience: 2-5 years in DevOps or similar roles.
Technical Skills:
- Proficiency in CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
- Experience with containerization and orchestration using Docker and Kubernetes.
- Strong understanding of cloud platforms, especially AWS & Azure.
- Familiarity with infrastructure as code tools such as Terraform or CloudFormation.
- Knowledge of monitoring and logging tools like Prometheus, Grafana, and ELK Stack.
- Good scripting skills in Bash, Python, or similar languages.
Soft Skills:
- Detail-oriented with a focus on automation and efficiency.
- Strong problem-solving abilities and proactive mindset.
- Effective communication and collaboration skills.
What you will be doing
- Build, maintain, and optimize CI/CD pipelines.
- Monitor and improve system performance, uptime, and scalability.
- Manage and automate cloud infrastructure deployments.
- Work closely with developers to support release processes and environments.
- Implement security best practices in deployment and infrastructure management.
- Ensure high availability and reliability of services.
- Document procedures and provide support for technical troubleshooting.
- Contribute to training junior team members, and assist HR and operations teams with tech-related concerns as required.
Top reasons to work with us
- Be part of a cutting-edge AI startup driving innovation in chatbot automation.
- Work with a passionate and talented team that values knowledge-sharing and problem-solving.
- Growth-oriented environment with ample learning opportunities.
- Exposure to top-tier global clients and projects with real-world impact.
- A culture that fosters creativity, ownership, and collaboration.
Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, Health insurance, Leave encashment, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps: 2 years (Required)
Work Location: In person
Speak with the employer: +91 8319533183
Posted 4 days ago
10.0 years
7 - 8 Lacs
Pune
Remote
TransUnion's Job Applicant Privacy Notice
What We'll Bring:
We are seeking a Lead Database Engineer to join our DevOps organization. This role is ideal for a seasoned Oracle expert who thrives in complex, mature environments and is passionate about modernization and operational excellence. You will play a critical role in maintaining the health of our on-prem Oracle infrastructure while helping guide our transition toward open-source and cloud-native database technologies.
What You'll Bring:
- Serve as the primary SME for Oracle database troubleshooting, performance tuning, and root cause analysis in a high-availability environment.
- Lead and execute various DB activities such as database upgrades, patching, and migrations.
- Design and implement replication, backup, and disaster recovery strategies.
- Collaborate with DevOps and SRE teams to automate database operations and integrate with CI/CD pipelines.
- Provide technical leadership and mentorship to junior DBAs and engineers.
- Partner with application teams to support deployments, schema changes, and performance optimization.
- Contribute to the strategic roadmap for database modernization, including evaluation and adoption of PostgreSQL and cloud-native solutions.
- Ensure compliance with security, audit, and data governance requirements.
Impact You'll Make:
- 10+ years of experience as an Oracle DBA in enterprise environments.
- Proven expertise in troubleshooting and performance tuning of complex Oracle systems.
- Deep knowledge of Oracle technologies: RAC, DataGuard, ASM, Exadata, GoldenGate, CDB/PDB, Grid Control.
- Strong SQL and PL/SQL skills; experience with scripting (e.g., Bash, Python, or Perl).
- Experience with PostgreSQL and/or cloud-native databases (e.g., Aurora, RDS) is a strong plus.
- Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and monitoring platforms.
- Comfortable working in a Kanban/SAFe Agile environment.
- Bachelor's degree in Computer Science or related field (or equivalent experience).
- Oracle Certified Professional (OCP) certification required.
This is a remote position which may require occasional in-person attendance at work-related events at the discretion of management.
TransUnion Job Title: Lead Engineer, Database Engineering
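For illustration of the kind of routine check a DBA might script during a PostgreSQL adoption like the one described above, a minimal Python sketch using psycopg2 (not part of the posting; the connection details are placeholders and a standby/replica server is assumed):

```python
# Report replication lag on a PostgreSQL standby and warn above a threshold.
import os

import psycopg2

conn = psycopg2.connect(
    host="replica.example.internal",          # placeholder host
    dbname="appdb",                           # placeholder database
    user="monitor",
    password=os.environ.get("PGPASSWORD", ""),
)
with conn.cursor() as cur:
    # Seconds since the last replayed transaction; NULL when run on a primary.
    cur.execute(
        "SELECT EXTRACT(EPOCH FROM (now() - pg_last_xact_replay_timestamp()))"
    )
    lag_seconds = cur.fetchone()[0]
conn.close()

if lag_seconds is not None and lag_seconds > 300:
    print(f"WARNING: replication lag is {lag_seconds:.0f}s")
else:
    print(f"Replication lag OK: {lag_seconds}")
```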
Posted 4 days ago
6.0 years
10 - 24 Lacs
India
On-site
Location: Baner, Pune
Experience: 6 to 12 years
Looking for immediate joiners only.
We're looking for a highly skilled Senior DevOps Engineer to join our team and take charge of designing, deploying, and managing secure, scalable, and efficient cloud infrastructure. You'll work closely with cross-functional teams to streamline CI/CD pipelines, automate operations, and ensure high availability in production environments.
Requirements:
- 5+ years of hands-on experience in AWS DevOps
- Worked on CI/CD pipelines using GitLab and/or ArgoCD
- Strong expertise in Kubernetes, Docker, and container orchestration
- Proficient in scripting (Python, Bash, PowerShell)
- Experience with monitoring tools (CloudWatch, Grafana, Prometheus, etc.)
- Excellent problem-solving and communication skills
Key Responsibilities:
- Design and manage AWS infrastructure (EC2, S3, VPC, IAM, Lambda, EKS, etc.)
- Automate cloud provisioning and configuration using IaC tools (Terraform/CloudFormation)
- Optimize CI/CD pipelines using GitLab and ArgoCD
- Monitor and maintain system performance, availability, and cost-efficiency
- Troubleshoot infrastructure issues and implement sustainable solutions
- Ensure cloud security, compliance, and best practices
- Collaborate across teams and provide guidance on build, deployment, and network-related tasks
- Stay updated with AWS features and DevOps tools
- Provide on-call support as needed, including weekends or after-hours
Preferred: AWS Certifications
Job Type: Full-time
Pay: ₹1,000,000.00 - ₹2,400,000.00 per year
Application Question(s):
- What is your current CTC?
- What is your notice period?
- What is your expected CTC?
- Are you available for a face-to-face interview?
Experience:
- DevOps: 4 years (Required)
- AWS: 4 years (Required)
- Kubernetes: 2 years (Required)
- AWS CloudFormation: 1 year (Required)
- GitLab: 1 year (Preferred)
- ArgoCD: 1 year (Preferred)
Location: Baner, Pune, Maharashtra (Preferred)
Work Location: In person
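For illustration of the CloudWatch monitoring this posting mentions, a minimal boto3 sketch (not part of the posting; the instance ID and region are placeholders, and AWS credentials are assumed to be configured):

```python
# Fetch average CPU utilization for one EC2 instance over the last hour.
from datetime import datetime, timedelta

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

response = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],  # placeholder
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                      # one datapoint per 5 minutes
    Statistics=["Average"],
)

for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2), "%")
```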
Posted 4 days ago
3.0 years
8 - 8 Lacs
Pune
On-site
• 3+ years of experience in managing cloud infrastructure, specifically in Azure and Google Cloud
• Additional technical cloud experience in one or more of the following platforms: AWS, IBM Cloud, or Oracle Cloud
• Strong insight into cloud architecture and services
• Proficiency in scripting and automation using tools like PowerShell or Python
• Knowledge of security best practices and compliance requirements in cloud environments
• Familiarity with networking concepts and technologies (e.g., TCP/IP, DNS, VPN)
• Experience with infrastructure-as-code tools such as Terraform or CloudFormation
• Experience in driving projects to completion on their own.
Posted 4 days ago
1.0 years
0 Lacs
Navi Mumbai
On-site
As a DevOps Fresher at Arcitech AI, you will play a crucial role in the company's advancements in the industry. This entry-level position offers the opportunity to work on technical challenges and contribute to the professional growth of the company.
Key Responsibilities:
- Assist in configuring and maintaining AWS infrastructure including VPC, EC2, S3, Lambda, IAM, CloudWatch, etc.
- Support in building and managing CI/CD pipelines using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Write basic Bash/Shell/Python scripts for automation tasks.
- Learn and work on Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation.
- Monitor system health and performance using CloudWatch, Grafana, or similar tools.
- Collaborate with developers and senior DevOps engineers to understand application deployment requirements.
- Participate in team meetings, daily stand-ups, and knowledge-sharing sessions.
Required Skills:
- Basic knowledge of AWS services: VPC, EC2, S3, Lambda, IAM, CloudWatch, etc.
- Understanding of Linux/Unix operating systems.
- Familiarity with version control systems like Git and GitHub.
- Exposure to CI/CD tools (e.g., Jenkins, AWS CodePipeline, etc.) is a plus.
- Basic scripting skills (Shell, Bash, or Python).
- Willingness to learn containerization tools like Docker and orchestration tools like Kubernetes.
Qualifications
- Bachelor's degree in Computer Science or related field
- 3 months to 1 year of experience in a similar role
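For illustration of the basic AWS scripting this entry-level role calls for, a minimal boto3 sketch (not part of the posting; the bucket name is a placeholder and AWS credentials are assumed to be configured):

```python
# List the first few objects in an S3 bucket and report their sizes.
import boto3

s3 = boto3.client("s3")

response = s3.list_objects_v2(Bucket="example-bucket", MaxKeys=10)  # placeholder bucket
for obj in response.get("Contents", []):
    print(f'{obj["Key"]}: {obj["Size"]} bytes')
```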
Posted 4 days ago
0 years
3 - 8 Lacs
Bengaluru
On-site
Job Summary
As an Infra. Technology Specialist you will play a crucial role in managing and optimizing our Azure infrastructure. You will be responsible for ensuring seamless integration and automation of various Azure services, contributing to the overall efficiency and security of our cloud environment. This hybrid role requires adaptability to rotational shifts, offering a dynamic work experience.
Responsibilities
- Azure DevOps: end-to-end experience
- Infrastructure as Code: expertise in at least one of Bicep, ARM templates, Terraform, and/or Ansible
- Azure resources: VMs, App Services, Storage Accounts, Event Hub, Key Vault, RBAC, managed identity, Azure Postgres, DNS, load balancing, virtual networks, NSGs, etc.
- Kubernetes (AKS): deployments, Helm charts, monitoring, scaling, troubleshooting, etc.
- Monitoring & alerting: setup using Azure Monitor, Prometheus, Grafana, and OpsGenie
- CI/CD pipelines
- Scripting & automation: strong grasp of PowerShell, Bash, or Python for automation
- GitHub: code versioning, PR management, branching strategy
- Security best practices and security tools like SonarQube, Black Duck, Coverity
- Strong problem-solving and troubleshooting skills
Secondary Skills
- Docker & Docker Compose: containerizing and orchestrating tools like Uptime Kuma, Cache, PostgreSQL, etc.
- Incident management tools: on-call rotations, alert escalations, integration with monitoring tools, handling P1/P2 incidents and escalation policies, sprint planning, Boards
Certification
- Certified Information Systems Security Professional (CISSP) and Certified DevSecOps Engineer are a plus.
Certifications Required
- Microsoft Certified: Azure Solutions Architect Expert
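For illustration of the Azure scripting and automation this role lists, a minimal Python sketch (not part of the posting; it assumes the azure-identity and azure-mgmt-compute packages, a credential available to DefaultAzureCredential, and a subscription ID supplied via an environment variable):

```python
# Inventory the VMs in a subscription, e.g. as input to tagging or patching automation.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]   # assumed environment variable

compute = ComputeManagementClient(credential, subscription_id)

for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location)
```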
Posted 4 days ago
5.0 - 7.0 years
0 Lacs
Bengaluru
On-site
Application Security — Solution Delivery Lead
Deloitte's Cyber Risk Services help our clients to be secure, vigilant, and resilient in the face of an ever-increasing array of cyber threats and vulnerabilities. Our Cyber Risk practice helps organizations with the management of information and technology risks by delivering end-to-end solutions using proven methodologies and tools in a consistent manner. Our services help organizations to address, in a timely manner, pervasive issues, such as identity theft, data security breaches, data leakage, cyber security, and system outages across organizations of various sizes and industries with the goal of enabling ongoing, secure, and reliable operations across the enterprise.
Deloitte's Cyber Risk Services have been recognized as a leader by a number of independent analyst firms. Kennedy Consulting Research & Advisory, a leading analyst firm, recently named Deloitte a global leader in cyber security consulting. Source: Kennedy Consulting Research & Advisory; Cyber Security Consulting 2013; Kennedy Consulting Research & Advisory estimates © 2013 Kennedy Information, LLC. Reproduced under license.
Work you will do
As a Senior Consultant in the hybrid operate business, you are responsible for adhering to the defined operating procedures and guidelines in operating the application security services in the Managed Services model, which includes the following:
- Understand and be compliant with the Service Level Agreements defined for the DevSecOps services;
- Have deep knowledge of application security engineering principles, helping the client's development team and function follow secure development practices, which primarily includes monitoring and performing security design review, architecture review, threat modeling, security testing, secure code review, and secure build processes;
- Be well versed with application deployment and configuration baselines, and understand how the application environment operates in a secure environment and how exceptions are handled during operations;
- Facilitate use of technology-based tools or methodologies to continuously improve the monitoring, management and reliability of the service;
- Perform manual and automated security assessment of the applications;
- Be involved in the triaging and defect tracking process with the development team, helping the team fix issues at the code level based on the priority of the tickets;
- Be a liaison between the application development and infrastructure teams, and integrate infrastructure monitoring and operations processes with the secure development/testing and management processes;
- Identify, research and analyze application security events, which may include emerging and existing persistent threats to the client's environment; and
- Perform active monitoring and tracking of application-related threat actors and tactics, techniques and procedures (TTPs) that could likely cause an impact to the client organization.
The team
Deloitte's DevSecOps is a standardized process to help clients with large development functions and application dependencies for their day-to-day operations. The process enables the client to address key vulnerabilities and risks associated with their various application environments at different stages of the development lifecycle.
At the core of our Application Security Managed Services team, professionals monitor, collect and analyse security-related issues in the application environment (both at the code level and the infrastructure level) that may potentially become a threat to an organization. This detection of application threats/vulnerabilities is carried out using a unique blend of our application security testing and monitoring tools and intelligence data collected through our vast experience within the Advice and Implement business. Required: Minimum of 5-7 years’ experience in application security development, security testing, deployment and security management phases; Deep interest in application-specific vulnerabilities, code development and infrastructure knowledge; Investigative and analytical problem-solving skills; Experience in collecting, analyzing, and interpreting qualitative and quantitative data from defined application security services related sources (tools, monitoring techniques, etc.); Knowledge and experience of OWASP Top 10, SANS Secure Programming, Security Engineering Principles; Hands-on experience in performing code review of .NET, Java, Swift and Objective-C code; Hands-on experience in running, installing and managing SAST, DAST, SCA and IAST solutions, such as Checkmarx, Fortify and Contrast, in large enterprises; Understanding of leading vulnerability scoring standards, such as CVSS, and ability to translate vulnerability severity into security risk (an illustrative severity-mapping sketch follows this posting); Hands-on experience with at least one CI/CD tool set and building pipelines using TeamCity, Bamboo, Jenkins, Chef, Puppet, Selenium, AWS and Azure DevOps; Hands-on experience with container technology such as Kubernetes, Docker, AKS, EKS; Knowledge of cloud environments and deployment solutions such as serverless computing; Hands-on experience in penetration testing of mobile, desktop and web applications; Must have experience in writing custom exploitation scripts and utilities; Possession of excellent oral and written communication skills; Knowledge of one or more scripting languages for automation and complex searches; Must have a cloud security specialization; and Certifications such as EC-Council CEH (Certified Ethical Hacker), DevSecOps Professional (CDP), ISC2 Certified Cloud Security Professional (CCSP), Certified API Security Professional (CASP), CTMP (Certified Threat Modeling Professional), etc. are preferred. Preferred: Bachelor’s in computer science or other technical fields; Experience with cloud service providers such as AWS, GCP, Azure, Oracle; Experience in implementing and managing security measures within Kubernetes environments, designing and enforcing advanced security protocols for API infrastructure, managing and optimizing containerized applications using Docker, automating and managing infrastructure as code using Terraform, automating IT processes and configurations using Ansible, and identifying and mitigating potential security threats through comprehensive threat modeling practices; Solid and demonstrable comprehension of Information Security including OWASP/SANS, Security Test Case development (or mis-use case).
Understanding of security essentials, including networking concepts, defense strategies, and current security technologies; Ability to research and characterize security threats, including identification and classification of application-related threat indicators. How you will grow At Deloitte, we have invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in exactly the same way. So, we provide a range of resources, including live classrooms, team-based learning, and eLearning. Deloitte University (DU): The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad office, is an extension of the DU in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India. Benefits At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you. Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. #CA-LD Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are.
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 301449
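The CVSS requirement in this posting (translating vulnerability scores into severity and risk) can be illustrated with a small helper. The banding below follows the published CVSS v3.x qualitative ratings; the function itself is purely illustrative and not part of the role description.

```python
# Illustrative helper only: maps a CVSS v3.x base score to the standard
# qualitative severity rating commonly used when triaging findings.
def cvss_severity(score: float) -> str:
    if not 0.0 <= score <= 10.0:
        raise ValueError("CVSS base scores range from 0.0 to 10.0")
    if score == 0.0:
        return "None"
    if score <= 3.9:
        return "Low"
    if score <= 6.9:
        return "Medium"
    if score <= 8.9:
        return "High"
    return "Critical"


if __name__ == "__main__":
    assert cvss_severity(9.8) == "Critical"   # e.g. a remote code execution finding
    assert cvss_severity(5.3) == "Medium"     # e.g. a lower-impact injection finding
```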
Posted 4 days ago
3.0 - 5.0 years
3 - 7 Lacs
Bengaluru
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential. We are seeking a skilled and innovative Systems Engineer with a strong focus on the Google Cloud Platform (GCP) to join our team. The ideal candidate will be responsible for designing, implementing, and maintaining cloud-based infrastructure solutions, ensuring optimal performance and scalability for our ongoing projects. Responsibilities Design, configure, and maintain the GCP environment for the data mesh architecture project Develop infrastructure using an Infrastructure-as-Code approach on GCP Create CI/CD pipelines and automation with deployment models using GitHub Actions Collaborate with cross-functional teams to define cloud infrastructure requirements and ensure scalability, security, and reliability Implement continuous integration and deployment pipelines aligned with DevOps standards Document all aspects of GCP infrastructure and deployment processes Troubleshoot and resolve technical issues or performance inefficiencies on the GCP platform Optimize costs and consistently evaluate GCP resources for better performance Ensure compliance with security policies and recommend improvements where needed Perform regular monitoring, maintenance, and upgrades for cloud infrastructure Requirements 3-5 years of experience working with Google Cloud Platform services, including compute, storage, networking, and security Demonstrated background in designing and implementing scalable cloud infrastructure on GCP Proficiency in DevOps practices, CI/CD workflows, and automation using tools such as GitHub Actions Understanding of Infrastructure-as-Code frameworks such as Terraform or similar tools Strong analytical and problem-solving skills to address complex cloud-related challenges effectively Familiarity with cloud performance monitoring, security best practices, and cost optimization techniques We offer Opportunity to work on technical challenges that may impact across geographies Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications Opportunity to share your ideas on international platforms Sponsored Tech Talks & Hackathons Unlimited access to LinkedIn learning solutions Possibility to relocate to any EPAM office for short and long-term projects Focused individual development Benefit package: Health benefits Retirement benefits Paid time off Flexible benefits Forums to explore beyond work passion (CSR, photography, painting, sports, etc.)
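As a small illustration of the kind of GCP scripting this role involves, the sketch below assumes the google-cloud-storage client library and Application Default Credentials; the project ID is a placeholder, and the fields chosen are only an example of a quick placement/cost review.

```python
# Hedged sketch: assumes the google-cloud-storage package is installed and
# Application Default Credentials are configured; the project ID is a placeholder.
from google.cloud import storage


def bucket_inventory(project_id: str) -> list:
    """List each bucket's location and storage class for a quick placement/cost review."""
    client = storage.Client(project=project_id)
    return [
        {"name": b.name, "location": b.location, "storage_class": b.storage_class}
        for b in client.list_buckets()
    ]


if __name__ == "__main__":
    for row in bucket_inventory("my-gcp-project"):  # hypothetical project ID
        print(row)
```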
Posted 4 days ago
0 years
0 - 0 Lacs
Bengaluru
On-site
Key Responsibilities: Develop and configure Amazon Connect components, including contact flows, routing profiles, queues, and agent hierarchies. Integrate Connect with AWS Lambda, Lex, DynamoDB, and S3 to support dynamic, intelligent IVR solutions. Build and manage API integrations with third-party systems such as CRMs, ticketing platforms, and messaging services. Participate in designing scalable, reliable, and secure voice and chat solutions. Troubleshoot and optimize performance issues across contact center systems. Collaborate with cross-functional teams to gather requirements and translate them into technical solutions. Contribute to code reviews, technical documentation, and unit testing. Support DevOps practices, including CI/CD automation and deployment pipelines.
Required Skills: Solid hands-on experience with Amazon Connect contact flows and call routing logic. Proficiency in AWS Lambda (Node.js or Python), CloudWatch, and DynamoDB. Experience integrating Lex bots for IVR and chat automation. Strong understanding of telephony concepts, SIP, and call queues. Knowledge of REST APIs, JSON, and secure authentication methods (OAuth, JWT). Good debugging, problem-solving, and analytical skills. Familiarity with Agile software development and ticketing tools (Jira, Confluence).
Candidate's Profile: Experience: 6-8 years. Mandatory Skills: Node.js, React, Amazon Connect.
Preferred Skills: AWS Certification (Developer Associate or equivalent). Experience with Salesforce, Zendesk, or ServiceNow integrations. Exposure to chatbots, voice assistants, or conversational AI frameworks. Experience with infrastructure as code tools like Terraform or AWS CDK. Familiarity with real-time reporting, analytics, or dashboard tools. Experience working in multi-region cloud environments for high availability.
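To illustrate the Connect-plus-Lambda integration described above, here is a hedged sketch of a Python handler that looks up the caller in DynamoDB and returns flat attributes to the contact flow; the table and attribute names are hypothetical, not taken from the posting.

```python
# Hedged sketch of a Lambda handler invoked from an Amazon Connect contact flow.
# The DynamoDB table ("CustomerProfiles") and attribute names are hypothetical.
import boto3

dynamodb = boto3.resource("dynamodb")
table = dynamodb.Table("CustomerProfiles")


def lambda_handler(event, context):
    # Connect passes contact metadata under Details.ContactData.
    phone = event["Details"]["ContactData"]["CustomerEndpoint"]["Address"]
    item = table.get_item(Key={"phone_number": phone}).get("Item") or {}
    # Connect expects a flat map of key/value pairs it can use as contact attributes.
    return {
        "customerName": item.get("name", "unknown"),
        "accountTier": item.get("tier", "standard"),
        "isKnownCaller": "true" if item else "false",
    }
```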
Posted 4 days ago
2.0 years
10 - 10 Lacs
Bengaluru
On-site
Location(s): Quay Building 8th Floor, Bagmane Tech Park, Bengaluru, IN Line Of Business: Data Estate(DE) Job Category: Engineering & Technology Experience Level: Experienced Hire At Moody's, we unite the brightest minds to turn today’s risks into tomorrow’s opportunities. We do this by striving to create an inclusive environment where everyone feels welcome to be who they are-with the freedom to exchange ideas, think innovatively, and listen to each other and customers in meaningful ways. If you are excited about this opportunity but do not meet every single requirement, please apply! You still may be a great fit for this role or other open roles. We are seeking candidates who model our values: invest in every relationship, lead with curiosity, champion diverse perspectives, turn inputs into actions, and uphold trust through integrity. Skills and Competencies Proficiency in Kubernetes and Amazon EKS (2+ years required): Essential for managing containerized applications and ensuring high availability and security in cloud-native environments. Strong expertise in AWS serverless technologies (required): Including Lambda, API Gateway, EventBridge, and Step Functions, to build scalable and cost-efficient solutions. Hands-on experience with Terraform (2+ years required): Critical for managing Infrastructure as Code (IaC) across multiple environments, ensuring consistency and repeatability. CI/CD pipeline development using GitHub Actions (required): Necessary for automating deployments and supporting agile development practices. Scripting skills in Python, Bash, or PowerShell (required): Enables automation of operational tasks and enhances infrastructure management capabilities. Experience with Databricks and Apache Kafka (preferred): Valuable for teams working with data pipelines, MLOps workflows, and event-driven architectures. Education Bachelor’s degree in Computer Science or equivalent experience Responsibilities Design, automate, and manage scalable cloud infrastructure using Kubernetes, AWS, Terraform, and CI/CD pipelines . Design and manage cloud-native infrastructure using container orchestration platforms, ensuring high availability, scalability, and security across environments. Implement and maintain Infrastructure as Code (IaC) using tools like Terraform to provision and manage multi-environment cloud resources consistently and efficiently. Develop and optimize continuous integration and delivery (CI/CD) pipelines to automate application and infrastructure deployments, supporting agile development cycles. Monitor system performance and reliability by configuring observability tools for logging, alerting, and metrics collection, and proactively address operational issues. Collaborate with cross-functional teams to align infrastructure solutions with application requirements, ensuring seamless deployment and performance optimization. Document technical processes and architectural decisions through runbooks, diagrams, and knowledge-sharing resources to support operational continuity and team onboarding. About the team Our Data Estate DevOps team is responsible for enabling the scalable, secure, and automated infrastructure that powers Moody’s enterprise data platform. We ensure the seamless deployment, monitoring, and performance of data pipelines and services that deliver curated, high-quality data to internal and external consumers. 
We contribute to Moody’s by: Accelerating data delivery and operational efficiency through automation, observability, and infrastructure-as-code practices that support near real-time data processing and remediation. Supporting data integrity and governance by enabling traceable, auditable, and resilient systems that align with regulatory compliance and GenAI readiness. Empowering innovation and analytics by maintaining a modular, interoperable platform that integrates internal and third-party data sources for downstream research models, client workflows, and product applications. By joining our team, you will be part of exciting work in cloud-native DevOps, data engineering, and platform automation, supporting global data operations across 29 countries and contributing to Moody’s mission of delivering integrated perspectives on risk and growth. Moody’s is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability, protected veteran status, sexual orientation, gender expression, gender identity or any other characteristic protected by law. Candidates for Moody's Corporation may be asked to disclose securities holdings pursuant to Moody’s Policy for Securities Trading and the requirements of the position. Employment is contingent upon compliance with the Policy, including remediation of positions in those holdings as necessary.
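As a small illustration of the serverless, event-driven patterns this team works with, the sketch below publishes a custom EventBridge event with boto3; the source and detail-type values are assumptions, not taken from the posting.

```python
# Hedged sketch: the event bus, source, and detail-type values are assumptions,
# not taken from the posting; boto3 is assumed available in the runtime.
import json

import boto3

events = boto3.client("events")


def publish_pipeline_event(dataset: str, status: str) -> None:
    """Emit a custom EventBridge event that rules can route to Step Functions or Lambda."""
    events.put_events(
        Entries=[
            {
                "Source": "data-platform.pipelines",     # hypothetical source
                "DetailType": "PipelineRunStateChange",   # hypothetical detail type
                "Detail": json.dumps({"dataset": dataset, "status": status}),
                "EventBusName": "default",
            }
        ]
    )
```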
Posted 4 days ago
0 years
2 - 9 Lacs
Bengaluru
On-site
Organization: At CommBank, we never lose sight of the role we play in other people’s financial wellbeing. Our focus is to help people and businesses move forward to progress. To make the right financial decisions and achieve their dreams, targets, and aspirations. Regardless of where you work within our organisation, your initiative, talent, ideas, and energy all contribute to the impact that we can make with our work. Together we can achieve great things. Role: Software Engineer Location: Bangalore-Manyata Tech Park (Hybrid) Business & Team: Enterprise Services (ES) is responsible for the world leading application of technology and operations across every aspect of CommBank, from innovative product platforms for our customers to essential tools within our business. We also use technology to drive efficient and timely processing, an essential component of great customer service. CommBank is recognised as leading the industry in IT and operations with its world-class platforms and processes, agile IT infrastructure, and innovation in everything from payments to internet banking and mobile apps. Impact & contribution: The Software Engineer will contribute towards building a best-in-class centralised reporting platform. You will be involved from requirements, design, development, testing, CI/CD, and implementation. Roles & Responsibilities: Have a passion for solving problems using software; Demonstrated experience operating within a mature SRE capability; Good experience in software development within an agile/DevOps environment; Ready to execute state-of-the-art coding practices, driving high-quality outcomes to solve core business objectives and minimise risks; Capable of creating both technology blueprints and engineering roadmaps for a multi-year document management transformational journey; Able to lead and drive a culture where quality, excellence and openness are championed; Constantly think outside the box and break boundaries to solve complex problems. Essential Skills: Experience in scripting languages such as Bash, PowerShell or any other structured language; Experience in AWS cloud and cloud patterns (AWS and Kubernetes certifications desirable); Experience in on-prem to cloud interoperability desirable; database knowledge; APIs; serverless and microservice architecture; GitHub, GitHub Actions, TeamCity; Kubernetes, Helm, EKS; architecture principles; experience maintaining monitoring solutions and CI/CD pipelines; AWS CodePipeline; web services, REST/SOAP/XML and API/service-based testing using tools such as Postman and Pact (an illustrative test sketch follows this posting); Selenium, SonarQube, Wiz, Splunk, Observe, PagerDuty, Prometheus/Grafana, Terraform; Atlassian Suite (JIRA, Confluence); OpenText knowledge – Documentum and InfoArchive desirable but not mandatory. Education Qualifications: Bachelor’s or Master’s degree in Engineering in Computer Science/Information Technology. If you're already part of the Commonwealth Bank Group (including Bankwest, x15ventures), you'll need to apply through Sidekick to submit a valid application. We’re keen to support you with the next step in your career. We're aware of some accessibility issues on this site, particularly for screen reader users. We want to make finding your dream job as easy as possible, so if you require additional support please contact HR Direct on 1800 989 696. Advertising End Date: 26/06/2025
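For the API/service-based testing mentioned in the essential skills, here is a hedged pytest-style sketch of a contract check; the endpoint URL and response fields are hypothetical, and pytest with the requests library is assumed to be installed.

```python
# Hedged sketch of an API-level contract check (pytest and requests assumed
# installed); the endpoint URL and response fields are hypothetical.
import requests

BASE_URL = "https://api.example.internal/reports"  # placeholder endpoint


def test_document_listing_returns_expected_shape():
    resp = requests.get(f"{BASE_URL}/v1/documents", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Assert only the fields downstream consumers rely on, Pact-style.
    assert isinstance(body.get("items"), list)
    for item in body["items"]:
        assert {"id", "title", "createdAt"} <= item.keys()
```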
Posted 4 days ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Summary We are looking for a Senior Tech Lead – Java to drive the architecture, design, and development of scalable, high-performance applications. The ideal candidate will have expertise in Java, Spring Boot, Microservices, and AWS and be capable of leading a team of engineers in building enterprise-grade solutions. Key Responsibilities Lead the design and development of complex, scalable, and high-performance Java applications. Architect and implement Microservices-based solutions using Spring Boot. Optimize and enhance existing applications for performance, scalability, and reliability. Provide technical leadership, mentoring, and guidance to the development team. Work closely with cross-functional teams, including Product Management, DevOps, and QA, to deliver high-quality software. Ensure best practices in coding, testing, security, and deployment. Design and implement cloud-native applications using AWS services such as EC2, Lambda, S3, RDS, API Gateway, and Kubernetes. Troubleshoot and resolve technical issues and system bottlenecks. Stay up to date with the latest technologies and drive innovation within the team. Required Skills & Qualifications 8+ years of experience in Java development. Strong expertise in Spring Boot, Spring Cloud, and Microservices architecture. Hands-on experience with RESTful APIs, event-driven architecture, and messaging systems (Kafka, RabbitMQ, etc.). Deep understanding of database technologies such as MySQL, PostgreSQL, or NoSQL (MongoDB, DynamoDB, etc.). Experience with CI/CD pipelines and DevOps tools (Jenkins, Docker, Kubernetes, Terraform, etc.). Proficiency in AWS cloud services and infrastructure. Strong knowledge of security best practices, performance tuning, and monitoring. Excellent problem-solving skills and ability to work in an Agile environment. Strong communication and leadership skills.
Posted 4 days ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
Hiring is concentrated in India's major tech cities, which are known for their strong tech presence and have a high demand for Terraform professionals.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
One workflow fundamental that interview preparation often covers is the difference between the terraform plan and apply commands: plan previews the changes Terraform would make without modifying anything, while apply executes them.
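A minimal sketch of that plan-then-apply workflow, assuming the terraform CLI is installed on PATH and the configuration directory has already been initialised; the directory name is a placeholder.

```python
# Minimal sketch: assumes the terraform CLI is on PATH and the working
# directory already holds initialised configuration (terraform init has run).
import subprocess


def plan_and_apply(workdir: str) -> None:
    # `plan -out` writes the proposed changes to a file without touching infrastructure.
    subprocess.run(["terraform", "plan", "-out=tfplan"], cwd=workdir, check=True)
    # Applying a saved plan executes exactly the previewed changes, with no re-planning.
    subprocess.run(["terraform", "apply", "tfplan"], cwd=workdir, check=True)


if __name__ == "__main__":
    plan_and_apply("./infra")  # hypothetical configuration directory
```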
As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!