2.0 - 4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Cohesity is the leader in AI-powered data security. Over 13,600 enterprise customers, including over 85 of the Fortune 100 and nearly 70% of the Global 500, rely on Cohesity to strengthen their resilience while providing Gen AI insights into their vast amounts of data. Formed from the combination of Cohesity with Veritas' enterprise data protection business, the company's solutions secure and protect data on-premises, in the cloud, and at the edge. Backed by NVIDIA, IBM, HPE, Cisco, AWS, Google Cloud, and others, Cohesity is headquartered in Santa Clara, CA, with offices around the globe. We've been named a Leader by multiple analyst firms and have been globally recognized for Innovation, Product Strength, Simplicity in Design, and our culture.
Want to join the leader in AI-powered data security? Passionate about defending the world's data? Join Cohesity! Our passionate and highly skilled engineering team is proficient in building comprehensive data protection solutions that protect the data of large enterprise customers across various on-premises and cloud environments. As a developer, you will focus on developing and enhancing the deployment and upgrade experience for large NetBackup deployments involving multiple hosts, making it seamless.
How You'll Spend Your Time Here
Collaborate with stakeholders and team members to understand customer requirements and use cases.
Brainstorm, design, and implement robust and scalable deployment automation solutions, ensuring timely delivery as per release milestones.
Ensure high-quality output with diligent code reviews, thorough unit/automation testing, and stakeholder demos.
Analyze, troubleshoot, and resolve complex issues found during internal testing and customer usage.
WE'D LOVE TO TALK TO YOU IF YOU HAVE MANY OF THE FOLLOWING:
Solid understanding of Windows/Linux operating systems and networking fundamentals.
Proficiency and hands-on development experience (2 to 4 years) in Java, network programming, RESTful web services, and exposure to Python.
Exposure to Ansible and Terraform.
Strong coding, analytical, debugging, and troubleshooting skills.
Understanding of cloud, data security, management, and protection concepts is a big plus.
Highly motivated and passionate problem-solver who can dive deep to solve complex problems/issues and build quality products.
Strong collaborator with great communication skills.
Data Privacy Notice For Job Candidates: For information on personal data processing, please see our Privacy Policy.
In-Office Expectations: Cohesity employees who are within a reasonable commute (e.g. within a forty-five (45) minute average travel time) work out of our core offices 2-3 days a week of their choosing.
Posted 2 days ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
This is an incredible opportunity to be part of a company that has been at the forefront of AI and high-performance data storage innovation for over two decades. DataDirect Networks (DDN) is a global market leader renowned for powering many of the world's most demanding AI data centers, in industries ranging from life sciences and healthcare to financial services, autonomous cars, government, academia, research and manufacturing.
"DDN's A3I solutions are transforming the landscape of AI infrastructure." – IDC
"The real differentiator is DDN. I never hesitate to recommend DDN. DDN is the de facto name for AI storage in high-performance environments." – Marc Hamilton, VP, Solutions Architecture & Engineering, NVIDIA
DDN is the global leader in AI and multi-cloud data management at scale. Our cutting-edge data intelligence platform is designed to accelerate AI workloads, enabling organizations to extract maximum value from their data. With a proven track record of performance, reliability, and scalability, DDN empowers businesses to tackle the most challenging AI and data-intensive workloads with confidence. Our success is driven by our unwavering commitment to innovation, customer-centricity, and a team of passionate professionals who bring their expertise and dedication to every project. This is a chance to make a significant impact at a company that is shaping the future of AI and data management. Our commitment to innovation, customer success, and market leadership makes this an exciting and rewarding role for a driven professional looking to make a lasting impact in the world of AI and data storage.
About the Role
You will lead the design and implementation of scalable, secure, and highly available infrastructure across both cloud and on-premise environments. This role demands a deep understanding of Linux systems, infrastructure automation, and performance tuning, especially in high-performance computing (HPC) setups. As a technical leader, you'll collaborate closely with development, QA, and operations teams to drive DevOps best practices, tool adoption, and overall infrastructure reliability.
Key Responsibilities:
• Design, build, and maintain Linux-based infrastructure across cloud (primarily AWS) and physical data centers.
• Implement and manage Infrastructure as Code (IaC) using tools such as CloudFormation, Terraform, Ansible, and Chef.
• Develop and manage CI/CD pipelines using Jenkins, Git, and Gerrit to support continuous delivery.
• Automate provisioning, configuration, and software deployments with Bash, Python, Ansible, etc.
• Set up and manage monitoring/logging systems like Prometheus, Grafana, and the ELK stack.
• Optimize system performance and troubleshoot critical infrastructure issues related to networking, filesystems, and services.
• Configure and maintain storage and filesystems including ext4, xfs, LVM, NFS, iSCSI, and potentially Lustre.
• Manage PXE boot infrastructure using Cobbler/Kickstart, and create/maintain custom ISO images.
• Implement infrastructure security best practices, including IAM, encryption, and firewall policies.
• Act as a DevOps thought leader, mentor junior engineers, and recommend tooling and process improvements.
• Maintain clear and concise documentation of systems, processes, and best practices.
• Collaborate with cross-functional teams to ensure reliable and scalable application delivery.
Required Skills & Experience
• 5+ years of experience in DevOps, SRE, or Infrastructure Engineering.
• Deep expertise in Linux system administration, especially around storage, networking, and process control.
• Strong proficiency in scripting (e.g., Bash, Python) and configuration management tools (Chef, Ansible).
• Proven experience in managing on-premise data center infrastructure, including provisioning and PXE boot tools.
• Familiarity with CI/CD systems, Agile workflows, and Git-based source control (Gerrit/GitHub).
• Experience with cloud services, preferably AWS, and hybrid cloud models.
• Knowledge of virtualization (e.g., KVM, Vagrant) and containerization (Docker, Podman, Kubernetes).
• Excellent communication, collaboration, and documentation skills.
Nice to Have
• Hands-on with Lustre or other distributed/parallel filesystems.
• Experience in HPC (High-Performance Computing) environments.
• Familiarity with Kubernetes deployments in hybrid clusters.
Posted 2 days ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role: Senior Databricks Engineer / Databricks Technical Lead / Data Architect
Location: Bangalore, Chennai, Delhi, Pune, Kolkata
Primary Roles And Responsibilities
Develop Modern Data Warehouse solutions using Databricks and the AWS/Azure stack.
Provide solutions that are forward-thinking in the data engineering and analytics space.
Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
Triage issues to find gaps in existing pipelines and fix them.
Work with the business to understand reporting-layer needs and develop data models to fulfill them.
Help junior team members resolve issues and technical challenges.
Drive technical discussions with the client architect and team members.
Orchestrate data pipelines via a scheduler such as Airflow.
Skills And Qualifications
Bachelor's and/or master's degree in computer science or equivalent experience.
Must have 6+ years of total IT experience and 3+ years of experience in data warehouse/ETL projects.
Deep understanding of Star and Snowflake dimensional modelling.
Strong knowledge of Data Management principles.
Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
Hands-on experience in SQL, Python and Spark (PySpark).
Must have experience with the AWS/Azure stack.
Desirable: ETL with batch and streaming (Kinesis).
Experience in building ETL / data warehouse transformation processes.
Experience with Apache Kafka for streaming data / event-based data.
Experience with other open-source big data products, including Hadoop (Hive, Pig, Impala).
Experience with open-source non-relational / NoSQL data repositories (MongoDB, Cassandra, Neo4j).
Experience working with structured and unstructured data, including imaging and geospatial data.
Experience working in a DevOps environment with tools such as Terraform, CircleCI, and Git.
Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning and troubleshooting.
Databricks Certified Data Engineer Associate/Professional certification (desirable).
Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects.
Experience working in Agile methodology.
Strong verbal and written communication skills.
Strong analytical and problem-solving skills with a high attention to detail.
Mandatory Skills: Python / PySpark / Spark with Azure/AWS Databricks
Skills: Neo4j, Pig, MongoDB, PL/SQL, architect, Terraform, Hadoop, PySpark, Impala, Apache Kafka, ADFS, ETL, data warehouse, Spark, Azure, Databricks, RDBMS, Cassandra, AWS, Unix shell scripting, CircleCI, Python, Azure Synapse, Hive, Git, Kinesis, SQL
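Illustration only: the pipeline work described in this posting (PySpark transformations feeding Delta tables for a reporting layer) tends to follow the pattern sketched below. The paths, column names, and aggregation are assumptions invented for a runnable example, not part of the role, and writing Delta format assumes a Spark environment with the Delta Lake libraries available (as on Databricks).

```python
# Minimal PySpark sketch of a batch transformation step; names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_load").getOrCreate()

# Read raw landing data (CSV for simplicity; in Databricks this would typically
# be a Delta table or a cloud object storage path).
raw = spark.read.option("header", True).csv("/mnt/landing/orders/")

# Basic cleansing plus a star-schema style aggregation for the reporting layer.
daily_sales = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("order_ts").alias("order_date"), "region")
    .agg(F.sum("amount").alias("total_amount"), F.count("*").alias("order_count"))
)

# Write to the curated zone in Delta format, partitioned for downstream BI reads.
daily_sales.write.format("delta").mode("overwrite").partitionBy("order_date").save(
    "/mnt/curated/daily_sales"
)
```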
Posted 2 days ago
40.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Description
Analyze, design, develop, troubleshoot and debug software programs for commercial or end-user applications. Write code, complete programming and perform testing and debugging of applications.
Career Level - IC3
Responsibilities
As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products.
Responsibilities
Work with the team to develop and maintain full-stack SaaS solutions.
Collaborate with engineering and product teams, contribute to the definition of specifications for new features, and own the development of those features.
Define and implement web services and the application backend microservices.
Implement and/or assist with the web UI/UX development.
Be a champion for cloud-native best practices.
Have a proactive mindset about bug fixes, solving bottlenecks and addressing performance issues.
Maintain code quality, organization, and automation.
Ensure the testing strategy is followed within the team.
Support the services you build in production.
Essential Skills And Background
Expert knowledge of Java.
Experience with microservice development at scale.
Experience working with Kafka.
Experience with automated test frameworks at the unit, integration and acceptance levels.
Use of source code management systems such as Git.
Preferred Skills And Background
Knowledge of issues related to scalable, fault-tolerant architectures.
Knowledge of Python.
Experience with SQL and RDBMS (Oracle and/or MySQL preferred).
Experience deploying applications in Kubernetes with Helm.
Experience with DevOps tools such as Prometheus and Grafana.
Experience in Agile development methodology.
Experience with Terraform is preferred.
Use of build tools like Gradle and Maven.
Qualifications
Career Level - IC3
About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 days ago
0 years
0 Lacs
Greater Chennai Area
On-site
Job Responsibilities:
The candidate should be an IaC (Infrastructure as Code) developer for this role.
The candidate should have exposure to creating/updating AWS services (S3, EC2, SQS, CloudFormation, Lambda, KMS, ECS, ECR, API Gateway, Secrets Manager, etc.) using the CDK (Cloud Development Kit).
The candidate should know GitHub Actions and Python.
Required Skills:
CDK
CloudFormation
Lambda
CodePipeline
SQS
AWS (EC2, KMS, Secrets Manager, SSM, etc.)
GitHub
Python
Terraform
Nice to Have:
Shell scripting
Strong knowledge of CI/CD
Experience with container orchestration
Development skills in JavaScript, TypeScript, and Python
GitHub Actions
Knowledge of SQL and NoSQL databases
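As a rough illustration of the CDK workflow this posting describes, the sketch below defines a small stack in Python with aws-cdk-lib (CDK v2). The resource names are hypothetical, and it assumes the CDK CLI plus the `aws-cdk-lib` and `constructs` packages are installed; `cdk synth` / `cdk deploy` would then generate and provision the CloudFormation stack.

```python
# Minimal CDK v2 sketch: one S3 bucket and one SQS queue in a single stack.
from aws_cdk import App, Stack, aws_s3 as s3, aws_sqs as sqs
from constructs import Construct


class IngestStack(Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)
        # Hypothetical resources of the kind the role describes provisioning.
        s3.Bucket(self, "LandingBucket", versioned=True)
        sqs.Queue(self, "IngestQueue")


app = App()
IngestStack(app, "IngestStack")
app.synth()
```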
Posted 2 days ago
5.0 years
0 Lacs
Greater Chennai Area
Remote
Your work days are brighter here.
At Workday, it all began with a conversation over breakfast. When our founders met at a sunny California diner, they came up with an idea to revolutionize the enterprise software market. And when we began to rise, one thing that really set us apart was our culture, a culture driven by our value of putting our people first. And ever since, the happiness, development, and contribution of every Workmate is central to who we are. Our Workmates believe a healthy, employee-centric, collaborative culture is the essential mix of ingredients for success in business. That's why we look after our people, communities and the planet while still being profitable. Feel encouraged to shine, however that manifests: you don't need to hide who you are. You can feel the energy and the passion; it's what makes us unique. Inspired to make a brighter work day for all and transform with us to the next stage of our growth journey? Bring your brightest version of you and have a brighter work day here.
At Workday, we value our candidates' privacy and data security. Workday will never ask candidates to apply to jobs through websites that are not Workday Careers. Please be aware of sites that may ask you to input your data in connection with a job posting that appears to be from Workday but is not. In addition, Workday will never ask candidates to pay a recruiting fee, or pay for consulting or coaching services, in order to apply for a job at Workday.
About The Team
The Database Engineering team at Workday designs, builds, develops, maintains, and supervises database infrastructure, ensuring that all of Workday's data-related needs are met with dedication and scale, while providing the high availability that our customers expect from Workday. We are a fast-paced and diverse team of database specialists and software engineers responsible for designing, automating, managing, and running databases on private and public cloud platforms. We are looking for individuals who have strong experience in backend development specializing in database-as-a-service, with deep experience in open-source database technologies like MySQL, PostgreSQL, CloudSQL and other cloud-native database technologies. This role will suit someone who is adaptable, flexible, and able to succeed within an open, collaborative peer environment. We would love to hear from you if you have hands-on experience in designing, developing, and managing enterprise-level database systems with complex interdependencies and a key focus on high availability, clustering, security, performance, and scalability!
Our team is the driving force behind all Workday operations, providing crucial support for all Lifecycle Engineering Operations. We ensure that Workday's maintenance and releases proceed without a hitch and are at the forefront of accelerating the transition to the Public Cloud. We enable Workday's customer success: 60% of Fortune 500 companies, 8,000+ customers, 55M+ workers.
About The Role
Are you passionate about database technologies? Do you love to solve complex, large-scale database challenges in the world today using code and as a service? If yes, then read on!
This position is responsible for managing and monitoring Workday's production database infrastructure. Focus on automation to improve availability and scalability in our production environments. Work with developers to improve database resiliency and improve/implement auto-remediation techniques. Provide support for large-scale database instances across production, non-production and development environments. Serve in a rotational on-call and weekly maintenance schedule supporting database infrastructure.
About You
Basic Qualifications:
5+ years of experience in managing and automating mission-critical production workloads on MySQL, PostgreSQL, CloudSQL and other cloud-native databases.
Hands-on experience with at least one cloud technology: AWS, GCP and/or Azure.
Experience managing clustered, highly available database services deployed on different flavors of Linux.
Experience in backend development using modern programming languages (Python, Golang).
Bachelor's degree in a computer-related field or equivalent work experience.
Other Qualifications:
Knowledge of automation tools such as Terraform, Chef, GitHub, JIRA, Confluence and Ansible.
Working experience in modern DevOps technologies and container orchestration (Kubernetes, Docker), service deployment, monitoring and scaling.
Strong scripting experience in multiple languages such as shell, Python, Ruby, etc.
Experience with database architecture, design, replication, clustering, and HA/DR.
Strong analytical, debugging, and interpersonal skills.
Self-starter, highly motivated, with the ability to learn quickly.
Excellent team player with strong collaboration, analytical, verbal, and written communication skills.
Our Approach to Flexible Work
With Flex Work, we're combining the best of both worlds: in-person time and remote. Our approach enables our teams to deepen connections, maintain a strong community, and do their best work. We know that flexibility can take shape in many ways, so rather than a number of required days in-office each week, we simply spend at least half (50%) of our time each quarter in the office or in the field with our customers, prospects, and partners (depending on role). This means you'll have the freedom to create a flexible schedule that caters to your business, team, and personal needs, while being intentional to make the most of time spent together. Those in our remote "home office" roles also have the opportunity to come together in our offices for important moments that matter.
Are you being referred to one of our roles? If so, ask your connection at Workday about our Employee Referral process!
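A hedged sketch of the kind of automation this role describes (monitoring production MySQL from Python): the host, credentials, lag threshold, and use of the PyMySQL client are assumptions for illustration, and the `SHOW REPLICA STATUS` column names shown apply to MySQL 8.0.22+ (older versions use `SHOW SLAVE STATUS` with different names).

```python
# Illustrative replica health check; not a production monitoring agent.
import pymysql

conn = pymysql.connect(
    host="replica.example.internal",  # hypothetical replica host
    user="monitor",
    password="***",
    cursorclass=pymysql.cursors.DictCursor,
)
try:
    with conn.cursor() as cur:
        cur.execute("SHOW REPLICA STATUS")
        status = cur.fetchone()
        if status is None:
            print("This server is not configured as a replica")
        else:
            lag = status.get("Seconds_Behind_Source")
            io_ok = status.get("Replica_IO_Running") == "Yes"
            sql_ok = status.get("Replica_SQL_Running") == "Yes"
            healthy = io_ok and sql_ok and lag is not None and lag < 30
            print(f"replication healthy={healthy}, lag={lag}s")
finally:
    conn.close()
```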
Posted 2 days ago
40.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Job Description
Analyze, design, develop, troubleshoot and debug software programs for commercial or end-user applications. Write code, complete programming and perform testing and debugging of applications.
Career Level - IC3
Responsibilities
As a member of the software engineering division, you will perform high-level design based on provided external specifications. Specify, design and implement minor changes to existing software architecture. Build highly complex enhancements and resolve complex bugs. Build and execute unit tests and unit plans. Review integration and regression test plans created by QA. Communicate with QA and porting engineering as necessary to discuss minor changes to product functionality and to ensure quality and consistency across specific products.
Responsibilities
Work with the team to develop and maintain full-stack SaaS solutions.
Collaborate with engineering and product teams, contribute to the definition of specifications for new features, and own the development of those features.
Define and implement web services and the application backend microservices.
Implement and/or assist with the web UI/UX development.
Be a champion for cloud-native best practices.
Have a proactive mindset about bug fixes, solving bottlenecks and addressing performance issues.
Maintain code quality, organization, and automation.
Ensure the testing strategy is followed within the team.
Support the services you build in production.
Essential Skills And Background
Expert knowledge of Java.
Experience with microservice development at scale.
Experience working with Kafka.
Experience with automated test frameworks at the unit, integration and acceptance levels.
Use of source code management systems such as Git.
Preferred Skills And Background
Knowledge of issues related to scalable, fault-tolerant architectures.
Knowledge of Python.
Experience with SQL and RDBMS (Oracle and/or MySQL preferred).
Experience deploying applications in Kubernetes with Helm.
Experience with DevOps tools such as Prometheus and Grafana.
Experience in Agile development methodology.
Experience with Terraform is preferred.
Use of build tools like Gradle and Maven.
Qualifications
Career Level - IC3
About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all.
Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.
We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.
Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
Posted 2 days ago
5.0 years
0 Lacs
India
Remote
Job Title: DevOps Engineer
Experience: 5+ years
Type: Contract (short term)
Location: Remote
Work Timing: UAE time zone
Job Description
We are seeking a skilled and motivated DevOps Engineer to join our team on a short-term contract basis. You'll play a critical role in automating and streamlining operations, building and maintaining tools for deployment and monitoring, and ensuring the reliability and performance of our environments.
Responsibilities:
Automate infrastructure using tools like Terraform, Ansible, or CloudFormation.
Collaborate across teams to ensure scalability, performance, and availability.
Monitor system performance and troubleshoot issues across application, database, and infrastructure layers.
Implement and manage container orchestration tools like Kubernetes or Docker Swarm.
Follow and enforce security best practices in CI/CD and infrastructure processes.
Manage and maintain cloud infrastructure on AWS, Azure, or GCP.
Develop and maintain scripts/tools to enhance operational efficiency.
Design, implement and maintain CI/CD pipelines for multiple applications.
Skills and Requirements:
CI/CD tools: Jenkins, GitLab CI, Azure DevOps, CircleCI
Containers & orchestration: Docker, Kubernetes
Cloud: AWS, Azure, GCP
IaC & automation: Terraform, Ansible, CloudFormation
Scripting: Bash, Python, Groovy
Monitoring: Prometheus, Grafana, ELK Stack, Splunk
Configuration management: Puppet, Chef
OS & networking: Linux, system administration, security
Agile & DevOps practices
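To give a flavor of the monitoring/tooling side of this role, below is a minimal custom exporter using the `prometheus_client` Python library. The metric name, port, and the stand-in collection function are assumptions; a real exporter would pull the value from an actual system and be scraped by an existing Prometheus server.

```python
# Tiny custom Prometheus exporter sketch; values are synthetic placeholders.
import random
import time

from prometheus_client import Gauge, start_http_server

QUEUE_DEPTH = Gauge("app_queue_depth", "Number of jobs waiting in the work queue")


def collect_queue_depth() -> int:
    # Placeholder for a real check (database query, API call, etc.).
    return random.randint(0, 100)


if __name__ == "__main__":
    start_http_server(9200)  # Prometheus would scrape http://host:9200/metrics
    while True:
        QUEUE_DEPTH.set(collect_queue_depth())
        time.sleep(15)
```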
Posted 2 days ago
6.0 years
0 Lacs
India
Remote
Location: Any metropolitan city
Experience: 6+ years
Key Focus: Java, DevOps, CI/CD, Docker, Kubernetes, Terraform
About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.
We are looking for a Senior DevOps Engineer (with Java programming experience) to join our team, enabling our customer success.
Requirements
DevOps with development experience; at least 6+ years of solid experience.
BS or MS in Computer Science, Software Engineering, or a related technical field, or equivalent practical experience.
2+ years of experience as a full-stack engineer with strong proficiency in Java for backend development, Maven, and OpenRewrite for automated code upgrades.
Expertise in building, architecting, and deploying scalable, secure, and high-performance full-stack applications.
Some experience designing and implementing RESTful APIs and microservices using Spring Boot.
Experience designing infrastructure on AWS, including considerations for scalability, resilience, and cost-efficiency.
Solid knowledge of core AWS services (e.g. EC2, S3, RDS, Lambda, API Gateway, CloudFormation, ECS/EKS) and deploying applications in cloud environments.
Experience with DevOps technologies, including automation, CI/CD, and configuration management.
Hands-on experience with IaC, preferably using Terraform.
Very good knowledge of container technologies like Docker and Kubernetes.
Knowledge of CI/CD pipelines, preferably using Jenkins and Argo Workflows, and experience automating build and deployment processes.
Agile mindset, with a strong ability to collaborate in a cross-functional environment and mentor junior engineers.
Excellent communication skills, with a commitment to clear, transparent, and proactive collaboration.
Fluency in English (mandatory).
Posted 2 days ago
6.0 years
0 Lacs
India
Remote
Location: Any metropolitan city
Experience: 6+ years
Key Focus: Python, PostgreSQL, FastAPI, DevOps, CI/CD, AWS, Kubernetes, and Terraform
About Us
MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.
We are looking for a Senior Python AWS Developer to join our team, enabling our customer success.
Key Responsibilities
Participate in solution investigation, estimations, planning, and alignment with other teams.
Design, implement, and deliver new features for the Personalization Engine.
Partner with the product and design teams to understand user needs and translate them into high-quality content solutions and features.
Promote and implement test automation (e.g., unit tests, integration tests).
Build and maintain CI/CD pipelines for continuous integration, development, testing, and deployment.
Deploy applications on the cloud using technologies such as Docker, Kubernetes, AWS, and Terraform.
Work closely with the team in an agile and collaborative environment; this will involve code reviews, pair programming, knowledge sharing, and incident coordination.
Maintain existing applications and reduce technical debt.
Qualifications (must have):
6+ years of experience in software development is preferred.
Experience with Python.
Experience with PostgreSQL.
Good understanding of data structures and clean code.
Able to understand and apply design patterns.
Interest in the DevOps philosophy.
Experience with FastAPI.
Willing to learn on the job.
Experience with relational and non-relational databases.
Empathetic and able to easily build relationships.
Good verbal and written communication skills.
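As a small illustration of the FastAPI work mentioned above, the sketch below defines one typed endpoint with a Pydantic response model. The route, fields, and data are invented for the example and are not the actual Personalization Engine API; assuming the file is saved as `main.py`, it can be served locally with `uvicorn main:app --reload`.

```python
# Minimal FastAPI sketch: one read endpoint returning a typed response model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Recommendation(BaseModel):
    user_id: str
    items: list[str]


@app.get("/recommendations/{user_id}", response_model=Recommendation)
async def get_recommendations(user_id: str) -> Recommendation:
    # In a real service this would query PostgreSQL or a feature store.
    return Recommendation(user_id=user_id, items=["item-1", "item-2"])
```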
Posted 2 days ago
0 years
0 Lacs
Thiruvananthapuram, Kerala, India
On-site
We are seeking an experienced Azure DevOps Engineer to manage and optimize our cloud infrastructure, CI/CD pipelines, version control, and platform automation. The ideal candidate will be responsible for ensuring efficient deployments, security compliance, and operational reliability. This role requires collaboration with development, QA, and DevOps teams to enhance software delivery and infrastructure management.
Key Responsibilities:
1. Infrastructure Management
• Design and manage Azure-based infrastructure for scalable and resilient applications.
• Implement and manage Azure Container Apps to support microservices-based architecture.
2. CI/CD Pipelines
• Build and maintain CI/CD pipelines using GitHub Actions or equivalent tools.
• Automate deployment workflows to ensure quick and reliable application delivery.
3. Version Control and Collaboration
• Manage GitHub repositories, branching strategies, and pull request workflows.
• Ensure repository compliance and enforce best practices for source control.
4. Platform Automation
• Develop scripts and tooling to automate repetitive tasks and improve efficiency.
• Use Infrastructure as Code (IaC) tools like Terraform or Bicep for resource provisioning.
5. Monitoring and Optimization
• Set up monitoring and alerting for platform reliability using Azure Monitor and Application Insights.
• Analyze performance metrics and implement optimizations for cost and efficiency improvements.
6. Collaboration and Support
• Work closely with development, DevOps, and QA teams to streamline deployment processes.
• Troubleshoot and resolve issues in production and non-production environments.
7. GitHub Management
• Manage GitHub repositories, including permissions, branch policies, and pull request workflows.
• Implement GitHub Actions for automated testing, builds, and deployments.
• Enforce security compliance through GitHub Advanced Security features (e.g., secret scanning, Dependabot).
• Design and implement branching strategies to support collaborative software development.
• Maintain GitHub templates for issues, pull requests, and contributing guidelines.
• Monitor repository usage, optimize workflows, and ensure scalability of GitHub services.
8. Operational Support
• Maintain pipeline health and resolve incidents related to deployment and infrastructure.
• Address defects, validate certificates, and ensure platform consistency.
• Resolve issues with offline services, manage private runners, and apply security patches.
• Monitor page performance using tools like Lighthouse.
• Manage server maintenance, repository infrastructure, and access control.
9. Pipeline Development
• Develop reusable workflows for builds, deployments, SonarQube integrations, Jira integrations, release notes, notifications, and reporting.
• Implement branching and versioning management strategies.
• Identify pipeline failures and develop automated recovery mechanisms.
• Customize configurations for various projects (Mobile, Leapfrog, AEM/Hybris).
10. Testing Integration
• Implement automated testing, feedback loops, and quality gates.
• Manage SonarQube configurations, rulesets, and runner maintenance.
• Maintain the SonarQube EE deployment in Azure Container Apps.
• Configure and integrate security tools like Dependabot and Snyk with Jira.
11. Work Collaboration Integration
• Integrate JIRA for automatic ticket generation, story validation, and release management.
• Configure Teams for API management, channels, and chat management.
• Set up email alerting mechanisms.
• Support IFS/CR process integration.
Required Skills & Qualifications:
• Cloud Platforms: Azure (Azure Container Apps, Azure Monitor, Application Insights).
• CI/CD Tools: GitHub Actions, Terraform, Bicep.
• Version Control: GitHub repository management, branching strategies, pull request workflows.
• Security & Compliance: GitHub Advanced Security, Dependabot, Snyk.
• Automation & Scripting: Terraform, Bicep, shell scripting.
• Monitoring & Performance: Azure Monitor, Lighthouse.
• Testing & Quality Assurance: SonarQube, automated testing.
• Collaboration Tools: JIRA, Teams, email alerting.
Preferred Qualifications:
• Experience in microservices architecture and containerized applications.
• Strong understanding of DevOps methodologies and best practices.
• Excellent troubleshooting skills for CI/CD pipelines and infrastructure issues.
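As one hedged illustration of automating the GitHub repository and branch-policy management listed above, the sketch below applies a protection rule through GitHub's REST API from Python. The organization, repository, and rule values are made up, a personal access token is assumed in the `GITHUB_TOKEN` environment variable, and in practice this might equally be done with Terraform's GitHub provider or the GitHub CLI.

```python
# Illustrative branch-protection call against the GitHub REST API.
import os

import requests

OWNER, REPO, BRANCH = "my-org", "my-service", "main"  # hypothetical names
url = f"https://api.github.com/repos/{OWNER}/{REPO}/branches/{BRANCH}/protection"
headers = {
    "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
    "Accept": "application/vnd.github+json",
}
payload = {
    "required_status_checks": {"strict": True, "contexts": ["build", "sonarqube"]},
    "enforce_admins": True,
    "required_pull_request_reviews": {"required_approving_review_count": 2},
    "restrictions": None,
}
resp = requests.put(url, headers=headers, json=payload, timeout=30)
resp.raise_for_status()
print("Branch protection applied:", resp.status_code)
```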
Posted 2 days ago
3.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Us: Paytm is India's leading mobile payments and financial services distribution company. A pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm's mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.
About the role: As a MySQL database administrator (DBA), you will be responsible for the performance, integrity and security of our databases. You'll be involved in the planning and development of the database, as well as in troubleshooting any issues on behalf of the users.
Requirements:
- 3 to 6 years of experience
- Working knowledge of MySQL, AWS RDS, and AWS Aurora is a must
- Replication
- AWS administration
- User management
- Machine creation (manual or via Terraform)
- AMI creation
- Backup and restoration
Why join us:
● A collaborative, output-driven program that brings cohesiveness across businesses through technology
● Improve the average revenue per user by increasing cross-sell opportunities
● A solid 360 feedback from your peer teams on your support of their goals
● Respect that is earned, not demanded, from your peers and manager
Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 21 mn+ merchants and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
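To illustrate the backup/restoration scripting this role touches on, here is a hedged boto3 sketch that takes a manual RDS snapshot and waits for it to become available. The region and instance identifier are placeholders, AWS credentials are assumed to be configured in the environment, and production backups would more commonly rely on automated snapshots or AWS Backup.

```python
# Illustrative manual RDS snapshot via boto3.
from datetime import datetime, timezone

import boto3

rds = boto3.client("rds", region_name="ap-south-1")

instance_id = "orders-mysql-prod"  # hypothetical instance name
snapshot_id = f"{instance_id}-manual-{datetime.now(timezone.utc):%Y%m%d-%H%M%S}"

rds.create_db_snapshot(
    DBSnapshotIdentifier=snapshot_id,
    DBInstanceIdentifier=instance_id,
)

# Block until the snapshot is available, e.g. before copying it cross-region.
waiter = rds.get_waiter("db_snapshot_available")
waiter.wait(DBSnapshotIdentifier=snapshot_id)
print("Snapshot ready:", snapshot_id)
```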
Posted 2 days ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Linux Administrator
Experience: 6+ years
Location: Chennai
Mandatory: Linux, GCP, AWS
Job Description
Experience:
• 6+ years of experience in cloud security, with a focus on enterprise product software in the cloud.
• At least 3+ years of hands-on experience with major cloud platforms (AWS, Microsoft Azure, or Google Cloud Platform).
• Proven experience with securing enterprise software applications and cloud infrastructures.
• Strong background in securing complex, large-scale software environments with a focus on infrastructure security, data security, and application security.
• Hands-on experience with the OWASP Top 10 and integrating security measures into cloud applications.
• Experience with hybrid cloud environments and securing workloads that span on-premises and public cloud platforms.
Technical Skills:
• In-depth experience with cloud service models (IaaS, PaaS, SaaS) and cloud security tools (e.g., AWS Security Hub, Azure Security Center, GCP Security Command Center).
• Expertise in securing enterprise applications, including web services, APIs, and microservices deployed in the cloud.
• Strong experience with network security, encryption techniques, IAM policies, security automation, and vulnerability management in cloud environments.
• Familiarity with container security (Docker, Kubernetes) and serverless computing security.
• Hands-on experience with Infrastructure as Code (IaC) tools such as Terraform, CloudFormation, or similar tools.
• Knowledge of regulatory compliance requirements such as SOC 2, GDPR, HIPAA, and how they apply to enterprise software hosted in the cloud.
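As a small, hedged example of the kind of security automation this posting implies, the boto3 sketch below lists S3 buckets and flags any without a complete public-access block. It assumes AWS credentials are configured in the environment, and it is only one narrow check of the many a real compliance program (or a tool like AWS Security Hub) would run.

```python
# Illustrative S3 public-access-block audit; one check among many in practice.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        cfg = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        fully_blocked = all(cfg.values())
    except ClientError as err:
        # A bucket with no configuration at all counts as not blocked.
        if err.response["Error"]["Code"] == "NoSuchPublicAccessBlockConfiguration":
            fully_blocked = False
        else:
            raise
    if not fully_blocked:
        print(f"Bucket without full public access block: {name}")
```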
Posted 2 days ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Greetings! Please find the JD below; if you are interested, please share your updated profile as soon as possible to the mail id nitin.1@zensar.com.
Note: Looking for immediate joiners only.
Key Responsibilities:
Develop and maintain test automation frameworks for frontend and backend systems using Java and related tools.
Write and execute automated test scripts for UI, API, and backend services.
Test containerized applications in Docker and Kubernetes environments, and on cloud platforms (AWS, Azure, GCP).
Collaborate with developers and QA team members to identify test requirements and ensure test coverage.
Integrate automated tests into CI/CD pipelines (e.g., Jenkins, GitLab CI).
Analyze test results, identify bugs, and work with the development team to resolve issues.
Maintain and enhance test environments and test data management, and write complex SQL queries.
Participate in Agile/Scrum ceremonies and contribute to sprint planning and retrospectives.
Required Skills and Qualifications:
Bachelor's degree in computer science, engineering, or a related field.
3+ years of experience in test automation using Java.
Strong programming skills in Java, JavaScript, or Python.
Strong knowledge of Selenium WebDriver and TestNG/JUnit.
Experience with REST API testing using Postman or Rest Assured, and JSON/XML.
Familiarity with version control systems (e.g., Git).
Experience with CI/CD tools like Jenkins, Maven, or Gradle.
Solid understanding of software testing principles, including functional, regression, integration, and performance testing.
Excellent problem-solving skills and attention to detail.
Strong communication and collaboration skills.
Familiarity with microservices and container orchestration.
Strong debugging and analytical skills.
Nice-to-Haves:
Experience with AI/ML testing tools.
Experience with BDD frameworks like Cucumber.
Knowledge of cloud platforms (AWS, Azure, GCP).
Knowledge of Infrastructure as Code (Terraform, Ansible).
Familiarity with containerization tools like Docker and Kubernetes.
Exposure to performance testing tools like JMeter, Gatling, or k6.
Exposure to mobile testing (Appium) and Playwright.
Experience conducting security testing using OWASP ZAP or Burp Suite.
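For a flavor of the automated REST API testing described above, here is a minimal sketch in Python using pytest and requests; the posting itself leans toward Java with Rest Assured, so this is illustrative only. The base URL, endpoint, and response fields are invented, and the test would run with `pytest` against a reachable test environment.

```python
# Illustrative functional API test; endpoint and fields are hypothetical.
import requests

BASE_URL = "https://api.example.com"  # assumed test-environment URL


def test_get_order_returns_expected_fields():
    resp = requests.get(f"{BASE_URL}/orders/1001", timeout=10)
    assert resp.status_code == 200
    body = resp.json()
    # Contract fields plus basic data validation.
    assert body["orderId"] == 1001
    assert body["status"] in {"NEW", "SHIPPED", "DELIVERED"}
    assert float(body["total"]) >= 0
```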
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Description "Architect to lead our cloud infrastructure and automation initiatives on Google Cloud Platform (GCP). This pivotal role will be responsible for designing, implementing, and maintaining a robust, secure, and scalable platform that empowers our development teams to deliver high-quality software efficiently. You will be instrumental in driving our DevOps and DevSecOps practices, ensuring a seamless and secure software delivery lifecycle. Responsibilities: Platform Architecture and Design: Define and document the target state architecture for our GCP platform, considering scalability, reliability, security, and cost-effectiveness. Design and implement infrastructure-as-code (IaC) solutions using tools like Terraform or Cloud Deployment Manager. Architect and implement CI/CD pipelines leveraging GCP services and industry best practices. Develop and maintain platform standards, policies, and guidelines. Evaluate and recommend new GCP services and technologies to enhance our platform capabilities. DevOps Leadership and Implementation: Champion and drive the adoption of DevOps principles and practices across development, operations, and security teams. Establish and optimize automated build, test, and deployment processes. Implement robust monitoring, logging, and alerting solutions to ensure platform health and performance. Foster a culture of collaboration, automation, and continuous improvement. DevSecOps Integration: Lead the integration of security practices throughout the software development lifecycle (SDLC). Define and implement security controls within the infrastructure and CI/CD pipelines. Automate security testing and vulnerability management processes. Ensure compliance with relevant security standards and regulations. GCP Infrastructure Build and Management: Lead the provisioning and management of GCP resources, including compute, storage, networking, and databases. Optimize infrastructure for performance, availability, and cost efficiency. Implement disaster recovery and business continuity plans on GCP. Troubleshoot and resolve complex platform and infrastructure issues. Collaboration and Communication: Collaborate effectively with development teams, security engineers, and other stakeholders to understand their needs and provide platform solutions. Communicate technical concepts and solutions clearly and effectively to both technical and non-technical audiences. Provide guidance and mentorship to junior team members. Participate in architectural reviews and provide constructive feedback." Show more Show less
Posted 2 days ago
0.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
Location: Noida Berger Tower, India
Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information and encrypt data to make the connected world more secure.
Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai and Pune, among others. Over 1,800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India's growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace and Digital Identity and Security markets.
Required Skills & Experience
6-9 years of experience in service delivery, technical operations, or DevOps roles.
Hands-on experience with AWS (preferred) and/or Google Cloud Platform (GCP).
Proficiency in scripting/programming with Python, Ruby, Node.js, Java, Scala, or Golang.
Deep experience with infrastructure-as-code tools such as Terraform and Ansible.
Expertise in Docker and Kubernetes in production environments.
Familiarity with monitoring and logging tools such as Datadog, Splunk, or Logstash.
Working knowledge of load balancers/proxies such as HAProxy, NGINX, Apache, Istio, F5, or AWS ELB.
Basic hands-on experience with relational databases like MySQL, PostgreSQL, Oracle, or SQL Server.
Fluency in using Git and modern version control workflows.
Excellent communication skills (written and verbal) are essential for effective collaboration across global teams.
Preferred Skills & Experience
Hands-on experience with J2EE/JVM-based web applications, including JVM tuning and troubleshooting.
Experience designing and deploying automated monitoring and alerting for cloud-native services.
Familiarity with CI/CD pipelines and build tools such as Jenkins, Bamboo, TeamCity, Maven, and Ant, and scripting with Groovy.
Solid understanding of the 12-Factor App methodology and microservices architecture.
Exposure to emerging platforms like Cloud Foundry, OpenShift, and serverless technologies.
At Thales we provide CAREERS and not only jobs. With Thales employing 80,000 employees in 68 countries, our mobility policy enables thousands of employees each year to develop their careers at home and abroad, in their existing areas of expertise or by branching out into new fields. Together we believe that embracing flexibility is a smarter way of working. Great journeys start here, apply now!
Posted 2 days ago
0.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
Bangalore, Karnataka, India | Job ID 763721
Join our Team
Ericsson Enterprise Wireless Solutions (BEWS) is responsible for driving Ericsson's Enterprise Networking and Security business. Our expanding product portfolio covers wide area networks, local area networks, and enterprise security. We are the #1 global market leader in Wireless-WAN enterprise connectivity and are rapidly growing in enterprise Private 5G networks and Secure Access Service Edge (SASE) solutions.
Key Responsibilities
Define and implement model validation processes and business success criteria in data science terms.
Contribute to the architecture and data flow for machine learning models.
Rapidly develop and iterate minimum viable solutions (MVS) that address enterprise needs.
Conduct advanced data analysis and rigorous testing to enhance model accuracy and performance.
Work with Data Architects to leverage existing data models and create new ones as required.
Collaborate with product teams and business partners to industrialize machine learning models into Ericsson's enterprise solutions.
Build MLOps pipelines for continuous integration, continuous delivery, validation, and monitoring of AI/ML models.
Design and implement effective big data storage and retrieval strategies (indexing, partitioning, etc.).
Develop and maintain APIs for AI/ML models and optimize data pipelines.
Lead end-to-end ML projects from conception to deployment.
Stay updated on the latest ML advancements and apply best practices to enterprise AI solutions.
Required Skills & Experience
4-6 years of hands-on experience in machine learning, AI, and data science.
Strong knowledge of ML frameworks (Keras, TensorFlow, Spark ML, etc.).
Proficiency in ML algorithms, deep learning, reinforcement learning (RL), and large language models (LLMs).
Expertise in MLOps, including model lifecycle management and monitoring.
Experience with containerization and orchestration (Docker, Kubernetes, Helm charts).
Hands-on expertise with workflow orchestration tools (Kubeflow, Airflow, Argo).
Strong programming skills in Python and experience with C++, Scala, Java, R.
Experience in API design and development for AI/ML models.
Hands-on knowledge of Terraform for infrastructure automation.
Familiarity with AWS services (Data Lake, Athena, SageMaker, OpenSearch, DynamoDB, Redshift).
Strong understanding of self-hosted deployment of LLMs on AWS.
Experience in RASA, LangChain, LangGraph, LlamaIndex, Django, Open Policy Agent.
Working knowledge of vector databases, knowledge graphs, retrieval-augmented generation (RAG), agents, and agentic mesh architectures.
Expertise in monitoring tools like Datadog for Kubernetes environments.
Ability to document, present, and communicate technical findings to business stakeholders.
Proven ability to contribute to ML forums, patents, and research publications.
(A minimal, illustrative model-training sketch in Python follows this posting.)
Educational Qualifications
B.Tech/B.E. in Computer Science, MCA, or a Master's in Mathematics/Statistics from a top-tier institute.
Join Ericsson and be part of a cutting-edge team that is revolutionizing enterprise AI, 5G, and security solutions, and help shape the future of wireless connectivity!
Why join Ericsson?
At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build solutions never seen before to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next.
What happens once you apply?
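The minimal, illustrative sketch referenced above: a tiny Keras model trained on synthetic data, simply to ground the framework names in the requirements list. The data, architecture, and output file name are placeholders, and it assumes TensorFlow/Keras 2.12 or newer is installed.

```python
# Illustrative Keras training loop on synthetic tabular data.
import numpy as np
import tensorflow as tf

# Synthetic features standing in for real enterprise telemetry data.
X = np.random.rand(1000, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("int32")

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(20,)),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, validation_split=0.2, verbose=0)

# In an MLOps pipeline this artifact would then be versioned and served.
model.save("model.keras")  # native Keras format (TF/Keras 2.12+)
```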
Posted 2 days ago
3.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Tesco India • Bengaluru, Karnataka, India • Hybrid • Full-Time • Permanent • Apply by 26-Nov-2025
About the role
This role is ideal for a proactive go-getter who is eager to drive new technology adoption within the organization. Familiarity with current monitoring and logging tools like New Relic and Splunk is essential. This role will work closely with Infrastructure as Code (IaC) tooling like Terraform and requires a strong understanding of OpenTelemetry standards. The Observability Engineer is a critical role in our organization, dedicated to ensuring the robustness, performance, and scalability of our infrastructure and applications through superior monitoring and observability practices.
What is in it for you
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on the current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.
You will be responsible for
Lead the design and implementation of observability solutions that provide deep insights into application performance, system health, and user experience.
Establish and advocate for observability best practices across engineering teams.
Work closely with the infrastructure teams to automate and optimize infrastructure provisioning and scaling using IaC tools like Terraform. Ensure infrastructure code is tested, reliable, and efficient.
Champion the adoption of OpenTelemetry standards to collect, process, and export telemetry data.
Utilize and integrate monitoring tools like Dynatrace and Splunk to provide thorough insights and analytics.
Drive the evaluation and adoption of new tools and technologies to keep the organization at the forefront of observability and monitoring practices.
Collaborate with various engineering teams to ensure smooth adoption and transition to new technologies.
Analyze existing monitoring and observability practices, identifying areas for improvement or optimization.
(An illustrative OpenTelemetry tracing sketch follows this posting.)
You will need
Foster a culture of continuous learning and improvement within the observability team and across the organization.
Provide leadership, guidance, and mentoring to the observability team.
Foster a collaborative and inclusive environment that encourages innovation and growth.
About us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.
Tesco Technology
Today, our Technology team consists of over 5,000 experts spread across the UK, Poland, Hungary, the Czech Republic, and India. In India, our Technology division includes teams dedicated to Engineering, Product, Programme, Service Desk and Operations, Systems Engineering, Security & Capability, Data Science, and other roles.
At Tesco, our retail platform comprises a wide array of capabilities, value propositions, and products, essential for crafting exceptional retail experiences for our customers and colleagues across all channels and markets. This platform encompasses all aspects of our operations: from identifying and authenticating customers, managing products, pricing, promoting, enabling customers to discover products, facilitating payment, and ensuring delivery. By developing a comprehensive Retail Platform, we ensure that as customer touchpoints and devices evolve, we can consistently deliver seamless experiences. This adaptability allows us to respond flexibly without the need to overhaul our technology, thanks to the capabilities we have built.
At Tesco, inclusion is at the heart of everything we do. We believe in treating everyone fairly and with respect, valuing individuality to create a true sense of belonging. It's deeply embedded in our values: we treat people how they want to be treated. Our goal is to ensure all colleagues feel they can be themselves at work and are supported to thrive. Across the Tesco group, we are building an inclusive workplace that celebrates the diverse cultures, personalities, and preferences of our colleagues, who, in turn, reflect the communities we serve and drive our success.
At Tesco India, we are proud to be a Disability Confident Committed Employer, reflecting our dedication to creating a supportive and inclusive environment for individuals with disabilities. We offer equal opportunities to all candidates and encourage applicants with disabilities to apply. Our fully accessible recruitment process includes reasonable adjustments during interviews - just let us know what you need. We are here to ensure everyone has the chance to succeed.
We believe in creating a work environment where you can thrive both professionally and personally. Our hybrid model offers flexibility - spend 60% of your week collaborating in person at our offices or local sites, and the rest working remotely. We understand that everyone's journey is different, whether you are starting your career, exploring passions, or navigating life changes. Flexibility is core to our culture, and we're here to support you. Feel free to talk to us during your application process about any support or adjustments you may need.
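The illustrative sketch referenced above: manually instrumenting a Python function with the OpenTelemetry SDK and exporting spans to the console. The service name, span name, and attribute are assumptions; a real deployment would swap the console exporter for an OTLP exporter pointed at the chosen backend (for example New Relic, Splunk, or Dynatrace) and likely rely on auto-instrumentation as well.

```python
# Minimal OpenTelemetry tracing sketch using the Python SDK (console exporter).
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "checkout-service"})  # assumed name
)
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer(__name__)


def place_order(order_id: str) -> None:
    with tracer.start_as_current_span("place_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic would run here ...


place_order("o-123")
```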
Posted 2 days ago
0.0 - 5.0 years
0 Lacs
Delhi, Delhi
Remote
Location : Remote Experience : 3-5 years About the Job This is a full-time role for a Senior Backend Developer (SR1) specializing in Node.js . We are seeking an experienced developer with deep JavaScript/TypeScript expertise to lead technical initiatives, design robust architectures, and mentor team members. In this role, you'll provide technical leadership, implement complex features, and drive engineering excellence across projects. A strong emphasis is placed on candidates who not only understand but actively implement best practices in testing and object-oriented design to build highly reliable and maintainable systems. The job location is flexible with preference for the Delhi NCR region. Responsibilities Design and plan efficient solutions for complex problems, ensuring scalability and security, applying principles of robust software design and testability. Independently lead teams or initiatives, ensuring alignment with project goals. Prioritize and maintain quality standards, focusing on performance, security, and reliability, including advocating for and ensuring strong unit and functional test coverage. Identify and resolve complex issues, ensuring smooth project progress. Facilitate discussions to align team members on best practices and standards. Promote continuous improvement through effective feedback and coaching. Guide and mentor team members, providing support for their professional growth. Contribute to talent acquisition and optimize team processes for better collaboration. Lead complex project components from design to implementation. Provide technical project guidance and develop risk mitigation strategies. Drive technical best practices and implement advanced performance optimizations. Design scalable, efficient architectural solutions for backend systems. Propose innovative technological solutions aligned with business strategies. Develop internal training materials and knowledge sharing resources. Requirements Technical Skills Bachelor's or Master's degree in Computer Science, Engineering, or related field. 3-5 years of professional experience in Node.js backend development. Proven experience in designing and implementing comprehensive unit and functional tests for backend applications, utilizing frameworks like Jest, Mocha, Supertest, or equivalent. Solid understanding and practical application of Object-Oriented Design Patterns (e.g., Singleton, Factory, Strategy, Observer, Decorator) in building scalable, flexible, and maintainable Node.js applications. Expert knowledge of advanced debugging techniques (Node Inspector, async hooks, memory leak detection). Mastery of advanced TypeScript patterns including utility types and mapped types. Deep understanding of API security including JWT, OAuth, rate limiting, and CORS implementation. Extensive experience with caching strategies using Redis/Memcached. Proficiency with HTTP caching mechanisms including Cache-Control headers and ETags. Strong knowledge of security protocols including HTTPS, TLS/SSL, and data encryption methods (bcrypt, Argon2). Experience with static analysis tools for code quality and security. Solid understanding of GraphQL fundamentals including queries, mutations, and resolvers. Experience with message brokers like RabbitMQ, Kafka, or NATS for distributed systems. Proficiency with cloud providers (AWS, GCP, Azure) and their core services. Experience with serverless frameworks including AWS Lambda, Google Cloud Functions, or Azure Functions. 
Knowledge of cloud storage and database solutions like DynamoDB, S3, or Firebase. Expertise in logging and monitoring security incidents and system performance. Soft Skills Excellent cross-functional communication skills with ability to translate complex technical concepts. Technical leadership in discussions and decision-making processes. Effective knowledge transfer abilities through documentation and mentoring. Strong mentorship capabilities for junior and mid-level team members. Understanding of broader business strategy and ability to align technical solutions accordingly. Ability to lead complex project components and provide technical guidance. Strong problem-solving skills and systematic approach to troubleshooting. Effective risk assessment and mitigation planning. Collaborative approach to working with product, design, and frontend teams. Proactive communication style with stakeholders and team members. Ability to balance technical debt, feature development, and maintenance needs. Additional Preferred Qualifications Experience with load balancing and horizontal/vertical scaling strategies. Knowledge of database optimization techniques including connection pooling, replication, and sharding. Proficiency with Node.js performance tuning, including streams and async optimizations. Knowledge of advanced access control systems such as Attribute-based access control (ABAC) and OpenID Connect. Experience with CDN configuration and server-side caching strategies. Knowledge of event-driven architecture patterns and Command Query Responsibility Segregation (CQRS). Experience with load testing tools like k6 or Artillery. Familiarity with Infrastructure as Code using Terraform or Pulumi. Contributions to open-source projects or advanced technical certifications. Experience leading major feature implementations or system migrations.
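To make the testing and object-oriented design emphasis above concrete, here is a minimal TypeScript sketch, not taken from the posting, showing a Strategy pattern exercised through Jest unit tests; it assumes Jest with ts-jest is configured, and the retry-policy names and numbers are purely illustrative.

```typescript
// Illustrative only: a Strategy pattern with interchangeable retry policies,
// verified through Jest unit tests (assumes Jest + ts-jest are set up).

export interface RetryStrategy {
  delayMs(attempt: number): number;
}

// Constant delay between attempts.
export class FixedBackoff implements RetryStrategy {
  constructor(private readonly baseMs: number) {}
  delayMs(_attempt: number): number {
    return this.baseMs;
  }
}

// Exponentially growing delay, capped at a maximum.
export class ExponentialBackoff implements RetryStrategy {
  constructor(private readonly baseMs: number, private readonly capMs: number) {}
  delayMs(attempt: number): number {
    return Math.min(this.baseMs * 2 ** attempt, this.capMs);
  }
}

// The consumer depends only on the interface, so policies can be swapped
// freely in configuration or in tests.
export class HttpRetrier {
  constructor(private readonly strategy: RetryStrategy) {}
  schedule(attempt: number): number {
    return this.strategy.delayMs(attempt);
  }
}

describe('HttpRetrier', () => {
  it('uses a constant delay with FixedBackoff', () => {
    const retrier = new HttpRetrier(new FixedBackoff(100));
    expect(retrier.schedule(5)).toBe(100);
  });

  it('grows and caps the delay with ExponentialBackoff', () => {
    const retrier = new HttpRetrier(new ExponentialBackoff(100, 800));
    expect(retrier.schedule(1)).toBe(200);
    expect(retrier.schedule(10)).toBe(800);
  });
});
```

Because callers code against the interface rather than a concrete class, new policies can be added and unit-tested in isolation, which is the kind of maintainability this role emphasizes.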
Posted 2 days ago
0.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
Job ID R-229455 Date posted 06/17/2025 Job Title: Consultant - Platform Engineer Career Level - C3 Introduction to role AstraZeneca is seeking an IT Integration Engineer to join our R&D IT Team. This role involves managing and maintaining our GxP-compliant product integrations, which are part of our Clinical Development Platforms and used across all therapeutic areas. As a member of our OCD Integration team, you will collaborate with Product Leads, DevOps Leads, and technical engineers to drive innovation and efficiency. Accountabilities Key Responsibilities: Build Integration pipelines in alignment with Standard architectural patterns. Create reusable artefacts wherever possible. Build User Guides and Best-practice for Tool adoption and usage. Utilize vendor-based products to build optimal solutions. Provide full-lifecycle tooling guidance and reusable artefacts to product teams. Collaborate with Integration Lead and vendor teams to build and manage integrations. Participate in continuous improvement discussions with business and IT stakeholders as part of the scrum team. Solve day-to-day BAU Tickets. Essential Skills/Experience At least 4-6 years of experience & hands-on in Snaplogic & its associated Snap Packs. At least 2+ years of experience related to API Terminologies and API Integration – preferably Mulesoft. Excellent SQL knowledge pertaining to Relational databases including RDS, Redshift, PostGres, DB2, Microsoft SQL Server, DynamoDB. Good understanding of ETL pipeline design. Adherence to IT Service Delivery Framework on AD activities. Ability to organize and prioritize work, meet deadlines, and work independently. Strong problem-solving skills. Experience with process tools (JIRA, Confluence). Ability to work independently and collaborate with people across the globe with diverse cultures and backgrounds. Experience working in agile teams using methodologies such as SCRUM, Kanban, and SAFe. Experience in integrating CI/CD processes into existing Change & Configuration Management scope (i.e., ServiceNow & Jira). Desirable Skills/Experience ITIL practices (change management, incident and problem management, and others). Experience in GxP or SOx regulated environments. Proficiency in developing, deploying, and debugging cloud-based applications using AWS. Exposure to AWS Cloud Engineering and CI/CD tools (such as Ansible, GitHub Actions, Jenkins). Exposure to Infrastructure As Code (CloudFormation, Terraform). Good understanding of AWS networking and security configuration. Passion for learning, innovating, and delivering valuable software to people. Experience with dynamic dashboards (e.g., PowerBI). Experience in Python programming. - Experience with Snaplogic & Mulesoft integration platforms - Administration (nice to have). When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world. At AstraZeneca, we leverage technology to impact patients and ultimately save lives. We are a purpose-led global organization that pushes the boundaries of science to discover and develop life-changing medicines. 
Our work has a direct impact on patients, transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining cutting-edge science with leading digital technology platforms and data. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise. Ready to make a difference? Apply now! AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
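The integration work above is normally assembled in Snaplogic's or Mulesoft's own designers rather than hand-coded, so there is no code to quote from the posting. Purely as a rough illustration of the extract-transform-load shape it describes, here is a hypothetical TypeScript sketch; it assumes Node 18+ with its built-in fetch and the node-postgres `pg` package, and the /studies endpoint, table, and columns are invented placeholders.

```typescript
// Hypothetical extract-transform-load step; endpoint, table, and columns are placeholders.
import { Client } from 'pg';

interface StudyRecord {
  id: string;
  title: string;
  phase?: string;
}

export async function loadStudies(apiUrl: string, connectionString: string): Promise<number> {
  // Extract: pull records from a REST source.
  const response = await fetch(`${apiUrl}/studies`);
  if (!response.ok) throw new Error(`Extract failed: HTTP ${response.status}`);
  const records = (await response.json()) as StudyRecord[];

  // Transform: drop malformed rows and normalise the phase field.
  const rows = records
    .filter((r) => r.id && r.title)
    .map((r) => ({ id: r.id, title: r.title, phase: r.phase?.toUpperCase() ?? 'UNKNOWN' }));

  // Load: upsert into a relational target.
  const db = new Client({ connectionString });
  await db.connect();
  try {
    for (const row of rows) {
      await db.query(
        `INSERT INTO studies (id, title, phase)
         VALUES ($1, $2, $3)
         ON CONFLICT (id) DO UPDATE SET title = EXCLUDED.title, phase = EXCLUDED.phase`,
        [row.id, row.title, row.phase],
      );
    }
  } finally {
    await db.end();
  }
  return rows.length;
}
```

In a platform such as Snaplogic, the equivalent stages would typically be individual Snaps wired into a pipeline, with the platform handling retries and error routing.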
Posted 2 days ago
1.0 - 3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Career Family - TechOps -CloudOps Role Type - Cloud Operation Engineer - AWS and Azure The opportunity We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments. Your Key Responsibilities Assist in resolving infrastructure and DevOps-related incidents and service requests. Support CI/CD pipeline operations and automation workflows. Implement infrastructure as code using Terraform. Monitor platform health using native tools like AWS CloudWatch and Azure Monitor. Collaborate with CloudOps and DevOps teams to address deployment or configuration issues. Maintain and update runbooks, SOPs, and automation scripts as needed. Skills And Attributes For Success Working knowledge of AWS and Azure core services. Experience with Terraform; exposure to CloudFormation or ARM templates is a plus. Familiarity with Docker, Kubernetes (EKS/AKS), and Helm. Basic scripting in Bash; knowledge of Python is a plus. Understanding of ITSM tools such as ServiceNow. Knowledge of IAM, security groups, VPC/VNet, and basic networking. Strong troubleshooting and documentation skills. To qualify for the role, you must have 1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. Hands-on experience in supporting cloud platforms like AWS and/or Azure. Familiarity with infrastructure automation, CI/CD pipelines, and container platforms. Relevant cloud certification (AWS/Azure) preferred. Willingness to work in a 24x7 rotational shift-based support environment. No location constraints Technologies and Tools Must haves Cloud Platforms: AWS, Azure Infrastructure as Code: Terraform (hands-on) CI/CD: Basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline Containerization: Exposure to Kubernetes (EKS/AKS), Docker Monitoring: AWS CloudWatch, Azure Monitor Scripting: Bash Incident Management: Familiarity with ServiceNow or similar ITSM tool Good to have Templates: CloudFormation, ARM templates Scripting: Python Security: IAM Policies, RBAC Observability: Datadog, Splunk, OpenTelemetry Networking: VPC/VNet basics, load balancers Certification: AWS/Azure (Associate-level preferred) What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. 
From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
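As a small, hedged illustration of the monitoring duties listed above, the sketch below polls an EC2 CPU metric through AWS CloudWatch using the AWS SDK for JavaScript v3. It assumes @aws-sdk/client-cloudwatch is installed and credentials come from the environment; the instance ID and threshold are placeholders, and a real runbook would raise a ServiceNow incident rather than write to the console. The same check could equally be written with boto3 in Python or against Azure Monitor.

```typescript
// Placeholder health check: warn when average EC2 CPU utilisation exceeds a threshold.
import {
  CloudWatchClient,
  GetMetricStatisticsCommand,
} from '@aws-sdk/client-cloudwatch';

async function checkCpu(instanceId: string, thresholdPct: number): Promise<void> {
  const cw = new CloudWatchClient({});
  const end = new Date();
  const start = new Date(end.getTime() - 15 * 60 * 1000); // last 15 minutes

  const result = await cw.send(
    new GetMetricStatisticsCommand({
      Namespace: 'AWS/EC2',
      MetricName: 'CPUUtilization',
      Dimensions: [{ Name: 'InstanceId', Value: instanceId }],
      StartTime: start,
      EndTime: end,
      Period: 300, // five-minute datapoints
      Statistics: ['Average'],
    }),
  );

  // Pick the most recent datapoint, if any were returned.
  const latest = (result.Datapoints ?? []).sort(
    (a, b) => (b.Timestamp?.getTime() ?? 0) - (a.Timestamp?.getTime() ?? 0),
  )[0];

  if (latest?.Average !== undefined && latest.Average > thresholdPct) {
    console.warn(`High CPU on ${instanceId}: ${latest.Average.toFixed(1)}%`);
  } else {
    console.log(`CPU within threshold for ${instanceId}`);
  }
}

checkCpu('i-0123456789abcdef0', 80).catch(console.error);
```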
Posted 2 days ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India. Minimum qualifications: Bachelor's degree in Computer Science or equivalent practical experience. Experience in automating infrastructure provisioning, Developer Operations (DevOps), integration, or delivery. Experience in networking, compute infrastructure (e.g., servers, databases, firewalls, load balancers) and architecting, developing, or maintaining cloud solutions in virtualized environments. Experience in scripting with Terraform and Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or Site Reliability Engineering. Preferred qualifications: Certification in Cloud with experience in Kubernetes, Google Kubernetes Engine, or similar. Experience with customer-facing migration including service discovery, assessment, planning, execution, and operations. Experience with IT security practices like identity and access management, data protection, encryption, certificate and key management. Experience with Google Cloud Platform (GCP) techniques like prompt engineering, dual encoders, and embedding vectors. Experience in building prototypes or applications. Experience in one or more of the following disciplines: software development, managing operating system environments (Linux or related), network design and deployment, databases, storage systems. About The Job The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Provide domain expertise in cloud platforms and infrastructure to solve cloud platform challenges. Work with customers to design and implement cloud-based technical architectures, migration approaches, and application optimizations that enable business objectives. Be a technical advisor and perform troubleshooting to resolve technical challenges for customers. Create and deliver best practice recommendations, tutorials, blog articles, and sample code. Travel up to 30% in-region for meetings, technical reviews, and onsite delivery activities. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form . Show more Show less
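To make the "scripting with Terraform" and provisioning-automation requirements concrete, here is a rough TypeScript sketch, not taken from the posting, that drives a Terraform plan/apply cycle from a script. It assumes the terraform CLI is installed on PATH, and ./envs/dev is a placeholder working directory.

```typescript
// Placeholder automation wrapper around the Terraform CLI (terraform must be on PATH).
import { execFileSync } from 'node:child_process';

function terraform(args: string[], cwd: string): string {
  // Capture output as text so it can be logged or attached to a change record.
  return execFileSync('terraform', args, { cwd, encoding: 'utf8' });
}

function provision(workdir: string, apply: boolean): void {
  terraform(['init', '-input=false', '-no-color'], workdir);

  // Write the plan to a file so the apply step uses exactly what was reviewed.
  const plan = terraform(['plan', '-input=false', '-no-color', '-out=tfplan'], workdir);
  console.log(plan);

  if (apply) {
    console.log(terraform(['apply', '-input=false', '-no-color', 'tfplan'], workdir));
  } else {
    console.log('Dry run only; re-run with --apply to provision.');
  }
}

provision('./envs/dev', process.argv.includes('--apply'));
```

Pinning the apply step to the saved tfplan file keeps what gets provisioned identical to what was reviewed, which is the usual guardrail in CI/CD-driven infrastructure provisioning.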
Posted 2 days ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India. Minimum qualifications: Bachelor's degree in Computer Science or equivalent practical experience. Experience in automating infrastructure provisioning, Developer Operations (DevOps), integration, or delivery. Experience in networking, compute infrastructure (e.g., servers, databases, firewalls, load balancers) and architecting, developing, or maintaining cloud solutions in virtualized environments. Experience in scripting with Terraform and Networking, DevOps, Security, Compute, Storage, Hadoop, Kubernetes, or Site Reliability Engineering. Preferred qualifications: Certification in Cloud with experience in Kubernetes, Google Kubernetes Engine, or similar. Experience with customer-facing migration including service discovery, assessment, planning, execution, and operations. Experience with IT security practices like identity and access management, data protection, encryption, certificate and key management. Experience with Google Cloud Platform (GCP) techniques like prompt engineering, dual encoders, and embedding vectors. Experience in building prototypes or applications. Experience in one or more of the following disciplines: software development, managing operating system environments (Linux or related), network design and deployment, databases, storage systems. About The Job The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners. Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems. Responsibilities Provide domain expertise in cloud platforms and infrastructure to solve cloud platform challenges. Work with customers to design and implement cloud-based technical architectures, migration approaches, and application optimizations that enable business objectives. Be a technical advisor and perform troubleshooting to resolve technical challenges for customers. Create and deliver best practice recommendations, tutorials, blog articles, and sample code. Travel up to 30% in-region for meetings, technical reviews, and onsite delivery activities. Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. 
If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form . Show more Show less
Posted 2 days ago
1.0 - 3.0 years
0 Lacs
Kanayannur, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Career Family - TechOps -CloudOps Role Type - Cloud Operation Engineer - AWS and Azure The opportunity We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments. Your Key Responsibilities Assist in resolving infrastructure and DevOps-related incidents and service requests. Support CI/CD pipeline operations and automation workflows. Implement infrastructure as code using Terraform. Monitor platform health using native tools like AWS CloudWatch and Azure Monitor. Collaborate with CloudOps and DevOps teams to address deployment or configuration issues. Maintain and update runbooks, SOPs, and automation scripts as needed. Skills And Attributes For Success Working knowledge of AWS and Azure core services. Experience with Terraform; exposure to CloudFormation or ARM templates is a plus. Familiarity with Docker, Kubernetes (EKS/AKS), and Helm. Basic scripting in Bash; knowledge of Python is a plus. Understanding of ITSM tools such as ServiceNow. Knowledge of IAM, security groups, VPC/VNet, and basic networking. Strong troubleshooting and documentation skills. To qualify for the role, you must have 1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. Hands-on experience in supporting cloud platforms like AWS and/or Azure. Familiarity with infrastructure automation, CI/CD pipelines, and container platforms. Relevant cloud certification (AWS/Azure) preferred. Willingness to work in a 24x7 rotational shift-based support environment. No location constraints Technologies and Tools Must haves Cloud Platforms: AWS, Azure Infrastructure as Code: Terraform (hands-on) CI/CD: Basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline Containerization: Exposure to Kubernetes (EKS/AKS), Docker Monitoring: AWS CloudWatch, Azure Monitor Scripting: Bash Incident Management: Familiarity with ServiceNow or similar ITSM tool Good to have Templates: CloudFormation, ARM templates Scripting: Python Security: IAM Policies, RBAC Observability: Datadog, Splunk, OpenTelemetry Networking: VPC/VNet basics, load balancers Certification: AWS/Azure (Associate-level preferred) What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. 
From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
Posted 2 days ago
1.0 - 3.0 years
0 Lacs
Trivandrum, Kerala, India
Remote
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Career Family - TechOps -CloudOps Role Type - Cloud Operation Engineer - AWS and Azure The opportunity We are looking for a Staff CloudOps Engineer with 1-3 years of hands-on experience in AWS and Azure environments. The primary focus of this role is supporting DevOps practices, including CI/CD pipelines, automation scripting, and container orchestration. The role also involves contributing to basic cloud infrastructure management and support. You will assist in troubleshooting, support deployment pipelines, and participate in operations across cloud-native environments. Your Key Responsibilities Assist in resolving infrastructure and DevOps-related incidents and service requests. Support CI/CD pipeline operations and automation workflows. Implement infrastructure as code using Terraform. Monitor platform health using native tools like AWS CloudWatch and Azure Monitor. Collaborate with CloudOps and DevOps teams to address deployment or configuration issues. Maintain and update runbooks, SOPs, and automation scripts as needed. Skills And Attributes For Success Working knowledge of AWS and Azure core services. Experience with Terraform; exposure to CloudFormation or ARM templates is a plus. Familiarity with Docker, Kubernetes (EKS/AKS), and Helm. Basic scripting in Bash; knowledge of Python is a plus. Understanding of ITSM tools such as ServiceNow. Knowledge of IAM, security groups, VPC/VNet, and basic networking. Strong troubleshooting and documentation skills. To qualify for the role, you must have 1-3 years of experience in CloudOps, DevOps, or cloud infrastructure support. Hands-on experience in supporting cloud platforms like AWS and/or Azure. Familiarity with infrastructure automation, CI/CD pipelines, and container platforms. Relevant cloud certification (AWS/Azure) preferred. Willingness to work in a 24x7 rotational shift-based support environment. No location constraints Technologies and Tools Must haves Cloud Platforms: AWS, Azure Infrastructure as Code: Terraform (hands-on) CI/CD: Basic experience with GitHub Actions, Azure DevOps, or AWS CodePipeline Containerization: Exposure to Kubernetes (EKS/AKS), Docker Monitoring: AWS CloudWatch, Azure Monitor Scripting: Bash Incident Management: Familiarity with ServiceNow or similar ITSM tool Good to have Templates: CloudFormation, ARM templates Scripting: Python Security: IAM Policies, RBAC Observability: Datadog, Splunk, OpenTelemetry Networking: VPC/VNet basics, load balancers Certification: AWS/Azure (Associate-level preferred) What We Look For Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills. What We Offer EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. 
From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career. Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs. EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
Posted 2 days ago
Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
Hiring is concentrated in India's major technology hubs, which are known for their strong tech presence and have a high demand for Terraform professionals.
The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.
In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.
Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
Interview preparation typically covers the core Terraform workflow, including the difference between the plan and apply commands. As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!