
581 GitHub Actions Jobs - Page 20

JobPe aggregates listings for easy access, but applications are made directly on the original job portal.

5.0 - 7.0 years

12 - 18 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are hiring an experienced Integration Engineer with deep expertise in Dell Boomi and proven skills in Python, AWS, and automation frameworks. This role focuses on building and maintaining robust integration pipelines between enterprise systems such as Salesforce, Snowflake, and EDI platforms, enabling seamless data flow and test automation.

Key Responsibilities:
- Design, develop, and maintain integration workflows using Dell Boomi.
- Build and enhance backend utilities and services using Python to support Boomi integrations.
- Integrate test frameworks with AWS services such as Lambda, API Gateway, and CloudWatch.
- Develop utilities for EDI document automation (e.g., generating and validating EDI 850 purchase orders).
- Perform data syncing and transformation between systems such as Salesforce, Boomi, and Snowflake.
- Automate post-test data cleanup and validation within Salesforce using Boomi and Python.
- Implement infrastructure as code using Terraform to manage cloud resources.
- Create and execute API tests using Postman, and automate test cases using Cucumber and Gherkin.
- Integrate test results into Jira and X-Ray for traceability and reporting.

Must-Have Qualifications:
- 5 to 7 years of professional experience in software or integration development.
- Strong hands-on experience with Dell Boomi (Atoms, Integration Processes, Connectors, APIs).
- Solid programming experience with Python.
- Experience with AWS services: Lambda, API Gateway, CloudWatch, S3, etc.
- Working knowledge of Terraform for cloud infrastructure automation.
- Familiarity with SQL and modern data platforms (e.g., Snowflake).
- Experience working with Salesforce and writing SOQL queries.
- Understanding of EDI document standards and related integration use cases.
- Test automation experience using Cucumber, Gherkin, and Postman.
- Integration of QA/test reports with Jira, X-Ray, or similar platforms.
- Familiarity with CI/CD tools such as GitHub Actions, Jenkins, or similar.

Tools & Technologies:
- Integration: Dell Boomi, REST/SOAP APIs
- Languages: Python, SQL
- Cloud: AWS (Lambda, API Gateway, CloudWatch, S3)
- Infrastructure: Terraform
- Data Platforms: Snowflake, Salesforce
- Automation & Testing: Cucumber, Gherkin, Postman
- DevOps: Git, GitHub Actions
- Tracking/Reporting: Jira, X-Ray

Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
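The EDI 850 utilities mentioned above can be sketched in plain Python. This is a minimal illustration, not Boomi code: the segment layout is a simplified subset of the X12 850 format, and `build_850`/`validate_850` are hypothetical helpers (real pipelines carry full ISA/GS envelopes).

```python
# Minimal sketch of generating and validating an X12-style EDI 850 purchase order.
# Hypothetical helpers for illustration only; segment names follow the 850 standard.

def build_850(po_number: str, lines: list[tuple[str, int, float]]) -> str:
    """Build a bare-bones 850 body: a BEG header plus PO1 line items."""
    segments = [f"BEG*00*SA*{po_number}**20240101"]
    for i, (sku, qty, price) in enumerate(lines, start=1):
        segments.append(f"PO1*{i}*{qty}*EA*{price}**VP*{sku}")
    segments.append(f"CTT*{len(lines)}")  # CTT carries the line-item count
    return "~".join(segments) + "~"

def validate_850(doc: str) -> bool:
    """Check that the CTT count matches the number of PO1 segments."""
    segs = doc.strip("~").split("~")
    po1_count = sum(1 for s in segs if s.startswith("PO1*"))
    ctt = next((s for s in segs if s.startswith("CTT*")), None)
    return ctt is not None and int(ctt.split("*")[1]) == po1_count

doc = build_850("PO12345", [("WIDGET-1", 10, 2.50), ("WIDGET-2", 5, 4.00)])
```

A post-test validation step, as described in the listing, would run `validate_850` against generated documents and flag mismatched counts.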

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 21 Lacs

Pune

Work from Office

Lead Mobile Developers (Android & iOS)

Android: Kotlin, Jetpack Compose, MVVM, biometric auth; 6+ years building financial-grade Android apps.
iOS: Swift, Combine, CoreData, Keychain; 6+ years in Swift development with banking security compliance.

Flexi working / work from home.

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 10 Lacs

Pune

Work from Office

Responsibilities:
- Assist in developing AI-powered QA automation tools using LLMs.
- Collaborate with QA engineers to understand test requirements and translate them into automated scripts.
- Use LLMs to generate test cases, validation scripts, and test data automatically.
- Maintain, debug, and optimize automated test scripts and frameworks.
- Integrate LLM-based solutions into existing CI/CD pipelines and QA workflows.
- Document automation processes and assist with training teams on new AI-driven QA tools.
- Continuously research and apply advancements in LLMs and AI for QA improvements.

To ensure you're set up for success, you will bring the following skillset and experience:
- 3-5 years of software development experience.
- Basic programming skills in Python, JavaScript, or other languages relevant to automation.
- An understanding of software testing concepts and QA automation tools (e.g., Selenium, Cypress, JUnit).
- Some experience with AI, NLP, or machine learning concepts.
- Familiarity with version control systems such as Git.
- Strong problem-solving skills and the ability to learn quickly in a dynamic environment.

While these are nice to have, our team can help you develop the following skills:
- Experience with AI/ML frameworks like PyTorch or TensorFlow.
- Familiarity with Large Language Models such as GPT, BERT, or the OpenAI APIs.
- Knowledge of continuous integration tools (Jenkins, GitHub Actions).
- Exposure to writing or maintaining automated test frameworks.
- Understanding of cloud environments for deploying automation solutions.
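The LLM-driven test-case generation described above reduces to a build-prompt/parse-output loop. A minimal sketch, with the model call stubbed out (`fake_llm` is a hypothetical stand-in for a real LLM API call); the defensive parsing of model output is the point:

```python
# Sketch of LLM-assisted test-case generation. The model call is stubbed;
# in practice it would call an LLM API and the response could be malformed.
import json

def build_prompt(requirement: str) -> str:
    return (
        "Generate test cases as a JSON list of objects with keys "
        f"'name', 'input', 'expected' for: {requirement}"
    )

def fake_llm(prompt: str) -> str:
    # Canned stand-in for a real model response.
    return json.dumps([
        {"name": "test_valid_login", "input": {"user": "a", "pw": "b"}, "expected": True},
        {"name": "test_empty_password", "input": {"user": "a", "pw": ""}, "expected": False},
    ])

def generate_cases(requirement: str) -> list[dict]:
    """Parse model output defensively: keep only well-formed entries."""
    raw = fake_llm(build_prompt(requirement))
    cases = json.loads(raw)
    return [c for c in cases if {"name", "input", "expected"} <= c.keys()]

cases = generate_cases("login form validation")
```

In a CI/CD integration, the parsed cases would then be rendered into pytest functions or fed to a test runner.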

Posted 1 month ago

Apply

5.0 - 10.0 years

5 - 10 Lacs

Chennai, Tamil Nadu, India

On-site

Responsibilities:
- Design and implement scalable AI platform solutions to support machine learning workflows.
- Manage and optimize the deployment of applications on Amazon EKS (Elastic Kubernetes Service).
- Implement Infrastructure as Code using tools like Terraform or AWS CloudFormation.
- Provision and scale AI platforms such as Domino Data Lab, Databricks, or similar systems.
- Collaborate with cross-functional teams to integrate AI solutions into the AWS cloud infrastructure.
- Drive automation and develop DevOps pipelines using GitHub and GitHub Actions.
- Ensure high availability and reliability of AI platform services.
- Monitor and troubleshoot system performance, providing quick resolutions.
- Stay updated with the latest industry trends and advancements in AI and cloud technologies.

Qualifications:
- Experience building and delivering software in Python; exceptional ability in other programming languages will be considered.
- Demonstrable experience deploying the underlying infrastructure and tooling for running machine learning or data science at scale using Infrastructure as Code.
- Experience using DevOps to enable automation strategies.
- Experience with, or awareness of, MLOps practices and building pipelines to accelerate and automate machine learning is looked upon favorably.
- Proven hands-on experience with Amazon EKS and AWS cloud services.
- Strong expertise in Infrastructure as Code with Terraform and AWS CloudFormation.
- Strong expertise with Python programming.
- Experience provisioning and scaling AI platforms like Domino Data Lab, Databricks, or similar systems.
- Solid understanding of DevOps principles and experience with CI/CD tools like GitHub Actions.
- Familiarity with version control using Git and GitHub.
- Experience working with GxP-compliant life science systems is looked upon favorably.
- Excellent problem-solving skills and the ability to work independently and in a team.
- Strong communication and collaboration skills.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Chennai, Tamil Nadu, India

On-site

Responsibilities:
- Design, implement, and manage cloud infrastructure on AWS using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation.
- Maintain and enhance CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, Jenkins, or ArgoCD.
- Ensure platform reliability, scalability, and high availability across development, staging, and production environments.
- Automate operational tasks, environment provisioning, and deployments using scripting languages such as Python, Bash, or PowerShell.
- Enable and maintain Amazon SageMaker environments for scalable ML model training, hosting, and pipelines.
- Integrate AWS Bedrock to provide foundation-model access for generative AI applications, ensuring security and cost control.
- Manage and publish curated infrastructure templates through AWS Service Catalog to enable consistent and compliant provisioning.
- Collaborate with security and compliance teams to implement best practices around IAM, encryption, logging, monitoring, and cost optimization.
- Implement and manage observability tools like Amazon CloudWatch, Prometheus/Grafana, or ELK for monitoring and alerting.
- Support container orchestration environments using EKS (Kubernetes), ECS, or Fargate.
- Contribute to incident response, post-mortems, and continuous improvement of the platform's operational excellence.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 5+ years of hands-on experience with AWS cloud services.
- Strong experience with Terraform, AWS CDK, or CloudFormation.
- Proficiency in Linux system administration and networking fundamentals.
- Solid understanding of IAM policies, VPC design, security groups, and encryption.
- Experience with Docker and container orchestration using Kubernetes (EKS preferred).
- Hands-on experience with CI/CD tools and version control (Git).
- Experience with monitoring, logging, and alerting systems.
- Strong troubleshooting skills and the ability to work independently or in a team.

Preferred Qualifications (Nice to Have):
- AWS certification (e.g., AWS Certified DevOps Engineer, Solutions Architect Associate/Professional).
- Experience with serverless technologies like AWS Lambda, Step Functions, and EventBridge.
- Experience supporting machine learning or big data workloads on AWS.

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Pune

Remote

What You'll Do
Reports to: Manager - Security Engineering

Avalara is seeking a Security Automation Engineer to join our Security Automation & Platform Enhancement Team (SAPET). You will be at the intersection of cybersecurity, automation, and AI, focusing on designing and implementing scalable security solutions that enhance Avalara's security posture. You will have expertise in programming, cloud technologies, security automation, and modern software engineering practices, with experience using Generative AI to improve security processes.

What Makes This Role Unique at Avalara?
- Cutting-edge security automation: you will work on advanced cybersecurity automation projects, including fraud detection, AI-based security document analysis, and IT security process automation.
- AI-powered innovation: we integrate Generative AI to identify risks, analyze security documents, and automate compliance tasks.
- Impact across multiple security domains: your work will support AML, fraud detection, IT security, and vendor risk management.

What Your Responsibilities Will Be
As a Security Automation Engineer, your primary focus will be developing automation solutions that improve efficiency across several security teams.
- Develop and maintain security automation solutions to streamline security operations and reduce manual effort.
- Work on automation projects that augment security teams, enabling them to work more efficiently.
- Design and implement scalable security frameworks for security teams.

What You'll Need to Be Successful
- 5+ years of experience.
- Programming & scripting: Python, GoLang, Bash.
- Infrastructure as Code & orchestration: Terraform, Kubernetes, Docker.
- Security & CI/CD pipelines: Jenkins, GitHub Actions, CI/CD tools.
- Database & data analysis: SQL, security data analytics tools.
- Experience with RDBMS and SQL, including database design, normalization, and query optimization.
- Hands-on experience with security automation tools, SIEM, SOAR, or threat intelligence platforms.
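A tiny example of the kind of security automation this role describes: scanning authentication logs for repeated failures, the building block of brute-force and fraud detection. The log format, field positions, and threshold are invented for illustration; a real SOAR integration would pull events from a SIEM.

```python
# Sketch of a security-automation check: flag source IPs with repeated
# authentication failures. The log format here is invented.
from collections import Counter

def failed_login_ips(log_lines, threshold=3):
    """Return IPs that reach `threshold` failed-login events."""
    failures = Counter(
        line.split()[-1]              # invented format: "... FAILED <ip>"
        for line in log_lines
        if "FAILED" in line
    )
    return {ip for ip, n in failures.items() if n >= threshold}

logs = [
    "2024-01-01T10:00:00 auth FAILED 10.0.0.5",
    "2024-01-01T10:00:05 auth FAILED 10.0.0.5",
    "2024-01-01T10:00:09 auth FAILED 10.0.0.5",
    "2024-01-01T10:01:00 auth OK 10.0.0.7",
    "2024-01-01T10:02:00 auth FAILED 10.0.0.9",
]
suspects = failed_login_ips(logs)
```

In an automated pipeline, the resulting set would feed an alerting or ticketing step rather than being inspected by hand.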

Posted 1 month ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

New Delhi, Chennai, Bengaluru

Hybrid

Your day at NTT DATA
We are seeking an experienced Data Engineer to join our team in delivering cutting-edge Generative AI (GenAI) solutions to clients. The successful candidate will be responsible for designing, developing, and deploying data pipelines and architectures that support the training, fine-tuning, and deployment of LLMs for various industries. This role requires strong technical expertise in data engineering, problem-solving skills, and the ability to work effectively with clients and internal teams.

What you'll be doing
Key Responsibilities:
- Design, develop, and manage data pipelines and architectures to support GenAI model training, fine-tuning, and deployment.
- Data ingestion and integration: develop data ingestion frameworks to collect data from various sources, transform it, and integrate it into a unified data platform for GenAI model training and deployment.
- GenAI model integration: collaborate with data scientists to integrate GenAI models into production-ready applications, ensuring seamless model deployment, monitoring, and maintenance.
- Cloud infrastructure management: design, implement, and manage cloud-based data infrastructure (e.g., AWS, GCP, Azure) to support large-scale GenAI workloads, ensuring cost-effectiveness, security, and compliance.
- Write scalable, readable, and maintainable code using object-oriented programming concepts in languages like Python, and use libraries like Hugging Face Transformers, PyTorch, or TensorFlow.
- Performance optimization: optimize data pipelines, GenAI model performance, and infrastructure for scalability, efficiency, and cost-effectiveness.
- Data security and compliance: ensure data security, privacy, and compliance with regulatory requirements (e.g., GDPR, HIPAA) across data pipelines and GenAI applications.
- Client collaboration: work with clients to understand their GenAI needs, design solutions, and deliver high-quality data engineering services.
- Innovation and R&D: stay up to date with the latest GenAI trends, technologies, and innovations, applying research and development skills to improve data engineering services.
- Knowledge sharing: share knowledge, best practices, and expertise with team members, contributing to the growth and development of the team.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or related fields (Master's recommended).
- 5+ years of experience in data engineering, with a strong emphasis on cloud environments (AWS, GCP, Azure, or cloud-native platforms).
- Experience with vector databases (e.g., Pinecone, Weaviate, Faiss, Annoy) for efficient similarity search and storage of dense vectors in GenAI applications.
- Proficiency in programming languages such as SQL, Python, and PySpark.
- Strong data architecture, data modeling, and data governance skills.
- Experience with big data platforms (Hadoop, Databricks, Hive, Kafka, Apache Iceberg), data warehouses (Teradata, Snowflake, BigQuery), and lakehouses (Delta Lake, Apache Hudi).
- Knowledge of DevOps practices, including Git workflows and CI/CD pipelines (Azure DevOps, Jenkins, GitHub Actions).
- Experience with GenAI frameworks and tools (e.g., TensorFlow, PyTorch, Keras).

Nice to have:
- Experience with containerization and orchestration tools like Docker and Kubernetes.
- Experience integrating vector databases and implementing similarity search techniques, with a focus on GraphRAG, is a plus.
- Familiarity with API gateway and service mesh architectures.
- Experience with low-latency/streaming, batch, and micro-batch processing.
- Familiarity with Linux-based operating systems and REST APIs.
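The similarity search that vector databases such as Pinecone or Faiss provide can be illustrated with a brute-force cosine-similarity sketch. Real systems use approximate nearest-neighbor indexes for scale; the corpus and vectors here are made up for illustration.

```python
# Brute-force cosine-similarity search over dense vectors, in plain Python.
# Vector databases do the same ranking with approximate indexes at scale.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def top_k(query, corpus, k=2):
    """Return the ids of the k vectors most similar to the query."""
    scored = sorted(corpus.items(), key=lambda kv: cosine(query, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]

# Invented toy corpus of 3-dimensional embeddings.
corpus = {
    "doc_a": [1.0, 0.0, 0.0],
    "doc_b": [0.9, 0.1, 0.0],
    "doc_c": [0.0, 1.0, 0.0],
}
result = top_k([1.0, 0.05, 0.0], corpus)
```

In a RAG pipeline, the returned document ids would be resolved to text chunks and injected into the LLM prompt.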

Posted 1 month ago

Apply

7.0 - 10.0 years

38 - 40 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Description
We are seeking an experienced DevOps Engineer with a strong background in GitHub Actions, Azure Kubernetes Service (AKS), and ArgoCD to join our dynamic team in India. The ideal candidate will have extensive experience in automating deployment processes and managing containerized applications.

Responsibilities
- Design, implement, and maintain CI/CD pipelines using GitHub Actions.
- Manage and orchestrate containerized applications using Azure Kubernetes Service (AKS).
- Automate deployment processes and ensure reliable release management with ArgoCD.
- Monitor system performance and troubleshoot issues in collaboration with development teams.
- Implement best practices for infrastructure as code and continuous integration and delivery.
- Collaborate with cross-functional teams to understand requirements and provide technical solutions.

Skills and Qualifications
- 7-10 years of experience in DevOps or related fields.
- Strong proficiency in GitHub Actions for CI/CD workflows.
- Hands-on experience with Azure Kubernetes Service (AKS) for container orchestration.
- Experience with ArgoCD for continuous delivery and GitOps practices.
- Solid understanding of cloud services, particularly Azure.
- Knowledge of scripting languages such as Bash, Python, or PowerShell.
- Familiarity with monitoring tools and practices, such as Prometheus, Grafana, or Azure Monitor.
- Strong problem-solving skills and the ability to work in a fast-paced environment.
- Excellent communication skills and the ability to work collaboratively in a team.

Posted 1 month ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Your role and responsibilities
- Grit, drive, and a deep feeling of ownership.
- BS or MS in Computer Science or a related technical discipline.
- Strong experience in Python or an equivalent language.

Required education: Bachelor's degree
Preferred education: Master's degree

Required technical and professional expertise
- Minimum 5+ years of experience in DevOps or equivalent.
- Strong experience with Bash scripting, Kubernetes, and Docker.
- Strong experience with Terraform, and experience with DevOps in the Azure cloud.

Preferred technical and professional experience
- An understanding of application security and information security controls.
- A good understanding of large-scale distributed systems in practice, including multi-tier architectures, application security, monitoring, and storage systems.
- Working knowledge of GitHub Actions, Azure DevOps, Jenkins (or other similar toolsets).

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 12 Lacs

Hyderabad

Hybrid

Azure DevOps Engineer
Location: Hyderabad
Work Mode: Hybrid (3 days per week from office)

What You Will Do:
We are seeking a highly skilled engineer with extensive Azure cloud experience to join our team. The ideal candidate will be responsible for the further development and implementation of advanced cloud-based solutions on Legal and General's Microsoft Azure platform. The candidate also needs to be proficient with DevOps and DevSecOps processes and practices, with strong knowledge of GitHub, including GitHub Actions and infrastructure-as-code scaffolding, particularly Terraform.

Key Responsibilities:
- Lead the design and deployment of enterprise-wide Azure solutions, ensuring they meet both functional and non-functional requirements, policies, principles, and standards, and are secure, scalable, and reliable.
- Oversee the integration of security tooling into Azure deployments to enhance security posture and compliance (knowledge of Wiz is beneficial but not essential).
- Use GitHub for source control management and collaborate with development teams to implement GitHub Actions for automating workflows.
- Drive cloud migration strategies, including assessment, planning, and execution, with a focus on security and best practices.
- Stay abreast of the latest Azure features and capabilities, incorporating them into solution designs as appropriate.
- Collaborate with cross-functional teams to ensure the security, scalability, and performance of Azure infrastructure.

Experience Range: 3-7 years

Professional Attributes You Possess:
- Excellent oral and written communication skills; an active team player.
- Strong analytical and problem-solving skills.
- Excellent organizational skills and attention to detail.
- Ability to function well in a fast-paced environment.
- A team player, self-starter, and quick learner.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Develop EDS-compatible templates/components, implement reusable structures, integrate Adobe Commerce data, support content migration to AEM Assets, configure EDS pipelines, resolve rendering issues, and follow Adobe best practices.

Required Candidate Profile: 5+ years of front-end and Adobe EDS/Cloud migration experience; expertise in HTML/CSS/JS; GraphQL and Adobe Commerce integration; AEM Assets handling; Adobe App Builder/API experience is a plus.

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Pune

Work from Office

We are hiring a DevOps / Site Reliability Engineer for a 6-month full-time onsite role in Pune (with possible extension). The ideal candidate will have 6-9 years of experience in DevOps/SRE roles with deep expertise in Kubernetes (preferably GKE), Terraform, Helm, and GitOps tools like ArgoCD or Flux. The role involves building and managing cloud-native infrastructure, CI/CD pipelines, and observability systems, while ensuring performance, scalability, and resilience. Experience in infrastructure coding, backend optimization (Node.js, Django, Java, Go), and cloud architecture (IAM, VPC, CloudSQL, Secrets) is essential. Strong communication and hands-on technical ability are a must. Immediate joiners only.

Posted 1 month ago

Apply

5.0 - 7.0 years

16 - 20 Lacs

Noida, Pune, Gurugram

Work from Office

Below are the key skills and qualifications we are looking for:
- Over 4 years of software development experience, with expertise in Python and familiarity with other programming languages such as Java and JavaScript.
- A minimum of 2 years of significant hands-on experience with AWS services, including Lambda and Step Functions.
- Domain knowledge in invoicing or billing is preferred, with experience on the Zuora platform (Billing and Revenue) being highly desirable.
- At least 2 years of working knowledge of SQL.
- Solid experience working with AWS cloud services, especially S3, Glue, Lambda, Redshift, and Athena.
- Experience with continuous integration/delivery (CI/CD) tools like Jenkins and Terraform.
- Excellent communication skills.

Responsibilities:
- Design and implement backend services and APIs using Python.
- Build and maintain CI/CD pipelines using tools like GitHub Actions, AWS CodePipeline, or Jenkins.
- Optimize performance, scalability, and security of cloud applications.
- Implement logging, monitoring, and alerting for production workloads.
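A minimal sketch of the Python backend work described above, written in the AWS Lambda handler style (an `event`/`context` function, as used behind API Gateway) but kept dependency-free so it runs anywhere. The event shape and invoice data are invented for illustration.

```python
# Lambda-style handler for a billing lookup, runnable locally.
# The event shape mimics an API Gateway proxy event; data is invented.
import json

INVOICES = {"INV-1": {"amount": 120.0, "status": "paid"}}

def handler(event, context=None):
    """Return an API Gateway-style response for GET /invoices/{id}."""
    invoice_id = (event.get("pathParameters") or {}).get("id")
    invoice = INVOICES.get(invoice_id)
    if invoice is None:
        return {"statusCode": 404, "body": json.dumps({"error": "not found"})}
    return {"statusCode": 200, "body": json.dumps(invoice)}

resp = handler({"pathParameters": {"id": "INV-1"}})
```

Deployed for real, the same function would be packaged by a CI/CD pipeline (GitHub Actions, CodePipeline, or Jenkins) and wired to API Gateway.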

Posted 1 month ago

Apply

4.0 - 9.0 years

6 - 12 Lacs

Bengaluru

Hybrid

What you'll be doing
As an SRE on the demo engineering team, you will be critical in building and securing Okta's demonstration infrastructure. Your expertise and contributions will directly impact our sales team's ability to demonstrate the value of our product. Specifically, your responsibilities will include:
- Developing, operating, and maintaining critical infrastructure (AWS, Lambda, DynamoDB, Azure).
- Integrating with third-party tools and other infrastructure in the Okta environment.
- Evangelising security best practices, leading initiatives to strengthen our security posture for critical infrastructure, and managing security and compliance requirements.
- Developing and maintaining technical documentation, runbooks, and procedures.
- Triaging and troubleshooting production issues to ensure reliability and performance.
- Identifying and automating manual processes for scaling, onboarding, and offboarding.
- Promoting and applying best practices for building scalable and reliable services across engineering.
- Supporting a 24x7 customer-facing environment, managing incidents, and determining how we can prevent them in the future as part of an on-call rotation.

What you'll bring to the role
- 4+ years of experience as a site reliability or platform engineer, preferably in a fast-scaling environment.
- Experience deploying production workloads on public cloud infrastructure (AWS and Azure).
- Strong experience in configuration management using IaC tools such as Terraform and CloudFormation.
- Strong experience in security practices and network engineering in AWS.
- Experience managing CI/CD infrastructure, with strong proficiency in platforms like GitHub Actions to streamline deployment pipelines and ensure efficient software delivery.
- Strong proficiency in Node.js for backend systems, demonstrating the ability to develop and maintain robust, scalable, and efficient software components essential for the reliability and performance of the infrastructure.
- Excellent problem-solving skills and a detail-oriented mindset.
- Ability to work independently with minimal supervision and guidance.
- Strong communication and collaboration abilities to work effectively within a team.

This role requires in-person onboarding and travel to our Bengaluru, IN office during the first week of employment.

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Job Duties and Responsibilities
- Drive the long-term architecture of our CI/CD platform and developer tools; be the thought leader in optimizing for developer productivity.
- Design CI and automation components for the scale of Okta, which, at its peak, runs over 55 million tests a day.
- Set direction and influence the development of tools that support the end-to-end lifecycle of code management.
- Act as an innovator by advising, recommending, and managing the introduction of new technology and practices.
- Consult with other architects across the R&D and Infrastructure Engineering teams to ensure our solution addresses the top concerns of engineers.
- Build high-quality tools and automation for internal use to support continuous integration, continuous delivery, and developer productivity.
- Design software, or customize software for engineering use, with the aim of optimizing operational efficiency.
- Provide technical input by implementing proofs of concept, influence the choice of the right technology, contribute to existing frameworks, and review design and code.
- Roll out deliverables to internal customers in phases, monitor adoption, collect feedback, and fine-tune the project to respond to internal customers' needs.
- Support pre-prod infrastructure in the cloud: monitoring, backup and restore, SLAs, cost control, deployment.

Minimum Required Knowledge, Skills, and Abilities:
- In-depth understanding of application development, microservices architecture, and the successful elements of a multi-service ecosystem.
- In-depth knowledge of large-scale, high-transaction continuous integration systems aimed at performance, accuracy, and stability.
- Expert at Git, Maven, and build automation tools (e.g., Jenkins, CircleCI, GitHub Actions).
- Experience with public cloud (AWS), its services, and its supporting tools (cost control, reporting, environment management).
- Experience working with Gradle, Bazel, Artifactory, Docker Registry, npm registry, and GitHub administration.
- Experience with Kubernetes is a plus.
- Experience in developing/managing infrastructure as a service is a plus.

Education and Training: B.S. in CS or equivalent

Posted 1 month ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Ahmedabad

Work from Office

This open position is for Armanino India LLP, which is located in India. Armanino India LLP is a fully owned subsidiary of Armanino.

Job Description:
We are seeking a highly skilled and motivated Cloud Development Manager (Azure) to oversee the development and deployment of cloud-based applications within the Microsoft Azure ecosystem. This role demands a hands-on technical expert, a strategic thinker, and a strong leader to drive innovation, efficiency, and best practices in cloud architecture, security, and scalability. The Cloud Development Manager (Azure) will collaborate closely with the Armanino product team, ensuring the delivery of high-quality solutions while adhering to industry best practices, and will play a key role in building and maintaining a dynamic, efficient, and skilled development team.

Responsibilities:
- Lead and mentor Azure development teams to ensure seamless execution and high-quality delivery.
- Oversee the design, development, and deployment of scalable cloud-based applications.
- Drive Azure best practices, ensuring compliance, security, and performance optimization.
- Implement and manage Azure DevOps, CI/CD pipelines, and automation strategies to enhance efficiency.
- Collaborate with stakeholders, architects, and engineers to align technical solutions with business objectives.
- Work closely with the Armanino product team to align development efforts with organizational goals.
- Monitor cloud infrastructure, troubleshoot issues, and lead continuous improvements.
- Stay ahead of emerging Azure technologies, driving innovation and strategic adoption.
- Oversee developer operations, ensuring adherence to best practices and coding standards.
- Take ownership of the quality of work delivered by the development team.
- Assist in building and maintaining a cohesive, skilled, and motivated development team.

Requirements:
- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of hands-on experience in Azure cloud development and architecture, with hands-on technical expertise in Azure services.
- Deep understanding of serverless architecture principles, including event-driven programming, function as a service (FaaS), and designing scalable, stateless microservices.
- Experience designing, developing, and deploying microservices using C#, Java, and related frameworks.
- Familiarity with DevOps methodologies and tools for continuous integration (CI) and continuous deployment (CD) pipelines, such as Azure DevOps, GitHub Actions, etc.
- Expertise in security best practices for serverless architectures, including identity and access management (IAM), encryption, network security, and compliance standards.
- Demonstrated experience leading and managing development teams.
- Ability to work independently and make sound decisions.
- Excellent written and verbal communication skills.
- Certifications in Microsoft Azure (e.g., Azure Solutions Architect, Azure DevOps Lead) and/or related technologies preferred.
- Experience in microservices, containerization, and serverless computing preferred.
- Knowledge of AI/ML integration within Azure environments preferred.
- Experience with agile development methodologies (e.g., Scrum, POD) preferred.
- Strong problem-solving, technical, and analytical skills preferred.
- Knowledge of emerging trends and technologies in cloud computing preferred.

Compensation and Benefits:
- Compensation: commensurate with industry standards.
- Other benefits: Provident Fund, Gratuity, Medical Insurance, Group Personal Accident Insurance, and other employment benefits depending on the position.

Posted 1 month ago

Apply

6.0 - 9.0 years

18 - 20 Lacs

Pune

Work from Office

Timings: Full Time (As per company timings) Notice Period: (Immediate Joiner - Only) Duration: 6 Months (Possible Extension) Shift Timing: 11:30 AM 9:30 PM IST About the Role We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills. Key Responsibilities: Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps). Configure and maintain observability tools (logs, metrics, traces, alerts). Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java). Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools. Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets). Lead or participate in incident response and root cause analysis using logs, traces, and dashboards. Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments. Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices. Required Skills & Experience: 510 years in DevOps, SRE, Platform, or Backend Infrastructure roles. Strong coding/scripting skills and ability to review production-grade backend code. Hands-on experience with Kubernetes in production, preferably on GKE. Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux). Deep knowledge of Cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management). Systems thinking understands failure domains, cascading issues, timeout limits, and recovery strategies. Strong communication and documentation skills capable of driving improvements through PRs and design reviews. 
Tech Stack & Tools: Cloud & Orchestration: GKE, Kubernetes. IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux. Monitoring & Alerting: Datadog, PagerDuty. Databases & Networking: CloudSQL, Cloudflare. Security & Access Control: Secret Management, IAM. Driving Results: A strong individual contributor and a good team player. Flexible attitude towards work as needed. Proactively identify and communicate issues and risks. Other Personal Characteristics: Dynamic, engaging, self-reliant developer. Ability to deal with ambiguity. Maintains a collaborative and analytical approach. Self-confident and humble. Open to continuous learning. Intelligent, rigorous thinker who can operate successfully amongst bright people.
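For candidates unfamiliar with the GitOps-style rollout pattern this posting describes, here is a minimal, illustrative GitHub Actions sketch: CI runs the tests, and the deploy job only updates the desired state that ArgoCD or Flux then reconciles into the cluster. All names (the `make test` target, the `bump-image-tag.sh` script, the `production` environment) are placeholders invented for the example, not part of the role.

```yaml
# Illustrative only: repository layout, targets, and script names are placeholders.
name: build-and-deploy
on:
  push:
    branches: [main]
jobs:
  build-test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run unit tests
        run: make test
  deploy:
    needs: build-test
    runs-on: ubuntu-latest
    environment: production   # gated by a required-reviewers rule
    steps:
      - uses: actions/checkout@v4
      - name: Bump image tag in the GitOps repo
        run: |
          # In a GitOps flow the pipeline only edits desired state;
          # ArgoCD or Flux reconciles the cluster to match it.
          ./scripts/bump-image-tag.sh "$GITHUB_SHA"
```

The key design point is that the workflow never talks to the cluster directly, which keeps deploy credentials out of CI and makes rollbacks a Git revert.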

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Key Responsibilities GitHub Actions Development: Design, implement, and optimize CI/CD workflows using GitHub Actions to support multi-environment deployments. Leverage GitHub Actions for automated builds, tests, and deployments, ensuring integration with Azure services. Create reusable GitHub Actions templates and libraries for consistent DevOps practices. GitHub Repository Administration: Manage GitHub repositories, branching strategies, and access permissions. Implement GitHub features like Dependabot, code scanning, and security alerts to enhance code quality and security. Azure DevOps Integration: Utilize Azure Pipelines in conjunction with GitHub Actions to orchestrate complex CI/CD workflows. Configure and manage Azure services such as: Azure Kubernetes Service (AKS) for container orchestration; Azure Application Gateway and Azure Front Door for load balancing and traffic management; Azure Monitoring, Azure App Insights, and Azure KeyVault for observability, diagnostics, and secure secrets management; Helm charts and Microsoft Bicep for Infrastructure as Code. Automation & Scripting: Develop robust automation scripts using PowerShell, Bash, or Python to streamline operational tasks. Automate monitoring, deployments, and environment management workflows. Infrastructure Management: Oversee and maintain cloud environments with a focus on scalability, security, and reliability. Implement containerization strategies using Docker and orchestration via AKS. Collaboration: Partner with cross-functional teams to align DevOps practices with business objectives while maintaining compliance and security standards. Monitoring & Optimization: Deploy and maintain monitoring and logging tools to ensure system performance and uptime. Optimize pipeline execution times and infrastructure costs. Documentation & Best Practices: Document GitHub Actions workflows, CI/CD pipelines, and Azure infrastructure configurations.
Advocate for best practices in version control, security, and DevOps methodologies. Qualifications Education: Bachelor's degree in Computer Science, Information Technology, or related field (preferred). Experience: 3+ years of experience in DevOps engineering with a focus on GitHub Actions and Azure DevOps tools. Proven track record of designing CI/CD workflows using GitHub Actions in production environments. Extensive experience with Azure services, including AKS, Azure Front Door, Azure Application Gateway, Azure KeyVault, Azure App Insights, and Azure Monitoring. Hands-on experience with Infrastructure as Code tools, including Microsoft Bicep and Helm charts. Technical Skills: GitHub Actions Expertise: Deep understanding of GitHub Actions, workflows, and integrations with Azure services. Scripting & Automation: Proficiency in PowerShell, Bash, and Python for creating automation scripts and custom GitHub Actions. Containerization & Orchestration: Experience with Docker and Kubernetes, including Azure Kubernetes Service (AKS). Security Best Practices: Familiarity with securing CI/CD pipelines, secrets management, and cloud environments. Monitoring & Optimization: Hands-on experience with Azure Monitoring, App Insights, and logging solutions to ensure system reliability. Soft Skills: Strong problem-solving and analytical abilities. Excellent communication and collaboration skills, with the ability to work in cross-functional and global teams. Detail-oriented with a commitment to delivering high-quality results. Preferred Qualifications Experience in DevOps practices within the financial or tax services industries. Familiarity with advanced GitHub features such as Dependabot, Security Alerts, and CodeQL. Knowledge of additional CI/CD platforms like Jenkins or CircleCI.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Hyderabad

Work from Office

ABOUT THE ROLE Role Description: We are seeking a highly skilled, hands-on Senior QA & Test Automation Engineer who will play a critical role in ensuring the accuracy, reliability, and performance of our enterprise data platforms. Reporting to the Test Automation Engineering Manager, you will work at the intersection of data engineering, quality assurance, and automation to validate complex data flows across pipelines, with a special focus on semantic layer validation and GraphQL API testing. This is a hands-on role that demands deep technical proficiency in both manual and automated data validation. You will be responsible for understanding the business logic behind data transformations, validating the flow of data through various systems, and ensuring that semantic and API layers deliver consistent and contract-compliant outputs. You will actively participate in the design and development of automation frameworks, collaborating closely with the QA Manager and engineering teams to ensure testability and maintainability are built into the system from the start. You will also contribute to test strategy, execution planning, and defect lifecycle management, working across cross-functional teams to maintain high standards for data quality. This role is ideal for someone who is passionate about data quality, has hands-on experience with ETL/ELT pipelines, is comfortable working with cloud-native data platforms (AWS, Databricks, etc.), and has a strong grasp of testing best practices, including API schema validation, CI/CD integration, and semantic layer testing. You'll have the opportunity to shape and contribute to a modern data quality engineering practice, ensuring that downstream consumers such as analytics teams, business stakeholders, and machine learning models can fully trust the data they rely on.
Roles & Responsibilities: Collaborate with the Test Automation Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation. Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification. Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines. Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection. Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations. Track, manage, and report defects using tools like JIRA, ensuring proper prioritization, root cause analysis, and resolution. Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and promote shift-left testing practices. Ensure comprehensive test coverage across the data lifecycle, including data ingestion, transformation, delivery, and consumption. Participate actively in QA ceremonies (daily standups, sprint planning, retrospectives), and continuously drive improvements to QA processes and culture. Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation. Contributions to internal quality dashboards or data observability systems. Awareness of metadata-driven testing approaches and lineage-based validations. Experience working with agile testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest. Must-Have Skills: 6-9 years of experience in QA roles, with at least 3+ years focused on ETL/Data Pipeline Testing in cloud-native environments.
Strong in SQL, Python, and optionally PySpark; comfortable writing complex queries, automation scripts, and custom data validation logic. Practical experience with manual validation of data pipelines, including source-to-target testing and business rule verification. Proven ability to support, maintain, and enhance automation test suites and contribute to framework improvements in collaboration with QA leadership. Experience validating GraphQL APIs, semantic layers (e.g., Looker, dbt), and ensuring schema/data contract compliance. Familiarity with data platforms and tools such as Databricks, AWS Glue, Redshift, Athena, or BigQuery. Strong understanding of QA methodologies, including test planning, test case design, test data management, and defect lifecycle tracking. Proficiency in tools like JIRA, TestRail, or Zephyr for test case and defect management. Skilled in building and automating data quality checks: schema validation, null checks, duplicates, threshold alerts, and data transformation validation. Hands-on with API testing using tools like Postman, pytest, or custom automation frameworks built in Python. Experience working in Agile/Scrum environments, actively participating in QA ceremonies and sprint cycles. Exposure to integrating automated tests into CI/CD pipelines using tools like GitHub Actions, Jenkins, or GitLab CI. Education and Professional Certifications: Bachelor's degree in Computer Science and Engineering preferred; other engineering fields considered. Master's degree and 6+ years of experience, or Bachelor's degree and 8+ years. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
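As a rough illustration of the data quality checks this posting lists (schema validation, null checks, duplicates, threshold alerts), here is a minimal sketch in plain Python. In practice these checks would typically run via PySpark or SQL against the warehouse; the schema, column names, and null-rate threshold below are invented for the example.

```python
# Minimal data-quality check sketch over a batch of row dicts.
# Schema and threshold are illustrative, not from any real pipeline.
EXPECTED_SCHEMA = {"order_id": int, "amount": float, "status": str}

def validate_batch(rows, key="order_id", max_null_rate=0.01):
    """Return data-quality findings: schema errors, null rate, duplicate keys."""
    findings = {"schema_errors": 0, "duplicate_keys": 0}
    nulls, seen = 0, set()
    for row in rows:
        for col, typ in EXPECTED_SCHEMA.items():
            value = row.get(col)
            if value is None:
                nulls += 1                       # null check
            elif not isinstance(value, typ):
                findings["schema_errors"] += 1   # schema/type check
        k = row.get(key)
        if k in seen:
            findings["duplicate_keys"] += 1      # duplicate-key check
        seen.add(k)
    total_cells = len(rows) * len(EXPECTED_SCHEMA)
    findings["null_rate"] = nulls / total_cells if total_cells else 0.0
    findings["passed"] = (                       # threshold alert
        findings["schema_errors"] == 0
        and findings["duplicate_keys"] == 0
        and findings["null_rate"] <= max_null_rate
    )
    return findings
```

The same logic translates almost line-for-line into PySpark aggregations or warehouse SQL once the batch no longer fits in memory.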

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

ABOUT THE ROLE Role Description: We are seeking a highly experienced and hands-on Test Automation Engineering Manager with strong leadership skills and deep expertise in Data Integration, Data Quality, and automated data validation across real-time and batch pipelines. In this strategic role, you will lead the design, development, and implementation of scalable test automation frameworks that validate data ingestion, transformation, and delivery across diverse sources into AWS-based analytics platforms, leveraging technologies like Databricks, PySpark, and cloud-native services. As a lead, you will drive the overall testing strategy, lead a team of test engineers, and collaborate cross-functionally with data engineering, platform, and product teams. Your focus will be on delivering high-confidence, production-grade data pipelines with built-in validation layers that support enterprise analytics, ML models, and reporting platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities: Define and drive the test automation strategy for data pipelines, ensuring alignment with enterprise data platform goals. Lead and mentor a team of data QA/test engineers, providing technical direction, career development, and performance feedback. Own delivery of automated data validation frameworks across real-time and batch data pipelines using Databricks and AWS services. Collaborate with data engineering, platform, and product teams to embed data quality checks and testability into pipeline design. Design and implement scalable validation frameworks for data ingestion, transformation, and consumption layers. Automate validations for multiple data formats including JSON, CSV, Parquet, and other structured/semi-structured file types during ingestion and transformation.
Automate data testing workflows for pipelines built on Databricks/Spark, integrated with AWS services like S3, Glue, Athena, and Redshift. Establish reusable test components for schema validation, null checks, deduplication, threshold rules, and transformation logic. Integrate validation processes with CI/CD pipelines, enabling automated and event-driven testing across the development lifecycle. Drive the selection and adoption of tools/frameworks that improve automation, scalability, and test efficiency. Oversee testing of data visualizations in Tableau, Power BI, or custom dashboards, ensuring backend accuracy via UI and data-layer validations. Ensure accuracy of API-driven data services, managing functional and regression testing via Postman, Python, or other automation tools. Track test coverage, quality metrics, and defect trends, providing regular reporting to leadership and ensuring continuous improvement. Establish alerting and reporting mechanisms for test failures, data anomalies, and governance violations. Contribute to system architecture and design discussions, bringing a strong quality and testability lens early into the development lifecycle. Lead test automation initiatives by implementing best practices and scalable frameworks, embedding test suites into CI/CD pipelines to enable automated, continuous validation of data workflows, catalog changes, and visualization updates. Mentor and guide QA engineers, fostering a collaborative, growth-oriented culture focused on continuous learning and technical excellence. Collaborate cross-functionally with product managers, developers, and DevOps to align quality efforts with business goals and release timelines. Conduct code reviews, test plan reviews, and pair-testing sessions to ensure team-level consistency and high-quality standards.
Must-Have Skills: Hands-on experience with Databricks and Apache Spark for building and validating scalable data pipelines. Strong expertise in AWS services including S3, Glue, Athena, Redshift, and Lake Formation. Proficient in Python, PySpark, and SQL for developing test automation and validation logic. Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro. In-depth understanding of data integration workflows including batch and real-time (streaming) pipelines. Strong ability to define and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation. Experience designing modular, reusable automation frameworks for large-scale data validation. Skilled in integrating tests with CI/CD tools like GitHub Actions, Jenkins, or Azure DevOps. Familiarity with orchestration tools such as Apache Airflow, Databricks Jobs, or AWS Step Functions. Hands-on experience with API testing using Postman, pytest, or custom automation scripts. Proven track record of leading and mentoring QA/test engineering teams. Ability to define and own test automation strategy and roadmap for data platforms. Strong collaboration skills to work with engineering, product, and data teams. Excellent communication skills for presenting test results, quality metrics, and project health to leadership. Contributions to internal quality dashboards or data observability systems. Awareness of metadata-driven testing approaches and lineage-based validations. Experience working with agile testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest.
Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation. Understanding of DataOps methodologies and practices. Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch. Experience building or maintaining test data generators. Education and Professional Certifications: Bachelor's/Master's degree in Computer Science and Engineering preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
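The "reusable test components for schema validation, null checks, deduplication, threshold rules" this posting mentions are often built as data-driven rules that any pipeline's test suite can declare and share. The sketch below is one illustrative design, not any team's actual framework; the rule and column names are invented.

```python
# Illustrative reusable threshold-rule component for data-quality testing.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass(frozen=True)
class ThresholdRule:
    """A named metric with an upper bound; exceeding it fails the check."""
    name: str
    metric: Callable[[Sequence[dict]], float]
    max_value: float

def null_rate(column: str) -> Callable[[Sequence[dict]], float]:
    """Build a metric returning the fraction of rows where `column` is null."""
    def metric(rows: Sequence[dict]) -> float:
        return (sum(1 for r in rows if r.get(column) is None) / len(rows)
                if rows else 0.0)
    return metric

def evaluate(rows: Sequence[dict], rules: Sequence[ThresholdRule]):
    """Return (rule name, observed value) for every rule that exceeds its bound."""
    return [(rule.name, rule.metric(rows)) for rule in rules
            if rule.metric(rows) > rule.max_value]
```

A rule like `ThresholdRule("amount nulls", null_rate("amount"), 0.05)` is declared once as data, so different pipelines can reuse the same components while CI/CD only needs to call `evaluate`.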

Posted 1 month ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Work from Office

Role Description: We are seeking a highly skilled, hands-on Senior QA & Test Automation Specialist (Test Automation Engineer) with strong experience in data validation, ETL testing, test automation, and QA process ownership. This role combines deep technical execution with a solid foundation in QA best practices including test planning, defect tracking, and test lifecycle management. You will be responsible for designing and executing manual and automated test strategies for complex real-time and batch data pipelines, contributing to the design of automation frameworks, and ensuring high-quality data delivery across our AWS and Databricks-based analytics platforms. The role is highly technical and hands-on, with a strong focus on automation, metadata validation, and ensuring data governance practices are seamlessly integrated into development pipelines. Roles & Responsibilities: Collaborate with the QA Manager to design and implement end-to-end test strategies for data validation, semantic layer testing, and GraphQL API validation. Perform manual validation of data pipelines, including source-to-target data mapping, transformation logic, and business rule verification. Develop and maintain automated data validation scripts using Python and PySpark for both real-time and batch pipelines. Contribute to the design and enhancement of reusable automation frameworks, with components for schema validation, data reconciliation, and anomaly detection. Validate semantic layers (e.g., Looker, dbt models) and GraphQL APIs, ensuring data consistency, compliance with contracts, and alignment with business expectations. Write and manage test plans, test cases, and test data for structured, semi-structured, and unstructured data. Track, manage, and report defects using tools like JIRA, ensuring thorough root cause analysis and timely resolution.
Collaborate with Data Engineers, Product Managers, and DevOps teams to integrate tests into CI/CD pipelines and enable shift-left testing practices. Ensure comprehensive test coverage for all aspects of the data lifecycle, including ingestion, transformation, delivery, and consumption. Participate in QA ceremonies (standups, planning, retrospectives) and continuously contribute to improving the QA process and culture. Experience building or maintaining test data generators. Contributions to internal quality dashboards or data observability systems. Awareness of metadata-driven testing approaches and lineage-based validations. Experience working with agile testing methodologies such as Scaled Agile. Familiarity with automated testing frameworks like Selenium, JUnit, TestNG, or PyTest. Must-Have Skills: 6-9 years of experience in QA roles, with at least 3+ years of strong exposure to data pipeline testing and ETL validation. Strong in SQL, Python, and optionally PySpark; comfortable with writing complex queries and validation scripts. Practical experience with manual validation of data pipelines and source-to-target testing. Experience in validating GraphQL APIs, semantic layers (Looker, dbt, etc.), and schema/data contract compliance. Familiarity with data integration tools and platforms such as Databricks, AWS Glue, Redshift, Athena, or BigQuery. Strong understanding of test planning, defect tracking, bug lifecycle management, and QA documentation. Experience working in Agile/Scrum environments with standard QA processes. Knowledge of test case and defect management tools (e.g., JIRA, TestRail, Zephyr). Strong understanding of QA methodologies, test planning, test case design, and defect lifecycle management. Deep hands-on expertise in SQL, Python, and PySpark for testing and automating validation. Proven experience in manual and automated testing of batch and real-time data pipelines.
Familiarity with data processing and analytics stacks: Databricks, Spark, AWS (Glue, S3, Athena, Redshift). Experience with bug tracking and test management tools like JIRA, TestRail, or Zephyr. Ability to troubleshoot data issues independently and collaborate with engineering for root cause analysis. Experience integrating automated tests into CI/CD pipelines (e.g., Jenkins, GitHub Actions). Experience validating data from various file formats such as JSON, CSV, Parquet, and Avro. Strong ability to validate and automate data quality checks: schema validation, null checks, duplicates, thresholds, and transformation validation. Hands-on experience with API testing using Postman, pytest, or custom automation scripts. Good-to-Have Skills: Experience with data governance tools such as Apache Atlas, Collibra, or Alation. Familiarity with monitoring/observability tools such as Datadog, Prometheus, or CloudWatch. Education and Professional Certifications: Bachelor's/Master's degree in Computer Science and Engineering preferred. Soft Skills: Excellent analytical and troubleshooting skills. Strong verbal and written communication skills. Ability to work effectively with global, virtual teams. High degree of initiative and self-motivation. Ability to manage multiple priorities successfully. Team-oriented, with a focus on achieving team goals. Strong presentation and public speaking skills.
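The source-to-target testing described in this posting often starts with reconciling row counts, a numeric aggregate, and business-key coverage between the two sides. The sketch below uses SQLite so it is self-contained; real pipelines would run equivalent queries against Redshift, Athena, or Databricks, and all table and column names here are illustrative.

```python
# Illustrative source-to-target reconciliation using SQLite as a stand-in.
import sqlite3

def reconcile(conn, source, target, key, measure):
    """Compare row counts, a numeric aggregate, and key coverage between tables."""
    cur = conn.cursor()
    src_count = cur.execute(f"SELECT COUNT(*) FROM {source}").fetchone()[0]
    tgt_count = cur.execute(f"SELECT COUNT(*) FROM {target}").fetchone()[0]
    src_sum = cur.execute(f"SELECT COALESCE(SUM({measure}), 0) FROM {source}").fetchone()[0]
    tgt_sum = cur.execute(f"SELECT COALESCE(SUM({measure}), 0) FROM {target}").fetchone()[0]
    # Business keys present in source but missing from target.
    missing = [row[0] for row in cur.execute(
        f"SELECT {key} FROM {source} EXCEPT SELECT {key} FROM {target}")]
    return {"count_match": src_count == tgt_count,
            "sum_match": src_sum == tgt_sum,
            "missing_keys": missing}
```

Wrapping checks like these in pytest cases is one common way to get them running inside a Jenkins or GitHub Actions pipeline.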

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

What you will do Amgen India is a key contributor to the company's global digital transformation initiatives, delivering secure, compliant, and user-friendly digital solutions. As part of the Global Medical Technology Medical Information team, you will play a critical role in ensuring quality, compliance, and performance across Amgen's Medical Information platforms. In this role, you will be responsible for ensuring the quality and compliance of enterprise applications, with a strong focus on test planning, execution, defect management, and test automation. You will work closely with product owners, developers, and QA teams to validate systems against business and regulatory requirements. Your expertise in automation testing, along with manual testing knowledge, will contribute to delivering robust solutions across platforms such as Medical Information Websites and other validated systems. This role is critical in ensuring that applications meet performance, security, and compliance expectations across the development lifecycle, especially in GxP-regulated environments. Roles & Responsibilities: Design and develop manual and automated test cases based on functional and non-functional requirements. Create and maintain the Requirement Traceability Matrix (RTM) to ensure full test coverage. Assist in User Acceptance Testing (UAT), working closely with business stakeholders. Set up test data based on preconditions identified in test cases. Execute tests including Dry Runs, Operational Qualification (OQ), and regression testing. Capture test evidence and document test results in accordance with validation standards. Log and track defects using tools such as JIRA or HP ALM, ensuring clear documentation and traceability. Ensure defects are re-tested post-fix and verified for closure in collaboration with development teams. Execute test scripts provided by analysts, focusing on accuracy and completeness of testing.
Collaborate with developers to ensure all identified bugs and issues are addressed and resolved effectively. Support automation testing efforts by building and maintaining scripts in tools like Selenium, TestNG, or similar. Maintain detailed documentation of test cycles, supporting audit readiness and compliance. Ability to perform responsibilities in compliance with GxP and Computer System Validation (CSV) regulations, ensuring proper documentation, execution, and adherence to regulatory standards. What we expect of you We are all different, yet we all use our unique contributions to serve patients. Basic Qualifications: Master's degree and 1 to 3 years of Computer Science, IT, or related field experience OR Bachelor's degree and 3 to 5 years of Computer Science, IT, or related field experience OR Diploma and 7 to 9 years of Computer Science, IT, or related field experience. Preferred Qualifications: BS/MS in Computer Science, Information Systems, or related field. 3+ years of experience in application testing and automation frameworks. Strong knowledge of test lifecycle documentation including test plans, RTMs, OQ protocols, and summary reports. Hands-on experience with Selenium, TestNG, Postman, JMeter, or similar tools. Experience in validated (GxP) environments and with CSV practices. Familiarity with Agile/Scrum methodologies and continuous testing practices. Experience working in cross-functional teams, especially in healthcare or pharmaceutical domains. Knowledge of test management and defect tracking tools (e.g., HP ALM, JIRA, qTest). Must-Have Skills: Strong experience in designing and executing both manual and automated test cases based on functional and non-functional requirements. Hands-on expertise in building and maintaining automated functional, regression, and integration test scripts using tools such as Selenium, TestNG, Postman, or similar, enabling scalable and reusable test automation.
Proficient in using tools like JIRA, HP ALM, or qTest for defect logging, tracking, test execution, and traceability, ensuring audit readiness and compliance with validation standards. Familiarity with Jenkins, GitHub Actions, or similar tools for integrating testing into DevOps pipelines Develop, enhance, and maintain test automation scripts using data-driven, keyword-driven, or hybrid frameworks, enabling reusable, scalable, and maintainable test coverage across Salesforce and integrated systems. Proficient in creating test lifecycle documentation (RTMs, OQ protocols, test plans, reports) and supporting end-to-end testing within regulated GxP and CSV-compliant computer systems, ensuring adherence to validation, audit, and documentation standards Good-to-Have Skills: Familiarity with performance testing (e.g., JMeter) and API testing using tools like Postman to validate service-level functionality and performance. Experience working within Agile or Scrum frameworks and supporting continuous testing practices across sprint cycles and iterative releases. Proven ability to work effectively with cross-functional teams including developers, QA, and business stakeholders to support UAT, validate fixes, and ensure issue resolution throughout the development lifecycle. Background in testing applications in the healthcare or pharmaceutical industry, with an understanding of compliance, patient safety, and regulatory constraints. Ability to manage test data setup and maintain thorough documentation of test evidence to support compliance, audit preparation, and traceability. Experience collaborating with cross-functional teams, particularly within healthcare or pharmaceutical environments. 
Professional Certifications ISTQB Foundation or Advanced Level Certification Certified Automation Tester (e.g., Selenium WebDriver Certification) SAFe Agile or Scrum Certification (optional) Certified CSV/Validation Professional (optional) Soft Skills: Strong attention to detail and documentation quality Excellent analytical and problem-solving skills Good verbal and written communication skills Ability to work both independently and within a team High accountability and ownership of assigned tasks Adaptability to fast-paced and evolving environments Strong collaboration skills across functional and technical teams
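The Requirement Traceability Matrix responsibility this posting describes is, at its core, a mapping from requirements to the test cases that cover them, plus a check that nothing is left uncovered. A minimal, hypothetical sketch (requirement and test-case IDs are invented; real RTMs live in tools like HP ALM or JIRA):

```python
# Hypothetical RTM coverage check: every requirement needs >= 1 covering test.
def rtm_coverage(requirement_ids, test_cases):
    """Build a traceability matrix and list requirements with no covering test."""
    matrix = {req: [] for req in requirement_ids}
    for case in test_cases:
        for req in case["covers"]:
            if req in matrix:
                matrix[req].append(case["id"])
    uncovered = [req for req, cases in matrix.items() if not cases]
    return matrix, uncovered
```

Running a check like this in CI makes a coverage gap a build failure rather than an audit finding.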

Posted 1 month ago

Apply

0.0 - 3.0 years

3 - 6 Lacs

Hyderabad

Work from Office

The ideal candidate will have a deep understanding of automation, configuration management, and infrastructure-as-code principles, with a strong focus on Ansible. You will work closely with developers, system administrators, and other collaborators to automate infrastructure-related processes, improve deployment pipelines, and ensure consistent configurations across multiple environments. The Infrastructure Automation Engineer will be responsible for developing innovative self-service solutions for our global workforce and further enhancing our self-service automation built using Ansible. As part of a scaled Agile product delivery team, the Developer works closely with product feature owners, project collaborators, operational support teams, peer developers, and testers to develop solutions that enhance self-service capabilities and solve business problems by identifying requirements and conducting feasibility analysis, proofs of concept, and design sessions. The Developer serves as a subject matter expert on the design, integration, and operability of solutions to support innovation initiatives with business partners and shared services technology teams. Please note, this is an onsite role based in Hyderabad. Key Responsibilities: Automating repetitive IT tasks: Collaborate with multi-functional teams to gather requirements and build automation solutions for infrastructure provisioning, configuration management, and software deployment. Configuration Management: Design, implement, and maintain code including Ansible playbooks, roles, and inventories for automating system configurations and deployments and ensuring consistency. Ensure the scalability, reliability, and security of automated solutions. Troubleshoot and resolve issues related to automation scripts, infrastructure, and deployments. Perform infrastructure automation assessments and implementations, providing solutions to increase efficiency, repeatability, and consistency.
DevOps: Facilitate continuous integration and deployment (CI/CD). Orchestration: Coordinate multiple automated tasks across systems. Develop and maintain clear, reusable, and version-controlled playbooks and scripts. Manage and optimize cloud infrastructure using Ansible and Terraform automation (AWS, Azure, GCP, etc.). Continuously improve automation workflows and practices to enhance speed, quality, and reliability. Ensure that infrastructure automation adheres to best practices, security standards, and regulatory requirements. Document and maintain processes, configurations, and changes in the automation infrastructure. Participate in design reviews, client requirements sessions, and development teams to deliver features and capabilities supporting automation initiatives. Collaborate with product owners, collaborators, testers, and other developers to understand, estimate, prioritize, and implement solutions. Design, code, debug, document, deploy, and maintain solutions in a highly efficient and effective manner. Participate in problem analysis, code review, and system design. Remain current on new technology and apply innovation to improve functionality. Collaborate closely with collaborators and team members to configure, improve, and maintain current applications. Work directly with users to resolve support issues within product team responsibilities. Monitor health, performance, and usage of developed solutions. What we expect of you We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications: Bachelor's degree and 0 to 3 years of computer science, IT, or related field experience OR Diploma and 4 to 7 years of computer science, IT, or related field experience. Deep hands-on experience with Ansible, including playbooks, roles, and modules. Proven experience as an Ansible Engineer or in a similar automation role. Scripting skills in Python, Bash, or other programming languages. Proficiency in Terraform and CloudFormation for AWS infrastructure automation. Experience with other configuration management tools (e.g., Puppet, Chef). Experience with Linux administration, scripting (Python, Bash), and CI/CD tools (GitHub Actions, CodePipeline, etc.). Familiarity with monitoring tools (e.g., Dynatrace, Prometheus, Nagios). Experience working in an Agile (SAFe, Scrum, and Kanban) environment. Preferred Qualifications: Red Hat Certified Specialist in Developing with Ansible Automation Platform. Red Hat Certified Specialist in Managing Automation with Ansible Automation Platform. Red Hat Certified System Administrator. AWS Certified Solutions Architect Associate or Professional. AWS Certified DevOps Engineer Professional. Terraform Associate Certification. Good-to-Have Skills: Experience with Kubernetes (EKS) and service mesh architectures. Knowledge of AWS Lambda and event-driven architectures. Familiarity with AWS CDK, Ansible, or Packer for cloud automation. Exposure to multi-cloud environments (Azure, GCP). Experience operating within a validated systems environment (FDA, European Agency for the Evaluation of Medicinal Products, Ministry of Health, etc.). Soft Skills: Strong analytical and problem-solving skills. Effective communication and collaboration with multi-functional teams. Ability to work in a fast-paced, cloud-first environment. Shift Information: This position is an onsite role and may require working during later hours to align with business hours.
Candidates must be willing and able to work outside of standard hours as required to meet business needs.
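The Terraform work described in this role is often driven from Python glue scripts. As a hedged illustration (the bucket name, tags, and file layout below are hypothetical, not any employer's real infrastructure): Terraform accepts JSON-formatted configuration files (`*.tf.json`), so a stdlib-only script can generate one programmatically.

```python
import json

def s3_bucket_config(name: str, env: str) -> dict:
    """Build a minimal Terraform JSON (.tf.json) document declaring one
    S3 bucket. All values here are illustrative placeholders."""
    return {
        "resource": {
            "aws_s3_bucket": {
                "artifacts": {
                    "bucket": name,
                    "tags": {"Environment": env, "ManagedBy": "terraform"},
                }
            }
        }
    }

if __name__ == "__main__":
    config = s3_bucket_config("example-automation-artifacts", "dev")
    # Terraform picks up any *.tf.json file in the working directory,
    # so this output could be written to e.g. buckets.tf.json.
    print(json.dumps(config, indent=2))
```

Generating configuration this way keeps it version-controlled alongside the scripts that produce it, which fits the "version-controlled playbooks and scripts" expectation above.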

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Position Summary
The F5 NGINX Business Unit is seeking a DevOps Software Engineer III based in India. As a DevOps engineer, you will be an integral part of a development team delivering high-quality features for exciting next-generation NGINX SaaS products. In this position, you will play a key role in building automation, standardization, operations support, and tools to implement and support world-class products; you will design, build, and maintain infrastructure, services, and tools used by our developers, testers, and CI/CD pipelines. You will champion efforts to improve reliability and efficiency in these environments, and explore and lead efforts toward new strategies and architectures for pipeline services, infrastructure, and tooling. When necessary, you are comfortable wearing a developer hat to build a solution. You are passionate about automation and tools. You'll be expected to handle most development tasks independently, with minimal direct supervision.

Primary Responsibilities:
Collaborate with a globally distributed team to design, build, and maintain tools, services, and infrastructure that support product development, testing, and CI/CD pipelines for SaaS applications hosted on public cloud platforms.
Ensure DevOps infrastructure and services maintain the required level of availability, reliability, scalability, and performance.
Diagnose and resolve complex operational challenges involving network, security, and web technologies, including troubleshooting problems with HTTP load balancers, API gateways (e.g., NGINX proxies), and related systems.
Take part in product support, bug triaging, and bug-fixing activities on a rotating schedule to ensure the SaaS service meets its SLA commitments.
Consistently apply forward-thinking concepts relating to automation and CI/CD processes.

Skills:
Experience deploying infrastructure and services in one or more cloud environments such as AWS, Azure, or Google Cloud.
Experience with configuration management and deployment automation tools such as Terraform, Ansible, and Packer.
Experience with observability platforms such as Grafana and the Elastic Stack.
Experience with source control and CI/CD tools such as Git, GitLab CI, GitHub Actions, and AWS CodePipeline.
Proficiency in scripting languages such as Python and Bash.
Solid understanding of Unix operating systems.
Familiarity or experience with container and orchestration technologies such as Docker and Kubernetes.
Good understanding of computer networking (e.g., DNS, DHCP, TCP, IPv4/v6).
Experience with network service technologies (e.g., HTTP, gRPC, TLS, REST APIs, OpenTelemetry).

Qualifications:
Bachelor's or advanced degree, and/or equivalent work experience.
5+ years of experience in relevant roles.
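Meeting the SLA commitments mentioned in this role usually starts with knowing the error budget a given availability target allows. A minimal sketch of that arithmetic (the SLO percentages and 30-day period below are example figures, not F5's actual SLA terms):

```python
def error_budget_minutes(slo_percent: float, period_days: int = 30) -> float:
    """Return the allowed downtime, in minutes, for an availability SLO
    over the given period. Example figures only; real SLAs vary."""
    total_minutes = period_days * 24 * 60
    return total_minutes * (1 - slo_percent / 100.0)

if __name__ == "__main__":
    # e.g. a 99.9% SLO over 30 days leaves roughly 43.2 minutes of downtime
    for slo in (99.0, 99.9, 99.99):
        print(f"{slo}% over 30 days -> {error_budget_minutes(slo):.2f} min of downtime")
```

The same calculation is what alerting thresholds and on-call rotation policies are typically derived from: once the budget for the period is spent, changes slow down until reliability recovers.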

Posted 1 month ago

Apply

8.0 - 10.0 years

0 Lacs

Hyderabad

Work from Office

Shift timings: US Shift (6:00 PM to 4:00 AM IST)

Purpose
We are looking for a seasoned Senior QA Engineer with a strong foundation in both manual and automation testing. The ideal candidate will be highly detail-oriented, capable of understanding business requirements, and able to contribute to product quality through both hands-on testing and thoughtful feedback. You will work closely with product managers and developers to ensure high-quality releases by creating effective test cases, executing comprehensive test plans, and implementing automation for key workflows. Strong communication skills are essential, as you'll be responsible for clearly reporting QA status, identifying issues early, and proposing areas of improvement in the product.

Key Responsibilities:
Analyze business requirements and functional specs to create test cases and identify potential risks or improvements.
Perform manual testing for feature validation, regression testing, and exploratory testing.
Design, develop, and maintain automated test scripts using tools such as Selenium.
Participate in all phases of the software development lifecycle to ensure quality is embedded throughout the process.
Collaborate with developers and product owners to reproduce and debug issues, ensuring proper resolution.
Execute tests in CI/CD pipelines and maintain automation scripts in branching workflows.
Provide clear and concise QA status updates using metrics and dashboards for each release.
Document test results and defects, and maintain accurate test records.
Suggest functional and UX improvements based on user flows and business context.

Required Qualifications:
8+ years of experience in software quality assurance, with a solid mix of manual and automation testing.
Proficiency in automated testing frameworks using Selenium.
Experience working with version control and CI tools (e.g., Git, Jenkins, GitHub Actions).
Ability to analyze business flows, ask the right questions, and identify edge cases.
Strong experience in functional, regression, and integration testing.
Clear understanding of test planning, test case design, and bug lifecycle management.
Excellent written and verbal communication skills, with an ability to convey quality issues clearly and constructively.
Strong attention to detail and commitment to product quality.
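Regression suites of the kind this role describes are usually structured so the same assertions run locally and in CI pipelines. A minimal sketch using Python's stdlib `unittest` (the `apply_discount` function under test is hypothetical, standing in for real application logic):

```python
import unittest

def apply_discount(price: float, percent: float) -> float:
    """Hypothetical function under test: apply a percentage discount,
    rejecting out-of-range inputs."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    """Regression cases: the happy path plus edge cases caught in earlier releases."""

    def test_typical_discount(self):
        self.assertEqual(apply_discount(200.0, 25), 150.0)

    def test_zero_and_full_discount_edge_cases(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)
        self.assertEqual(apply_discount(99.99, 100), 0.0)

    def test_invalid_percent_is_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(50.0, 120)

if __name__ == "__main__":
    # exit=False lets the suite run inside a larger script without
    # terminating the interpreter on completion.
    unittest.main(exit=False, verbosity=2)
```

The same file runs unchanged under a CI runner (e.g., `python -m unittest` in a Jenkins or GitHub Actions step), which is what "maintain automation scripts in branching workflows" amounts to in practice.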

Posted 1 month ago

Apply