5.0 - 9.0 years
6 - 9 Lacs
Mumbai, New Delhi, Bengaluru
Work from Office
Experience: 5+ years
Expected Notice Period: 30 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote, New Delhi, Bengaluru, Mumbai

We are seeking a seasoned DevOps Architect / Senior Engineer with deep expertise in AWS, EKS, Terraform, Infrastructure as Code, and MongoDB Atlas to lead the design, implementation, and management of our cloud-native infrastructure. This is a hands-on leadership role focused on ensuring the scalability, reliability, security, and efficiency of our production-grade systems.

Key Responsibilities:

Cloud Infrastructure Design & Management (AWS)
- Architect, build, and manage secure, scalable AWS infrastructure (VPC, EC2, S3, IAM, Security Groups).
- Implement secure cloud networking and ensure high availability.
- Monitor, optimize, and troubleshoot AWS environments.

Container Orchestration (AWS EKS)
- Deploy and manage production-ready EKS clusters, including workload deployments, scaling (manual and via Karpenter), monitoring, and security.
- Maintain CI/CD pipelines for Kubernetes applications.

Infrastructure as Code (IaC)
- Lead development of Terraform-based IaC modules (clean, reusable, and secure).
- Manage Terraform state and promote best practices (modularization, code reviews).
- Extend IaC to multi-cloud (Azure, GCP) and leverage CloudFormation or Bicep when needed.

Programming, Automation & APIs
- Develop automation scripts using Python, Bash, or PowerShell.
- Design, secure, and manage APIs (AWS API Gateway, optionally Azure API Management).
- Integrate systems/services via APIs and event-driven architecture.
- Troubleshoot and resolve infrastructure or deployment issues.

Database Management
- Administer MongoDB Atlas: setup, configuration, performance tuning, backup, and security.
- Implement best practices for high availability and resilience.

DevOps Leadership & Strategy
- Define and promote DevOps best practices across the organization.
- Automate and streamline development-to-deployment workflows.
- Mentor junior engineers and foster a culture of technical excellence.
- Stay ahead of emerging DevOps and Cloud trends.

Mandatory Skills:

Cloud Administration (AWS)
- VPC design (subnets, route tables, NAT/IGW, peering).
- IAM (users, roles, policies with least-privilege enforcement).
- Deep AWS service knowledge and administrative experience.

Container Orchestration (AWS EKS)
- Production-grade EKS cluster setup and upgrades.
- Workload autoscaling using Karpenter.
- Logging/monitoring via Prometheus, Grafana, CloudWatch.
- Secure EKS practices: RBAC, PSP/PSA, admission controllers, secret management.

CI/CD & Kubernetes
- Experience with Jenkins, GitLab CI, ArgoCD, Flux.
- Microservices deployment and Kubernetes cluster federation knowledge.

Infrastructure as Code
- Expert in Terraform (HCL, modules, backends, security).
- Familiarity with CloudFormation and Bicep for cross-cloud support.
- Git-based version control and CI/CD integration.
- Automated infrastructure provisioning.

Programming & API
- Proficient in Python, Bash, PowerShell.
- Secure API design, development, and management.

Database Management
- Proven MongoDB Atlas administration: scaling, backups, alerts, and performance monitoring.

Good to Have Skills:

Infrastructure & OS
- Server and virtualization management (Linux/Windows).
- OS security hardening and automation.
- Disaster Recovery planning and implementation.
- Docker containerization.

Networking & Security
- Advanced networking (DNS, BGP, routing).
- Software-Defined Networking (SDN), hybrid networking.
- Zero Trust Architecture.
- Load balancer (ALB/ELB/NLB) security and WAF management.
- Compliance: ISO 27001, SOC 2, PCI-DSS.
- Secrets management (Vault, AWS Secrets Manager).

Observability & Automation
- OpenTelemetry, LangTrace for observability.
- AI-powered automation (e.g., CrewAI).
- SIEM/security monitoring.

Cloud Governance
- Cost optimization strategies.
- AWS Well-Architected Framework familiarity.
- Incident response, governance, and compliance management.
Qualifications & Experience
- Bachelor's degree in Computer Science, Engineering, or equivalent practical experience.
- 5+ years in DevOps / SRE / Cloud Engineering with an AWS focus.
- 5+ years of hands-on experience with EKS and Terraform.
- Proven experience with cloud-native architecture and automation.
- AWS certifications (DevOps Engineer Professional, Solutions Architect Professional) preferred.
- Agile/Scrum experience a plus.
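The VPC design skill listed above (subnets, route tables, peering) often starts with carving a VPC CIDR into per-AZ subnet pairs. A minimal Python sketch using only the standard library; the layout, names, and /24 sizing are illustrative assumptions, not taken from the posting:

```python
import ipaddress

def plan_subnets(vpc_cidr: str, az_count: int, new_prefix: int = 24):
    """Split a VPC CIDR into one public/private subnet pair per AZ."""
    subnets = list(ipaddress.ip_network(vpc_cidr).subnets(new_prefix=new_prefix))
    if len(subnets) < 2 * az_count:
        raise ValueError("VPC CIDR too small for the requested layout")
    plan = {}
    for i in range(az_count):
        plan[f"az{i}"] = {
            # public subnets first, then private ones, by convention here
            "public": str(subnets[i]),
            "private": str(subnets[az_count + i]),
        }
    return plan

print(plan_subnets("10.0.0.0/16", 3))
```

A plan like this would typically feed a Terraform module's subnet variables rather than be applied directly.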
Posted 1 month ago
5.0 - 9.0 years
20 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
- Design, develop, and maintain data pipelines using GCP services: Dataflow, BigQuery, and Pub/Sub
- Provision infrastructure on GCP using IaC with Terraform
- Implement and manage data warehouse solutions
- Monitor and resolve issues in data workflows

Required Candidate Profile
- Expertise in GCP, Apache Beam, Dataflow, and BigQuery
- Proficient in Python, SQL, and PySpark
- Experience with Cloud Composer for orchestration
- Solid understanding of DWH, ETL pipelines, and real-time data streaming
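The streaming work described above typically reduces to windowed aggregation. A pure-Python stand-in for the tumbling-window sums a Dataflow/Apache Beam job would compute; the (timestamp, value) event shape is an assumption for illustration:

```python
from collections import defaultdict

def tumbling_window_sums(events, window_secs):
    """Group (epoch_ts, value) events into fixed-size windows and sum values."""
    sums = defaultdict(float)
    for ts, value in events:
        # align the timestamp down to the start of its window
        window_start = ts - (ts % window_secs)
        sums[window_start] += value
    return dict(sums)

events = [(0, 1.0), (5, 2.0), (61, 3.0)]
print(tumbling_window_sums(events, 60))  # {0: 3.0, 60: 3.0}
```

In Beam the same idea is expressed declaratively with `FixedWindows` plus a combiner; this sketch just makes the window math explicit.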
Posted 1 month ago
3.0 - 7.0 years
15 - 22 Lacs
Gurugram, Bengaluru
Work from Office
Role & Responsibilities
- Design, implement, and maintain AWS infrastructure using Terraform (IaC, Infrastructure as Code).
- Manage CI/CD pipelines and automate operational tasks using tools like Jenkins, GitHub Actions, or CodePipeline.
- Monitor infrastructure health using CloudWatch, Prometheus, Grafana, etc., and handle alerting with PagerDuty or similar tools.
- Implement and maintain backup, disaster recovery, and high-availability strategies in AWS.
- Manage VPCs, subnets, routing, security groups, and IAM roles and policies.
- Perform cost optimization and rightsizing of AWS resources.
- Ensure security compliance and apply cloud security best practices (e.g., encryption, access control).
- Collaborate with development and security teams to support application deployment and governance.

Preferred Candidate Profile
- 3+ years of hands-on experience in AWS Cloud (EC2, S3, IAM, RDS, Lambda, EKS/ECS, VPC, etc.).
- 2+ years of experience with Terraform and a strong understanding of IaC principles.
- Hands-on experience with Linux system administration and scripting (Bash, Python).
- Experience with DevOps tools such as Git, Docker, Jenkins, or similar.
- Proficiency in monitoring/logging tools like CloudWatch, ELK stack, Datadog, or New Relic.
- Familiarity with incident management, change management, and postmortem analysis processes.
- Knowledge of networking, DNS, TLS/SSL, firewalls, and cloud security concepts.
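The rightsizing duty above usually begins with utilization data. A toy rule, purely illustrative: flag instances whose peak CPU stays under a threshold as downsizing candidates (the threshold, metric shape, and instance IDs are assumptions; real decisions would also weigh memory, network, and burst patterns):

```python
def rightsizing_candidates(metrics, peak_cpu_threshold=40.0):
    """metrics: {instance_id: [CPU samples in %]} -> sorted ids safe to downsize."""
    return sorted(
        iid for iid, samples in metrics.items()
        if samples and max(samples) < peak_cpu_threshold
    )

metrics = {"i-web1": [12.0, 35.5, 22.1], "i-db1": [55.0, 80.2, 61.3]}
print(rightsizing_candidates(metrics))  # ['i-web1']
```

In practice the samples would come from CloudWatch `GetMetricStatistics` over a representative window rather than a hand-built dict.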
Posted 1 month ago
3.0 - 8.0 years
14 - 16 Lacs
Thane, Navi Mumbai, Mumbai (All Areas)
Hybrid
Requirements
- Bachelor's degree in computer science, engineering, or a related field.
- Strong experience designing, deploying, and managing Microsoft Azure infrastructure.
- Proficiency in Azure services such as Azure Virtual Machines, Azure Networking, Azure Storage, Azure Active Directory, etc.
- Hands-on experience with Azure Resource Manager (ARM) templates and Infrastructure as Code (IaC) tools like Terraform.
- Knowledge of networking concepts including TCP/IP, DNS, VPN, and firewalls.
- Knowledge of the main operating systems (Windows, Linux).
- Familiarity with cloud security best practices and compliance standards.
- Knowledge of one or more scripting languages such as Bash, PowerShell, Python, etc.
- Excellent problem-solving and troubleshooting skills.
- Strong communication and collaboration abilities.
- Fluent in written English; good oral communication skills.
Posted 1 month ago
6.0 - 11.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 6 to 11+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker.
- Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies.
- Use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
- Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
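The backup-automation duty above mentions S3 lifecycle rules. A hedged sketch of one such rule built as a plain dict (the prefix, tiering days, and storage classes are assumptions; the dict shape matches the lifecycle-configuration payload the S3 API accepts):

```python
def backup_lifecycle_rule(prefix, ia_after_days=30,
                          glacier_after_days=90, expire_after_days=365):
    """Build one S3 lifecycle rule: tier to IA, then Glacier, then expire."""
    return {
        "ID": f"backup-tiering-{prefix.strip('/')}",
        "Status": "Enabled",
        "Filter": {"Prefix": prefix},
        "Transitions": [
            {"Days": ia_after_days, "StorageClass": "STANDARD_IA"},
            {"Days": glacier_after_days, "StorageClass": "GLACIER"},
        ],
        "Expiration": {"Days": expire_after_days},
    }

print(backup_lifecycle_rule("db-backups/"))
```

A rule like this would be passed inside `{"Rules": [...]}` to `put_bucket_lifecycle_configuration` via boto3, or expressed as the equivalent Terraform resource.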
Posted 1 month ago
5.0 - 10.0 years
15 - 27 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Need someone with strong Python and AWS automation experience, ideally with a migration background or DevOps orchestration experience.

Role & Responsibilities
- Engage with customers' application and infrastructure teams to understand existing runbooks and migration steps.
- Design, develop, test, and deploy automation scripts for each migration workflow using Python and Terraform.
- Implement AWS Lambda-based orchestration and integrate with Step Functions, EventBridge, and other AWS-native tools.
- Translate manual activities into reusable automation modules, progressively replacing stepwise manual efforts with Lambda-driven automation.
- Collaborate during cutover planning to identify automatable activities and convert them into secure, scalable workflows.
- Handle diverse workloads spanning geographies with an emphasis on reliability and compliance.
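The Lambda-driven orchestration described above usually means one handler per runbook step, invoked by a Step Functions state machine. A minimal sketch under assumptions: the event/return shapes and step names are invented for illustration, not taken from the posting:

```python
def lambda_handler(event, context):
    """Run one named migration step and report status to the state machine."""
    step = event.get("step")
    # each runbook step maps to a callable; real steps would call boto3
    runbook = {
        "snapshot_source": lambda: "snapshot-id-123",
        "cutover_dns": lambda: "dns-updated",
    }
    if step not in runbook:
        # returning a FAILED status lets a Choice state branch to rollback
        return {"step": step, "status": "FAILED", "error": "unknown step"}
    return {"step": step, "status": "SUCCEEDED", "output": runbook[step]()}

print(lambda_handler({"step": "cutover_dns"}, None))
```

Step Functions would chain such handlers as Task states, with retries and a Catch on the FAILED path.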
Posted 1 month ago
10.0 - 15.0 years
15 - 30 Lacs
Thiruvananthapuram
Work from Office
Job Summary: We are seeking an experienced DevOps Architect to drive the design, implementation, and management of scalable, secure, and highly available infrastructure. The ideal candidate should have deep expertise in DevOps practices, CI/CD pipelines, cloud platforms, and infrastructure automation across multiple cloud environments, along with strong leadership and mentoring capabilities.

Job Duties and Responsibilities
- Lead and manage the DevOps team to ensure reliable infrastructure and automated deployment processes.
- Design, implement, and maintain highly available, scalable, and secure cloud infrastructure (AWS, Azure, GCP, etc.).
- Develop and optimize CI/CD pipelines for multiple applications and environments.
- Drive Infrastructure as Code (IaC) practices using tools like Terraform, CloudFormation, or Ansible.
- Oversee monitoring, logging, and alerting solutions to ensure system health and performance.
- Collaborate with Development, QA, and Security teams to integrate DevOps best practices across the SDLC.
- Lead incident management and root cause analysis for production issues.
- Ensure robust security practices for infrastructure and pipelines (secrets management, vulnerability scanning, etc.).
- Guide and mentor team members, fostering a culture of continuous improvement and technical excellence.
- Evaluate and recommend new tools, technologies, and processes to improve operational efficiency.

Required Qualifications

Education
- Bachelor's degree in Computer Science, IT, or a related field; Master's preferred.
- At least two current cloud certifications (e.g., AWS Solutions Architect, Azure Administrator, GCP DevOps Engineer, CKA, Terraform).

Experience
- 10+ years of relevant experience in DevOps, Infrastructure, or Cloud Operations.
- 5+ years of experience in a technical leadership or team lead role.

Knowledge, Skills & Abilities
- Expertise in at least two major cloud platforms: AWS, Azure, or GCP.
- Strong experience with CI/CD tools such as Jenkins, GitLab CI, Azure DevOps, or similar.
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform, Ansible, or CloudFormation.
- Proficient in containerization and orchestration using Docker and Kubernetes.
- Strong knowledge of monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
- Scripting knowledge in languages like Python, Bash, or Go.
- Solid understanding of networking, security, and system administration.
- Experience implementing security best practices across DevOps pipelines.
- Proven ability to mentor, coach, and lead technical teams.

Preferred Skills
- Experience with serverless architecture and microservices deployment.
- Experience with security tools and best practices (e.g., IAM, VPNs, firewalls, cloud security posture management).
- Exposure to hybrid cloud or multi-cloud environments.
- Knowledge of cost optimization and cloud governance strategies.
- Experience working in Agile teams and managing infrastructure in production-grade environments.
- Relevant certifications (AWS Certified DevOps Engineer, Azure DevOps Expert, CKA, etc.).

Working Conditions
- Work Arrangement: An occasionally hybrid opportunity based out of our Trivandrum office.
- Travel Requirements: Occasional travel may be required for team meetings, user research, or conferences.
- On-Call Requirements: A light on-call rotation may be required depending on operational needs.
- Hours of Work: Monday to Friday, 40 hours per week, with overlap with PST as needed.

Living AOT's Values
Our values guide how we work, collaborate, and grow as a team. Every role at AOT is expected to embody and promote these values:
- Innovation: We pursue true innovation by solving problems and meeting unarticulated needs.
- Integrity: We hold ourselves to high ethical standards and never compromise.
- Ownership: We are all responsible for our shared long-term success.
- Agility: We stay ready to adapt to change and deliver results.
- Collaboration: We believe collaboration and knowledge-sharing fuel innovation and success.
- Empowerment: We support our people so they can bring the best of themselves to work every day.
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Job Description:

Required Skills & Experience
• 5 to 12 years of experience in DevOps with a strong focus on Azure.
• Hands-on experience with Azure networking, VNET integration, service principals, and firewall rules.
• Hands-on experience creating architecture and deployment diagrams.
• Expertise in CI/CD pipeline development for Databricks and ML models using Azure DevOps, Terraform, or GitHub Actions.
• Familiarity with Databricks Asset Bundles (DAB) for packaging and deployment.
• Proficiency in RBAC, Unity Catalog, and workspace access control.
• Experience with Infrastructure as Code (IaC) tools like Terraform and ARM templates.
• Strong scripting skills in Python, Bash, or PowerShell.
• Strong experience with monitoring tools (Azure Monitor, Prometheus, or Datadog).

Roles & Responsibilities
We are seeking a Senior/Lead DevOps Engineer (Databricks) with strong experience in Azure Databricks to design, implement, and optimize infrastructure, CI/CD pipelines, and ML model deployment. The ideal candidate will be responsible for Azure environment setup, networking, cluster management, access control, CI/CD automation, model deployment, asset bundle management, and monitoring, while leading a team of 3-4 DevOps engineers. The candidate should be able to create flow, deployment, and architecture diagrams, and should understand customer requirements and share them with team members. This role requires hands-on experience with DevOps best practices, infrastructure automation, and cloud-native architectures.
Posted 1 month ago
5.0 - 10.0 years
20 - 35 Lacs
Pune, Bengaluru, Delhi / NCR
Work from Office
Job Description:

Required Skills & Experience
- 5+ years of experience in Data Platform Administration, with 2+ years specifically on Databricks.
- Strong hands-on experience with Databricks on Azure.
- Strong knowledge of Spark, Delta Lake, and notebook lifecycle management.
- Proficient in scripting languages (Python, Bash, PowerShell).
- Experience with CI/CD tools (e.g., Azure DevOps, GitHub Actions, Jenkins).
- Familiarity with data governance, RBAC, and security practices in Databricks.
- Excellent problem-solving, communication, and leadership skills.

Preferred Qualifications
- Databricks Certified Associate/Professional Administrator or Data Engineer.
- Experience with infrastructure as code (Terraform, ARM templates).
- Exposure to MLOps and integration with MLflow.
- Experience managing multi-tenant or enterprise-scale Databricks environments.

Roles & Responsibilities

Platform Administration
- Install, configure, and maintain Databricks workspaces.
- Manage user access, cluster policies, pools, libraries, and workspace configurations.
- Monitor performance, perform tuning, and resolve operational issues.
- Oversee job scheduling, automation, and alerts.

Team Leadership
- Lead a team of junior administrators and engineers in delivering high-performance Databricks solutions.
- Assign and track tasks, ensuring project milestones and SLA targets are met.
- Mentor team members and promote best practices in Databricks administration and data governance.

Operational Support
- Respond to incidents and service requests related to Databricks.
- Coordinate with cloud platform teams (Azure) for networking, security, and integration requirements.
- Develop documentation for platform usage, incident handling, and standard operating procedures.

Optimization & Innovation
- Analyze usage patterns and recommend cost optimization strategies.
- Research and implement new Databricks features and updates.
- Propose automation and CI/CD practices for efficient platform management.
Posted 1 month ago
3.0 - 6.0 years
13 - 14 Lacs
Bengaluru
Hybrid
Hi all, we are looking for a Snowflake and Azure/AWS Admin.
Experience: 3 - 6 years
Notice period: Immediate - 15 days
Location: Bengaluru

Requirement for a Snowflake, Azure/AWS admin and DevOps resource for DTNA. JD below:
• Azure Administration: Azure resource deployment/management using IaC (Bicep or Terraform), Azure networking, IAM.
• AWS Administration: AWS resource deployment/management using IaC (Terraform, AWS CLI, CloudFormation), AWS networking, IAM.
• Azure DevOps: Automation of IaC using Azure Pipelines; collaborate with development teams to support CI/CD processes.
• Bicep: Creating IaC code for Azure resources.
• Terraform: Creating IaC code for Azure/AWS resources.
• GitHub Actions: Automation of IaC using GitHub Actions.
• Snowflake: Management of Snowflake objects such as databases, tables, tasks, integrations, and pipes; access management.
• schemachange: Creating IaC code for deploying/managing Snowflake objects.
• Matillion: Administration of Matillion Designer; management of Matillion agents.
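schemachange, mentioned above, deploys versioned Snowflake scripts named by convention (e.g., V1_2__add_pipe.sql). A small CI check for that naming; the exact regex here is an assumption to adjust to a repo's own rules:

```python
import re

# versioned scripts: 'V' + dotted/underscored version + '__' + description + '.sql'
_VERSIONED = re.compile(r"^V\d+(?:[._]\d+)*__\w+\.sql$")

def is_valid_versioned_script(filename: str) -> bool:
    """Return True if filename follows the versioned-script naming convention."""
    return bool(_VERSIONED.match(filename))

print(is_valid_versioned_script("V1_2__add_pipe.sql"))  # True
print(is_valid_versioned_script("create_table.sql"))    # False
```

Running a check like this in the pipeline before schemachange itself catches misnamed scripts that would otherwise be silently skipped.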
Posted 1 month ago
7.0 - 12.0 years
11 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled DevOps Engineer to join our dynamic development team. In this role, you will be responsible for designing, developing, and maintaining both frontend and backend components of our applications using DevOps and associated technologies. You will collaborate with cross-functional teams to deliver robust, scalable, and high-performing software solutions that meet our business needs. The ideal candidate will have a strong background in DevOps, experience with modern frontend frameworks, and a passion for full-stack development.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 7 to 12+ years of experience in full-stack development, with a strong focus on DevOps.

DevOps with AWS Data Engineer - Roles & Responsibilities:
- Use AWS services like EC2, VPC, S3, IAM, RDS, and Route 53.
- Automate infrastructure using Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Build and maintain CI/CD pipelines using tools such as AWS CodePipeline, Jenkins, and GitLab CI/CD.
- Automate build, test, and deployment processes for Java applications.
- Use Ansible, Chef, or AWS Systems Manager for managing configurations across environments.
- Containerize Java apps using Docker.
- Deploy and manage containers using Amazon ECS, EKS (Kubernetes), or Fargate.
- Monitoring and logging using Amazon CloudWatch, Prometheus + Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), and AWS X-Ray for distributed tracing.
- Manage access with IAM roles/policies.
- Use AWS Secrets Manager / Parameter Store for managing credentials.
- Enforce security best practices, encryption, and audits.
- Automate backups for databases and services using AWS Backup, RDS Snapshots, and S3 lifecycle rules.
- Implement Disaster Recovery (DR) strategies.
- Work closely with development teams to integrate DevOps practices.
- Document pipelines, architecture, and troubleshooting runbooks.
- Monitor and optimize AWS resource usage.
- Use AWS Cost Explorer, Budgets, and Savings Plans.

Must-Have Skills:
- Experience working on Linux-based infrastructure.
- Excellent understanding of Ruby, Python, Perl, and Java.
- Configuring and managing databases such as MySQL and MongoDB.
- Excellent troubleshooting skills.
- Selecting and deploying appropriate CI/CD tools.
- Working knowledge of various tools, open-source technologies, and cloud services.
- Awareness of critical concepts in DevOps and Agile principles.
- Managing stakeholders and external interfaces.
- Setting up tools and required infrastructure.
- Defining and setting development, testing, release, update, and support processes for DevOps operation.
- Technical skills to review, verify, and validate the software code developed in the project.

Interview Mode: F2F for candidates residing in Hyderabad; Zoom for other states
Location: 43/A, MLA Colony, Road No. 12, Banjara Hills, 500034
Time: 2 - 4 pm
Posted 1 month ago
8.0 - 13.0 years
15 - 30 Lacs
Hyderabad
Work from Office
AWS Kubernetes Engineer

Job Summary: The AWS Kubernetes Engineer will be responsible for designing, implementing, and managing highly scalable, reliable, and secure cloud-native applications and infrastructure on Amazon Web Services (AWS) using Kubernetes. This role requires deep expertise in containerization, orchestration, cloud computing, and DevOps practices to build and maintain automated, production-grade environments for critical applications.

Key Responsibilities:

Kubernetes Cluster Management & Orchestration
- Design, deploy, and manage highly available and fault-tolerant Kubernetes clusters on AWS using services like Amazon Elastic Kubernetes Service (EKS) or self-managed Kubernetes.
- Implement and optimize container orchestration strategies, including deployment management, scaling, auto-healing, and resource allocation.
- Manage Kubernetes components such as the API server, scheduler, controller manager, kubelet, and etcd, ensuring their health and performance.
- Utilize Kubernetes networking (CNI, service mesh like Istio or App Mesh) and storage solutions (CSI, EBS, EFS) for containerized applications.

AWS Cloud Infrastructure Development
- Provision, configure, and manage AWS cloud resources (e.g., EC2, VPC, ALB, RDS, S3, IAM, CloudWatch) to support Kubernetes deployments and application infrastructure.
- Implement Infrastructure as Code (IaC) using tools like Terraform or AWS CloudFormation to automate the provisioning and management of cloud resources.
- Ensure secure and compliant cloud environments by implementing best practices for network security, access control, and data encryption.

CI/CD & DevOps Automation
- Design, implement, and maintain robust Continuous Integration and Continuous Delivery (CI/CD) pipelines for containerized applications using tools like Jenkins, GitLab CI/CD, GitHub Actions, or AWS CodePipeline/CodeBuild.
- Automate deployment, testing, and release processes to ensure rapid and reliable software delivery.
- Develop and maintain scripts (e.g., Python, Bash) and automation tools to streamline operational tasks and improve efficiency.

Monitoring, Logging & Troubleshooting
- Implement comprehensive monitoring and alerting solutions for Kubernetes clusters and applications using tools such as Prometheus, Grafana, Datadog, or AWS CloudWatch Container Insights.
- Establish centralized logging solutions (e.g., ELK Stack, Fluentd, Sumo Logic) for effective troubleshooting and operational visibility.
- Proactively identify and resolve complex infrastructure and application issues within Kubernetes and AWS environments, including performance bottlenecks and stability concerns.

Security & Compliance
- Implement security best practices for Kubernetes (e.g., network policies, RBAC, pod security policies, secrets management with AWS Secrets Manager or Vault).
- Ensure compliance with industry standards and internal security policies.
- Conduct regular security audits and vulnerability assessments of the containerized environment.
- Collaborate with security teams to integrate security into the CI/CD pipeline.

Required Qualifications:
- Bachelor's degree in Computer Science, Software Engineering, or a closely related field.
- 6 to 8 years of progressive experience in DevOps, Cloud Engineering, or Infrastructure Engineering roles.
- At least 3 years of hands-on experience with Kubernetes, including cluster deployment, management, and troubleshooting.
- Strong experience with Amazon Web Services (AWS), including foundational services (EC2, VPC, S3, IAM) and container services (EKS, ECS).
- Proficiency in scripting languages such as Python or Bash.
- Experience with Infrastructure as Code (IaC) tools like Terraform or AWS CloudFormation.
- Solid understanding of CI/CD principles and experience with relevant tools.
- Familiarity with monitoring and logging tools for distributed systems.
- Strong understanding of networking concepts (TCP/IP, DNS, load balancing) and their application in cloud and container environments.
- Excellent problem-solving, analytical, and communication skills.
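The network-policy security practice mentioned above commonly starts from a default-deny baseline. A sketch that builds such a NetworkPolicy manifest as a plain dict (the namespace name is an assumption); the empty `podSelector` selects all pods in the namespace:

```python
import json

def default_deny_ingress(namespace: str) -> dict:
    """Build a default-deny ingress NetworkPolicy manifest for a namespace."""
    return {
        "apiVersion": "networking.k8s.io/v1",
        "kind": "NetworkPolicy",
        "metadata": {"name": "default-deny-ingress", "namespace": namespace},
        # empty podSelector matches every pod; listing Ingress with no
        # ingress rules denies all inbound traffic by default
        "spec": {"podSelector": {}, "policyTypes": ["Ingress"]},
    }

print(json.dumps(default_deny_ingress("payments"), indent=2))
```

Serialized to YAML or JSON, this can be applied with `kubectl apply -f -`; allow-rules for specific workloads are then layered on top.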
Posted 1 month ago
5.0 - 8.0 years
18 - 30 Lacs
Gurugram, Bengaluru
Hybrid
Roles and Responsibilities
- Collaborate with cross-functional teams to identify security requirements and develop effective IAM strategies.
- Design, implement, and maintain Identity & Access Management (IAM) solutions using tools such as Ansible, AWS, GCP, Docker, Kubernetes, and Java.
- Develop automated deployment scripts using Terraform or similar technologies to ensure consistent infrastructure configuration across environments.
- Troubleshoot complex issues related to identity management systems and provide timely resolutions.

Desired Candidate Profile
- 4-8 years of experience in cloud-based IAM engineering with expertise in at least two public clouds (AWS/GCP).
- Strong understanding of API integration concepts for seamless data exchange between systems.
- Proficiency in scripting languages like Python or Go for automating tasks and workflows.
- Experience working with containerization platforms like Docker and orchestration engines like Kubernetes.

Notice period: Immediate to serving notice period (until month end).
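The IAM strategy work above often boils down to generating least-privilege policy documents. A hedged sketch building a read-only S3 policy as a dict (bucket name and action set are illustrative placeholders; the dict shape follows the AWS IAM policy grammar):

```python
import json

def read_only_bucket_policy(bucket: str) -> dict:
    """Build a least-privilege IAM policy granting read-only access to one bucket."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # ListBucket applies to the bucket ARN, GetObject to objects in it
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                f"arn:aws:s3:::{bucket}",
                f"arn:aws:s3:::{bucket}/*",
            ],
        }],
    }

print(json.dumps(read_only_bucket_policy("audit-logs")))
```

Generating policies from code like this (or from Terraform data sources) keeps them reviewable and consistent across environments.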
Posted 1 month ago
5.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Work from Office
We are looking for a skilled Azure DevOps Engineer with 5 to 8 years of experience in the field. The ideal candidate should have strong analytical and excellent communication skills, with experience in Azure Cloud, DevOps, Terraform (IaC), and data warehousing.

Roles and Responsibility
- Managing stakeholders and external interfaces effectively.
- Coordinating and communicating within the team and with customers.
- Working with client teams to achieve project goals.
- Implementing the Scrum Agile methodology in projects.
- Analyzing and resolving technical issues efficiently.
- Collaborating with cross-functional teams to deliver high-quality solutions.

Job Requirements
- Experience as an Azure DevOps Engineer.
- Strong knowledge of Azure Cloud, DevOps, Terraform (IaC), and data warehousing.
- Excellent analytical and communication skills.
- Ability to work with client teams and manage stakeholders.
- Familiarity with the Scrum Agile methodology.
- Strong problem-solving skills and attention to detail.
Posted 1 month ago
6.0 - 8.0 years
15 - 25 Lacs
Hyderabad
Hybrid
We are seeking a highly skilled and experienced Senior Cloud Infrastructure Engineer - GCP to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecture. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Google Cloud Platform (GCP).
- Develop and maintain Infrastructure as Code (IaC) using Terraform for enterprise-scale deployments.
- Utilize Kubernetes deployment tools such as Helm/Kustomize for container orchestration and management.
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments.
- Ensure auditability and observability of pipeline states.
- Implement security best practices, audit, and compliance requirements within the infrastructure.
- Provide technical leadership, mentorship, and training to engineering staff.
- Engage with clients to understand their technical and business requirements, and provide tailored solutions.
- If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs.
- Troubleshoot and resolve complex infrastructure issues.
- Potentially participate in pre-sales activities and provide technical expertise to sales teams.

Qualifications:
- 5+ years of experience in an Infrastructure Engineer or similar role.
- Extensive experience with Google Cloud Platform (GCP).
- Proven ability to architect for scale, availability, and high-performance workloads.
- Deep knowledge of Infrastructure as Code (IaC) with Terraform.
- Strong experience with Kubernetes and related tools (Helm, Kustomize).
- Solid understanding of CI/CD pipelines and deployment strategies.
- Experience with security, audit, and compliance best practices.
- Excellent problem-solving and analytical skills.
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders.
- Experience in technical leadership and mentoring.
- Experience with client relationship management and project planning.
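The canary deployment strategy named in the responsibilities above is essentially a staged traffic shift. A minimal sketch of the ramp schedule; the step percentages are an assumption, and a real rollout would gate each step on health metrics before advancing:

```python
def canary_schedule(steps=(5, 25, 50, 100)):
    """Yield (new_revision_pct, old_revision_pct) pairs for each ramp step."""
    return [(pct, 100 - pct) for pct in steps]

print(canary_schedule())  # [(5, 95), (25, 75), (50, 50), (100, 0)]
```

Tools like Cloud Deploy or a service mesh apply these weights as routing rules; the schedule itself is the reviewable, versionable part.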
Posted 1 month ago
5.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
We're Hiring: ML Data Engineer. Experience: 7+ Years. Location: Bangalore / Chennai / Gurugram. Company: Derisk360. Are you passionate about building scalable ML pipelines and deploying intelligent systems in the cloud? At Derisk360, we help transform businesses by combining AI expertise with deep domain knowledge. We're seeking a Senior ML Engineer to take our ML infrastructure to the next level using AWS SageMaker and state-of-the-art MLOps practices. What You'll Do: Develop, deploy, and manage ML models using AWS SageMaker, including pipelines, real-time endpoints, and batch transform jobs. Architect infrastructure using Infrastructure as Code (IaC) tools such as Terraform or AWS CloudFormation. Build robust, scalable solutions across real-time and batch systems with high reliability and performance. Design secure and scalable AWS networking environments tailored for ML workflows. Collaborate with data scientists, ML engineers, and DevOps to optimize model development and deployment pipelines. Contribute to automation, CI/CD pipelines, and monitoring strategies for ML systems. What You Bring: 7+ years of experience in software engineering, with at least 3+ years working on AWS SageMaker in production environments. Proficient in AWS services (EC2, S3, Lambda, IAM, VPC, CloudWatch) with a deep understanding of AWS networking concepts. Solid experience with Terraform or CloudFormation for IaC implementation. Strong programming skills in Python and hands-on experience with ML frameworks like TensorFlow, PyTorch, or scikit-learn. Familiarity with model training, evaluation, versioning, and operationalization in cloud environments. Nice to Have: Experience integrating ML workflows with data lakes or data pipelines. Exposure to MLOps practices including experiment tracking, model registry, and containerization (Docker). Understanding of security best practices in AWS for machine learning. What You'll Get: Competitive salary with performance-based bonuses. Lead innovative ML projects with top-tier insurance and risk-tech clients. Work in a tech-forward, collaborative environment. Opportunities to grow within a high-impact, AI-focused team. Access to complex cloud-native platforms and enterprise-grade ML systems.
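The batch transform jobs mentioned above split input data into mini-batches before invoking the model. A stdlib-only illustration of that chunking step (record values are made up; the real SageMaker SDK handles this internally via its batching parameters):

```python
# Illustrative mini-batching for batch inference: split a record list into
# fixed-size batches, as a batch-transform job would before each model call.
# Stdlib-only sketch, not the AWS SageMaker API itself.
from typing import List


def make_batches(records: List[str], batch_size: int) -> List[List[str]]:
    """Chunk records into batches of at most batch_size items."""
    if batch_size <= 0:
        raise ValueError("batch_size must be positive")
    return [records[i:i + batch_size] for i in range(0, len(records), batch_size)]


if __name__ == "__main__":
    recs = [f"row-{i}" for i in range(7)]
    print([len(b) for b in recs and make_batches(recs, 3)])  # [3, 3, 1]
```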
Posted 1 month ago
10.0 - 13.0 years
27 - 30 Lacs
Hyderabad
Hybrid
Proven PO/TPO experience in cloud/DevOps. Hands-on with Azure DevOps, Terraform, Kubernetes, CI/CD & IaC. Strong in Agile & stakeholder mgmt. .NET/C# & Azure certs a plus. Drive infra automation & cloud-native initiatives.
Posted 1 month ago
10.0 - 20.0 years
35 - 70 Lacs
Hyderabad
Work from Office
JOB DESCRIPTION: At HighRadius, we pride ourselves on our people and products. We are looking for a highly motivated and experienced MySQL DB - Architect / Senior Architect - for our SaaS products at our Hyderabad office. Career growth would be in the form of individuals moving from architecture/design to management and leadership roles. We are seeking a highly experienced and visionary MySQL Architect to lead the design, implementation, and optimization of our critical database infrastructure, heavily leveraging AWS RDS and Aurora. This role demands a deep understanding of MySQL architecture, extensive experience with cloud-managed database services, and a proven ability to design scalable, highly available, and cost-effective database solutions. The ideal candidate will possess exceptional problem-solving skills, strong leadership qualities, and the ability to collaborate effectively with engineering teams to drive our data strategy forward. RESPONSIBILITIES: Database Architecture and Design: Design and architect robust, scalable, and highly available MySQL database solutions on AWS RDS and Aurora, considering factors such as performance, security, cost-efficiency, and disaster recovery. Cloud Database Management: Lead the deployment, configuration, management, and monitoring of MySQL databases in AWS RDS and Aurora, Azure, or GCP Cloud SQL environments. Performance Optimization: Identify and resolve complex performance bottlenecks in cloud-based MySQL deployments, utilizing tools and techniques specific to AWS RDS and Aurora. High Availability and Disaster Recovery: Architect and implement high availability (HA) and disaster recovery (DR) strategies for MySQL on AWS, including multi-AZ deployments, read replicas, and backup/restore mechanisms. Security Architecture: Define and implement security best practices for MySQL databases in AWS, including network security, encryption (at rest and in transit), IAM policies, and audit logging.
Cost Optimization: Continuously evaluate and optimize database costs on AWS RDS and Aurora by leveraging appropriate instance types, storage options, and scaling strategies. Migration and Upgrades: Plan and execute database migrations to AWS RDS and Aurora, as well as manage version upgrades and patching with minimal downtime. Automation and Infrastructure as Code (IaC): Develop and implement automation scripts and IaC templates (e.g., CloudFormation, Terraform) for provisioning and managing MySQL infrastructure on AWS. Capacity Planning and Forecasting: Analyze database growth trends and forecast future capacity needs for our cloud-based MySQL environments. Data Modeling and Schema Design: Provide expert guidance on data modeling and schema design best practices to ensure optimal performance and scalability. Troubleshooting and Incident Management: Lead the investigation and resolution of critical database incidents in AWS environments, ensuring timely communication and root cause analysis. SKILLS: Experience Range: 9 to 25 Years Role: final role will depend on the candidate's experience and credentials Education: BE/B.Tech/MCA/M.Sc./M.S/M.E/M.Tech Technology Stack: AWS RDS for MySQL and Aurora MySQL, including architecture, configuration, management, monitoring, and troubleshooting, with a focus on production environments Location: Hyderabad Other Requirements: Extensive and deep experience with AWS RDS for MySQL and Aurora MySQL, including architecture, configuration, management, monitoring, and troubleshooting. Proven ability to design and implement highly available and scalable MySQL solutions on AWS. Strong understanding of AWS database security best practices and services (e.g., VPC, Security Groups, IAM, KMS). Expertise in performance tuning and optimization techniques specifically within AWS RDS and Aurora. Solid experience with database backup and recovery strategies on AWS, including AWS Backup.
Proficiency in using AWS CLI and SDKs for database management and automation. Experience with Infrastructure as Code (IaC) tools like AWS CloudFormation or Terraform for database provisioning. Strong scripting skills in languages such as Python or Bash for automation. Excellent knowledge of database monitoring tools, including AWS CloudWatch and potentially third-party solutions. Deep understanding of MySQL replication technologies (including Group Replication) and their implementation on AWS. Experience with database migration methodologies and tools for moving to AWS RDS and Aurora.
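Read replicas, listed above as part of an HA design, are typically paired with read/write splitting at the application or proxy layer. A hypothetical stdlib-only sketch of that routing decision (endpoint names are invented; production setups usually delegate this to a proxy such as RDS Proxy or to the driver):

```python
# Illustrative read/write splitting for a primary + read-replica topology:
# writes go to the primary, plain SELECTs round-robin across replicas.
# Hypothetical sketch; not a real MySQL driver or proxy.
import itertools


class ReplicaRouter:
    def __init__(self, primary: str, replicas: list):
        self.primary = primary
        self._cycle = itertools.cycle(replicas) if replicas else None

    def endpoint_for(self, sql: str) -> str:
        # Naive classification: only plain SELECTs are safe to offload.
        if sql.lstrip().upper().startswith("SELECT") and self._cycle:
            return next(self._cycle)
        return self.primary


if __name__ == "__main__":
    router = ReplicaRouter("primary.db", ["replica-1.db", "replica-2.db"])
    print(router.endpoint_for("SELECT * FROM invoices"))     # replica-1.db
    print(router.endpoint_for("UPDATE invoices SET paid=1"))  # primary.db
```

A real router must also account for replication lag and read-your-writes consistency, which this sketch deliberately ignores.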
Posted 1 month ago
6.0 - 10.0 years
12 - 16 Lacs
Pune
Work from Office
We are on the lookout for a hands-on DevOps / SRE expert who thrives in a dynamic, cloud-native environment! Join a high-impact project where your infrastructure and reliability skills will shine. Key Responsibilities: Design & implement resilient deployment strategies (Blue-Green, Canary, GitOps). Manage observability tools: logs, metrics, traces, and alerts. Tune backend services & GKE workloads (Node.js, Django, Go, Java). Build & manage Terraform infra (VPC, CloudSQL, Pub/Sub, Secrets). Lead incident responses & perform root cause analyses. Standardize secrets, tagging & infra consistency across environments. Enhance CI/CD pipelines & collaborate on better rollout strategies. Must-Have Skills: 5-10 years in DevOps / SRE / Infra roles. Kubernetes (GKE preferred). IaC with Terraform & Helm. CI/CD: GitHub Actions + GitOps (ArgoCD / Flux). Cloud architecture expertise (IAM, VPC, Secrets). Strong scripting/coding & backend debugging skills (Node.js, Django, etc.). Incident management with tools like Datadog & PagerDuty. Excellent communicator & documenter. Tech Stack: GKE, Kubernetes, Terraform, Helm. GitHub Actions, ArgoCD / Flux. Datadog, PagerDuty. CloudSQL, Cloudflare, IAM, Secrets. You're: A proactive team player & strong individual contributor. Confident yet humble. Curious, driven & always learning. Not afraid to solve deep infrastructure challenges. (ref:hirist.tech)
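The "standardize tagging across environments" duty above often boils down to a required-tags check enforced in CI. A hypothetical sketch (tag keys and resource names are made up, not tied to Terraform or any provider API):

```python
# Illustrative required-tags audit: report which resources are missing
# mandated tags. Hypothetical policy; real enforcement would typically run
# as a CI step or admission/policy check against actual cloud resources.
REQUIRED_TAGS = {"env", "team", "cost-center"}


def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys absent from a resource's tag map."""
    return REQUIRED_TAGS - set(resource_tags)


if __name__ == "__main__":
    resources = {
        "vpc-main": {"env": "prod", "team": "platform", "cost-center": "42"},
        "sql-orders": {"env": "prod"},
    }
    violations = {name: missing_tags(tags)
                  for name, tags in resources.items() if missing_tags(tags)}
    print(violations)  # only sql-orders appears, missing team and cost-center
```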
Posted 1 month ago
10.0 - 15.0 years
19 - 22 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled AWS Architect to design, implement, and optimize secure, scalable, and compliant AWS infrastructure solutions across multiple regions. The ideal candidate will have hands-on experience with global and region-specific AWS environments, deep knowledge of cloud architecture best practices, and a strong understanding of infrastructure automation. Design and implement AWS architectures that meet performance, security, scalability, and compliance requirements. Architect and configure VPCs, subnets, routing, firewalls, and networking components in multi-region environments. Manage traffic routing using AWS Route 53, including geolocation and latency-based routing configurations. Implement high availability (HA), disaster recovery (DR), and fault-tolerant designs for business-critical applications. Ensure alignment with security best practices and compliance policies, including IAM roles, encryption, and access controls. Collaborate with cross-functional teams to support cloud adoption, application migration, and system integration. Develop infrastructure-as-code (IaC) templates for consistent and automated provisioning.
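Latency-based routing, mentioned above for Route 53, sends each client to the region with the lowest measured latency. An illustrative reduction of that decision to its core (region names and latency figures are invented; Route 53 itself uses its own latency measurements, not application-supplied ones):

```python
# Illustrative latency-based routing decision, in the spirit of Route 53
# latency records: choose the region with the lowest measured latency.
# Hypothetical sketch; numbers are made up for the example.
def pick_region(latencies_ms: dict) -> str:
    """Return the region key with the smallest latency value."""
    return min(latencies_ms, key=latencies_ms.get)


if __name__ == "__main__":
    measured = {"us-east-1": 82.0, "eu-west-1": 35.5, "ap-south-1": 140.2}
    print(pick_region(measured))  # eu-west-1
```

Geolocation routing differs in kind: it maps the client's location to a record, regardless of latency, which is why the two policies are often combined with failover records for DR.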
Posted 1 month ago
6.0 - 11.0 years
5 - 9 Lacs
Bengaluru
Work from Office
AWS, Docker with shell scripting, Terraform. Minimum 6+ years of experience in DevOps, with hands-on experience creating pipelines independently. Strong AWS knowledge. CI/CD with GitLab. Expert knowledge in Terraform and Pulumi with TypeScript for IaC (a critical requirement per the current tech stack). Docker with shell scripting. Dynatrace for observability.
Posted 1 month ago
7.0 - 12.0 years
8 - 12 Lacs
Hyderabad
Work from Office
Azure DevOps. Mandatory skills: Azure Infra, Azure DevOps, Kubernetes, IaC (Terraform). Secondary skills: Python, Azure PaaS Services, Azure Functions, Azure App Services, Azure Active Directory. Proficiency in Azure cloud services, including virtual machines, containers, networking, and databases. Experience in designing, implementing, and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines using Azure DevOps, Jenkins, or similar tools. Knowledge of Infrastructure as Code tools like Terraform, ARM templates, or Azure Bicep for automating infrastructure deployment. Expertise in version control systems, particularly Git, for managing and tracking code changes. Strong PowerShell, Bash, or Python scripting skills for automating tasks and processes. Experience with monitoring and logging tools like Azure Monitor, Log Analytics, and Application Insights for performance and reliability management. Understanding of security best practices, including role-based access control (RBAC), Azure Policy, and managing secrets with tools like Azure Key Vault. Ability to collaborate effectively with development, operations, and security teams, with strong communication skills to drive DevOps culture. Knowledge of containerization technologies like Docker and orchestration platforms like Kubernetes on Azure Kubernetes Service (AKS). Strong problem-solving abilities to troubleshoot and resolve complex technical issues related to DevOps processes.
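CI/CD pipelines of the kind described above declare dependencies between stages (in Azure DevOps YAML, via `dependsOn`), and the engine executes them in topological order. A small stdlib illustration of that ordering (stage names are hypothetical):

```python
# Illustrative stage ordering for a dependency-declaring CI/CD pipeline:
# a tiny topological sort over hypothetical stage names, mimicking what a
# pipeline engine does with dependsOn declarations.
from graphlib import TopologicalSorter  # Python 3.9+

stages = {
    "build": set(),
    "test": {"build"},
    "deploy_staging": {"test"},
    "deploy_prod": {"deploy_staging"},
}

order = list(TopologicalSorter(stages).static_order())
print(order)  # ['build', 'test', 'deploy_staging', 'deploy_prod']
```

With a dependency chain like this the order is fully determined; independent stages may run in parallel, which `TopologicalSorter` also supports via its `get_ready()` API.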
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Bengaluru
Work from Office
AWS, Docker with shell scripting, Terraform. Minimum 6+ years of experience in DevOps, with hands-on experience creating pipelines independently. Strong AWS knowledge. CI/CD with GitLab. Expert knowledge in Terraform and Pulumi with TypeScript for IaC (a critical requirement per the current tech stack). Docker with shell scripting. Dynatrace for observability.
Posted 1 month ago
7.0 - 12.0 years
8 - 12 Lacs
Gurugram
Work from Office
Experience in AWS SageMaker development: pipelines, real-time endpoints, and batch transform jobs. Expertise in AWS and Terraform / CloudFormation for IaC. Experience with AWS networking concepts. Strong coding skills in Python and ML frameworks such as TensorFlow, PyTorch, or scikit-learn.
Posted 1 month ago
7.0 - 12.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Must have a Computer Science, Software Engineering, or related engineering degree or equivalent, with 7-10 years' experience including a minimum of 5 years in a similar role, ideally in a multisite hybrid/cloud-first environment. Hands-on development experience coding in Python (mandatory). Hands-on development experience with NoSQL (preferably Cosmos). Extensive experience and knowledge with Azure (Azure Cosmos DB, Azure Functions, Pipelines, DevOps, Kubernetes, Storage) mandatory. Must have excellent knowledge of Microsoft Azure products and how to implement them: Enterprise Apps, Azure Functions, Cosmos DB, Containers, Event Grid, Logic Apps, Service Bus, Data Factory. Must have hands-on experience of object-oriented development and have applied its principles in multiple solution designs: Domain-Driven, Tiered Applications, Micro-services. Must have good knowledge of .NET, C#, the standard libraries, as well as JavaScript (in a Vue.js context). Knowledge of other development and scripting languages is appreciated (Python, Java, TypeScript, PowerShell, Bash). Must have knowledge of development and deployment tools (IaC, git, docker etc.). Comfortable working with API, webhooks, data transfer, and workflow technologies. Deep understanding of software architecture and the long-term implications of architectural choices. Familiar with Agile project management and continuous delivery.
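The webhook work mentioned above commonly involves verifying an HMAC signature on each delivery before trusting the payload. A stdlib-only sketch of that check (secret and payload are example values; real providers differ in header name and signature encoding):

```python
# Illustrative webhook signature verification with HMAC-SHA256, a common
# pattern for inbound webhooks. Secret and payload are made-up examples.
import hashlib
import hmac


def verify_signature(secret: bytes, payload: bytes, signature_hex: str) -> bool:
    """Recompute the HMAC and compare in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)


if __name__ == "__main__":
    secret = b"example-secret"
    payload = b'{"event":"invoice.paid"}'
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    print(verify_signature(secret, payload, sig))  # True
```

`hmac.compare_digest` is used instead of `==` to avoid leaking information through timing differences.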
Posted 1 month ago