
7730 Terraform Jobs - Page 24

JobPe aggregates results for easy application access; you apply directly on the original job portal.

8.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


TCS Hiring for Azure Cloud Architect (Platform) - PAN India

Experience: 8 to 15 years only
Job Location: PAN India

Required Technical Skill Set:
- Proven experience as a Solution Architect with a focus on Microsoft Azure.
- Good knowledge of application development and migration.
- Knowledge of Java or .NET.
- Strong knowledge of Azure services: Azure Kubernetes Service (AKS), Azure App Services, Azure Functions, Azure Storage, and Azure DevOps.
- Experience in cloud-native application development and containerization (Docker, Kubernetes).
- Proficiency in Infrastructure as Code (IaC) tools (e.g., Terraform, ARM templates, Bicep).
- Strong knowledge of Azure Active Directory, identity management, and security best practices.
- Hands-on experience with CI/CD processes and DevOps practices.
- Knowledge of networking concepts in Azure (VNets, Load Balancers, Firewalls).
- Excellent communication and stakeholder management skills.

Key Responsibilities:
- Design end-to-end cloud solutions leveraging Microsoft Azure services.
- Develop architecture and solution blueprints that align with business objectives.
- Lead cloud adoption and migration strategies.
- Collaborate with development, operations, and security teams to implement best practices.
- Ensure solutions meet performance, scalability, availability, and security requirements.
- Optimize cloud cost and performance.
- Oversee the deployment of workloads on Azure using IaaS, PaaS, and SaaS services.
- Implement CI/CD pipelines, automation, and infrastructure as code (IaC).
- Stay updated on emerging Azure technologies and provide recommendations.

Kind regards,
Priyankha M
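The IaC skills this posting asks for (Terraform with AKS) can be illustrated with a minimal sketch. All names, the region, and node sizing below are hypothetical placeholders, not details from the posting:

```hcl
# Minimal Terraform sketch: provision an AKS cluster with the azurerm provider.
# Resource names, location, and VM size are illustrative only.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "example" {
  name     = "rg-platform-example"
  location = "Central India"
}

resource "azurerm_kubernetes_cluster" "example" {
  name                = "aks-platform-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  dns_prefix          = "aksplatform"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  # Managed identity avoids handling service principal credentials.
  identity {
    type = "SystemAssigned"
  }
}
```

Running `terraform init` and `terraform plan` previews the changes before any `terraform apply`.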

Posted 2 days ago

Apply

3.0 - 4.0 years

0 Lacs

Surat, Gujarat, India

On-site


Job Title: DevOps Engineer
Location: Surat (on-site)
Experience: 3-4 years

Job Summary:
We are looking for a DevOps Engineer to help us build functional systems that improve customer experience. DevOps Engineer responsibilities include deploying product updates, identifying production issues, and implementing integrations that meet customer needs. If you have a solid background in software engineering and are familiar with Ruby or Python, we'd like to meet you. Ultimately, you will execute and automate operational processes quickly, accurately, and securely.

Roles & Responsibilities:
- Strong experience with essential DevOps tools and technologies, including Kubernetes, Terraform, Azure DevOps, Jenkins, Maven, Git, GitHub, and Docker.
- Hands-on experience with Azure cloud services, including: Virtual Machines (VMs), Blob Storage, Virtual Network (VNet), Load Balancer & Application Gateway, Azure Resource Manager (ARM), Azure Key Vault, Azure Functions, Azure Kubernetes Service (AKS), Azure Monitor, Log Analytics, Application Insights, Azure Container Registry (ACR), Azure Container Instances (ACI), Azure Active Directory (AAD), and RBAC.
- Skilled at automating, configuring, and deploying infrastructure and applications across Azure environments and hybrid cloud data centers.
- Build and maintain CI/CD pipelines using Azure DevOps, Jenkins, and scripting for scalable SaaS deployments.
- Develop automation and infrastructure as code (IaC) using Terraform, ARM templates, or Bicep for managing and provisioning cloud resources.
- Expert in managing containerized applications using Docker and orchestrating them via Kubernetes (AKS).
- Proficient in setting up monitoring, logging, and alerting using Azure-native tools and integrating with third-party observability stacks.
- Experience implementing auto-scaling, load balancing, and high-availability strategies for cloud-native SaaS applications.
- Configure and maintain CI/CD pipelines and integrate them with quality and security tools for automated testing, compliance, and secure deployments.
- Deep knowledge of writing Ansible playbooks and ad hoc commands for automating provisioning and deployment tasks across environments.
- Experience integrating Ansible with Azure DevOps/Jenkins for configuration management and workflow automation.
- Proficient in using Maven and Artifactory for build management and writing pom.xml files for Java-based applications.
- Skilled in GitHub repository management, including setting up project-specific access, enforcing code quality standards, and managing pull requests.
- Experience with web and application servers such as Apache Tomcat for deploying and troubleshooting enterprise-grade Java applications.
- Ability to design and maintain scalable, resilient, and secure infrastructure to support rapid growth of SaaS applications.

Qualifications & Requirements:
- Proven experience as a DevOps Engineer, Site Reliability Engineer, or in a similar software engineering role.
- Strong experience working in SaaS environments with a focus on scalability, availability, and performance.
- Proficiency in Python or Ruby for scripting and automation.
- Working knowledge of SQL and database management tools.
- Strong analytical and problem-solving skills with a collaborative and proactive mindset.
- Familiarity with Agile methodologies and ability to work in cross-functional teams.
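The Terraform-on-Azure work described above can be sketched with a small example, here provisioning a Key Vault for pipeline secrets. Resource names and location are hypothetical placeholders:

```hcl
# Minimal Terraform sketch: an Azure Key Vault for CI/CD pipeline secrets.
# Names and location are illustrative only.
provider "azurerm" {
  features {}
}

# Look up the tenant of the credentials Terraform is running with.
data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "example" {
  name     = "rg-devops-example"
  location = "West India"
}

resource "azurerm_key_vault" "example" {
  name                = "kv-devops-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  tenant_id           = data.azurerm_client_config.current.tenant_id
  sku_name            = "standard"
}
```

A pipeline would then read secrets from the vault at deploy time instead of storing them in source control.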

Posted 2 days ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


About The Role
We're seeking an experienced Infrastructure Engineer to join our platform team, handling massive-scale data processing and analytics infrastructure that supports over 5 billion events per day and more than 5 million DAU. We're looking for someone who can help us scale gracefully while optimizing for performance, cost, and resiliency.

Key Responsibilities
- Design, implement, and manage our AWS infrastructure, with a strong emphasis on automation, resiliency, and cost-efficiency.
- Develop and oversee scalable data pipelines (for event processing, transformation, and delivery).
- Implement and manage stream processing frameworks (such as Kinesis, Kafka, or MSK).
- Handle orchestration and ETL workloads using services like AWS Glue, Athena, Databricks, Redshift, or Apache Airflow.
- Implement robust network, storage, and backup strategies for growing workloads.
- Monitor, debug, and resolve production issues related to data and infrastructure in real time.
- Implement IAM controls, logging, alerts, and security best practices across all components.
- Provide deployment automation (Docker, Terraform, CloudFormation) and collaborate with application engineers to enable smooth delivery.
- Build SOPs for support and set up a functioning 24x7 support system (including hiring the right engineers) to ensure system uptime and availability.

Required Technical Skills
- 5+ years of experience with AWS services (VPC, EC2, S3, Security Groups, RDS, Kinesis, MSK, Redshift, Glue).
- Experience designing and managing large-scale data pipelines with high-throughput workloads.
- Ability to handle 5 billion events/day and 1M+ concurrent users' workloads gracefully.
- Familiar with scripting (Python, Terraform) and automation practices (Infrastructure as Code).
- Familiar with network fundamentals, Linux, scaling strategies, and backup routines.
- Collaborative team player, able to work with engineers, data analysts, and stakeholders.

Preferred Tools & Technologies
- AWS: EC2, S3, VPC, Security Groups, RDS, Redshift, DocumentDB, MSK, Glue, Athena, CloudWatch
- Infrastructure as Code: Terraform, CloudFormation
- Scripted automation: Python, Bash
- Container orchestration: Docker, ECS or EKS
- Workflow orchestration: Apache Airflow, Dagster
- Streaming frameworks: Apache Kafka, Kinesis, Flink
- Other: Linux, Git, security best practices (IAM, Security Groups, ACM)

Education
- Bachelor's/Master's degree in Computer Science, Data Science, or a related field
- Relevant professional certifications in cloud platforms or data technologies

Why Join Us?
- Opportunity to work in a fast-growing audio and content platform.
- Exposure to multi-language marketing and global user base strategies.
- A collaborative work environment with a data-driven and innovative approach.
- Competitive salary and growth opportunities in marketing and growth strategy.

Success Metrics
✅ Scalability: handle 1+ billion events/day with low latency and high resiliency.
✅ Cost-efficiency: reduce AWS operational costs by optimizing services, storage, and data transfer.
✅ Uptime/SLI: achieve 99.9999% platform and pipeline uptime with automated fallback mechanisms.
✅ Data delivery latency: reduce event delivery latency to under 5 minutes for real-time processing.
✅ Security and compliance: implement controls to pass PCI-DSS or SOC 2 audits with zero major findings.
✅ Developer productivity: improve team delivery speed through self-service IaC modules and automated routines.

About KUKU
Founded in 2018, KUKU is India's leading storytelling platform, offering a vast digital library of audio stories, short courses, and microdramas. KUKU aims to be India's largest cultural exporter of stories, culture, and history to the world, with a firm belief in "Create In India, Create For The World". We deliver immersive entertainment and education through our OTT platforms: Kuku FM, Guru, Kuku TV, and more. With a mission to provide high-quality, personalized stories across genres, formats, and languages, KUKU continues to push boundaries and redefine India's entertainment industry.

🌐 Website: www.kukufm.com | 📱 Android App: Google Play | 📱 iOS App: App Store | 🔗 LinkedIn: KUKU

📢 Ready to make an impact? Apply now.

Skills: AWS services, Bash, networking, Kafka, data pipelines, Docker, Kinesis, ETL, Terraform, automation, AWS, security, EC2, CloudFormation, cloud, scripting, Linux, infrastructure, Amazon Redshift, Python, VPC, network fundamentals, workflow orchestration, stream processing frameworks, container orchestration, Dagster, Airflow, S3
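The deployment-automation responsibility above (Terraform on AWS for streaming workloads) can be sketched minimally. Stream and bucket names, region, and sizing are hypothetical placeholders:

```hcl
# Minimal Terraform sketch: a Kinesis stream feeding an S3 landing bucket.
# Names, region, and shard sizing are illustrative only.
provider "aws" {
  region = "ap-south-1"
}

resource "aws_kinesis_stream" "events" {
  name        = "event-ingest-example"
  shard_count = 4

  # 24-hour retention; raise this for replay-heavy pipelines.
  retention_period = 24
}

resource "aws_s3_bucket" "landing" {
  bucket = "event-landing-example-bucket"
}

# Block all public access on the landing bucket by default.
resource "aws_s3_bucket_public_access_block" "landing" {
  bucket                  = aws_s3_bucket.landing.id
  block_public_acls       = true
  block_public_policy     = true
  ignore_public_acls      = true
  restrict_public_buckets = true
}
```

At billions of events per day, shard count and retention would be tuned (or the stream moved to on-demand capacity mode) based on measured throughput.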

Posted 2 days ago

Apply

2.0 years

0 Lacs

Gautam Buddha Nagar, Uttar Pradesh, India

On-site


We are seeking a dynamic and experienced Technical Trainer to join our engineering department. The ideal candidate will be responsible for designing and delivering technical training sessions to B.Tech students across various domains, ensuring they are industry-ready and equipped with practical, job-oriented skills.

Role & Responsibility
Train students in new-age technology (Computer Science Engineering) to bridge the industry-academia gap and increase student employability.

Knowledge
- Proven experience in devising technical training programs for UG/PG engineering students in higher-education institutions
- Stay abreast of the latest software per industry standards, with knowledge of modern training techniques and tools for delivering technical subjects
- Prepare training material (presentations, worksheets, etc.)
- Execute training sessions, webinars, and workshops for students
- Determine the overall effectiveness of programs and make improvements

Technical Skills (subject areas for delivering training with a practical approach)
1. Core Programming Skills - Languages: C, Python, Java, C++, JavaScript
2. Web Development - Frontend: HTML, CSS, JavaScript, React.js/Next.js; Backend: Node.js, Express, Django, or Spring Boot; Full-Stack: MERN stack (MongoDB, Express, React, Node.js)
3. Data Science & Machine Learning - Languages: Python (NumPy, pandas, scikit-learn, TensorFlow/PyTorch); Tools: Jupyter Notebook, Google Colab, MLflow
4. AI & Generative AI - LLMs (Large Language Models): understanding how GPT, BERT, and Llama models work; prompt engineering; fine-tuning & RAG (Retrieval-Augmented Generation); Hugging Face Transformers, LangChain, OpenAI APIs
5. Cloud Computing & DevOps - Cloud platforms: AWS, Microsoft Azure, Google Cloud Platform (GCP); DevOps tools: Docker, Kubernetes, GitHub Actions, Jenkins, Terraform; CI/CD pipelines: automated testing and deployment
6. Cybersecurity - Basics: OWASP Top 10, network security, encryption, firewalls; Tools: Wireshark, Metasploit, Burp Suite
7. Mobile App Development - Native: Kotlin (Android), Swift (iOS); Cross-platform: Flutter, React Native
8. Blockchain & Web3 - Technologies: Ethereum, Solidity, smart contracts; Frameworks: Hardhat, Truffle
9. Database & Big Data - Databases: SQL (MySQL, PostgreSQL), NoSQL (MongoDB, Redis); Big data tools: Apache Hadoop, Spark, Kafka

Qualification & Years of Experience (as per norms):
- B.Tech./MCA/M.Tech (IT/CSE) from top-tier institutes and reputed universities
- Industry experience is desirable
- Candidate must have a minimum of 2 years of training experience in the same domain

Posted 2 days ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Hyderabad

Work from Office


Warm greetings from SP Staffing Services Pvt Ltd!

Experience: 5-8 years
Work Location: Hyderabad

Interested candidates, kindly share your updated resume with ramya.r@spstaffing.in or contact 8667784354 (WhatsApp: 9597467601) to proceed further.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Summary:
As part of the Cloud Network team at Thomson Reuters, you will work on delivering world-class infrastructure services to our customers using the latest technologies. We are looking for a Senior Network Cloud Engineer who can help us design and implement secure, scalable, highly available network architectures in AWS, Azure, OCI, and GCP. You will work in agile teams and have the opportunity to learn new technologies and tools.

About the Role:
In this role as a Senior Network Cloud Engineer, you will:
- Work closely with architecture and business teams to understand their requirements and translate them into robust, reliable, and highly available network designs.
- Collaborate with the security team to ensure compliance with security policies and best practices.
- Design, provision, and configure networks in all cloud providers.
- Implement automation solutions to reduce manual intervention and increase efficiency.
- Participate in on-call support activities and perform post-implementation reviews to identify issues or room for improvement.
- Stay up to date with the latest trends and advancements in cloud computing and related technologies.
- Maintain documentation of system designs, configurations, and procedures; contribute to knowledge-base articles and technical guides.
- Actively participate in code reviews, sprint ceremonies, and other Agile/Scrum activities.

About You:
You're a fit for the role of Senior Network Cloud Engineer if your background includes:
- Bachelor's degree in computer science, information technology, or a related field; Master's degree preferred but not required.
- At least 5 years of experience designing, implementing, and managing large-scale network architectures in public clouds (AWS, Azure, Google).
- Strong understanding of network protocols such as TCP/IP, DNS, HTTP, SSL, etc.
- Experience with configuration management tools such as Terraform, Ansible, Chef, Puppet, etc.
- Excellent scripting skills using Python, PowerShell, Bash, etc.
- Proficiency in at least one object-oriented programming language like Java, C#, Python, etc.
- Familiarity with automated testing frameworks such as JUnit, NUnit, Pytest, etc., and practical experience writing unit tests and integration tests.
- Understanding of continuous integration and continuous deployment pipelines.
- Knowledge of version control systems such as Git.
- Ability to communicate effectively, both verbally and in writing.
- Team-player mentality with the ability to collaborate across multiple disciplines.

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensure you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more.
- We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us
Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world, regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more about how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
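The Terraform network design work in this role can be sketched with a minimal example: a VPC with one private subnet. CIDRs, names, and region are hypothetical placeholders, not details from the posting:

```hcl
# Minimal Terraform sketch: a VPC with a private subnet, as a starting
# point for a cloud network design. CIDRs and names are illustrative only.
provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "main" {
  cidr_block           = "10.20.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true

  tags = { Name = "vpc-network-example" }
}

resource "aws_subnet" "private_a" {
  vpc_id            = aws_vpc.main.id
  cidr_block        = "10.20.1.0/24"
  availability_zone = "us-east-1a"

  tags = { Name = "subnet-private-a-example" }
}
```

A real design would add more subnets across availability zones, route tables, and security groups on top of this base.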

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About McDonald's:
One of the world's largest employers, with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer II (Full-time)
Location: Hyderabad | Global Grade: G3

Job Description:
This opportunity is part of the Global Technology Infrastructure & Operations (GTIO) team, where our mission is to deliver modern and relevant technology that supports the way McDonald's works. We provide outstanding foundational technology products and services, including Global Networking, Cloud, End User Computing, and IT Service Management. It's our goal to always provide an engaging, relevant, and simple experience for our customers.

The Cloud DevOps Engineer II role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer II will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services for the entire McDonald's environment.

Responsibilities & Accountabilities:
- Participate in the management, design, and solutioning of platform deployment and operational processes.
- Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
- Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
- Proactively identify opportunities for continuous improvement.
- Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
- Develop and maintain infrastructure and tools that support the software development and deployment process.
- Automate the software development and deployment process.
- Monitor and troubleshoot the software delivery process.
- Work with software developers and operations engineers to improve the software delivery process.
- Stay up to date on the latest DevOps practices and technologies.
- Drive proofs of concept and conduct technical feasibility studies for business requirements.
- Strive to provide internal and external customers with excellent, world-class service.
- Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams.
- Resolve most conflicts between timeline, budget, and scope independently, but raise complex or consequential issues to senior management.
- Implement and support monitoring best practices.
- Respond to platform and operational incidents and effectively troubleshoot and resolve issues.
- Work well in an agile environment.

Qualifications:
- Bachelor's degree in computer science or a related field, or relevant experience.
- 5+ years of Information Technology experience at a large technology company, preferably on a platform team.
- 4+ years of hands-on Cloud DevOps pipeline experience automating, building, and deploying microservice applications, APIs, and non-container artifacts.
- 3+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP.
- 3+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools.
- 3+ years of application development using agile methodology.
- Experience with observability tools like Datadog, New Relic, and the open-source (o11y) observability ecosystem (Prometheus, Grafana, Jaeger).
- Hands-on knowledge of Infrastructure as Code and associated technologies (e.g., repos, pipelines, Terraform, etc.).
- Advanced knowledge of the AWS platform, preferably 3+ years of AWS/Kubernetes experience or container-based technology.
- Good to have: experience with code quality, SAST, and DAST tools like SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk.
- Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
- Self-starter, able to devise solutions to problems and deliver them while coordinating with other teams.
- Knowledge of foundational cloud security principles.
- Excellent problem-solving and analytical skills.
- Strong communication and partnership skills.
- Any GCP certification. Any Agile certification, preferably Scaled Agile.
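The Google Cloud IaC focus of this role can be sketched with a minimal Terraform example provisioning the kind of resources a reusable pipeline template might manage. The project ID, names, and region are hypothetical placeholders:

```hcl
# Minimal Terraform sketch: a GCS artifact bucket and an Artifact Registry
# repository on Google Cloud. Project ID, names, and region are illustrative.
provider "google" {
  project = "example-project-id"
  region  = "asia-south1"
}

resource "google_storage_bucket" "build_artifacts" {
  name     = "example-build-artifacts-bucket"
  location = "ASIA-SOUTH1"

  # Keep only recent artifacts to control storage cost.
  lifecycle_rule {
    condition {
      age = 30
    }
    action {
      type = "Delete"
    }
  }
}

resource "google_artifact_registry_repository" "containers" {
  repository_id = "example-containers"
  location      = "asia-south1"
  format        = "DOCKER"
}
```

A CI/CD pipeline (e.g., GitHub Actions or Jenkins, as named in the qualifications) would push container images to the registry and build artifacts to the bucket.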

Posted 2 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About McDonald's:
One of the world's largest employers, with locations in more than 100 countries, McDonald's Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer III (Full-time)
Location: Hyderabad | Global Grade: G4

Job Description:
This opportunity is part of the Global Technology Infrastructure & Operations (GTIO) team, where our mission is to deliver modern and relevant technology that supports the way McDonald's works. We provide outstanding foundational technology products and services, including Global Networking, Cloud, End User Computing, and IT Service Management. It's our goal to always provide an engaging, relevant, and simple experience for our customers.

The Cloud DevOps Engineer III role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer III will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services for the entire McDonald's environment.

Responsibilities & Accountabilities:
- Participate in the management, design, and solutioning of platform deployment and operational processes.
- Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
- Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
- Proactively identify opportunities for continuous improvement.
- Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies.
- Develop and maintain infrastructure and tools that support the software development and deployment process.
- Automate the software development and deployment process.
- Monitor and troubleshoot the software delivery process.
- Work with software developers and operations engineers to improve the software delivery process.
- Stay up to date on the latest DevOps practices and technologies.
- Drive proofs of concept and conduct technical feasibility studies for business requirements.
- Strive to provide internal and external customers with excellent, world-class service.
- Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams.
- Resolve most conflicts between timeline, budget, and scope independently, but raise complex or consequential issues to senior management.
- Work well in an agile environment.
- Implement and support monitoring best practices.
- Respond to platform and operational incidents and effectively troubleshoot and resolve issues.
- Provide technical advice and support the growth of junior team members.

Qualifications:
- Bachelor's degree in computer science or a related field, or relevant experience.
- 7+ years of Information Technology experience at a large technology company, preferably on a platform team.
- 6+ years of hands-on Cloud DevOps pipeline experience automating, building, and deploying microservice applications, APIs, and non-container artifacts.
- 5+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP.
- 5+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools.
- 3+ years of application development using agile methodology.
- Experience with observability tools like Datadog, New Relic, and the open-source (o11y) observability ecosystem (Prometheus, Grafana, Jaeger).
- Hands-on knowledge of Infrastructure as Code and associated technologies (e.g., repos, pipelines, Terraform, etc.).
- Advanced knowledge of the AWS platform, preferably 3+ years of AWS/Kubernetes experience or container-based technology.
- Good to have: experience with code quality, SAST, and DAST tools like SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk.
- Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, Ruby, etc.
- Self-starter, able to devise solutions to problems and deliver them while coordinating with other teams.
- Knowledge of foundational cloud security principles.
- Excellent problem-solving and analytical skills.
- Strong communication and partnership skills.
- Any GCP certification. Any Agile certification, preferably Scaled Agile.

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe. Cloud Engineer II Full-time McDonald's Office Location: Hyderabad Global Grade: G3 Job Description: This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald’s works. We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It’s our goal to always provide an engaging, relevant, and simple experience for our customers. The Cloud DevOps Engineer II role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automation and optimization of software development and deployment process specifically for Google Cloud. The Cloud DevOps Engineer II will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. 
This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services across the entire McDonald's environment. Responsibilities & Accountabilities: Participate in the management, design, and solutioning of platform deployment and operational processes. Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support. Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code. Proactively identify opportunities for continuous improvement. Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies. Develop and maintain infrastructure and tools that support the software development and deployment process. Automate the software development and deployment process. Monitor and troubleshoot the software delivery process. Work with software developers and operations engineers to improve the software delivery process. Stay up to date on the latest DevOps practices and technologies. Drive proof of concepts and conduct technical feasibility studies for business requirements. Strive to provide internal and external customers with excellent, world-class service. Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams. Resolve most conflicts between timeline, budget, and scope independently, but raise complex or consequential issues to senior management. Implement and support monitoring best practices. Respond to platform and operational incidents and effectively troubleshoot and resolve issues. Work well in an agile environment. Qualifications: Bachelor’s degree in computer science or a related field, or relevant experience.
5+ years of Information Technology experience with a large technology company, preferably on a platform team. 4+ years of hands-on experience with Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts. 3+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP. 3+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools. 3+ years of application development using agile methodology. Experience with observability tools like Datadog and New Relic and the open-source observability (O11y) ecosystem (Prometheus, Grafana, Jaeger). Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform). Advanced knowledge of the AWS platform, preferably 3+ years of AWS/Kubernetes or other container-based technology experience. Experience with code-quality SAST and DAST tools like SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk is good to have. Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, or Ruby. Self-starter, able to devise solutions to problems and see them through while coordinating with other teams. Knowledge of foundational cloud security principles. Excellent problem-solving and analytical skills. Strong communication and partnership skills. Any GCP certification. Any Agile certification, preferably Scaled Agile.

Posted 2 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald's global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe. Job Description: This opportunity is part of the Global Technology Infrastructure & Operations team (GTIO), where our mission is to deliver modern and relevant technology that supports the way McDonald’s works. We provide outstanding foundational technology products and services including Global Networking, Cloud, End User Computing, and IT Service Management. It’s our goal to always provide an engaging, relevant, and simple experience for our customers. The Cloud DevOps Engineer III role is part of the Cloud Infrastructure and Operations team in Global Technology Infrastructure & Operations. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer III will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help craft the transformation of infrastructure and operations products and services across the entire McDonald's environment.
Responsibilities & Accountabilities: Participate in the management, design, and solutioning of platform deployment and operational processes. Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support. Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code. Proactively identify opportunities for continuous improvement. Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that are appropriate for business and technology strategies. Develop and maintain infrastructure and tools that support the software development and deployment process. Automate the software development and deployment process. Monitor and troubleshoot the software delivery process. Work with software developers and operations engineers to improve the software delivery process. Stay up to date on the latest DevOps practices and technologies. Drive proof of concepts and conduct technical feasibility studies for business requirements. Strive to provide internal and external customers with excellent, world-class service. Effectively communicate project health, risks, and issues to the program partners, sponsors, and management teams. Resolve most conflicts between timeline, budget, and scope independently, but raise complex or consequential issues to senior management. Work well in an agile environment. Implement and support monitoring best practices. Respond to platform and operational incidents and effectively troubleshoot and resolve issues. Provide technical advice and support the growth of junior team members. Qualifications: Bachelor’s degree in computer science or a related field, or relevant experience. 7+ years of Information Technology experience with a large technology company, preferably on a platform team.
6+ years of hands-on experience with Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts. 5+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP. 5+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools. 3+ years of application development using agile methodology. Experience with observability tools like Datadog and New Relic and the open-source observability (O11y) ecosystem (Prometheus, Grafana, Jaeger). Hands-on knowledge of Infrastructure-as-Code and associated technologies (e.g., repos, pipelines, Terraform). Advanced knowledge of the AWS platform, preferably 3+ years of AWS/Kubernetes or other container-based technology experience. Experience with code-quality SAST and DAST tools like SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk is good to have. Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, or Ruby. Self-starter, able to devise solutions to problems and see them through while coordinating with other teams. Knowledge of foundational cloud security principles. Excellent problem-solving and analytical skills. Strong communication and partnership skills. Any GCP certification. Any Agile certification, preferably Scaled Agile.

Posted 2 days ago

Apply

4.0 - 9.0 years

7 - 12 Lacs

Mumbai

Work from Office

Naukri logo

Site Reliability Engineers (SREs) - Robust background in Google Cloud Platform (GCP) | RedHat OpenShift administration Responsibilities: System Reliability: Ensure the reliability and uptime of critical services and infrastructure. Google Cloud Expertise: Design, implement, and manage cloud infrastructure using Google Cloud services. Automation: Develop and maintain automation scripts and tools to improve system efficiency and reduce manual intervention. Monitoring and Incident Response: Implement monitoring solutions and respond to incidents to minimize downtime and ensure quick recovery. Collaboration: Work closely with development and operations teams to improve system reliability and performance. Capacity Planning: Conduct capacity planning and performance tuning to ensure systems can handle future growth. Documentation: Create and maintain comprehensive documentation for system configurations, processes, and procedures. Qualifications: Education: Bachelor's degree in Computer Science, Engineering, or a related field. Experience: 4+ years of experience in site reliability engineering or a similar role. Skills: Proficiency in Google Cloud services (Compute Engine, Kubernetes Engine, Cloud Storage, BigQuery, Pub/Sub, etc.). Familiarity with Google BI and AI/ML tools (Looker, BigQuery ML, Vertex AI, etc.). Experience with automation tools (Terraform, Ansible, Puppet). Familiarity with CI/CD pipelines and tools (Azure Pipelines, Jenkins, GitLab CI, etc.). Strong scripting skills (Python, Bash, etc.). Knowledge of networking concepts and protocols. Experience with monitoring tools (Prometheus, Grafana, etc.). Preferred Certifications: Google Cloud Professional DevOps Engineer, Google Cloud Professional Cloud Architect, Red Hat Certified Engineer (RHCE) or similar Linux certification
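The automation and incident-response duties in the SRE listing above often start with small, dependable scripts. As a minimal illustrative sketch (all names are hypothetical; a real probe would call the service's health endpoint), a retry loop with exponential backoff in Python:

```python
import time

def check_with_backoff(probe, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Call probe() until it returns True, backing off exponentially.

    Returns the attempt number that succeeded, or raises RuntimeError
    if the service never becomes healthy within max_attempts tries.
    """
    for attempt in range(1, max_attempts + 1):
        if probe():
            return attempt
        if attempt < max_attempts:
            sleep(base_delay * 2 ** (attempt - 1))  # 1s, 2s, 4s, ...
    raise RuntimeError(f"service unhealthy after {max_attempts} attempts")
```

Injecting the `sleep` function keeps the helper testable without real delays; in production it would wrap an HTTP or gRPC health check.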

Posted 2 days ago

Apply

4.0 years

0 Lacs

Kerala, India

Remote

Linkedin logo

About FriskaAi FriskaAi is a powerful AI-enabled, EHR-agnostic platform designed to help healthcare providers adopt an evidence-based approach to care. Our technology addresses up to 80% of chronic diseases, including obesity and type 2 diabetes, enabling better patient outcomes. 📍 Location: Remote 💼 Job Type: Full-Time Job Description We are seeking a highly skilled Backend Developer to join our team. The ideal candidate will have expertise in Python and Django, with experience in SQL and working in a cloud-based environment on Microsoft Azure. You will be responsible for designing, developing, and optimizing backend systems that drive our healthcare platform and ensure seamless data flow and integration. Key Responsibilities Backend Development Develop and maintain scalable backend services using Python and Django. Build and optimize RESTful APIs for seamless integration with frontend and third-party services. Implement efficient data processing and business logic to support platform functionality. Database Management Design and manage database schemas using Azure SQL or PostgreSQL. Write and optimize SQL queries, stored procedures, and functions. Ensure data integrity and security through proper indexing and constraints. API Development & Integration Develop secure and efficient RESTful APIs for frontend and external integrations. Ensure consistent and reliable data exchange between systems. Optimize API performance and scalability. Cloud & Infrastructure Deploy and manage backend applications on Azure App Service and Azure Functions. Set up and maintain CI/CD pipelines using Azure DevOps. Implement monitoring and logging using Azure Application Insights. Microservices Architecture Design and implement microservices to modularize backend components. Ensure smooth communication between services using messaging queues or REST APIs. Optimize microservices for scalability and fault tolerance. Testing & Debugging Write unit and integration tests using Pytest.
Debug and resolve production issues quickly and efficiently. Ensure code quality and reliability through regular code reviews. Collaboration & Optimization Work closely with frontend developers, product managers, and stakeholders. Conduct code reviews to maintain high-quality standards. Optimize database queries, API responses, and backend processes for maximum performance. Qualifications Education & Experience 🎓 Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience) 🔹 2–4 years of backend development experience Technical Skills ✔ Proficiency in Python and Django ✔ Strong expertise in SQL (e.g., Azure SQL, PostgreSQL, MySQL) ✔ Experience with RESTful API design and development ✔ Familiarity with microservices architecture ✔ Hands-on experience with Azure services, including: • Azure App Service • Azure Functions • Azure Storage • Azure Key Vault ✔ Experience with CI/CD using Azure DevOps ✔ Proficiency with version control tools like Git ✔ Knowledge of containerization with Docker Soft Skills 🔹 Strong problem-solving skills and attention to detail 🔹 Excellent communication and teamwork abilities 🔹 Ability to thrive in a fast-paced, agile environment Preferred Skills (Nice to Have) ✔ Experience with Kubernetes (AKS) for container orchestration ✔ Knowledge of Redis for caching ✔ Experience with Celery for asynchronous task management ✔ Familiarity with GraphQL for data querying ✔ Understanding of infrastructure as code (IaC) using Terraform or Bicep What We Offer ✅ Competitive salary & benefits package ✅ Opportunity to work on cutting-edge AI-driven solutions ✅ A collaborative and inclusive work environment ✅ Professional development & growth opportunities 🚀 If you’re passionate about backend development and eager to contribute to innovative healthcare solutions, we’d love to hear from you! 🔗 Apply now and be part of our mission to transform healthcare!
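The "build and optimize RESTful APIs" work described in this listing often reduces to small, testable pure functions. A minimal illustrative sketch (field names are hypothetical) of the pagination logic behind a list endpoint:

```python
def paginate(items, page=1, per_page=10):
    """Return one page of items plus metadata, as a JSON-serializable dict."""
    start = (page - 1) * per_page
    return {
        "page": page,
        "per_page": per_page,
        "total": len(items),
        "results": items[start:start + per_page],
    }

# In a Django view this would typically be wrapped as:
#   return JsonResponse(paginate(list(queryset.values()), page))
```

Keeping the logic framework-free makes it easy to unit-test with Pytest, as the listing asks.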

Posted 2 days ago

Apply

0.0 years

0 Lacs

Vijay Nagar, Indore, Madhya Pradesh

On-site

Indeed logo

Job Title: AWS DevOps Engineer Internship Company: Inventurs Cube LLP Location: Indore, Madhya Pradesh Job Type: Full-time Internship Duration: 1 to 3 months Responsibilities: Assist in the design, implementation, and maintenance of AWS infrastructure using Infrastructure as Code (IaC) principles (e.g., CloudFormation, Terraform). Learn and apply CI/CD (Continuous Integration/Continuous Deployment) pipelines for automated software releases. Support the monitoring and logging of AWS services to ensure optimal performance and availability. Collaborate with development teams to understand application requirements and implement appropriate cloud solutions. Help troubleshoot and resolve infrastructure-related issues. Participate in security best practices implementation and review. Contribute to documentation of cloud architecture, configurations, and processes. Stay updated with the latest AWS services and DevOps trends. What We're Looking For: Currently pursuing a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Basic understanding of cloud computing concepts, preferably AWS. Familiarity with at least one scripting language (e.g., Python, Bash). Knowledge of Linux/Unix operating systems. Eagerness to learn and a strong problem-solving aptitude. Excellent communication and teamwork skills. Ability to work independently and take initiative. Bonus Points (Not Mandatory, but a Plus): Prior experience with AWS services (e.g., EC2, S3, VPC, IAM). Basic understanding of version control systems (e.g., Git). Exposure to containerization technologies (e.g., Docker, Kubernetes). Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI, AWS CodePipeline). What You'll Gain: Hands-on experience with industry-leading AWS cloud services and DevOps tools. Mentorship from experienced AWS DevOps engineers. Exposure to real-world projects and agile development methodologies. 
Opportunity to build a strong foundation for a career in cloud and DevOps. A dynamic and supportive work environment in Indore. Certificate of internship completion. Job Types: Full-time, Fresher, Internship Contract length: 3 months Pay: ₹15,000.00 - ₹20,000.00 per month Schedule: Day shift Work Location: In person Speak with the employer +91 9685458368

Posted 2 days ago

Apply

11.0 - 21.0 years

30 - 45 Lacs

Mumbai Suburban, Navi Mumbai, Mumbai (All Areas)

Work from Office

Naukri logo

Min 11 to 20 yrs with exp in tools like Azure DevOps, Jenkins, GitLab, GitHub, Docker, Kubernetes, Terraform, Ansible. Exp with Dockerfiles & pipeline code. Exp automating tasks using Shell, Bash, PowerShell, YAML. Exposure to .NET, Java, Pro*C, PL/SQL, Oracle/SQL, REDIS. Required Candidate profile: Exp building a DevOps platform from the ground up using these tools for at least 2 projects; implementing the platform for requirement tracking, code mgmt, release mgmt. Exp in tools such as AppDynamics, Prometheus, Grafana, ELK Stack. Perks and benefits: Additional 40% variable + mediclaim

Posted 2 days ago

Apply

5.0 - 10.0 years

6 - 16 Lacs

Pune, Mumbai (All Areas)

Work from Office

Naukri logo

Role & responsibilities: Proven experience with DevOps and CI/CD tools like Red Hat Ansible, Kubernetes, Prometheus, GitHub, Atlassian Jira, Confluence, and Jenkins. Must have Groovy/Shell scripting knowledge. Good to have Python, Perl, or Ruby scripting knowledge. Practical familiarity with public cloud resources and services, like Google Cloud. Good to have Terraform knowledge. Familiarity with various IT monitoring and management tools like Datadog. Proficiency with container technologies like Docker and Kubernetes. Proficiency in troubleshooting and resolving technical issues across test and production environments. 5+ years of experience as a DevOps Engineer. Candidate should be able to work independently with minimal guidance

Posted 2 days ago

Apply

8.0 - 15.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Naukri logo

Company Name: ANZ Experience: 8+ Years Location: Bangalore (Hybrid) Interview Mode: Virtual Interview Rounds: 2 Rounds Notice Period: Immediate to 30 days Generic Responsibilities: Design, develop, test, deploy, and maintain scalable Node.js applications using microservices architecture. Collaborate with cross-functional teams to identify requirements and implement solutions that meet business needs. Ensure high availability, scalability, security, and performance of the application, working with platforms such as Kafka and Kubernetes and their monitoring tooling. Troubleshoot issues related to API integrations with third-party services, and manage infrastructure with tools like Terraform. Generic Requirements: 8-15 years of experience in software development with expertise in the Node.js programming language. Strong understanding of microservices architecture principles and design patterns. Experience with containerization and orchestration using Kubernetes or similar technologies, plus configuration management (e.g., Ansible). Proficiency in working with message queues like Kafka for building real-time data pipelines.

Posted 2 days ago

Apply

4.0 - 6.0 years

7 - 9 Lacs

Pune

Work from Office

Naukri logo

Managing stakeholders and external interfaces, this role is responsible for the smooth operation of the company's IT infrastructure. The candidate must have a deep understanding of both development and operations processes, as well as a strong technical background

Posted 2 days ago

Apply

6.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About Birlasoft: Birlasoft is a powerhouse where domain expertise, enterprise solutions, and digital technologies converge to redefine business processes. We take pride in our consultative and design thinking approach, driving societal progress by enabling our customers to run businesses with unmatched efficiency and innovation. As part of the CKA Birla Group, a multibillion-dollar enterprise, we boast a 12,500+ professional team committed to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our dedication to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose. About the Job: Familiar with Cloud Engineering to leverage Cloud and DevOps based technologies provided by the Platform teams. Collaborates with the Product Manager to align technical solutions with business goals and serves as the escalation point for cloud engineering issues. Supports the Product technical architecture, alignment to the technology roadmap, and technical engineering standards. Job Title: Sr Technical Lead Location: Pune Educational Background: Bachelor's degree in Computer Science, Information Technology, or related field. Key Responsibilities: This individual will assist with setting up and provisioning architecture, optimizing efforts for infrastructure, and deploying best practices and excellence in automation techniques. Some great technical skillsets for this individual to possess would be the following: Azure or AWS certifications. DevOps certification. Scripting certification (preferably Python). Previous Agile experience. Experience with at least some automation tools such as Ansible, Puppet, Chef, Salt, and Terraform. Experience: 6-9 years

Posted 2 days ago

Apply

7.0 - 12.0 years

12 - 18 Lacs

Pune, Chennai, Coimbatore

Hybrid

Naukri logo

Hiring "Azure & DevOps" for Pune/Chennai/Coimbatore locations. Overall Experience: 6-12 yrs. If you are interested in the below-mentioned position, please share your updated CV to sandhya_allam@epam.com along with the following details. Shortlisted applicants will be contacted directly. 1. Have you applied for a role in EPAM in recent times? 2. Years of experience in Azure Cloud and DevOps solutions 3. Years of experience in Docker & Kubernetes 4. Years of experience in Terraform 5. Experience in Python/Bash/PowerShell 6. Current salary 7. Expected salary 8. Notice period (negotiable or mandatory). Responsibilities: Responsible for fault-tolerance, high-availability, scalability, and security on Azure infrastructure and platform. Responsible for implementation of CI/CD pipelines with automated build and test systems. Responsible for production deployment using multiple deployment strategies. Responsible for automating the Azure infrastructure and platform deployment with IaC. Responsible for automating system configurations using configuration management tools. Hands-on production experience with Azure compute services: VM management, VMSS, AKS, Container Instances, autoscaling, load balancers, spot instances, App Service. Hands-on production experience with Azure network services: VNET, subnets, ExpressRoute, Azure Gateway, VPN, Load Balancer, DNS, Traffic Manager, CDN, Front Door, Private Link, Network Watcher. Good automation skills using Azure orchestration tools: Terraform, Ansible, ARM & CLI. Hands-on production experience in Docker and container orchestration using AKS, ACR. Ability to write scripts (Linux shell/Python/PowerShell/Bash/CLI) to automate cloud automation tasks
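Cloud automation scripting of the kind this listing asks for frequently includes validating resource names against a team convention before provisioning. A small illustrative Python check, assuming a hypothetical env-app-index naming convention (the pattern itself is an assumption, not a standard):

```python
import re

# Hypothetical convention: <env>-<app>-<two-digit index>, e.g. prod-webapi-01
NAME_RE = re.compile(r"^(dev|stg|prod)-[a-z0-9]{3,24}-\d{2}$")

def valid_resource_name(name):
    """Return True if the resource name matches the naming convention."""
    return bool(NAME_RE.match(name))
```

A pre-deployment pipeline step could run this over all names in a plan and fail fast on violations, rather than discovering them after resources exist.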

Posted 2 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Role Description Job Title: Python Developer with AWS Experience: 5+ yrs Location: Hyderabad Notice period: Immediate joiners only (0-10 days) Primary skills: Python developer, AWS (S3, EC2, Lambda, API) Detailed Job Description: 5+ years of work experience using Python and AWS for developing enterprise software applications. Experience in Apache Kafka, including topic creation, message optimization, and efficient message processing. Skilled in Docker and container orchestration tools such as Amazon EKS or ECS. Strong experience managing AWS components, including Lambda (Java), API Gateway, RDS, EC2, CloudWatch. Experience working in an automated DevOps environment, using tools like Jenkins, SonarQube, Nexus, and Terraform for deployments. Hands-on experience with Java-based web services, RESTful approaches, ORM technologies, and SQL procedures in Java. Experience with Git for code versioning and commit management. Experience working in Agile teams with a strong focus on collaboration and iterative development. Ability to implement changes following standard turnover procedures, with a CI/CD focus. Bachelor's or Master's degree in Computer Science, Information Systems, or equivalent. Skills: Python developer, API design, architecture, AWS, OOP, S3, Django, FastAPI, Flask
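The "efficient message processing" for Kafka that this listing mentions usually implies handling redelivered messages idempotently, since consumers can see the same record more than once. A minimal, illustrative Python sketch of deduplication by message key (names are hypothetical; a real consumer would persist the seen-key set):

```python
def process_once(messages, handler, seen=None):
    """Process (key, value) messages at most once per key.

    `seen` carries already-processed keys across calls, so redelivered
    messages are skipped -- a common idempotency pattern for consumers.
    """
    seen = set() if seen is None else seen
    handled = []
    for key, value in messages:
        if key in seen:
            continue  # duplicate delivery; skip
        seen.add(key)
        handled.append(handler(value))
    return handled
```

In production the `seen` state would live in a durable store (e.g. a database keyed by message id), not in process memory.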

Posted 2 days ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

About the Role At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution. Our flagship platform is currently under development. As our DevOps Engineer, you will bridge our backend systems (strategy engine, broker APIs) and frontend applications (analytics dashboards, client portals). You will own the design and execution of scalable infrastructure, CI/CD automation, and system observability in a high-frequency, multi-tenant trading environment. This role is central to deploying our containerized strategy engine (Lean-based), while ensuring data integrity, latency optimization, and cost-efficient scalability. We are a remote-first team and are open to hiring exceptional candidates globally. Key Responsibilities Design secure, scalable environments for containerized, multi-tenant API services and user-isolated strategy runners. Implement low-latency cloud infrastructure across development, staging, and production environments. Automate the CI/CD lifecycle, from pipeline design to versioned production deployment (GitHub Actions, GitLab CI, etc.). Manage Dockerized containers and orchestrate deployment with Kubernetes, ECS, or similar systems. Collaborate with backend and frontend teams to define infrastructure and deployment workflows. Optimize and monitor high-throughput data pipelines for strategy engines using tools like ClickHouse. Integrate observability stacks: Prometheus, Grafana, ELK, or Datadog for logs, metrics, and alerts. Support automated rollbacks, canary releases, and resilient deployment practices. Automate infrastructure provisioning using Terraform or Ansible (Infrastructure as Code).
Ensure system security, audit readiness (SOC2, GDPR, SEBI), and comprehensive access control logging. Contribute to high-availability architecture and event-driven design for alerting and strategy signals. Technical Competencies Required Cloud: AWS (preferred), GCP, or Azure. Containerization: Proficiency with Docker and orchestration tools (Kubernetes, ECS, etc.). CI/CD: Experience with YAML-based pipelines using GitHub Actions, GitLab CI/CD, or similar tools. Data Systems: Familiarity with PostgreSQL, MongoDB, ClickHouse, or Supabase. Monitoring: Setup and scaling of observability tools like Prometheus, ELK Stack, or Datadog. Distributed Systems: Strong understanding of scalable microservices, caching, and message queues. Event-Driven Architecture: Experience with Kafka, Redis Streams, or AWS SNS/SQS (preferred). Cost Optimization: Ability to build cold-start strategy runners and enable cloud auto-scaling. 0–3 years of experience. Nice-to-Haves Experience with real-time or high-frequency trading systems. Familiarity with broker integrations and exchange APIs (e.g., Zerodha, Dhan). Understanding of IAM, role-based access control systems, and multi-region deployments. Educational background from Tier-I or Tier-II institutions with strong CS fundamentals, passion for scalable infrastructure, and a drive to build cutting-edge fintech systems. What We Offer Opportunity to shape the core DevOps and infrastructure for a next-generation fintech product. Exposure to real-time strategy execution, backtesting systems, and quantitative modeling. Competitive compensation with performance-based bonuses. Remote-friendly culture with async-first communication. Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
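The "automated rollbacks, canary releases" responsibility in this listing ultimately reduces to a promotion decision over observed metrics. A deliberately simplified, illustrative Python sketch comparing canary and baseline error rates (the tolerance value is a hypothetical threshold, not a recommendation):

```python
def canary_verdict(baseline_error_rate, canary_error_rate, tolerance=0.01):
    """Decide whether a canary release may proceed.

    Promote when the canary's error rate is within `tolerance`
    (absolute) of the baseline; otherwise roll back.
    """
    if canary_error_rate <= baseline_error_rate + tolerance:
        return "promote"
    return "rollback"
```

Real canary analysis (e.g. in a progressive-delivery controller) compares many metrics over a window with statistical tests, but the shape of the decision is the same.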

Posted 2 days ago

Apply

0.0 - 5.0 years

0 Lacs

Chetput, Chennai, Tamil Nadu

On-site

Indeed logo

Job Description: Azure Infrastructure Engineer Exp: 7+ Years CTC: 20 LPA Notice period: Immediate – 15 days Base Location: Chennai (Onsite - Saudi Arabia (KSA)) Profile source: Anywhere in India Timings: 1:00pm-10:00pm Work Mode: WFO (Mon-Fri) We are looking for an Azure Infrastructure Engineer with 3–5 years of experience who understands cloud architecture and security best practices aligned with the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM). The candidate will be responsible for designing, implementing, and managing secure and scalable infrastructure on Microsoft Azure, ensuring compliance with CSA security principles and regulatory standards. Key Responsibilities: Design and deploy Azure infrastructure with a security-first mindset, aligned with the CSA CCM and the Azure Well-Architected Framework. Implement identity and access controls (RBAC, Azure AD, MFA, Conditional Access) as per the CSA IAM domain. Ensure data protection using Azure encryption capabilities (at rest, in transit, and in use). Deploy network security architectures (NSGs, Azure Firewall, Private Link, ExpressRoute) compliant with CSA and NIST guidelines. Enable security monitoring and incident response with Azure Defender, Sentinel, and Security Center. Map and document infrastructure against CSA CCM controls. Ensure infrastructure is compliant with CIS Benchmarks, ISO 27001, and CSA STAR guidelines. Automate infrastructure provisioning with ARM templates, Bicep, or Terraform, integrating security guardrails. Perform periodic vulnerability assessments and remediation aligned with CSA guidelines. Required Skills & Qualifications: 3–5 years of experience in Azure cloud infrastructure. Strong hands-on experience in Azure IaaS (VMs, VNETs, Storage, Load Balancers, etc.). In-depth knowledge of Azure security tools (Azure Security Center, Defender for Cloud, Sentinel). Familiarity with the Cloud Security Alliance (CSA) Cloud Controls Matrix (CCM) and CAIQ.
Strong understanding of identity and access management principles. Proficient in scripting (PowerShell, Azure CLI) and IaC (ARM/Bicep/Terraform). Experience working in regulated industries (e.g., healthcare, finance) is a plus. Certifications (Preferred): Microsoft Certified: Azure Security Engineer Associate (AZ-500) Microsoft Certified: Azure Solutions Architect Expert CSA CCSK (Certificate of Cloud Security Knowledge) or CCSP Soft Skills: Excellent documentation and communication skills. Ability to translate compliance requirements into technical controls. Strong collaboration skills with security, operations, and compliance teams. Job Type: Full-time Pay: From ₹60,000.00 per month Schedule: Night shift Supplemental Pay: Performance bonus Ability to commute/relocate: Chetput, Chennai, Tamil Nadu: Reliably commute or planning to relocate before starting work (Required) Experience: total work: 5 years (Preferred) Work Location: In person
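Integrating security guardrails into IaC provisioning, as this listing describes, typically means validating parsed resource definitions against controls before deployment. A minimal illustrative Python check against two CSA-style controls; the resource schema here is hypothetical, not an actual ARM or Bicep format:

```python
def violates_guardrails(resource):
    """Return a list of guardrail violations for a parsed IaC resource.

    Mirrors two simple CSA-style controls: storage must encrypt at rest,
    and must not allow public network access.
    """
    problems = []
    props = resource.get("properties", {})
    if not props.get("encryption", {}).get("enabled", False):
        problems.append("encryption-at-rest disabled")
    if props.get("publicNetworkAccess", "Enabled") != "Disabled":
        problems.append("public network access enabled")
    return problems
```

Run over every resource in a deployment plan, a check like this can gate the pipeline: an empty list means the resource passes, a non-empty list fails the build with actionable messages.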

Posted 2 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Veeam, the #1 global market leader in data resilience, believes businesses should control all their data whenever and wherever they need it. Veeam provides data resilience through data backup, data recovery, data portability, data security, and data intelligence. Based in Seattle, Veeam protects over 550,000 customers worldwide who trust Veeam to keep their businesses running.

We’re looking for a Platform Engineer to join the Veeam Data Cloud. The mission of the Platform Engineering team is to provide a secure, reliable, and easy-to-use platform that enables our teams to build, test, deploy, and monitor the VDC product. This is an excellent opportunity for someone with cloud infrastructure and software development experience to help build the world’s most successful modern data protection platform.

Your tasks will include:
- Write and maintain code to automate our public cloud infrastructure, software delivery pipeline, other enablement tools, and internally consumed platform services
- Document system design, configurations, processes, and decisions to support our async, distributed team culture
- Collaborate with a team of remote engineers to build the VDC platform
- Work with a modern technology stack based on containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain
- Participate in the on-call rotation for product operations

Technologies we work with: Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, etc.

What we expect from you:
- 3+ years of experience in production operations for a SaaS (Software as a Service) or cloud service provider
- Experience automating infrastructure through code using technologies such as Pulumi or Terraform
- Experience with GitHub Actions
- Experience with a breadth and depth of public cloud services
- Experience building and supporting enterprise SaaS products
- Understanding of the principles of operational excellence in a SaaS environment
- Scripting skills in languages like Bash or Python
- Understanding of, and experience implementing, secure design principles in the cloud
- Demonstrated ability to learn new technologies quickly and implement them in a pragmatic manner
- A strong bias toward action and direct, frequent communication
- A university degree in a technical field

Will be an advantage:
- Experience with Azure
- Experience with high-level programming languages such as Go, Java, C/C++, etc.

We offer:
- Family Medical Insurance
- Annual flexible spending allowance for health and well-being
- Life insurance
- Personal accident insurance
- Employee Assistance Program
- A comprehensive leave package, including parental leave
- Meal Benefit Pass
- Transportation Allowance
- Monthly Daycare Allowance
- Veeam Care Days – an additional 24 hours for your volunteering activities
- Professional training and education, including courses and workshops, internal meetups, and unlimited access to our online learning platforms (Percipio, Athena, O’Reilly) and mentoring through our MentorLab program

Please note: If the applicant is permanently located outside India, Veeam reserves the right to decline the application.

#Hybrid

Veeam Software is an equal opportunity employer and does not tolerate discrimination in any form on the basis of race, color, religion, gender, age, national origin, citizenship, disability, veteran status or any other classification protected by federal, state or local law. All your information will be kept confidential.

Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice. The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes. By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice.

Posted 2 days ago

Apply

12.0 - 20.0 years

6 - 16 Lacs

Pune

Work from Office


Role & responsibilities

We are seeking a skilled and results-driven Azure DevOps Engineer with hands-on experience in Azure cloud services, Infrastructure as Code (IaC) using Terraform and/or Bicep, and modern DevOps practices. You will play a key role in designing, implementing, and maintaining scalable, secure, and automated cloud infrastructure.

# Responsibilities
- Design, build, and maintain Azure infrastructure using Infrastructure as Code (Terraform and/or Bicep).
- Develop and manage CI/CD pipelines using Azure DevOps or GitHub Actions to automate build, test, and deployment processes.
- Collaborate with architects, developers, and security teams to implement best practices for cloud infrastructure, security, and compliance.
- Manage Azure resources (VMs, Networking, Storage, AKS, App Services, etc.) with automation and IaC.
- Monitor, troubleshoot, and optimize infrastructure for performance, reliability, and cost.
- Implement security controls and policies (Identity, RBAC, Key Vault, firewalls, etc.) in Azure environments.
- Maintain documentation for infrastructure, procedures, and standards.
- Participate in on-call rotation and incident response as needed.

# Required Skills & Qualifications
- Hands-on experience with Azure architecture (IaaS, PaaS, networking, security).
- Strong proficiency with Terraform and/or Bicep for infrastructure automation.
- Experience with Azure DevOps, GitHub Actions, or equivalent CI/CD platforms.
- Proficient in scripting languages (e.g., PowerShell, Bash).
- Solid understanding of networking, security, and identity concepts in cloud environments.
- Experience with version control systems (Git).
- Familiarity with monitoring tools (Azure Monitor, Log Analytics).
- Strong troubleshooting and analytical skills.
- Excellent communication and teamwork abilities.

Preferred/Bonus Skills
- Azure certifications (e.g., AZ-104, AZ-400, AZ-305).
- Knowledge of other cloud platforms.

Posted 2 days ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP DevOps Engineer to join our team in Ban/Hyd/Chn/Gur/Noida, Karnātaka (IN-KA), India (IN).

Responsibilities
- Design, implement, and manage GCP infrastructure using Infrastructure as Code (IaC) tools.
- Develop and maintain CI/CD pipelines to improve development workflows.
- Monitor system performance and ensure high availability of cloud resources.
- Collaborate with development teams to streamline application deployments.
- Maintain security best practices and compliance across the cloud environment.
- Automate repetitive tasks to enhance operational efficiency.
- Troubleshoot and resolve infrastructure-related issues in a timely manner.
- Document procedures, policies, and configurations for the infrastructure.

Skills
- Google Cloud Platform (GCP)
- Terraform
- Ansible
- CI/CD
- Kubernetes
- Docker
- Python
- Bash/Shell Scripting
- Monitoring tools (e.g., Prometheus, Grafana)
- Cloud Security
- Jenkins
- Git

About NTT DATA

NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.
Visit us at us.nttdata.com NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here . If you'd like more information on your EEO rights under the law, please click here . For Pay Transparency information, please click here .

Posted 2 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
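To make the "infrastructure as code" idea concrete, here is a minimal sketch of a Terraform configuration that declares a single Azure resource group. All names here (the `example` resource label, the `rg-demo` group name, the provider version constraint) are illustrative placeholders, not taken from any specific posting:

```hcl
# Minimal sketch of a Terraform configuration for Azure.
# Resource and group names below are hypothetical.
terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

# Declares the desired state; Terraform computes the changes needed to reach it.
resource "azurerm_resource_group" "example" {
  name     = "rg-demo"
  location = "Central India"
}
```

A typical workflow runs `terraform init` to download the provider, `terraform plan` to preview changes, and `terraform apply` to create or update the resources declaratively.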

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies by experience level. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
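Two of the questions above (remote state storage and handling sensitive data) lend themselves to a short concrete answer. The sketch below shows an Azure-based remote state backend and a variable marked `sensitive`; all resource and account names are hypothetical placeholders:

```hcl
# Sketch: remote state in an Azure storage account, plus a sensitive input.
# The storage account, container, and key names are hypothetical.
terraform {
  backend "azurerm" {
    resource_group_name  = "rg-tfstate"
    storage_account_name = "sttfstatedemo"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}

variable "db_password" {
  type      = string
  sensitive = true  # value is redacted from plan/apply output
}
```

Note that `sensitive = true` only redacts the value from CLI output; the value can still end up in the state file, which is one reason remote state itself must be access-controlled and encrypted.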

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
