Home
Jobs

9125 Terraform Jobs - Page 9

Filter
Filter Interviews
Min: 0 years
Max: 25 years
Min: ₹0
Max: ₹10000000
Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Linkedin logo

About Foxit Foxit is a global software company reshaping how the world interacts with documents. With over 700 million users worldwide, we offer cutting-edge PDF, collaboration, and e-signature solutions across desktop, mobile, and cloud platforms. As we expand our SaaS and cloud-native capabilities, we're seeking a technical leader who thrives in distributed environments and can bridge the gap between development and operations at global scale. Role Overview As a Senior Development Support Engineer , you will serve as a key technical liaison between Foxit’s global production environments and our China-based development teams. Your mission is to ensure seamless cross-border collaboration by investigating complex issues, facilitating secure and compliant debugging workflows, and enabling efficient delivery through modern DevOps and cloud infrastructure practices. This is a hands-on, hybrid role requiring deep expertise in application development, cloud operations, and diagnostic tooling. You'll work across production environments to maintain business continuity, support rapid issue resolution, and empower teams working under data access and sovereignty constraints. Key Responsibilities Cross-Border Development Support Investigate complex, high-priority production issues inaccessible to China-based developers. Build sanitized diagnostic packages and test environments to enable effective offshore debugging. Lead root cause analysis for customer-impacting issues across our Java and PHP-based application stack. Document recurring patterns and technical solutions to improve incident response efficiency. Partner closely with China-based developers to maintain architectural alignment and system understanding. Cloud Infrastructure & DevOps Manage containerized workloads (Docker/Kubernetes) in AWS and Azure; optimize performance and cost. Support deployment strategies (blue-green, canary, rolling) and troubleshoot CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI). Implement and manage Infrastructure as Code using Terraform (multi-cloud), with CloudFormation or ARM Templates as a plus. Support observability through tools like New Relic, CloudWatch, Azure Monitor, and log aggregation systems. Automate environment provisioning, monitoring, and diagnostics using Python, Bash, and PowerShell. Collaboration & Communication Translate production symptoms into actionable debugging tasks for teams without access to global environments. Work closely with database, QA, and SRE teams to resolve infrastructure or architectural issues. Ensure alignment with global data compliance policies (SOC2, NSD-104, GDPR) when sharing data across borders. Communicate technical issues and resolutions clearly to both technical and non-technical stakeholders. Qualifications Technical Skills Languages: Advanced in Java and PHP (Spring Boot, YII); familiarity with JavaScript a plus. Architecture: Experience designing and optimizing backend microservices and APIs. Cloud Platforms: Hands-on with AWS (EC2, Lambda, RDS) and Azure (VMs, Functions, SQL DB). Containerization: Docker & Kubernetes (EKS/AKS); Helm experience a plus. IaC & Automation: Proficient in Terraform; scripting with Python/Bash. DevOps: Familiar with modern CI/CD pipelines; automated testing (Cypress, Playwright). Databases & Messaging: MySQL, MongoDB, Redis, RabbitMQ. Professional Experience Minimum 6+ years of full-stack or backend development experience in high-concurrency systems. 
Strong understanding of system design, cloud infrastructure, and global software deployment practices. Experience working in global, distributed engineering teams with data privacy or access restrictions. Preferred: Exposure to compliance frameworks (SOC 2, GDPR, NSD-104, ISO 27001, HIPAA). Familiarity with cloud networking, CDN configuration, and cost optimization strategies. Tools experience with Postman, REST Assured, or security testing frameworks. Language: Fluency in English; Mandarin Chinese is a strong plus. Why Foxit? Work at the intersection of development and operations on a global scale. Be a trusted technical enabler for distributed teams facing real-world constraints. Join a high-impact team modernizing cloud infrastructure for enterprise-grade document solutions. Competitive compensation, professional development programs, and a collaborative culture.
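For readers skimming these Terraform-heavy listings, here is a minimal sketch of the multi-cloud Infrastructure as Code setup the Foxit role describes (one Terraform configuration managing both AWS and Azure). The regions, bucket and resource group names are illustrative assumptions, not Foxit's actual environment.

```hcl
# Minimal multi-cloud sketch: one root module pinning both the AWS and Azure
# providers, as the role's "Terraform (multi-cloud)" requirement suggests.
# All names, regions and tags below are placeholders.

terraform {
  required_version = ">= 1.5.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # assumed region
}

provider "azurerm" {
  features {}
}

# Hypothetical AWS bucket for sanitized diagnostic packages.
resource "aws_s3_bucket" "diagnostics" {
  bucket = "foxit-diagnostics-example" # placeholder name
  tags   = { team = "dev-support" }
}

# Hypothetical Azure resource group for the same workload on the Azure side.
resource "azurerm_resource_group" "diagnostics" {
  name     = "rg-diagnostics-example"
  location = "Central India"
}
```

In a CI/CD pipeline this configuration would typically be exercised with terraform init, plan and apply.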

Posted 1 day ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview: Core Technology Infrastructure (CTI), part of the Global Technology & Operations organization, consists of more than 6,600 employees worldwide. With a presence in more than 35 countries, CTI designs, builds and operates end-to-end technology infrastructure solutions and manages critical systems and platforms across the bank. CTI delivers industry-leading infrastructure products and services to the company’s employees, customers and clients around the world. Job Description: Terraform Software Developer – the candidate will be responsible for developing automation tools focused on Terraform Enterprise. Experience should include Terraform development and administration (back end of platform), system administration (primarily Linux), and integration with other automation tools like Horizon, Ansible Platform and GitHub. Understanding of SDLC processes and tools. Experience with cloud infrastructure as code, APIs, YAML, HCL, Python. The role also requires operational experience with monitoring of systems and with incident and problem management. Responsibilities: Experience in using Terraform. Review Bitbucket feature files and branching strategy, and maintain Bitbucket branches. Evaluate services of Azure & AWS and use Terraform to develop modules. Address and optimize deployment challenges and help deliver reliable solutions. Interact with technical leads and architects to discover solutions that help solve challenges faced by Product Engineering teams. Be part of an enriching team and solve real production engineering challenges. Improve knowledge in the areas of DevOps & Cloud Engineering by using enterprise tools and contributing to project success. Programming or scripting skills in Python/PowerShell. Any related cloud certification is nice to have.
Ensure that all system deliverables meet quality objectives in functionality, performance, stability, security, accessibility, and data quality. Provide work breakdown and estimates for tasks on agreed scope and development milestones to meet overall project timelines. Experience with the Agile/Scrum methodology. Strong verbal and written communication skills. Highly detail-oriented. Self-motivated, with the ability to work independently and as part of a team. Strong willingness & comfort taking on and challenging development approaches. Strong analytical and communication skills, with the ability to work effectively with both technical and non-technical resources. Must have strong debugging and troubleshooting skills. Able to implement and maintain Continuous Integration/Delivery (CI/CD) pipelines for the services. Able to implement and maintain automation required to improve code logistics from development to production. Assisting the team in instrumenting code for system availability. Maintaining and upgrading the deployment platforms as well as system infrastructure with Infrastructure-as-Code tools. Performing system administration and ad hoc duties. Requirements: Education: B.E. / B.Tech / M.E. / M.Tech / MCA. Experience Range: 8+ years. Foundational Skills: Terraform development experience, Terraform Enterprise administration/operations, Go language, Java or .NET programming knowledge, Python or shell scripting, database query development experience. Desired Skills: AWS, Change Management, Horizon Tools (Ansible, Jira, Confluence, Bitbucket), CI/CD Tools (GitHub, Jenkins, Artifactory), GCP, JIRA, Agile Methodology, Python, PowerShell, HashiCorp Configuration Language (HCL), Infrastructure as Code (IaC), Cloud Integration (Azure, AWS, GCP), Linux Administration, Site Reliability Engineering. Work Timings: 10:30 AM to 7:30 PM. Job Location: Chennai, Hyderabad, Mumbai
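The Bank of America listing centres on Terraform Enterprise administration and on developing reusable modules for Azure and AWS. A hedged sketch of what that shape of configuration might look like is below; the TFE hostname, organization, workspace and module path are assumptions, not the bank's actual setup.

```hcl
# A hedged sketch only: a root configuration pinned to a Terraform Enterprise
# workspace via the remote backend, consuming a reusable Azure module.
# Hostname, organization, workspace and module path are assumptions.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }

  backend "remote" {
    hostname     = "tfe.example.com"      # assumed TFE hostname
    organization = "platform-engineering" # assumed organization
    workspaces {
      name = "azure-network-dev"          # assumed workspace
    }
  }
}

provider "azurerm" {
  features {}
}

# Reusable module of the kind the listing asks candidates to develop for
# Azure & AWS services; the path and its inputs/outputs are hypothetical.
module "vnet" {
  source              = "./modules/azure-vnet"
  resource_group_name = "rg-network-dev"
  address_space       = ["10.20.0.0/16"]
}

output "vnet_id" {
  value = module.vnet.vnet_id
}
```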

Posted 1 day ago

Apply

2.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Site Reliability Engineering (SRE) at Equifax is a discipline that combines software and systems engineering for building and running large-scale, distributed, fault-tolerant systems. SRE ensures that internal and external services meet or exceed reliability and performance expectations while adhering to Equifax engineering principles. SRE is also an engineering approach to building and running production systems – we engineer solutions to operational problems. Our SREs are responsible for overall system operation and we use a breadth of tools and approaches to solve a broad set of problems. We follow practices such as limiting time spent on operational work, blameless postmortems, and proactive identification and prevention of potential outages. Our SRE culture of diversity, intellectual curiosity, problem solving and openness is key to our success. Equifax brings together people with a wide variety of backgrounds, experiences and perspectives. We encourage them to collaborate, think big, and take risks in a blame-free environment. We promote self-direction to work on meaningful projects, while we also strive to build an environment that provides the support and mentorship needed to learn, grow and take pride in our work. What You’ll Do Work in a DevSecOps environment responsible for the building and running of large-scale, massively distributed, fault-tolerant systems. Work closely with development and operations teams to build highly available, cost effective systems with extremely high uptime metrics. Work with the cloud operations team to resolve trouble tickets, develop and run scripts, and troubleshoot issues. Create new tools and scripts designed for auto-remediation of incidents and establishing end-to-end monitoring and alerting on all critical aspects. Build infrastructure as code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLI, and programming with cloud SDK). Participate in a team of first responders in a 24/7, follow-the-sun operating model for incident and problem management. What Experience You Need BS degree in Computer Science or related technical field involving coding (e.g., physics or mathematics), or equivalent job experience required 2-5 years of experience in software engineering, systems administration, database administration, and networking. 1+ years of experience developing and/or administering software in public cloud Experience in monitoring infrastructure and application uptime and availability to ensure functional and performance objectives. Experience in languages such as Python, Bash, Java, Go, JavaScript and/or Node.js Demonstrable cross-functional knowledge with systems, storage, networking, security and databases System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible and/or containers (Docker, Kubernetes, etc.) Proficiency with continuous integration and continuous delivery tooling and practices Cloud Certification Strongly Preferred What Could Set You Apart An ability to demonstrate successful performance of our Success Profile skills, including: DevSecOps - Uses knowledge of DevSecOps operational practices and applies engineering skills to improve resilience of products/services. Designs, codes, verifies, tests, documents, modifies programs/scripts and integrated software services. Applies agreed SRE standards and tools to achieve a well-engineered result. Operational Excellence - Prioritizes and organizes one’s own work. 
Monitors and measures systems against key metrics to ensure availability of systems. Identifies new ways of working to make processes run smoother and faster. Systems Thinking - Uses knowledge of best practices and how systems integrate with others to improve their own work. Understands technology trends and uses that knowledge to identify factors that achieve the defined expectations of systems availability. Technical Communication/Presentation - Explains technical information and the impacts to stakeholders and articulates the case for action. Demonstrates strong written and verbal communication skills. Troubleshooting - Applies a methodical approach to routine issue definition and resolution. Monitors actions to investigate and resolve problems in systems, processes and services. Determines problem fixes/remedies. Assists with the implementation of agreed remedies and preventative measures. Analyzes patterns and trends.
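One way to read the Equifax SRE item about building IaC patterns that meet security and engineering standards with Terraform is a small pattern that bakes the standards into the resource itself. The choice of GCP, the project and the bucket name are assumptions for illustration only.

```hcl
# Hedged sketch: an opinionated storage pattern where the security and
# engineering standards (uniform access, versioning, labels) are encoded as
# defaults rather than left to reviewers. Project and names are placeholders.

terraform {
  required_providers {
    google = {
      source  = "hashicorp/google"
      version = "~> 5.0"
    }
  }
}

provider "google" {
  project = "sre-sandbox-example" # placeholder project
  region  = "asia-south1"
}

resource "google_storage_bucket" "audit_logs" {
  name     = "sre-audit-logs-example" # placeholder; bucket names are global
  location = "ASIA-SOUTH1"

  uniform_bucket_level_access = true
  versioning {
    enabled = true
  }
  labels = {
    owner = "sre"
    tier  = "restricted"
  }
}
```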

Posted 1 day ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

What you’ll do? Design, develop, and operate high scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.). Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Manage sole project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activities. What experience you need? Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in mainstream Java, SpringBoot, TypeScript/JavaScript, HTML, CSS 5+ years experience with Cloud technology: GCP, AWS, or Azure 5+ years experience designing and developing cloud-native solutions 5+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 5+ years experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart? Knowledge or experience with Apache Beam for stream and batch data processing. Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to data visualization tools or platforms.
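This Equifax listing pairs Helm Charts with Terraform constructs on GKE/Kubernetes; a minimal sketch of driving a Helm release from Terraform follows. The chart, namespace and kubeconfig path are stand-ins, not Equifax's actual deployment.

```hcl
# Hedged sketch tying "Helm Charts and Terraform constructs" together: a
# helm_release managed from Terraform against an already-authenticated
# Kubernetes context. Chart, namespace and kubeconfig path are assumptions.

terraform {
  required_providers {
    helm = {
      source  = "hashicorp/helm"
      version = "~> 2.0"
    }
  }
}

provider "helm" {
  kubernetes {
    config_path = "~/.kube/config" # assumes a valid kubeconfig (e.g. for GKE)
  }
}

resource "helm_release" "api" {
  name             = "orders-api"                          # hypothetical service
  repository       = "https://charts.bitnami.com/bitnami"  # public chart repo
  chart            = "nginx"                               # stand-in chart
  namespace        = "services"
  create_namespace = true

  set {
    name  = "replicaCount"
    value = "3"
  }
}
```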

Posted 1 day ago

Apply

4.0 - 6.0 years

27 - 42 Lacs

Chennai

Work from Office

Naukri logo

Skill – AKS, Istio service mesh, CI/CD. Shift timing - Afternoon Shift. Location - Chennai, Kolkata, Bangalore. Excellent AKS, GKE or Kubernetes admin experience. Good troubleshooting experience on Istio service mesh and connectivity issues. Experience with GitHub Actions or a similar CI/CD tool to build pipelines. Working experience on any cloud, preferably Azure or Google Cloud, with good networking knowledge. Experience in Python or shell scripting. Experience in building dashboards and configuring alerts using Prometheus and Grafana.
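Since this listing centres on AKS administration, a minimal Terraform definition of an AKS cluster is one concrete anchor for the skill set it describes. The resource group, node sizes and names are assumptions, not the employer's environment.

```hcl
# Hedged sketch: a small AKS cluster expressed as Terraform. Names, region
# and VM sizes are placeholders chosen only to make the example complete.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "aks" {
  name     = "rg-aks-example"
  location = "Central India"
}

resource "azurerm_kubernetes_cluster" "main" {
  name                = "aks-example"
  location            = azurerm_resource_group.aks.location
  resource_group_name = azurerm_resource_group.aks.name
  dns_prefix          = "aksexample"

  default_node_pool {
    name       = "system"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  identity {
    type = "SystemAssigned"
  }
}
```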

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Banyan Software provides the best permanent home for successful enterprise software companies, their employees, and customers. We are on a mission to acquire, build and grow great enterprise software businesses all over the world that have dominant positions in niche vertical markets. In recent years, Banyan was named the #1 fastest-growing private software company in the US on the Inc. 5000 and amongst the top 10 fastest-growing companies by the Deloitte Technology Fast 500. Founded in 2016 with a permanent capital base set up to preserve the legacy of founders, Banyan focuses on a buy-and-hold-for-life strategy for growing software companies that serve specialized vertical markets. About Campus Café Our student information system is an integrated SIS that manages the entire student life cycle including Admissions, Student Services, Business Office, Financial Aid, Alumni Development and Career Tracking functions. Our SIS is a single database student information system that allows clients to manage marketing, recruitment, applications, course registration, billing, transcripts, financial aid, career tracking, alumni development, fundraising, student attendance and class rosters. It allows real-time access to data that is more accurate and available when our users need it. Our SaaS model means clients don’t need to build and maintain an expensive and complex IT infrastructure. Our APIs and custom integrations will keep all their data in sync and accessible in real-time. Since the database is fully integrated, everything is updated in real-time and there’s no waiting for information. Position Overview We are looking for a versatile System Administrator / DevOps Engineer to support and enhance our Azure-hosted infrastructure, running Java applications on Tomcat, backed by Microsoft SQL Server on Windows servers. The ideal candidate will have a solid background in Windows system administration, hands-on experience with Azure services, and a DevOps mindset focused on automation, reliability, and performance. Key Responsibilities Manage and maintain Windows Server environments hosted in Azure. Support the deployment, configuration, and monitoring of Java applications running on Apache Tomcat. Administer Microsoft SQL Server, including performance tuning, backups, and availability in Azure. Automate infrastructure tasks, such as Java and Tomcat upgrades, using PowerShell, Azure CLI, or Azure Automation. Build and maintain CI/CD pipelines for Java-based applications using tools such as Jenkins or GitHub Actions. Manage/monitor Azure resources: Virtual Machines, Azure SQL, App Services, Azure Monitor, and Networking (App Gateway, Firewall, VNets, NSGs, VPN). Implement and monitor backup, recovery, and security policies within the Azure environment. Collaborate with development and operations teams to optimize deployment strategies and system performance. Troubleshoot issues across systems, applications, and cloud services. Required Skills & Experience 3+ years of experience in system administration or DevOps, with a focus on Windows environments. Experience deploying and managing Java applications on Tomcat. Strong knowledge of Microsoft SQL Server (on-prem and/or Azure-hosted). Solid experience with Azure IaaS and PaaS services (e.g., Azure VMs, Azure SQL, Azure Monitor, Azure Storage). Proficiency in scripting and automation (PowerShell, Azure CLI, or similar). Familiarity with CI/CD tools such as Azure DevOps, Jenkins, or GitHub Actions. 
Understanding of networking, security groups, and VPNs in a cloud context. Preferred Skills Experience with Azure Infrastructure as Code (e.g., ARM templates, Bicep, or Terraform). Familiarity with Azure Active Directory, RBAC, and Identity & Access Management. Experience with containerization (Docker) and/or orchestration (AKS) is a plus. Microsoft Azure certifications (AZ-104, AZ-400) or equivalent experience. Diversity, Equity, Inclusion & Equal Employment Opportunity at Banyan: Banyan affirms that inequality is detrimental to our Global Teams, associates, our Operating Companies, and the communities we serve. As a collective, our goal is to impact lasting change through our actions. Together, we unite for equality and equity. Banyan is committed to equal employment opportunities regardless of any protected characteristic, including race, color, genetic information, creed, national origin, religion, sex, affectional or sexual orientation, gender identity or expression, lawful alien status, ancestry, age, marital status, or protected veteran status and will not discriminate against anyone on the basis of a disability. We support an inclusive workplace where associates excel based on personal merit, qualifications, experience, ability, and job performance.
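The preferred skills above mention Azure Infrastructure as Code (ARM templates, Bicep, or Terraform); a hedged Terraform equivalent for the SQL Server piece of this Azure/Tomcat stack could look like the following. All names are placeholders and the admin password is expected to be injected at plan time, never hard-coded.

```hcl
# Hedged sketch: the Microsoft SQL Server part of the stack as Terraform.
# Resource names, region and SKU are assumptions for illustration only.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
  }
}

provider "azurerm" {
  features {}
}

variable "sql_admin_password" {
  type      = string
  sensitive = true # supplied via a variable, never committed
}

resource "azurerm_resource_group" "app" {
  name     = "rg-campuscafe-example" # hypothetical name
  location = "East US"
}

resource "azurerm_mssql_server" "app" {
  name                         = "sql-campuscafe-example"
  resource_group_name          = azurerm_resource_group.app.name
  location                     = azurerm_resource_group.app.location
  version                      = "12.0"
  administrator_login          = "sqladmin"
  administrator_login_password = var.sql_admin_password
}

resource "azurerm_mssql_database" "sis" {
  name      = "studentinfo" # hypothetical database
  server_id = azurerm_mssql_server.app.id
  sku_name  = "S1"
}
```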

Posted 1 day ago

Apply

0 years

0 Lacs

Mohali district, India

On-site

Linkedin logo

Who we are Millipixels Interactive is an experience-led, interactive solutions company that collaborates with startups and enterprise clients to deliver immersive brand experiences and transformational technology projects. Our Offshore Innovation Center model allows clients to leverage cost differentiators and innovation to redefine what's possible. With a collaborative and detail-oriented approach, we provide value in every engagement. Key Responsibilities Design, build and manage highly automated delivery pipelines using the approach of Infrastructure as Code Design right-scale cloud solutions that address scalability, availability, service continuity, DR, performance and security requirements Guide customers and product teams to make well-informed decisions on DevOps tooling and implementation recommendations Work with the product team and deploy applications on the cloud using blue-green or brown-field deployments Lead and participate in security reviews, audits, risk assessments, vulnerability assessments Evaluate, select, design, and configure security infrastructure systems in a global environment. Support and conduct internal audits, help mitigate findings and implement improvement measures. Identify, integrate, monitor, and improve infosec controls by understanding business processes. Enhance the security direction for the organization including systems, networks, user services, and vendor development efforts. Develop new standards as necessary. Troubleshoot security systems and related issues. Monitor and measure performance characteristics/health of applications. Perform real-time monitoring of infra and system signals. Handle manual and repetitive maintenance tasks that are technical in nature. Troubleshoot infra, analyze logs and apply fixes not related to a code change. Required Skills and Experience: Experience in handling DevOps work driven mainly in the cloud (Google, AWS, Azure) environment Should have an excellent understanding of DevOps principles Experience in the introduction of Infrastructure-as-Code (IaC) and Policy-as-Code (PaC) Experience with Git and release processes. Experience in setting up CI/CD pipelines. Experience in Infrastructure as Code development using Terraform Proficient in containerization & deployment management, Docker & Kubernetes Hands-on experience deploying microservices and other web applications Experience with Ansible or an equivalent CM tool. Experience with modern monitoring solutions including but not limited to the Elasticsearch/Kibana/Grafana/Prometheus stack Strong knowledge of any of the code review and security tools Excellent written and verbal communication skills for coordination across product teams and customers Experience with Agile development, Azure DevOps platform, Azure Policy, and Azure Blueprints. Exposure to MLOps, deploying and monitoring ML models to production will be a plus
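This listing pairs CI/CD pipelines with Terraform-based IaC; in practice the first step is usually a shared remote backend so concurrent pipeline runs do not corrupt state. The bucket, lock table and region below are assumptions for the sketch, with a tagged VPC as the smallest example resource the pipeline would plan and apply.

```hcl
# Hedged sketch: shared remote state for pipeline-driven Terraform, plus one
# example resource. Bucket, lock-table and region names are placeholders.

terraform {
  required_version = ">= 1.5.0"

  backend "s3" {
    bucket         = "millipixels-tfstate-example" # hypothetical bucket
    key            = "platform/core/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "tfstate-locks-example"       # hypothetical lock table
    encrypt        = true
  }

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}

# A tagged VPC as the smallest "infrastructure as code" unit the CI/CD
# pipeline would plan and apply on every merge.
resource "aws_vpc" "core" {
  cidr_block = "10.30.0.0/16"
  tags = {
    managed_by = "terraform"
    pipeline   = "ci-cd"
  }
}
```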

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Kochi

Work from Office

Naukri logo

Create Solution Outline and Macro Design to describe end-to-end product implementation in Data Platforms, including system integration, data ingestion, data processing, serving layer, design patterns, and platform architecture principles for the data platform. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning and estimation. Contribute to reusable components / asset / accelerator development to support capability development. Participate in customer presentations as Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews, quality assurance and work as design authority. Required education Bachelor's Degree Preferred education Non-Degree Program Required technical and professional expertise Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on Azure Cloud Platform; experience on Azure cloud is mandatory (ADLS Gen 1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python etc.) with Cloudera or Hortonworks. Preferred technical and professional experience Experience in architecting complex data platforms on Azure Cloud Platform and on-prem. Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions like Microsoft Fabric or Starburst or Denodo or IBM Data Virtualisation or Talend or Tibco Data Fabric. Exposure to Data Cataloging and Governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary etc.

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

As a DevOps Developer for the IBM Cloud Object Storage Service, you will play a pivotal role in enhancing the developer experience, productivity, and satisfaction within the organization. Your primary responsibilities will include: Collaborating with development teams to understand their needs and provide tailored solutions that align with the organization's goals and objectives. Designing and implementing Continuous Integration and Continuous Deployment (CI/CD) pipelines using tools like Jenkins, Tekton, etc. Designing and implementing tools for automated deployment and monitoring of multiple environments, ensuring seamless integration and scalability. Staying updated with the latest trends and best practices in DevOps and related technologies, and incorporating them into the development platform. Ensuring security and compliance of the platforms, including patching, vulnerability detection, and threat mitigation. Providing on-call IT support and monitoring technical operations to maintain the stability and reliability of the developer platform. Collaborating with other teams to introduce best automation practices and tools, fostering a culture of innovation and continuous improvement. Embracing an Agile culture and employing relevant fit-for-purpose methodologies and tools such as Trello, GitHub, Jira, etc. Maintaining good communication skills and the ability to lead global teams remotely, ensuring effective collaboration and knowledge sharing. Implement and automate infrastructure solutions that support IBM Cloud products and infrastructure. Implement and maintain state-of-the-art CI/CD pipelines, ensuring full compliance with industry standards and regulatory frameworks. Administer automated CI/CD systems and tools. Partner with other teams, managers and program managers to develop alerting and monitoring for mission-critical services. Provide technical escalation support for other Infrastructure Operations teams. Maintain highly scalable, secure cloud infrastructures leveraging industry-leading platforms such as AWS, Azure, or GCP. Orchestrate and manage infrastructure as code (IaC) implementations using cutting-edge tools like Terraform. Required education Bachelor's Degree Required technical and professional expertise Proven Experience: Demonstrated track record of success as a Site Reliability Engineer or in a similar role. System Monitoring and Troubleshooting: Strong skills in system monitoring, issue response, and troubleshooting for optimal system performance. Automation Proficiency: Proficiency in automation for production environment changes, streamlining processes for efficiency. Collaborative Mindset: Collaborative mindset with the ability to partner seamlessly with cross-functional teams for shared success. Effective Communication Skills: Excellent communication skills, essential for effective integration planning and swift issue resolution. Tech Stack: Jenkins, Linux Administration, Python, Ansible, Golang, Terraform. Preferred Professional and Technical Expertise Programming and scripting skills: Go, Python. Must be proficient in writing, debugging, and maintaining automation, scripts, and code (i.e., Bash, Ansible, and Python, Java or Golang). Ability to administer, configure, optimize and monitor services and/or servers at scale. Strong understanding of scalability, reliability, and performance principles.
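Two items in this listing, developing alerting and monitoring for mission-critical services and managing IaC with Terraform, can meet in one place: the alert itself kept as code. AWS/CloudWatch and every name below are assumptions; the same idea applies on Azure or GCP.

```hcl
# Hedged sketch: an alert defined as code, so monitoring changes go through
# the same review and pipeline as any other infrastructure change.
# Metric, thresholds and topic names are illustrative assumptions.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_sns_topic" "oncall" {
  name = "cos-oncall-example" # hypothetical escalation topic
}

resource "aws_cloudwatch_metric_alarm" "high_5xx" {
  alarm_name          = "object-store-5xx-rate"
  alarm_description   = "Notifies on-call when 5xx responses spike (sketch)."
  namespace           = "AWS/ApplicationELB"
  metric_name         = "HTTPCode_Target_5XX_Count"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 2
  threshold           = 50
  comparison_operator = "GreaterThanThreshold"
  alarm_actions       = [aws_sns_topic.oncall.arn]
}
```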

Posted 1 day ago

Apply

89.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Linkedin logo

Company Description GFK - Growth from Knowledge. For over 89 years, we have earned the trust of our clients around the world by solving critical questions in their decision-making process. We fuel their growth by providing a complete understanding of their consumers’ buying behavior, and the dynamics impacting their markets, brands and media trends. In 2023, GfK combined with NIQ, bringing together two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights - delivered with advanced analytics through state-of-the-art platforms - GfK drives “Growth from Knowledge”. Job Description It's an exciting time to be a builder. Constant technological advances are creating an exciting new world for those who understand the value of data. The mission of NIQ’s Media Division is to turn NIQ into the global leader that transforms how consumer brands plan, activate and measure their media activities. Recombine is the delivery area focused on maximising the value of data assets in our NIQ Media Division. We apply advanced statistical and machine learning techniques to unlock deeper insights, whilst integrating data from multiple internal and external sources. Our teams develop data integration products across various markets and product areas, delivering enriched datasets that power client decision-making. Role Overview We are looking for a Principal Software Engineer for our Recombine delivery area to provide technical leadership within our development teams, ensuring best practices, architectural coherence, and effective collaboration across projects. This role is ideal for a highly experienced engineer who can bridge the gap between data engineering, data science, and software engineering, helping teams build scalable, maintainable, and well-structured data solutions. As a Principal Software Engineer, you will play a hands-on role in designing and implementing solutions while mentoring developers, influencing technical direction, and driving best practices in software and data engineering. This role includes line management responsibilities, ensuring the growth and development of team members. The role will be working within an AWS environment, leveraging the power of cloud-native technologies and modern data platforms Key Responsibilities Technical Leadership & Architecture Act as a technical architect, ensuring alignment between the work of multiple development teams in data engineering and data science. Design scalable, high-performance data processing solutions within AWS, considering factors such as governance, security, and maintainability. Drive the adoption of best practices in software development, including CI/CD, testing strategies, and cloud-native architecture. Work closely with Product Owners to translate business needs into technical solutions. Hands-on Development & Technical Excellence Lead by example through high-quality coding, code reviews, and proof-of-concept development. Solve complex engineering problems and contribute to critical design decisions. Ensure effective use of AWS services, including AWS Glue, AWS Lambda, Amazon S3, Redshift, and EMR. Develop and optimise data pipelines, data transformations, and ML workflows in a cloud environment. Line Management & Team Development Provide line management to engineers, ensuring their professional growth and development. Conduct performance reviews, set development goals, and mentor team members to enhance their skills. 
Foster a collaborative and high-performing engineering culture, promoting knowledge sharing and continuous improvement beyond team boundaries. Support hiring, onboarding, and career development initiatives within the engineering team. Collaboration & Cross-Team Coordination Act as the technical glue between data engineers, data scientists, and software developers, ensuring smooth integration of different components. Provide mentorship and guidance to developers, helping them level up their skills and technical understanding. Work with DevOps teams to improve deployment pipelines, observability, and infrastructure as code. Engage with stakeholders across the business, translating technical concepts into business-relevant insights. Governance, Security & Data Best Practices Champion data governance, lineage, and security across the platform. Advocate for and implement scalable data architecture patterns, such as Data Mesh, Lakehouse, or event-driven pipelines. Ensure compliance with industry standards, internal policies, and regulatory requirements. Qualifications Requirements & Experience Strong software engineering background with experience in designing and building production-grade applications in Python, Scala, Java, or similar languages. Proven experience with AWS-based data platforms, specifically AWS Glue, Redshift, Athena, S3, Lambda, and EMR. Expertise in Apache Spark and AWS Lake Formation, with experience building large-scale distributed data pipelines. Experience with workflow orchestration tools like Apache Airflow or AWS Step Functions. Cloud experience in AWS, including containerisation (Docker, Kubernetes, ECS, EKS) and infrastructure as code (Terraform, CloudFormation). Strong knowledge of modern software architecture, including microservices, event-driven systems, and distributed computing. Experience leading teams in an agile environment, with a strong understanding of CI/CD pipelines, automated testing, and DevOps practices. Excellent problem-solving and communication skills, with the ability to engage with both technical and non-technical stakeholders. Proven line management experience, including mentoring, career development, and performance management of engineering teams. Additional Information Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. 
We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Linkedin logo

Company Description Quantanite is a customer experience (CX) solutions company that helps fast-growing companies and leading global brands to transform and grow. We do this through a collaborative and consultative approach, rethinking business processes and ensuring our clients employ the optimal mix of automation and human intelligence. We are an ambitious team of professionals spread across four continents and looking to disrupt our industry by delivering seamless customer experiences for our clients, backed up with exceptional results. We have big dreams, and are constantly looking for new colleagues to join us who share our values, passion and appreciation for diversity. Job Description About the Role As a DevOps Engineer you will work closely with our global teams to learn about the business and technical requirements and formulate the necessary infrastructure and resource plans to properly support the growth and maintainability of various systems. Key Responsibilities Implement a diverse set of development, testing, and automation tools, as well as manage IT infrastructure. Plan the team structure and activities, and actively participate in project management. Comprehend customer requirements and project Key Performance Indicators (KPIs). Manage stakeholders and handle external interfaces effectively. Set up essential tools and infrastructure to support project development. Define and establish DevOps processes for development, testing, release, updates, and support. Possess the technical expertise to review, verify, and validate software code developed in the project. Engage in software engineering tasks, including designing and developing systems to enhance reliability, scalability, and operational efficiency through automation. Collaborate closely with agile teams to ensure they have the necessary tools for seamless code writing, testing, and deployment, promoting satisfaction among development and QA teams. Monitor processes throughout their lifecycle, ensuring adherence, identifying areas for improvement, and minimizing wastage. Advocate and implement automated processes whenever feasible. Identify and deploy cybersecurity measures by continuously performing vulnerability assessments and managing risk. Handle incident management and conduct root cause analysis for continuous improvement. Coordinate and communicate effectively within the team and with customers. Build and maintain continuous integration (CI) and continuous deployment (CD) environments, along with associated processes and tools. Qualifications About the Candidate Proven 5 years of experience with Linux-based infrastructure and proficiency in a scripting language. Must have solid cloud computing skills such as network management, cloud computing and cloud databases in any one of the public clouds (AWS, Azure or GCP) Must have hands-on experience in setting up and managing cloud infrastructure like Kubernetes, VPC, VPN, Virtual Machines, Cloud Databases etc. Experience in IaC (Infrastructure as Code) tools like Ansible, Terraform. Must have hands-on experience in coding and scripting in at least one of the following: Shell, Python, Groovy Experience as a DevOps Engineer or similar software engineering role. Experienced in establishing an optimized CI/CD environment relevant to the project. Automation using scripting languages like Perl/Python and shell scripts like Bash and CSH. Good knowledge of configuration and build tools like Bazel, Jenkins etc. Good knowledge of repository management tools like Git, Bitbucket etc. 
Good knowledge of monitoring solutions and generating insights for reporting. Excellent debugging skills/strategies. Excellent communication skills. Experienced in working in an Agile environment. Additional Information Benefits At Quantanite, we ask a lot of our associates, which is why we give so much in return. In addition to your compensation, our perks include: Dress: Wear anything you like to the office. We want you to feel as comfortable as when working from home. Employee Engagement: Experience our family community and embrace our culture where we bring people together to laugh and celebrate our achievements. Professional development: We love giving back and ensure you have opportunities to grow with us and even travel on occasion. Events: Regular team and organisation-wide get-togethers and events. Value orientation: Everything we do at Quantanite is informed by our Purpose and Values. We Build Better. Together. Future development At Quantanite, you'll have a personal development plan to help you improve in the areas you're looking to develop in over the coming years. Your manager will dedicate time and resources to supporting you in getting to the next level. You'll also have the opportunity to progress internally. As a fast-growing organisation, our teams are growing, and you'll have the chance to take on more responsibility over time. So, if you're looking for a career full of purpose and potential, we'd love to hear from you!

Posted 1 day ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Noida

Work from Office

Naukri logo

"Ensure platform reliability and performance: Monitor, troubleshoot, and optimize production systems running on Kubernetes (EKS, GKE, AKS). Automate operations: Develop and maintain automation for infrastructure provisioning, scaling, and incident response. Incident response & on-call support: Participate in on-call rotations to quickly detect, mitigate, and resolve production incidents. Kubernetes upgrades & management: Own and drive Kubernetes version upgrades, node pool scaling, and security patches. Observability & monitoring: Implement and refine observability tools (Datadog, Prometheus, Splunk, etc.) for proactive monitoring and alerting. Infrastructure as Code (IaC): Manage infrastructure using Terraform, Terragrunt, Helm, and Kubernetes manifests. Cross-functional collaboration: Work closely with developers, DBPEs (Database Production Engineers), SREs, and other teams to improve platform stability. Performance tuning: Analyze and optimize cloud and containerized workloads for cost efficiency and high availability. Security & compliance: Ensure platform security best practices, incident response, and compliance adherence.." Required education None Preferred education Bachelor's Degree Required technical and professional expertise Strong expertise in Kubernetes (EKS, GKE, AKS) and container orchestration. Experience with AWS, GCP, or Azure, particularly in managing large-scale cloud infrastructure. Proficiency in Terraform, Helm, and Infrastructure as Code (IaC). Strong understanding of Linux systems, networking, and security best practices. Experience with monitoring & logging tools (Datadog, Splunk, Prometheus, Grafana, ELK, etc.). Hands-on experience with automation & scripting (Python, Bash, or Go). Preferred technical and professional experience Experience in incident management & debugging complex distributed systems. Familiarity with CI/CD pipelines and release automation.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Role: Technical Architect Experience: 8-15 years Location: Bangalore, Chennai, Gurgaon, Pune, and Kolkata Mandatory Skills: Python, PySpark, SQL, ETL, Pipelines, Azure Databricks, Azure Data Factory, & Architect Designing. Primary Roles and Responsibilities: Developing Modern Data Warehouse solutions using Databricks and AWS/Azure Stack. Ability to provide solutions that are forward-thinking in the data engineering and analytics space. Collaborate with DW/BI leads to understand new ETL pipeline development requirements. Triage issues to find gaps in existing pipelines and fix the issues. Work with business to understand the need in the reporting layer and develop data models to fulfill reporting needs. Help junior team members to resolve issues and technical challenges. Drive technical discussions with client architects and team members. Orchestrate the data pipelines in a scheduler via Airflow. Skills and Qualifications: Bachelor's and/or master’s degree in computer science or equivalent experience. Must have total 8+ yrs. of IT experience and 5+ years' experience in Data warehouse/ETL projects. Deep understanding of Star and Snowflake dimensional modelling. Strong knowledge of Data Management principles. Good understanding of Databricks Data & AI platform and Databricks Delta Lake Architecture. Should have hands-on experience in SQL, Python and Spark (PySpark). Candidate must have experience in AWS/Azure stack. Desirable to have ETL with batch and streaming (Kinesis). Experience in building ETL / data warehouse transformation processes. Experience with Apache Kafka for use with streaming data / event-based data. Experience with other open-source big data products, Hadoop (incl. Hive, Pig, Impala). Experience with open-source non-relational / NoSQL data repositories (incl. MongoDB, Cassandra, Neo4J). Experience working with structured and unstructured data including imaging & geospatial data. Experience working in a DevOps environment with tools such as Terraform, CircleCI, Git. Proficiency in RDBMS, complex SQL, PL/SQL, Unix Shell Scripting, performance tuning and troubleshooting. Databricks Certified Data Engineer Associate/Professional Certification (Desirable). Comfortable working in a dynamic, fast-paced, innovative environment with several ongoing concurrent projects. Should have experience working in Agile methodology. Strong verbal and written communication skills. Strong analytical and problem-solving skills with a high attention to detail.

Posted 1 day ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Gurugram

Work from Office

Naukri logo

Company: Mercer Description: About the role Location: Gurugram Functional Area: Software Engineering Education Qualification: BTech/MTech from tier 1 colleges Experience: 8+ years Key Responsibilities Own and deliver complete features across the development lifecycle, including design, architecture, implementation, testability, debugging, shipping, and servicing. Write and review clean, well-thought-out code with an emphasis on quality, performance, simplicity, durability, scalability, and maintainability. Perform data analysis to identify opportunities to optimize services. Lead discussions for the architecture of products/solutions and refine code plans. Work on research and development in cutting-edge accelerations and optimizations. Mentor junior team members in their growth and development. Collaborate with Product Managers, Architects, and UX Designers on new features. Core Technology skills - Java/J2EE, full stack development, Python, microservices, SQL/NoSQL databases, Cloud (AWS), API development and other open source technologies. 8+ years of experience building highly available distributed systems at scale. Configuration Management (Terraform, Chef, Puppet or Ansible). Problem-solving skills to determine the cause of bugs and resolve complaints. Strong organizational skills, including an ability to perform under pressure and manage multiple priorities with competing demands for resources. Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.

Posted 1 day ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Gurugram

Work from Office

Naukri logo

Company: Mercer Description: We are seeking a talented individual to join our Technology team at Mercer. This role will be based in Gurugram. This is a hybrid role that has a requirement of working at least three days a week in the office. Senior DevOps Engineer We are looking for an ideal candidate with a minimum of 4 years of experience in DevOps. The candidate should have a strong and deep understanding of Amazon Web Services (AWS) and DevOps tools like Terraform, Ansible, Jenkins. Location: Gurgaon Functional Area: Engineering Education Qualification: Graduate/Postgraduate Experience: 4-6 Years We will count on you to: Deploy infrastructure on AWS cloud using Terraform Deploy updates and fixes Build tools to reduce occurrence of errors and improve customer experience Perform root cause analysis of production errors and resolve technical issues Develop scripts for automation Troubleshooting and maintenance What you need to have: 4+ years of technical experience in the DevOps area. Knowledge of the following technologies and applications: AWS, Terraform, Linux Administration, Shell Script, Ansible, CI Server: Jenkins, Apache/Nginx/Tomcat Good to have experience in the following technologies: Python What makes you stand out: Excellent verbal and written communication skills, comfortable interfacing with business users Good troubleshooting and technical skills Able to work independently Why join our team: We help you be your best through professional development opportunities, interesting work and supportive leaders. We foster a vibrant and inclusive culture where you can work with talented colleagues to create new solutions and have impact for colleagues, clients and communities. Our scale enables us to provide a range of career opportunities, as well as benefits and rewards to enhance your well-being. Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $23 billion and more than 85,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X. Mercer Assessments business, one of the fastest-growing verticals within the Mercer brand, is a leading global provider of talent measurement and assessment solutions. As part of Mercer, the world's largest HR consulting firm and a wholly owned subsidiary of Marsh McLennan, we are dedicated to delivering talent foresight that empowers organizations to make informed, critical people decisions. Leveraging a robust, cloud-based assessment platform, Mercer Assessments partners with over 6,000 corporations, 31 sector skill councils, government agencies, and more than 700 educational institutions across 140 countries. Our mission is to help organizations build high-performing teams through effective talent acquisition, development, and workforce transformation strategies. Our research-backed assessments, advanced technology, and comprehensive analytics deliver transformative outcomes for both clients and their employees. We specialize in designing tailored assessment solutions across the employee lifecycle, including pre-hire evaluations, skills assessments, training and development, certification exams, competitions and more. 
At Mercer Assessments, we are committed to enhancing the way organizations identify, assess, and develop talent. By providing actionable talent foresight, we enable our clients to anticipate future workforce needs and make strategic decisions that drive sustainable growth and innovation. Mercer, a business of Marsh McLennan (NYSE: MMC), is a global leader in helping clients realize their investment objectives, shape the future of work and enhance health and retirement outcomes for their people. Marsh McLennan is a global leader in risk, strategy and people, advising clients in 130 countries across four businesses: Marsh, Guy Carpenter, Mercer and Oliver Wyman. With annual revenue of $24 billion and more than 90,000 colleagues, Marsh McLennan helps build the confidence to thrive through the power of perspective. For more information, visit mercer.com, or follow on LinkedIn and X. Marsh McLennan is committed to embracing a diverse, inclusive and flexible work environment. We aim to attract and retain the best people and embrace diversity of age, background, caste, disability, ethnic origin, family duties, gender orientation or expression, gender reassignment, marital status, nationality, parental status, personal or social status, political affiliation, race, religion and beliefs, sex/gender, sexual orientation or expression, skin color, or any other characteristic protected by applicable law. Marsh McLennan is committed to hybrid work, which includes the flexibility of working remotely and the collaboration, connections and professional development benefits of working together in the office. All Marsh McLennan colleagues are expected to be in their local office or working onsite with clients at least three days per week. Office-based teams will identify at least one anchor day per week on which their full team will be together in person.
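For the "Deploy infrastructure on AWS cloud using Terraform" responsibility in this Mercer listing, teams often start from a community module rather than raw resources; a hedged sketch using the public terraform-aws-modules VPC module is below, with CIDRs, AZs and names as placeholders.

```hcl
# Hedged sketch: AWS networking deployed through a public registry module
# instead of hand-written resources. All names and CIDRs are assumptions.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "~> 5.0"

  name            = "mercer-devops-example" # hypothetical name
  cidr            = "10.40.0.0/16"
  azs             = ["ap-south-1a", "ap-south-1b"]
  private_subnets = ["10.40.1.0/24", "10.40.2.0/24"]
  public_subnets  = ["10.40.101.0/24", "10.40.102.0/24"]

  enable_nat_gateway = true
}
```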

Posted 1 day ago

Apply

4.0 - 9.0 years

6 - 14 Lacs

Hyderabad

Work from Office


Title: .Net Developer (.NET + OpenShift or Kubernetes) | 4 to 12 years | Bengaluru & Hyderabad

Responsibilities:
Assess and understand the application implementation while working with architects and business experts
Analyse business and technology challenges and suggest solutions to meet strategic objectives
Build cloud-native applications meeting 12/15-factor principles on OpenShift or Kubernetes
Migrate .NET Core and/or Framework Web/API/Batch components deployed in PCF Cloud to OpenShift, working independently
Analyse and understand the code, identify bottlenecks and bugs, and devise solutions to mitigate and address these issues
Design and implement unit test scripts and automation using NUnit to achieve 80% code coverage
Perform back-end code reviews and ensure compliance with Sonar scans, Checkmarx and Black Duck to maintain code quality
Write functional automation test cases for system integration using Selenium
Coordinate with architects and business experts across the application to translate key requirements

Required Qualifications:
4+ years of experience in .NET Core (3.1 and above) and/or Framework (4.0 and above) development (coding, unit testing, functional automation) implementing microservices, REST APIs, batch/web components, reusable libraries, etc.
Proficiency in C# with a good knowledge of VB.NET
Proficiency in cloud platforms (OpenShift, AWS, Google Cloud, Azure) and hybrid/multi-cloud strategies, with at least 3 years in OpenShift
Familiarity with cloud-native patterns, microservices, and application modernization strategies
Experience with monitoring and logging tools like Splunk, Log4j, Prometheus, Grafana, ELK Stack, AppDynamics, etc.
Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and CI/CD tools (e.g., Harness, Jenkins, UDeploy)
Proficiency in databases like MS SQL Server, Oracle 11g/12c, MongoDB, DB2
Experience in integrating front-end with back-end services
Experience working with code versioning using Git, GitHub
Familiarity with job scheduling through Autosys, PCF batch jobs
Familiarity with scripting languages like shell, and with Helm chart modules

This role works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications.
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. The work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. The software engineer builds skills and expertise of his/her software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. The software engineer collaborates and acts as a team player with other software engineers and stakeholders.

Posted 1 day ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Bengaluru

Work from Office


Hello Visionary! We empower our people to stay resilient and relevant in a constantly evolving world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future. Does that sound like you? Then it seems like you’d make a great addition to our vibrant team.

We are looking for a Senior SAP BTP DevOps Engineer. Before our software developers write even a single line of code, they have to understand what drives our customers. What is the environment? What is the user story based on? Implementation means trying, testing, and improving outcomes until a final solution emerges. Knowledge means exchange – discussions with colleagues from all over the world. Join our Digitalization Technology and Services (DTS) team based in Bangalore.

You’ll make a difference by:
Designing and implementing CI/CD pipelines using GitLab for SAP BTP and CAP applications
Establishing infrastructure as code practices using Terraform/Terragrunt
Automating deployment processes, ensuring zero-downtime deployments
Managing and optimizing cloud infrastructure on SAP BTP and AWS
Implementing monitoring, logging, and alerting solutions
Providing technical leadership in DevOps best practices
Collaborating with development teams to improve delivery processes
Collaborating with infrastructure teams on automation of infrastructure provisioning
Collaborating with product teams to develop, maintain, and create new processes, procedures, and concepts
Maintaining tools and technologies utilized in DevOps processes
Troubleshooting DevOps systems and solving problems across platforms and application domains
Suggesting architectural, procedural, and systematic improvements based on empirical evidence
Taking strong initiative and staying highly result-oriented

Job / Skills:
5-8 years of professional experience in software development and DevOps
Strong expertise in: SAP BTP administration and deployment, GitLab CI/CD pipelines, Terraform/Terragrunt, Infrastructure as Code (IaC), cloud platforms (SAP BTP, AWS), PowerShell and CLI tools, REST APIs, monitoring and logging tools
Experience in implementing and operating AWS solutions and services with high availability, scalability, and performance
Experience with building, deploying, and configuring SAP BTP, Python, Java and Angular applications
Experience with CI/CD tools that build, package, and deploy applications (e.g., GitLab, Jenkins, Octopus Deploy, NuGet, Sonar)
Experience with infrastructure automation tools preferred (e.g., Ansible, Terraform, PowerShell DSC)
Experience in architecting serverless applications with AWS Lambda and Python
Experience administering Windows Servers in production environments
Excellent command of English in written and spoken communication, and strong presentation skills
Experience with Jira, Confluence or any other ALM tool will be an added advantage
Good at communicating within the team as well as with all stakeholders
Strong customer focus and a good learner; highly proactive and a team player

Create a better #TomorrowWithUs! This role is in Bangalore, where you’ll get the chance to work with teams impacting entire cities, countries – and the craft of things to come. We’re Siemens. A collection of over 312,000 minds building the future, one day at a time in over 200 countries. All employment decisions at Siemens are based on qualifications, merit and business need. Bring your curiosity and creativity and help us craft tomorrow.
At Siemens, we are always challenging ourselves to build a better future. We need the most innovative and diverse Digital Minds to develop tomorrow’s reality. Find out more about the Digital world of Siemens here: www.siemens.com/careers/digitalminds

Posted 1 day ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office


Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future.

We are looking for DevOps professionals with 5 to 8 years of experience in cloud infrastructure maintenance and operations.
Strong hands-on experience with Azure services (Compute, Networking, Storage, and Security).
Expertise in Infrastructure as Code (IaC) using Bicep / ARM / Terraform (Bicep / ARM templates experience is a plus).
Proficiency in managing and optimizing CI/CD pipelines in Azure DevOps.
In-depth knowledge of networking concepts (VNETs, Subnets, DNS, Load Balancers, VPNs).
Proficiency in scripting with PowerShell, Azure CLI, or Python for automation.
Strong knowledge of Git and version control best practices.

Infrastructure Design & Management
Architect and manage Azure cloud infrastructure for scalability, high availability, and cost efficiency.
Deploy and maintain Azure services such as Virtual Machines, App Services, Kubernetes (AKS), Storage, and Databases.
Implement networking solutions like Virtual Networks, VPN Gateways, NSGs, and Private Endpoints.

CI/CD Pipeline Management
Design, build, and maintain Azure DevOps pipelines for automated deployments.
Implement GitOps and branching strategies to streamline development workflows.
Ensure efficient release management and deployment automation using Azure DevOps, GitHub Actions, or Jenkins.

Infrastructure as Code (IaC)
Write, maintain, and optimize Bicep / ARM / Terraform templates for infrastructure provisioning.
Automate resource deployment and configuration management using Azure CLI, PowerShell, etc.

Security & Compliance
Implement Azure security best practices, including RBAC, Managed Identities, Key Vault, and Azure Policy.
Monitor and enforce network security with NSGs, Azure Firewall, and DDoS protection.
Ensure compliance with security frameworks such as CIS, NIST, ASB, etc.
Conduct security audits and vulnerability assessments, and enforce least-privilege access controls.

Monitoring & Optimization
Set up Azure Monitor, Log Analytics, and Application Insights for performance tracking and alerting.
Optimize infrastructure for cost efficiency and performance using Azure Advisor and Cost Management.
Troubleshoot and resolve infrastructure-related incidents in production and staging environments.

Make your mark in our exciting world at Siemens. This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. We’ll support you with hybrid working opportunities, a diverse and inclusive culture, a variety of learning & development opportunities, and an attractive compensation package. Find out more about Siemens careers at www.siemens.com/careers
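For candidates gauging the hands-on IaC expectations of a role like this, the sketch below shows a minimal Terraform configuration using the azurerm provider; the resource names, location, and address space are placeholders chosen for illustration, not details from the posting.

```hcl
provider "azurerm" {
  features {} # the azurerm provider requires this (possibly empty) block
}

# A resource group and a virtual network declared as code.
resource "azurerm_resource_group" "example" {
  name     = "rg-devops-example"   # placeholder name
  location = "Central India"       # example region
}

resource "azurerm_virtual_network" "example" {
  name                = "vnet-example"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  address_space       = ["10.0.0.0/16"] # illustrative CIDR
}
```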

Posted 1 day ago

Apply

8.0 - 13.0 years

10 - 15 Lacs

Chennai

Work from Office


Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future.

We are looking for an Associate Software Architect with 8+ years of experience in AWS cloud infrastructure design, maintenance, and operations.

Key Responsibilities:

Infrastructure Architecture, Design & Management
Understand the existing architecture to identify and implement improvements.
Design and execute the initial implementation of infrastructure.
Define end-to-end DevOps architecture aligned with business goals and technical requirements.
Architect and manage AWS cloud infrastructure for scalability, high availability, and cost efficiency, using services like EC2, Auto Scaling, Load Balancers, and Route 53 to ensure high availability and fault tolerance.
Design and implement secure network architectures using VPCs, subnets, NAT gateways, security groups, NACLs, and private endpoints.

CI/CD Pipeline Management
Design, build, test and maintain AWS DevOps pipelines for automated deployments across multiple environments (dev, staging, production).

Security & Compliance
Enforce least-privilege access controls to enhance security.

Monitoring & Optimization
Centralize monitoring with AWS CloudWatch, CloudTrail, and third-party tools, and set up metrics, dashboards, and alerts.

Infrastructure as Code (IaC)
Write, maintain, and optimize Terraform templates / AWS CloudFormation / AWS CDK for infrastructure provisioning.
Automate resource deployment across multiple environments (DEV, QA, UAT & Prod) and configuration management.
Manage the infrastructure lifecycle through version-controlled code with modular and reusable IaC design.

License Management
Use AWS License Manager to track and enforce software license usage.
Manage BYOL (Bring Your Own License) models for third-party tools like GraphDB.
Integrate license tracking with AWS Systems Manager, EC2, and CloudWatch.
Define custom license rules and monitor compliance across accounts using AWS Organizations.

Documentation & Governance
Create and maintain detailed architectural documentation.
Participate in code and design reviews to ensure compliance with architectural standards.
Establish architectural standards and best practices for scalability, security, and maintainability across development and operations teams.

Interpersonal Skills
Effective communication and collaboration with stakeholders to gather and understand technical and business requirements.
Strong grasp of Agile and Scrum methodologies for iterative development and team coordination.
Mentoring and guiding DevOps engineers while fostering a culture of continuous improvement and DevOps best practices.

Make your mark in our exciting world at Siemens. This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. We’ll support you with hybrid working opportunities, a diverse and inclusive culture, a variety of learning & development opportunities, and an attractive compensation package. Find out more about Siemens careers at www.siemens.com/careers

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


About Digital.ai
At Digital.ai, we are revolutionizing enterprise software delivery. A 9-time Leader in the Gartner Magic Quadrant for Enterprise Agile Planning, we enable large-scale organizations to drive digital transformation through AI-powered DevSecOps. Our platform empowers over 50% of the Fortune 100 and market leaders across industries like financial services, retail, technology, manufacturing, and government. By unlocking the power of predictive insights and secure software delivery, we help our clients accelerate their innovation and stay ahead of the digital curve.

About Us
Digital.ai, a 9-time Leader in the Gartner Magic Quadrant for Enterprise Agile Planning, unifies, secures, and generates predictive insights across the software lifecycle for Global 2000 enterprise customers. Our mission is to unlock endless digital possibilities by harmonizing the delivery of software. Our vision is to be THE enterprise platform for AI-driven software development (https://digital.ai/).

Position Overview:
As a Digital.ai Customer Technical Support Engineer, you'll engage with enterprise-level customers, providing guidance, support and analysis. You'll learn to become a Subject Matter Expert in at least two products within the Digital.ai Value Stream Platform. You will proactively help customers avoid potential issues and be responsible for providing clearly articulated solutions to achieve the greatest customer satisfaction. A strong background in the software development cycle is a major prerequisite for this role. If you are a quick thinker and able to deliver high-quality code quickly, using the latest frameworks and technologies – you should join us!

Requirements:
Bachelor of Science degree in Information Technology, Computer Science or equivalent (preferred)
3+ years working in one or more of these roles: software development or technical support, with an ability to demonstrate strong technical aptitude in one or more platform areas
Strong problem-solving skills
Excellent client-facing skills, including the ability to work with customers in a manner that is professional, compassionate, and effective
Excellent written and verbal communication skills
Ability to synthesize and clearly communicate complex technical issues to technical and non-technical audiences at all levels, both internally and externally
Good understanding of SaaS and cloud operations
Good understanding of the security processes, standards and issues involved in multi-tier, multi-tenant web applications, for example SSO (Single Sign-On authentication), LDAP, etc.
Good understanding of the architectural principles of web-based platforms, including SaaS, multi-tenancy, multi-tiered infrastructure, and application servers
Good understanding of APIs (application programming interfaces), HTTP requests, databases and network infrastructure
Scripting language experience (Python or Perl, etc.)
Good understanding of working on UNIX (Linux, Solaris, etc.) and Windows operating systems and familiarity with applicable troubleshooting tools
Enjoy working in a fast-paced, dynamic, multicultural, innovative, and international environment
Ongoing learning attitude, effective time management skills, attention to detail, and the ability to communicate in English (oral and written)
Must be able to work effectively with a globally distributed team using collaborative tools such as Zendesk, Atlassian, Microsoft Office 365 suite, and Slack

Preferred DevOps-Specific Skills
Replicate/set up customer system architecture and integrations in Azure, AWS, Docker, Hyper-V and/or VirtualBox
Diagnose and troubleshoot network connectivity issues stemming from Windows and Linux protocols
Implementing microservices and containers, e.g., Kubernetes, Docker, OpenShift
Building and implementing CI/CD pipelines; experience working with repos, build automation tools, build orchestration and environment automation is very desirable, e.g., Jenkins, Git, SVN, CVS, CloudFormation, Terraform, Chef, Ansible, Puppet, CodePipeline, and Azure Stack

Digital.ai is firmly committed to merit-based hiring. We maintain compliance with US and international laws. We welcome everyone from all backgrounds, including age, race, color, gender, identity, gender expression, sex, pregnancy, national origin, ancestry, religion, physical or mental ability, medical condition, sexual orientation, marital status, citizenship status, protected military or veteran status, and believe that diversity is the foundation of innovation. For individuals with disabilities who would like to request an accommodation, please advise us within your job application or cover letter.

FRAUD PREVENTION ALERT: Please note that Digital.ai does not use third-party recruiters. In our efforts to protect you against possible impersonation, please check the email address of anyone who contacts you; if you are contacted by an unfamiliar third party making requests, please reach out directly to Digital.ai.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


POSITION: Sr. DevOps Engineer
Job Type: Work From Office (5 days)
Location: Sector 16A, Film City, Noida / Mumbai
Relevant Experience: Minimum 4+ years
Salary: Competitive
Education: B.Tech

About the Company: Devnagri is an AI company dedicated to personalizing business communication and making it hyper-local to attract non-English speakers. We address the significant gap in internet content availability for most of the world’s population who do not speak English. For more details, visit www.devnagri.com

We seek a highly skilled and experienced Senior DevOps Engineer to join our dynamic team. As a key member of our technology department, you will play a crucial role in designing and implementing scalable, efficient and robust infrastructure solutions with a strong focus on DevOps automation and best practices.

Roles and Responsibilities
Design, plan, and implement scalable, reliable, secure, and robust infrastructure architectures
Manage and optimize cloud-based infrastructure components
Architect and implement containerization technologies such as Docker and Kubernetes
Implement CI/CD pipelines to automate the build, test, and deployment processes
Design and implement effective monitoring and logging solutions for applications and infrastructure; establish metrics and alerts for proactive issue identification and resolution
Work closely with cross-functional teams to troubleshoot and resolve issues
Implement and enforce security best practices across infrastructure components
Establish and enforce configuration standards across various environments
Implement and manage infrastructure using Infrastructure as Code principles; leverage tools like Terraform for provisioning and managing resources
Stay abreast of industry trends and emerging technologies; evaluate and recommend new tools and technologies to enhance infrastructure and operations

Must-Have Skills: Cloud (AWS & GCP), Redis, MongoDB, MySQL, Docker, Bash scripting, Jenkins, Prometheus, Grafana, ELK Stack, Apache, Linux
Good-to-Have Skills: Kubernetes, collaboration and communication, problem solving, IAM, WAF, SAST/DAST

Interview Process: Screening round and shortlisting >> 3 technical rounds >> 1 managerial round >> HR closure, with your short success story in DevOps and tech.

For more details, visit our website: https://www.devnagri.com
Skills: DevOps, Linux/Unix, Apache, Amazon Web Services (AWS), Google Cloud Platform (GCP), Prometheus, Grafana, MongoDB, MySQL and CI/CD

Posted 1 day ago

Apply

0 years

0 Lacs

Telangana, India

On-site


At Bayer we’re visionaries, driven to solve the world’s toughest challenges and striving for a world where ‘Health for all, Hunger for none’ is no longer a dream, but a real possibility. We’re doing it with energy, curiosity and sheer dedication, always learning from unique perspectives of those around us, expanding our thinking, growing our capabilities and redefining ‘impossible’. There are so many reasons to join us. If you’re hungry to build a varied and meaningful career in a community of brilliant and diverse minds to make a real difference, there’s only one choice.

Manager Service Delivery

POSITION PURPOSE:
Working in a team of infrastructure specialists and engineers, an infrastructure engineer supports and maintains infrastructure solutions and services as directed and according to architectural guidelines. Individuals in this role will:
Ensure services are delivered and used as required
Work with and support third parties to provide infrastructure services

YOUR TASKS AND RESPONSIBILITIES:
Infrastructure Fundamentals: Build, configure, administer, and support infrastructure technologies and solutions. These technologies and solutions can include computing, storage, networking, physical infrastructure, software, commercial-off-the-shelf (COTS), and open source packages and solutions. They can also include virtual and cloud computing such as Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS).
Modern Standards Approach: Build proficiency in the most important principles of a modern standards approach and awareness of how these standards apply to the work undertaken. Apply these principles under guidance.
Ownership and Initiative: Own an issue until a new owner has been found or the problem has been mitigated or resolved.
Problem Management: Investigate problems in systems, processes, and services, with an understanding of the level of a problem (e.g., strategic, tactical, or operational). Contribute to the implementation of remedies and preventative measures.
Service Focus: Take inputs from stakeholders and establish solutions that facilitate the achievement of business objectives.
Systems Design: Develop proficiency in the scripting tools and software that are essential in the design, build, management, and operation of infrastructure solutions and services. Use scripting to automate common infrastructure management and operations tasks. Translate logical designs into physical designs. Produce detailed designs. Effectively document all work using required standards, methods, and tools, including prototyping tools where appropriate. Design systems characterized by managed levels of risk, manageable business and technical complexity, and meaningful impact. Work with well understood technology and identify patterns.
Systems Integration: Build and test simple interfaces between systems. Work on more complex integration as part of a wider team.
Technical Understanding: Develop proficiency with core technical concepts related to the role and apply them with guidance.
Testing: Correctly execute test scripts under supervision. Effectively incorporate testing into ways of working and delivered solutions and services.
Troubleshooting and Problem Resolution: Troubleshoot and identify problems across different technology capabilities.
Site Reliability Engineering (SRE): Apply SRE principles to enhance the reliability, scalability, and performance of critical IT infrastructure and services. Design and implement monitoring, alerting, and incident response strategies to ensure high availability and rapid recovery. Collaborate with cross-functional teams to automate operational tasks and improve system observability.
Operational Technology (OT) Expertise & Security: Integrate Operational Technology (OT) systems with enterprise IT environments, ensuring secure and efficient data flow between industrial and business systems. Support and maintain OT infrastructure including SCADA, PLCs, and industrial network protocols, with a focus on cybersecurity and compliance. Drive continuous improvement initiatives across IT and OT domains to support digital transformation and smart manufacturing goals. Identify information security risks and the controls that can be used to mitigate threats within solutions and services.

WHO YOU ARE:
Required
Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field
Proven experience in infrastructure engineering or a related role
Experience with network administration, server management, and virtualization technologies
Experience with cloud platforms (AWS, Azure, Google Cloud) and cloud infrastructure
Proficiency in network protocols and technologies (TCP/IP, DNS, DHCP, VPN, etc.)
Strong understanding of server and storage systems
Experience with virtualization technologies
Familiarity with scripting and automation
Strong problem-solving skills and the ability to troubleshoot complex issues
Ability to analyze system performance and identify optimization opportunities
Capacity to understand and mitigate security risks
Ability to work collaboratively in a team environment
Preferred
Advanced certifications (e.g., CCNP, Microsoft Certified, AWS Certified, Azure Solutions Expert)
Experience with DevOps practices and tools
Familiarity with Infrastructure as Code tools (Terraform, Ansible, Chef, Puppet)
Understanding of compliance

Ever feel burnt out by bureaucracy? Us too. That's why we're changing the way we work - for higher productivity, faster innovation, and better results. We call it Dynamic Shared Ownership (DSO). Learn more about what DSO will mean for you in your new role here: https://www.bayer.com/enfstrategyfstrategy

Bayer does not charge any fees whatsoever for the recruitment process. Please do not entertain such demands for payment by any individuals/entities in connection with recruitment with any Bayer Group entity worldwide under any pretext. Please don’t rely upon any unsolicited email from email addresses not ending with domain name “bayer.com” or job advertisements referring you to an email address that does not end with “bayer.com”. For checking the authenticity of such emails or advertisements you may approach us at HROP_INDIA@BAYER.COM.

YOUR APPLICATION
Bayer is an equal opportunity employer that strongly values fairness and respect at work. We welcome applications from all individuals, regardless of race, religion, gender, age, physical characteristics, disability, sexual orientation etc. We are committed to treating all applicants fairly and avoiding discrimination.

Location: India : Telangana : Shameerpet
Division: Crop Science
Reference Code: 847905
Contact Us: 022-25311234

Posted 1 day ago

Apply

12.0 - 17.0 years

14 - 19 Lacs

Mysuru

Work from Office


The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform and infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Overall 12+ years of experience required.
Good exposure to operational aspects (monitoring, automation, remediation) - exposure to monitoring tools like New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc.
Troubleshooting and documenting root cause analysis, and automating incident resolution.
Understands the architecture, the SRE mindset, and the data model.
Platform architecture and engineering - ability to design and architect a cloud platform that can meet client SLAs/NFRs such as availability and system performance. The SRE will define the environment provisioning framework, identify potential performance bottlenecks and design a cloud platform.

Preferred technical and professional experience:
Effectively communicate with business and technical team members.
Creative problem-solving skills and superb communication skills.
Telecom domain experience is a plus.

Posted 1 day ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office


Create solution outlines and macro designs to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform.
Contribute to pre-sales and sales support through RFP responses, solution architecture, planning and estimation.
Contribute to reusable component / asset / accelerator development to support capability development.
Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies.
Participate in customer PoCs to deliver the outcomes.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
Candidates must have experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems.
10-15 years of experience in data engineering and architecting data platforms.
5-8 years’ experience in architecting and implementing data platforms on the Azure Cloud Platform.
5-8 years’ experience in architecting and implementing data platforms on-prem (Hadoop or DW appliance).
Experience on Azure cloud is mandatory (ADLS Gen 1/Gen 2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow.
Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala/PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience:
Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
Candidates should have experience in delivering both business decision support systems (reporting, analytics) and data science domains / use cases.

Posted 1 day ago

Apply

3.0 - 8.0 years

5 Lacs

Hyderabad

Work from Office


Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: AWS Operations
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with team members to understand project needs, developing application features, and ensuring that the applications function seamlessly within the existing infrastructure. You will also engage in troubleshooting and optimizing applications to enhance performance and user experience, while adhering to best practices in software development.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application processes and workflows.
- Engage in continuous learning to stay updated with the latest technologies and methodologies.
- Quickly identify, troubleshoot, and fix failures to minimize downtime.
- Ensure SLAs and OLAs are met within the timelines so that operational excellence is achieved.

Professional & Technical Skills:
- Must-to-have skills: proficiency in AWS Operations.
- Strong understanding of cloud architecture and services.
- Experience with application development frameworks and tools.
- Familiarity with DevOps practices and CI/CD pipelines.
- Ability to troubleshoot and resolve application issues efficiently.
- Strong understanding of cloud networking concepts including VPC design, subnets, routing, security groups, and implementing scalable solutions using AWS Elastic Load Balancer (ALB/NLB).
- Practical experience in setting up and maintaining observability tools such as Prometheus, Grafana, CloudWatch, and the ELK stack for proactive system monitoring and alerting.
- Hands-on expertise in containerizing applications using Docker and deploying/managing them in orchestrated environments such as Kubernetes or ECS.
- Proven experience designing, deploying, and managing cloud infrastructure using Terraform, including writing reusable modules and managing state across environments.
- Good problem-solving skills - the ability to quickly identify, analyze, and resolve issues is vital.
- Effective communication - strong communication skills are necessary for collaborating with cross-functional teams and documenting processes and changes.
- Time management - efficiently managing time and prioritizing tasks is vital in operations support.
- The candidate should have a minimum of 3 years of experience in AWS Operations.

Additional Information:
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Qualification: 15 years full-time education
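As a hedged illustration of the "reusable modules and managing state across environments" requirement above, the sketch below calls a local VPC module and keys its inputs off the current workspace; the module path, input variables, and output are invented for the example and would differ in any real project.

```hcl
# Root configuration: reuse one module across environments via workspaces
# (e.g. `terraform workspace select prod`), keeping a single code base.
module "vpc" {
  source = "./modules/vpc" # hypothetical local module

  environment = terraform.workspace # e.g. dev, qa, uat, prod
  cidr_block  = var.cidr_block      # supplied per environment via tfvars
}

variable "cidr_block" {
  type = string
}

output "vpc_id" {
  value = module.vpc.vpc_id # assumes the module exposes this output
}
```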

Posted 1 day ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
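For readers who are new to the tool, the sketch below shows what a minimal Terraform configuration looks like; the provider, region, AMI ID, and resource names are placeholders chosen only for illustration, not a recommended setup.

```hcl
terraform {
  required_version = ">= 1.5.0"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # example region (Mumbai)
}

# One EC2 instance declared as code: `terraform plan` previews the change,
# `terraform apply` creates it, and `terraform destroy` tears it down.
resource "aws_instance" "web" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.micro"

  tags = {
    Name = "terraform-example-web"
  }
}
```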

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
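As a rough illustration of how Terraform ties into these surrounding platforms, a single root module can pin several providers at once; the version constraints below are examples, not requirements from any posting.

```hcl
# One root module declaring the cloud and container providers most often
# paired with Terraform in these roles.
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "~> 3.0"
    }
    kubernetes = {
      source  = "hashicorp/kubernetes"
      version = "~> 2.0"
    }
  }
}
```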

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium) (see the configuration sketch after this list)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
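
Several of the questions above (state files, remote state storage, sensitive data) come down to a small amount of configuration. The sketch below is illustrative only; the bucket, lock table, and variable names are hypothetical.

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tfstate-bucket"  # hypothetical bucket
    key            = "envs/prod/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "example-terraform-locks" # enables state locking
    encrypt        = true
  }
}

variable "db_password" {
  type      = string
  sensitive = true # redacted from plan/apply output; supply the value via
                   # TF_VAR_db_password or a secrets manager, never in VCS
}
```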

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies