
7730 Terraform Jobs - Page 34

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

7.0 - 10.0 years

20 - 27 Lacs

Hyderabad, Ahmedabad

Hybrid

Source: Naukri

Job Title: DevOps Engineer
Location: Hyderabad & Ahmedabad
Employment Type: Full-Time
Work Model: Hybrid (3 days from office)
Experience: 7+ years

Job Overview:
We are looking for dynamic, motivated individuals to deliver exceptional solutions for the production resiliency of our systems. The role combines software engineering and operations, applying DevOps skills to find efficient ways of managing and operating applications, and carries a high level of responsibility and accountability for delivering technical solutions.

Mandatory:
• OS: Linux
• Cloud: GCP (Compute Engine, Load Balancing, GKE, IAM)
• CI/CD: Jenkins, GitHub Actions, Argo CD
• Containers: Docker, Kubernetes
• IaC: Terraform, Helm
• Monitoring: Prometheus, Grafana, ELK
• Security: Vault, Trivy, OWASP concepts

Nice to Have:
• Service mesh (Istio), Pub/Sub, Kong API Gateway
• Advanced scripting (Python, Bash, Node.js)
• SkyWalking, Rancher, Jira, Freshservice

Scope:
• Own CI/CD strategy and configuration
• Implement DevSecOps practices
• Drive an automation-first culture

Roles and Responsibilities:
• Design and implement end-to-end CI/CD pipelines using Jenkins, GitHub Actions, and Argo CD for production-grade deployments.
• Define branching strategies and workflow templates for development teams.
• Automate infrastructure provisioning using Terraform, Helm, and Kubernetes manifests across multiple environments (see the sketch below).
• Implement and maintain container orchestration strategies on GKE, including Helm-based deployments.
• Manage the secrets lifecycle using Vault and integrate it with CI/CD for secure deployments.
• Integrate DevSecOps tools such as Trivy, SonarQube, and JFrog into CI/CD workflows.
• Collaborate with engineering leads to review deployment readiness and ensure quality gates are met.
• Monitor infrastructure health and plan capacity using Prometheus, Grafana, and Datadog; implement alerting rules.
• Implement auto-scaling, self-healing, and other resilience strategies in Kubernetes.
• Drive process documentation, review peers' automation scripts, and mentor junior DevOps engineers.

If interested, share your resume with: sowmya.v@acesoftlabs.com
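
The Terraform-plus-Helm-on-GKE combination this posting centres on can be illustrated with a minimal sketch. This is a hypothetical example, not the employer's configuration: the project ID, region, cluster name, and chart path are placeholder assumptions, and the Helm provider is shown with its v2 block syntax.

```hcl
provider "google" {
  project = "demo-project" # assumption: replace with a real project ID
  region  = "asia-south1"
}

data "google_client_config" "current" {}

# A deliberately small GKE cluster; real setups would pin versions,
# configure node pools, networking, and workload identity.
resource "google_container_cluster" "primary" {
  name               = "demo-gke"
  location           = "asia-south1"
  initial_node_count = 2
}

# Helm provider authenticated directly against the cluster's outputs
provider "helm" {
  kubernetes {
    host  = "https://${google_container_cluster.primary.endpoint}"
    token = data.google_client_config.current.access_token
    cluster_ca_certificate = base64decode(
      google_container_cluster.primary.master_auth[0].cluster_ca_certificate
    )
  }
}

resource "helm_release" "app" {
  name  = "demo-app"
  chart = "./charts/demo-app" # assumption: a chart checked into the repo
}
```

Because the Helm provider references the cluster's attributes, `terraform apply` creates the cluster first and installs the chart once the endpoint is known.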

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

On-site

Source: LinkedIn

THIS IS A LONG-TERM CONTRACT POSITION WITH ONE OF THE LARGEST GLOBAL TECHNOLOGY LEADERS. Our client is a Fortune 350 company that designs, manufactures, markets, and services semiconductor processing equipment. We are seeking an experienced High Performance Computing (HPC) platform consultant to support users in the India/Asia/EU regions and to carry out platform enhancements and reliability-improvement projects in alignment with the HPC architect.

Minimum qualifications:
Bachelor's or Master's degree in Computer Science or equivalent, with 5+ years of experience in High Performance Computing technologies.

HPC environment:
• Familiarity with HPC workloads such as Ansys/Fluent over MPI; helping users tune their jobs in an HPC environment
• Linux administration
• Parallel file systems (e.g., Gluster, Lustre, ZFS, NFS, CIFS)
• MPI (OpenMPI, MPICH2, Intel MPI) and InfiniBand parallel computing
• Monitoring tools (e.g., Nagios)
• Programming skills, such as Python, would be nice to have, especially using MPI
• Hands-on experience with cloud technologies; Azure and Terraform preferred for VM creation and maintenance (see the sketch below)
• Effective communication skills (the consultant will independently engage with users and resolve incidents across global regions, including Asia and the EU)
• Ability to work independently with minimal supervision

Preferred qualifications:
• Experience with ANSYS products
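
Since the posting prefers Azure with Terraform for VM creation and maintenance, a single compute node might be declared roughly as follows. This is an illustrative sketch under assumed names; a real HPC fleet would add count/for_each, placement groups, InfiniBand-enabled images, and a parallel file system.

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "hpc" {
  name     = "rg-hpc-demo" # illustrative names throughout
  location = "Central India"
}

resource "azurerm_virtual_network" "hpc" {
  name                = "vnet-hpc"
  address_space       = ["10.10.0.0/16"]
  location            = azurerm_resource_group.hpc.location
  resource_group_name = azurerm_resource_group.hpc.name
}

resource "azurerm_subnet" "hpc" {
  name                 = "snet-hpc"
  resource_group_name  = azurerm_resource_group.hpc.name
  virtual_network_name = azurerm_virtual_network.hpc.name
  address_prefixes     = ["10.10.1.0/24"]
}

resource "azurerm_network_interface" "node" {
  name                = "nic-hpc-0"
  location            = azurerm_resource_group.hpc.location
  resource_group_name = azurerm_resource_group.hpc.name

  ip_configuration {
    name                          = "internal"
    subnet_id                     = azurerm_subnet.hpc.id
    private_ip_address_allocation = "Dynamic"
  }
}

resource "azurerm_linux_virtual_machine" "node" {
  name                  = "hpc-node-0"
  resource_group_name   = azurerm_resource_group.hpc.name
  location              = azurerm_resource_group.hpc.location
  size                  = "Standard_HB120rs_v3" # assumption: an HPC-class SKU
  admin_username        = "hpcadmin"
  network_interface_ids = [azurerm_network_interface.node.id]

  admin_ssh_key {
    username   = "hpcadmin"
    public_key = file("~/.ssh/id_rsa.pub") # assumption: existing key pair
  }

  os_disk {
    caching              = "ReadWrite"
    storage_account_type = "Premium_LRS"
  }

  source_image_reference {
    publisher = "Canonical"
    offer     = "0001-com-ubuntu-server-jammy"
    sku       = "22_04-lts"
    version   = "latest"
  }
}
```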

Posted 3 days ago

Apply

12.0 - 18.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Source: LinkedIn

Job Title / Role: Technical Architect
Key Skills: Ansible, Terraform, YAML, Network Operations
Experience: 12-18 years
Location: Greater Noida

We at Coforge are seeking a Technical Architect with the following skill set:

Key Responsibilities:
• Build new data-center environments using a Terraform/Ansible/YAML codebase
• Build and maintain CI/CD pipelines using GitLab
• Develop Ansible playbooks for POAP and network configuration tasks
• Use YAML for structured configuration and parameter-driven deployment (see the sketch below)
• Collaborate with internal teams to align delivery with standards and timelines
• Suggest and implement improvements to tooling, workflows, and automation strategies
• Support US and UK hours

Required Skills:
• Strong hands-on experience with Terraform and Ansible
• Proficiency with GitLab CI/CD and pipeline automation
• Experience working with YAML in config-as-code environments
• Good understanding of networking fundamentals (e.g., VLANs, routing, device provisioning)
• Python scripting skills
• Self-driven with a contractor mindset and excellent communication skills
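
The YAML-driven, parameter-first style this role describes usually means one YAML file acting as the source of truth that both Ansible and Terraform consume. A minimal, provider-agnostic sketch of the Terraform side, assuming a hypothetical vlans.yaml schema:

```hcl
# vlans.yaml (assumed schema, placed next to this module):
#   vlans:
#     - { id: 110, name: "prod-web" }
#     - { id: 120, name: "prod-db" }

locals {
  # Parse the shared YAML parameter file into a native Terraform value
  vlans = yamldecode(file("${path.module}/vlans.yaml")).vlans
}

# This map can feed any network provider's VLAN resource via for_each;
# it is exposed as an output here to keep the sketch provider-agnostic.
output "vlan_map" {
  value = { for v in local.vlans : v.name => v.id }
}
```

Keeping the parameters in YAML lets Ansible playbooks (for POAP and device config) and Terraform (for environment build-out) stay in lockstep from one file.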

Posted 3 days ago

Apply

4.0 years

0 Lacs

India

Remote

Source: LinkedIn

This role is for one of Weekday's clients
Salary range: Rs 1200000 - Rs 1600000 (i.e., INR 12-16 LPA)
Min Experience: 4 years
Location: Remote (India)
JobType: full-time

Requirements

About the Role:
We are looking for a skilled and experienced Java Developer with a strong background in migration projects and hands-on experience with Microsoft Azure to join our dynamic development team. The ideal candidate will play a critical role in modernizing legacy systems and ensuring seamless migration to cloud-native environments. If you're passionate about designing robust, scalable applications and navigating cloud-based transformations, we'd love to hear from you. As part of this role, you will analyze legacy Java applications, develop strategies for their migration, implement enhancements, and deploy them on Azure cloud infrastructure. You will collaborate closely with DevOps, QA, and solution architects to ensure high-performance, secure, and scalable systems.

Key Responsibilities:
Lead or contribute to the migration of legacy systems to modern Java-based architectures on Microsoft Azure.
Analyze existing monolithic or on-prem systems to plan and execute cloud migration strategies.
Design and develop Java applications, APIs, and services using Spring Boot and modern frameworks.
Ensure smooth integration with Azure cloud components such as Azure App Services, Azure SQL, and Azure Storage (see the sketch below).
Optimize code for performance and scalability across distributed systems.
Collaborate with solution architects and stakeholders to define migration goals, timelines, and deliverables.
Implement automation tools and pipelines to streamline migration and deployment processes.
Work closely with QA and DevOps teams to establish continuous integration and deployment pipelines.
Troubleshoot issues in migration and production environments, and provide root-cause analysis.
Create documentation, including technical specifications, migration runbooks, and architectural diagrams.

Required Skills and Qualifications:
4+ years of experience in Java development, with strong hands-on expertise in Java 8+, Spring/Spring Boot, and object-oriented programming principles.
Proven experience in legacy system modernization and application migration projects.
Strong knowledge of Azure services and cloud-native development, especially in deploying Java apps on Azure.
Experience with RESTful API design, microservices, and containerized environments (Docker/Kubernetes preferred).
Familiarity with databases such as Azure SQL, PostgreSQL, or MySQL, including data migration and schema evolution.
Understanding of CI/CD pipelines, source control (Git), and build tools (Maven/Gradle).
Strong analytical, problem-solving, and communication skills.
Experience working in Agile or Scrum development environments.

Preferred Skills (Good to Have):
Knowledge of other cloud platforms (AWS, GCP) is a plus.
Familiarity with DevOps tools such as Azure DevOps, Terraform, or Ansible.
Experience in performance tuning, system monitoring, and cost optimization on Azure.
Exposure to container orchestration tools like Kubernetes.
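
For the Azure App Services target mentioned in the responsibilities, a migrated Spring Boot service could be declared in Terraform roughly as below. A hedged sketch assuming azurerm provider v3 resource names; the web-app name must be globally unique and every identifier is a placeholder.

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "app" {
  name     = "rg-java-migration"
  location = "Central India"
}

resource "azurerm_service_plan" "app" {
  name                = "asp-java-demo"
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  os_type             = "Linux"
  sku_name            = "P1v3"
}

resource "azurerm_linux_web_app" "api" {
  name                = "java-migration-demo" # assumption: globally unique
  resource_group_name = azurerm_resource_group.app.name
  location            = azurerm_resource_group.app.location
  service_plan_id     = azurerm_service_plan.app.id

  site_config {
    application_stack {
      java_server         = "JAVA" # embedded server, e.g. a Spring Boot jar
      java_server_version = "17"
      java_version        = "17"
    }
  }
}
```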

Posted 3 days ago

Apply

0.0 years

0 Lacs

Udaipur, Rajasthan

On-site

Source: Indeed

Job Title: DevOps Intern
Location: Udaipur, Rajasthan (Work from Office)
Type: Internship (Full-time, In-office)
Duration: 3–6 months (with potential for full-time conversion)
Eligibility: Final-year students / fresh graduates / early-career professionals (2024-2025 pass-outs)

About the Role:
We are looking for a passionate and driven DevOps Intern to join our tech team in Udaipur. This is an exciting opportunity for individuals who have a foundational understanding of DevOps practices and hands-on exposure to AWS cloud services. If you are AWS certified and eager to work in a real-world, collaborative environment, we'd love to hear from you!

Key Responsibilities:
Assist in designing, implementing, and maintaining CI/CD pipelines.
Support the automation of infrastructure using tools like Terraform, CloudFormation, or similar (see the sketch below).
Monitor application performance and infrastructure health using tools like CloudWatch, Prometheus, or Grafana.
Work closely with the development team to support code deployments and cloud environments.
Participate in improving system reliability, scalability, and security on AWS.
Document workflows, best practices, and setup procedures.

Required Skills & Qualifications:
Basic understanding of DevOps principles, CI/CD, and Infrastructure as Code (IaC).
Exposure to AWS services such as EC2, S3, IAM, RDS, Lambda, etc.
AWS certification (Cloud Practitioner or higher) preferred.
Familiarity with scripting languages like Bash, Python, or Shell.
Comfort working with Git, Docker, and Linux environments.
Strong problem-solving skills and eagerness to learn.

Nice to Have:
Hands-on experience with DevOps tools like Jenkins, Ansible, Docker, Kubernetes, etc.
Experience with version control systems like GitHub or GitLab.
Exposure to Agile/Scrum environments.

What We Offer:
Exposure to live projects and real-world DevOps practices.
Mentorship from experienced cloud and DevOps professionals.
Certificate of internship and potential Pre-Placement Offer (PPO).
A dynamic and collaborative work culture in our Udaipur office.

Job Types: Full-time, Permanent
Pay: From ₹5,000.00 per month
Benefits: Paid sick time
Schedule: Day shift, Monday to Friday
Ability to commute/relocate: Udaipur, Rajasthan: reliably commute or plan to relocate before starting work (Preferred)
Application Question(s): Have you completed a certificate or course in DevOps? Yes/No
Education: Bachelor's (Preferred)
Work Location: In person
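
To give a concrete flavour of the IaC-plus-monitoring work described above, here is a small illustrative Terraform sketch that launches one EC2 instance and attaches a CloudWatch CPU alarm. The region, AMI filter, and threshold are assumptions, not part of the posting.

```hcl
provider "aws" {
  region = "ap-south-1" # assumption: Mumbai region
}

# Latest Ubuntu 22.04 LTS AMI published by Canonical
data "aws_ami" "ubuntu" {
  most_recent = true
  owners      = ["099720109477"] # Canonical's AWS account

  filter {
    name   = "name"
    values = ["ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*"]
  }
}

resource "aws_instance" "web" {
  ami           = data.aws_ami.ubuntu.id
  instance_type = "t3.micro"

  tags = {
    Name = "intern-demo"
  }
}

# Alarm when average CPU stays above 80% for two consecutive 5-minute periods
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "intern-demo-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 2
  threshold           = 80
  comparison_operator = "GreaterThanThreshold"

  dimensions = {
    InstanceId = aws_instance.web.id
  }
}
```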

Posted 3 days ago

Apply

10.0 years

0 Lacs

Kochi, Kerala, India

On-site

Source: LinkedIn

Introduction
At IBM, work is more than a job - it's a calling: to build, to design, to code, to consult, to think along with clients and sell, to make markets, to invent, to collaborate. Not just to do something better, but to attempt things you've never thought possible. Are you ready to lead in this new era of technology and solve some of the world's most challenging problems? If so, let's talk.

Your Role and Responsibilities
In this position you will work with a web development team (both frontend and backend) to build innovative products from scratch.

What you'll do:
You will collaborate with teams like design, content, and product management to plan, build, and test innovative AI-infused products across various projects. You'll write well-tested code for APIs and tools in Python, Jenkins, or Travis, driving the product forward with quality in mind. This is a perfect fit for you if you are looking to have a large impact and innovate with the latest technologies like LLMs and generative AI!

How we'll help you grow:
You will have access to all the technical training courses you need to become the expert you want to be.
You will learn directly from senior members and leaders in this field.
You will have the opportunity to work directly with multiple clients.

Preferred Education: Master's Degree

Required Technical and Professional Expertise:
10+ years of experience in software development using functional and/or object-oriented programming languages such as JavaScript or Python, including:
Deep knowledge of back-end development in JavaScript or Python, REST APIs, and database technologies (DB2 and/or SQL databases).
Experience with cloud-native applications, working with Docker, Kubernetes, and OpenShift.
Sound knowledge of databases, handling APIs, network requests, and general data manipulation.
Understanding of large-scale application development and cloud architecture, with work experience.
Experience with cloud deployment and building CI/CD pipelines with tools such as Jenkins, Travis, etc.
Solid knowledge of Agile methodology and practices, such as Scrum and Test-Driven Development (TDD).
Experience with modern frontend JavaScript frameworks, such as React or equivalent.
Experience building RESTful APIs and web services in Node.js and similar technologies.
Experience building and scaling web applications.

Preferred Technical and Professional Experience:
You can mentor and guide junior developers.
Experience with infrastructure-as-code languages such as Terraform and Ansible.
Experience with Continuous Integration / Continuous Delivery (CI/CD) methodologies.
Experience using container management technologies such as Kubernetes and Docker.
Experience with any public cloud services.

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Kanayannur, Kerala, India

Remote

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
Familiarity with scripting languages such as Bash and Python.
Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
Container orchestration and management using Kubernetes, Helm, and Docker.
Experience with configuration management and automation tools such as Ansible.
Strong understanding of cloud security best practices, IAM policies, and compliance standards.
Experience with ITSM tools like ServiceNow for incident and change management.
Strong documentation and communication skills.

To qualify for the role, you must have
3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
Hands-on expertise in AWS and Azure environments.
Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
Experience in a 24x7 rotational support model.
Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
Cloud Platforms: AWS, Azure
CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
Infrastructure as Code: Terraform (see the sketch after this listing)
Containerization: Kubernetes (EKS/AKS), Docker, Helm
Logging & Monitoring: AWS CloudWatch, Azure Monitor
Configuration & Automation: Ansible, Bash
Incident & ITSM: ServiceNow or equivalent
Certification: relevant AWS and Azure certifications

Good to have:
Cloud Infrastructure: CloudFormation, ARM Templates
Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
Scripting: Python/Bash
Observability: OpenTelemetry, Datadog, Splunk
Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
Enthusiastic learners with a passion for cloud technologies and DevOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
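
Because this role spans AWS and Azure with Terraform as the single IaC layer, one root module can drive both providers side by side. A minimal sketch with placeholder names (the S3 bucket and storage-account names must be globally unique):

```hcl
provider "aws" {
  region = "ap-south-1"
}

provider "azurerm" {
  features {}
}

# One log store per cloud, managed from the same state and plan
resource "aws_s3_bucket" "logs" {
  bucket = "demo-pipeline-logs" # assumption: globally unique name
}

resource "azurerm_resource_group" "logs" {
  name     = "rg-pipeline-logs"
  location = "Central India"
}

resource "azurerm_storage_account" "logs" {
  name                     = "demopipelinelogs01" # assumption: unique, lowercase
  resource_group_name      = azurerm_resource_group.logs.name
  location                 = azurerm_resource_group.logs.location
  account_tier             = "Standard"
  account_replication_type = "LRS"
}
```

A single `terraform plan` then shows pending changes across both clouds, which is what makes one IaC layer attractive for a dual-platform support team.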

Posted 3 days ago

Apply

0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Job Family: Development Operations (India)
Travel Required: Up to 10%
Clearance Required: None

The Guidehouse SEAS Platform Engineering team is seeking fresher DevOps Infrastructure Engineers. The ideal candidate should be interested in learning new open-source infrastructure and DevOps tools. This role supports and develops infrastructure for Guidehouse internal projects. The position is part of the Solutions Engineering and Architecture team and requires working with users across business segments.

What You Will Do
Collaboratively build and maintain infrastructure for internal stakeholders and external clients (using Terraform).
Apply an understanding of cloud concepts.
Support internal Dockerized platforms for internal analytics users (Posit containerized products).
Administer Linux servers (RedHat and Ubuntu).
Be ready to work the 2 PM to 11 PM shift.

What You Will Need
A computer-science background (B.Tech Computer Science, B.Sc CS, BCA, etc.).
Basic Git version-control knowledge.
Linux training (e.g. RedHat or Ubuntu).
Understanding of cloud computing basics.
Proficiency in at least one scripting language (e.g. Bash, Python).

What Would Be Nice to Have
AZ-900 or AWS CCP certification.
Experience with Docker containers.
RedHat Certified Engineer (RHCE) certification.
Infra, CI/CD, or config management experience (e.g. Ansible, GitHub Actions, Jenkins, Puppet).
System-administrator-level experience with Linux (Red Hat/CentOS or Debian/Ubuntu preferred).
Knowledge of DevOps tools such as Terraform, Docker, and Kubernetes.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer - Protected Veterans, Individuals with Disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity
We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities
Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
Mentor junior team members and contribute to continuous process improvements.

Skills and Attributes for Success
Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates.
Familiarity with scripting languages such as Bash and Python.
Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
Container orchestration and management using Kubernetes, Helm, and Docker.
Experience with configuration management and automation tools such as Ansible.
Strong understanding of cloud security best practices, IAM policies, and compliance standards.
Experience with ITSM tools like ServiceNow for incident and change management.
Strong documentation and communication skills.

To qualify for the role, you must have
3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
Hands-on expertise in AWS and Azure environments.
Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
Experience in a 24x7 rotational support model.
Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools
Must haves:
Cloud Platforms: AWS, Azure
CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
Infrastructure as Code: Terraform
Containerization: Kubernetes (EKS/AKS), Docker, Helm
Logging & Monitoring: AWS CloudWatch, Azure Monitor
Configuration & Automation: Ansible, Bash
Incident & ITSM: ServiceNow or equivalent
Certification: relevant AWS and Azure certifications

Good to have:
Cloud Infrastructure: CloudFormation, ARM Templates
Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
Scripting: Python/Bash
Observability: OpenTelemetry, Datadog, Splunk
Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
Enthusiastic learners with a passion for cloud technologies and DevOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 3 days ago

Apply

8.0 - 13.0 years

15 - 25 Lacs

Gurugram

Remote

Source: Naukri

• Minimum 6 years of hands-on experience deploying, enhancing, and troubleshooting foundational AWS services (EC2, S3, RDS, VPC, CloudTrail, CloudFront, Lambda, EKS, ECS, etc.)
• 3+ years of experience with serverless technologies, services, and container technologies (Docker, Kubernetes, etc.):
o Manage Kubernetes charts using Helm.
o Manage production application deployments in Kubernetes clusters using kubectl.
o Expertise in deploying distributed apps with containers (Docker) and orchestration (Kubernetes, EKS).
o Experience with infrastructure-as-code tools for provisioning and managing Kubernetes infrastructure.
o (Preferred) Certification in container orchestration systems and/or Certified Kubernetes Administrator.
o Experience with log management and analytics tools such as Splunk or ELK.
• 3+ years of experience writing, debugging, and enhancing Terraform to create infrastructure-as-code scripts for EKS, EC2, S3, and other AWS services:
o Expertise with key Terraform features such as infrastructure as code, execution plans, resource graphs, and change automation (see the sketch below).
o Implemented cluster services using Kubernetes and Docker, managing local deployments by building self-hosted Kubernetes clusters with Terraform.
o Managed provisioning of AWS infrastructure using Terraform.
o Develop and maintain infrastructure-as-code solutions using Terraform.
• Ability to write scripts in JavaScript, Bash, Python, TypeScript, or similar languages.
• Able to work independently and as part of a team to architect and implement new solutions and technologies.
• Very strong written and verbal communication skills; the ability to communicate verbally and in writing with all levels of employees and management, capable of successful formal and informal communication, speaking and writing clearly and understandably at the right level.
• Ability to identify, evaluate, learn, and PoC new technologies for implementation.
• Experience in designing and implementing highly resilient AWS solutions.
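
A pared-down illustration of "Terraform scripts for EKS": the cluster resource, its IAM role, and the managed policy attachment. Subnet IDs are left as an input variable, node groups and networking are omitted, and all names are hypothetical.

```hcl
variable "private_subnet_ids" {
  type        = list(string)
  description = "Existing subnets for the control plane (assumption)"
}

# IAM role the EKS control plane assumes
resource "aws_iam_role" "eks" {
  name = "demo-eks-cluster-role"
  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Effect    = "Allow"
      Principal = { Service = "eks.amazonaws.com" }
      Action    = "sts:AssumeRole"
    }]
  })
}

resource "aws_iam_role_policy_attachment" "eks" {
  role       = aws_iam_role.eks.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

resource "aws_eks_cluster" "main" {
  name     = "demo-eks"
  role_arn = aws_iam_role.eks.arn

  vpc_config {
    subnet_ids = var.private_subnet_ids
  }
}
```

Running `terraform plan` against a module like this renders the execution plan and resource graph the posting refers to before any change is applied.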

Posted 3 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Position: Azure DevOps Engineer
Location: Pune
Duration: Contract to Hire

Job Description: Azure DevOps, Terraform

Posted 3 days ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

At NiCE, we don’t limit our challenges. We challenge our limits. Always. We’re ambitious. We’re game changers. And we play to win. We set the highest standards and execute beyond them. And if you’re like us, we can offer you the ultimate career opportunity that will light a fire within you.

So, what’s the role all about?
The Senior Specialist Technical Support Engineer delivers technical support to end users on how to use and administer the NICE Service and Sales Performance Management, Contact Analytics, and/or WFM software solutions efficiently and effectively in fulfilling business objectives. We are seeking a highly skilled and experienced Senior Specialist Technical Support Engineer to join our global support team. In this role, you will be responsible for diagnosing and resolving complex performance issues in large-scale SaaS applications hosted on AWS. You will work closely with engineering, DevOps, and customer success teams to ensure our customers receive world-class support and performance optimization.

How will you make an impact?
Serve as a subject matter expert in troubleshooting performance issues across distributed SaaS environments in AWS.
Interface with various R&D groups, customer support teams, business partners, and customers globally to address CSS Recording and Compliance product issues and resolve high-level escalations.
Analyze logs, metrics, and traces using tools like CloudWatch, X-Ray, Datadog, New Relic, or similar.
Collaborate with development and operations teams to identify root causes and implement long-term solutions.
Provide technical guidance and mentorship to junior support engineers.
Act as an escalation point for critical customer issues, ensuring timely resolution and communication.
Develop and maintain runbooks, knowledge base articles, and diagnostic tools to improve support efficiency.
Participate in on-call rotations and incident response efforts.

Have you got what it takes?
10+ years of experience in technical support, site reliability engineering, or performance engineering roles.
Deep understanding of AWS services such as EC2, RDS, S3, Lambda, ELB, ECS/EKS, and CloudFormation.
Proven experience troubleshooting performance issues in high-availability, multi-tenant SaaS environments.
Strong knowledge of networking, load balancing, and distributed systems.
Proficiency in scripting languages (e.g., Python, Bash) and familiarity with infrastructure-as-code tools (e.g., Terraform, CloudFormation).
Excellent communication and customer-facing skills.

Preferred Qualifications:
AWS certifications (e.g., Solutions Architect, DevOps Engineer).
Experience with observability platforms (e.g., Prometheus, Grafana, Splunk).
Familiarity with CI/CD pipelines and DevOps practices.
Experience working in ITIL or similar support frameworks.

What’s in it for you?
Join an ever-growing, market-disrupting global company where the teams – comprised of the best of the best – work in a fast-paced, collaborative, and creative environment! As the market leader, every day at NICE is a chance to learn and grow, and there are endless internal career opportunities across multiple roles, disciplines, domains, and locations. If you are passionate, innovative, and excited to constantly raise the bar, you may just be our next NICEr!

Enjoy NICE-FLEX!
At NICE, we work according to the NICE-FLEX hybrid model, which enables maximum flexibility: 2 days working from the office and 3 days of remote work, each week. Naturally, office days focus on face-to-face meetings, where teamwork and collaborative thinking generate innovation, new ideas, and a vibrant, interactive atmosphere.

Requisition ID: 7554
Reporting into: Tech Manager
Role Type: Individual Contributor

About NiCE
NICE Ltd. (NASDAQ: NICE) software products are used by 25,000+ global businesses, including 85 of the Fortune 100 corporations, to deliver extraordinary customer experiences, fight financial crime and ensure public safety. Every day, NiCE software manages more than 120 million customer interactions and monitors 3+ billion financial transactions. Known as an innovation powerhouse that excels in AI, cloud and digital, NiCE is consistently recognized as the market leader in its domains, with over 8,500 employees across 30+ countries. NiCE is proud to be an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, national origin, age, sex, marital status, ancestry, neurotype, physical or mental disability, veteran status, gender identity, sexual orientation or any other category protected by law.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

We are looking for a Senior .NET Core and Azure Cloud Developer with 3–5 years of experience to contribute to the design, development, and deployment of modern, cloud-native applications. The ideal candidate has solid hands-on experience building scalable backend systems using .NET Core and Microsoft Azure, and can work collaboratively in agile teams.

Required Skills:
· 3–5 years of professional development experience in .NET Core / C#.
· Strong experience with the Azure cloud platform and core services (e.g., App Services, Azure Functions, Azure SQL, Cosmos DB, Storage Accounts).
· Solid understanding of RESTful APIs, Web APIs, and microservices.
· Experience with source control (Git) and CI/CD pipelines.
· Familiarity with DevOps practices, infrastructure as code (Terraform), and deployment automation.
· Basic knowledge of security practices in cloud applications (authentication, authorization, encryption).
· Strong analytical and problem-solving skills.
· Good communication and teamwork skills.

Preferred Qualifications:
· Exposure to front-end frameworks (e.g., Angular or React) is a plus.
· Azure certification (e.g., AZ-204 or AZ-900) is an advantage.
· Familiarity with healthcare or finance domain projects is a bonus.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Responsibilities
Lead the development of a modern, modular, and flexible restaurant technology platform.
Lead the development of, and co-manage the roadmap for, our HutBot platform, our in-restaurant management app.
Assess, build, and support restaurant ordering platforms, integrating POS with third-party apps and aggregators.
Oversee the integration of kiosks, mobile tablets, smart kitchen, delivery management systems, and BOH applications such as inventory, labor, learning management, and other employee-facing apps.
Develop and maintain enterprise architecture by building integrations between different platforms and apps.

Minimum Requirements
10+ years of development experience managing large projects and teams with progressive career growth.
Development experience in TypeScript/Node.js with the React framework preferred; however, we may consider strong candidates with proven experience in related technologies, e.g. Python, C#, etc.
Familiarity with cloud technologies, with experience in AWS being a bonus, along with proficiency in infrastructure-as-code tools like Terraform.
Strong understanding of modern database systems, including RDS (Postgres), NoSQL (DynamoDB, DocumentDB), and analytics tools like Snowflake, Domo (GDH), and Google Analytics.
Experience in building and supporting restaurant ordering platforms; integration of POS with third-party apps and aggregators; kiosks, mobile tablets, smart kitchen, delivery management systems, and BOH applications such as inventory, labor, learning management, and other employee-facing apps.
Experience in managing and building enterprise architecture by creating integrations between different platforms and apps while managing long-term strategic focus and roadmaps.
Experience in managing large teams across multiple time zones.

The Yum! Brands story is simple. We have four distinctive, relevant and easy global brands – KFC, Pizza Hut, Taco Bell and The Habit Burger Grill – born from the hopes and dreams, ambitions and grit of passionate entrepreneurs. And we want more of this to create our future! As the world's largest restaurant company, we have a clear and compelling mission: to build the world's most loved, trusted and fastest-growing restaurant brands. The key and not-so-secret ingredient in our recipe for growth is our unrivaled talent and culture, which fuels our results. We're looking for talented, motivated, visionary and team-oriented leaders to join us as we elevate and personalize the customer experience across our 48,000 restaurants, operating in 145 countries and territories around the world! We put pizza, chicken and tacos in the hands of customers through customized ordering, unique delivery approaches, app experiences, click-and-collect services and consumer data analytics, creating unique customer dining experiences – and we are only getting started. Employees may work for a single brand and potentially grow to support all company-owned brands depending on their role. Regardless of where they work, as a company opening an average of 8 restaurants a day worldwide, the growth opportunities are endless. Taco Bell has been named one of the 10 Most Innovative Companies in the World by Fast Company; Pizza Hut delivers more pizzas than any other pizza company in the world; and KFC still uses its 75-year-old finger lickin' good recipe, including secret herbs and spices, to hand-bread its chicken every day. Yum! and its brands have offices in Chicago, IL, Louisville, KY, Irvine, CA, Plano, TX and other markets around the world. We don't just say we are a great place to work – our commitments to the world and our employees show it. Yum! has been named to the Dow Jones Sustainability North America Index and ranked among the top 100 Best Corporate Citizens by Corporate Responsibility Magazine, in addition to being named to the Bloomberg Gender-Equality Index. Our employees work in an environment where the value of "believe in all people" is lived every day, enjoying benefits including but not limited to: 4 weeks' vacation PLUS holidays, sick leave and 2 paid days to volunteer at the cause of their choice and a dollar-for-dollar matching gift program; generous parental leave; competitive benefits including medical, dental, vision and life insurance; as well as a 6% 401(k) match – all encompassed in Yum!'s world-famous recognition culture.

Posted 3 days ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Requirements
Working experience as a lead Python developer
Back-end and API development experience in Python
Front-end experience with React.js
Experience with AWS technologies (RDS, Lambda, DynamoDB)
Solid experience with SQL and NoSQL databases
Familiarity with Agile methodologies (Scrum)
Experience working on Linux-based platforms
Problem-solving skills
Solid communication skills
Experience communicating with clients and stakeholders is essential

Nice to have
Experience with Node.js or other programming languages
Experience with Terraform

Posted 3 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Description:
As a Software Engineer on our team, you will be instrumental in developing and maintaining key features for our applications. You'll be involved in all stages of the software development lifecycle, from design and implementation to testing and deployment.

Responsibilities:
· Develop and Maintain Application Features: Implement new features and maintain existing functionality for both the front-end and back-end of our applications.
· Front-End Development: Build user interfaces using React or Angular, ensuring a seamless and engaging user experience.
· Back-End Development: Design, develop, and maintain robust and scalable back-end services using [Backend Tech - e.g., Node.js, Python/Django, Java/Spring, React].
· Cloud Deployment: Deploy and manage applications on Google Cloud Platform (GCP), leveraging services like [GCP Tech - e.g., App Engine, Cloud Functions, Kubernetes] (see the sketch below).
· Performance Optimization: Identify and address performance bottlenecks to ensure optimal speed and scalability of our applications.
· Code Reviews: Participate in code reviews to maintain code quality and share knowledge with team members.
· Unit Testing: Write and maintain unit tests to ensure the reliability and correctness of our code.
· SDLC Participation: Actively participate in all phases of the software development lifecycle, including requirements gathering, design, implementation, testing, and deployment.
· Collaboration: Work closely with product managers, designers, and other engineers to deliver high-quality software that meets user needs.

Skills Required:
· Python, GCP, Angular, DevOps

Skills Preferred:
· API, Tekton, Terraform

Experience Required:
· 5+ years of professional software development experience

Education Required:
· Bachelor's degree
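
As one concrete example of the GCP deployment targets named above, a Python HTTP Cloud Function (gen 1) can be wired up in Terraform roughly as follows. The bucket name, zip artifact, runtime, and handler are assumptions for illustration only.

```hcl
resource "google_storage_bucket" "src" {
  name     = "demo-fn-src-bucket" # assumption: globally unique
  location = "ASIA-SOUTH1"
}

resource "google_storage_bucket_object" "src" {
  name   = "fn.zip"
  bucket = google_storage_bucket.src.name
  source = "fn.zip" # assumption: zipped Python source with a handler() function
}

resource "google_cloudfunctions_function" "api" {
  name                  = "demo-api"
  region                = "asia-south1"
  runtime               = "python311"
  entry_point           = "handler"
  trigger_http          = true
  source_archive_bucket = google_storage_bucket.src.name
  source_archive_object = google_storage_bucket_object.src.name
}
```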

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Dear Candidate,

Greetings from Tata Consultancy Services!

Job openings at TCS
Skill: GCP DevOps Engineer
Experience range: 6 to 12 years
Interview date: 19th June ‘25
Role: Permanent
Job location: Hyderabad / Chennai
Current location: Anywhere in India
Interview mode: MS Teams

Please find the job description below:
Experience in designing, developing, and deploying GCP resources as infra-as-code on Google Cloud Platform
Strong knowledge of automation frameworks and CI/CD processes and tools (Jenkins, GitHub, SonarQube, etc.) is a must
Strong knowledge of Terraform and Sentinel is a plus
Familiarity with Agile practices and frameworks
Good knowledge of Kubernetes
Good knowledge of Java microservices with GCP exposure
Good to have: Python, GitHub

If you are interested in the above opportunity, kindly share your updated resume to r.shruthi13@tcs.com immediately with the details below (mandatory):
Name:
Contact No.:
Email ID:
Total experience:
Full-time highest qualification (year of completion with percentage scored):
Highest qualification university name:
Current organization details (payroll company):
Current CTC:
Expected CTC:
Notice period:

Posted 3 days ago

Apply

5.0 - 9.0 years

7 - 10 Lacs

Hyderabad, Coimbatore

Work from Office

Source: Naukri

Expertise in Linux & Windows
Expertise in AWS
Docker & Kubernetes
CI/CD patterns (preferably Azure DevOps or GitLab; any one tool is mandatory)
Expertise in monitoring tools like Prometheus or Grafana
Infrastructure as Code

Required Candidate Profile
Minimum 5+ years of experience in DevOps
Strong analytical and problem-solving skills to identify and resolve complex issues
Excellent communication skills to collaborate effectively with other teams

Posted 3 days ago

Apply

3.0 - 6.0 years

0 Lacs

Goregaon, Maharashtra, India

On-site

Source: LinkedIn

Experience: 3 to 6 years
Location: Mumbai (Onsite)
Openings: 2

About the Role:
We are looking for hands-on, automation-driven Associate Cloud Engineers to join our DevOps team at Gray Matrix. You will be responsible for managing cloud infrastructure, CI/CD pipelines, and containerized deployments, and for ensuring platform stability and scalability across environments.

Key Responsibilities:
Design, build, and maintain secure and scalable infrastructure on AWS, Azure, or GCP.
Set up and manage CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins.
Manage Dockerized environments and ECS, EKS, or Kubernetes clusters for microservice-based deployments.
Monitor and troubleshoot production and staging environments, ensuring uptime and performance.
Work closely with developers to streamline release cycles and automate testing, deployments, and rollback procedures.
Maintain infrastructure as code using Terraform or CloudFormation (see the backend sketch below).

What We’re Looking For:
3–6 years of experience in DevOps or cloud engineering roles.
Strong knowledge of Linux system administration, networking, and cloud infrastructure (preferably AWS).
Experience with Docker, Kubernetes, Nginx, and monitoring tools like Prometheus, Grafana, or CloudWatch.
Familiarity with Git, scripting (Shell/Python), and secrets management tools.
Ability to debug infrastructure issues, logs, and deployments across cloud-native stacks.

Bonus Points:
Certification in AWS/GCP/Azure DevOps or SysOps.
Exposure to security, cost optimization, and autoscaling setups.

Work Mode: Onsite – Mumbai
Reporting To: Senior Cloud Engineer / Lead Cloud Engineer
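
One foundation for maintaining infrastructure as code across staging and production is a locked, encrypted remote state. A typical S3 backend sketch; the bucket and DynamoDB table names are placeholders that must already exist:

```hcl
terraform {
  backend "s3" {
    bucket         = "demo-tf-state"               # assumption: pre-created bucket
    key            = "envs/prod/terraform.tfstate" # one key per environment
    region         = "ap-south-1"
    dynamodb_table = "demo-tf-locks"               # enables state locking
    encrypt        = true
  }
}
```

Giving each environment its own state key keeps a broken staging rollout from ever touching production state, which is what makes per-environment promotion safe.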

Posted 3 days ago

Apply

0 years

0 Lacs

Raipur, Chhattisgarh, India

On-site

Source: LinkedIn

Role Summary
We are seeking a highly motivated and skilled Data Engineer to join our data and analytics team. This role is ideal for someone with strong experience in building scalable data pipelines, working with modern lakehouse architectures, and deploying data solutions on Microsoft Azure. You'll be instrumental in developing, orchestrating, and maintaining our real-time and batch data infrastructure using tools like Apache Spark, Apache Kafka, Apache Airflow, Azure Data Services, and modern DevOps practices.

Key Responsibilities
Design and implement ETL/ELT data pipelines for structured and unstructured data using Azure Data Factory, Databricks, or Apache Spark.
Work with Azure Blob Storage, Data Lake, and Synapse Analytics to build scalable data lakes and warehouses.
Develop real-time data ingestion pipelines using Apache Kafka, Apache Flink, or Apache Beam.
Build and schedule jobs using orchestration tools like Apache Airflow or Dagster.
Perform data modeling using the Kimball methodology for building dimensional models in Snowflake or other data warehouses.
Implement data versioning and transformation using dbt and Apache Iceberg or Delta Lake.
Manage data cataloging and lineage using tools like Marquez or Collibra.
Collaborate with DevOps teams to containerize solutions using Docker, manage infrastructure with Terraform, and deploy on Kubernetes.
Set up and maintain monitoring and alerting systems using Prometheus and Grafana for performance and reliability.

Required Skills & Qualifications
Programming & Scripting: proficiency in Python, with strong knowledge of OOP and data structures & algorithms; comfortable working in Linux environments for development and deployment.
Database Technologies: strong command of SQL and understanding of relational (DBMS) and NoSQL databases.
Big Data & Real-Time Processing: solid experience with Apache Spark (PySpark/Scala); familiarity with real-time processing tools like Kafka, Flink, or Beam.
Orchestration & Scheduling: hands-on experience with Airflow, Dagster, or similar orchestration tools.
Cloud Platform: deep experience with Microsoft Azure, especially Azure Data Factory, Blob Storage, Synapse, Azure Functions, etc.; AZ-900 or other Azure certifications are a plus.
Lakehouse & Warehousing: knowledge of dimensional modeling, Snowflake, Apache Iceberg, and Delta Lake; understanding of modern lakehouse architecture and related best practices.
Data Cataloging & Governance: familiarity with Marquez, Collibra, or other cataloging tools.
DevOps & CI/CD: experience with Terraform, Docker, Kubernetes, and Jenkins or equivalent CI/CD tools.
Monitoring & Logging: proficiency in setting up dashboards and alerts with Prometheus and Grafana.

Note: Immediate joiners will be preferred.

Posted 3 days ago

Apply

8.0 - 10.0 years

0 Lacs

Delhi, India

On-site

Source: LinkedIn

Location: Bengaluru / Delhi
Reports To: Chief Revenue Officer

Position Overview:
We are looking for a highly motivated Pre-Sales Specialist to join our team at Neysa, a rapidly growing AI cloud platform company that's making waves in the industry. This is a customer-facing technical position that works closely with sales teams to understand client requirements, design tailored solutions, and drive technical engagements. You will be responsible for presenting complex technology solutions to customers, creating compelling demonstrations, and assisting in the successful conversion of sales opportunities.

Key Responsibilities:
Solution Design & Customization: Work closely with customers to understand their business challenges and technical requirements. Design and propose customized solutions leveraging Cloud, Network, AI, and Machine Learning technologies that best fit their needs.
Sales Support & Enablement: Collaborate with the sales team to provide technical support during the sales process, including delivering presentations, conducting technical demonstrations, and assisting in the development of proposals and RFP responses.
Customer Engagement: Engage with prospects and customers throughout the sales cycle, providing technical expertise and acting as the technical liaison between the customer and the company. Conduct deep-dive discussions and workshops to uncover technical requirements and offer viable solutions.
Proof of Concept (PoC): Lead the technical aspects of PoC engagements, demonstrating the capabilities and benefits of the proposed solutions. Collaborate with the customer to validate the solution, ensuring it aligns with their expectations.
Product Demos & Presentations: Deliver compelling product demos and presentations tailored to the customer's business and technical needs, helping organizations unlock innovation and growth through AI. Simplify complex technical concepts so that both business and technical stakeholders understand the value proposition.
Proposal Development & RFPs: Assist in crafting technical proposals, responding to RFPs (Requests for Proposals), and providing technical content that highlights the company's offerings, differentiators, and technical value.
Technical Workshops & Trainings: Facilitate customer workshops and training sessions to help customers understand the architecture, functionality, and capabilities of the solutions offered.
Collaboration with Product & Engineering Teams: Provide feedback to product management and engineering teams based on customer interactions and market demands. Help shape future product offerings and improvements.
Market & Competitive Analysis: Stay up to date on industry trends, new technologies, and competitor offerings in AI and Machine Learning, Cloud, and Networking to provide strategic insights to sales and product teams.
Documentation & Reporting: Create and maintain technical documentation, including solution designs, architecture diagrams, and deployment plans. Track and report on pre-sales activities, including customer interactions, pipeline status, and PoC results.

Key Skills and Qualifications:
Experience: Minimum of 8-10 years of experience in a pre-sales or technical sales role, with a focus on AI, Cloud, and Networking solutions.
Technical Expertise: Solid understanding of cloud computing, data center infrastructure, networking (SDN, SD-WAN, VPNs), and emerging AI/ML technologies. Experience with architecture design and solutioning across these domains, especially in hybrid-cloud and multi-cloud environments. Familiarity with tools such as Kubernetes, Docker, TensorFlow, Apache Hadoop, and machine learning frameworks.
Sales Collaboration: Ability to work alongside sales teams, providing the technical expertise needed to close complex deals. Experience in delivering customer-focused presentations and demos.
Presentation & Communication Skills: Exceptional ability to articulate technical solutions to both technical and non-technical stakeholders. Strong verbal and written communication skills.
Customer-Focused Mindset: Excellent customer service skills with a consultative approach to solving customer problems. Ability to understand business challenges and align technical solutions accordingly, with the mindset to build rapport with customers and become their trusted advisor.
Problem-Solving & Creativity: Strong analytical and problem-solving skills, with the ability to design creative, practical solutions that align with customer needs.
Certifications: Degree in Computer Science, Engineering, or a related field; cloud and AI/ML certifications are highly desirable.
Team Player: Ability to work collaboratively with cross-functional teams, including product, engineering, and delivery teams.

Preferred Qualifications:
Industry Experience: Experience delivering solutions in industries such as finance, healthcare, or telecommunications is a plus.
Technical Expertise in AI/ML: A deeper understanding of AI/ML applications, including natural language processing (NLP), computer vision, predictive analytics, or data science use cases.
Experience with DevOps Tools: Familiarity with CI/CD pipelines, infrastructure as code (IaC), and automation tools like Terraform, Ansible, or Jenkins.

Posted 3 days ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Summary
We are seeking an experienced Data Architect with expertise in Snowflake, dbt, Apache Airflow, and AWS to design, implement, and optimize scalable data solutions. The ideal candidate will play a critical role in defining data architecture, governance, and best practices while collaborating with cross-functional teams to drive data-driven decision-making.

Key Responsibilities
• Data Architecture & Strategy: Design and implement scalable, high-performance cloud-based data architectures on AWS. Define data modeling standards for structured and semi-structured data in Snowflake. Establish data governance, security, and compliance best practices.
• Data Warehousing & ETL/ELT Pipelines: Develop, maintain, and optimize Snowflake-based data warehouses. Implement dbt (Data Build Tool) for data transformation and modeling. Design and schedule data pipelines using Apache Airflow for orchestration.
• Cloud & Infrastructure Management: Architect and optimize data pipelines using AWS services like S3, Glue, Lambda, and Redshift. Ensure cost-effective, highly available, and scalable cloud data solutions.
• Collaboration & Leadership: Work closely with data engineers, analysts, and business stakeholders to align data solutions with business goals. Provide technical guidance and mentoring to the data engineering team.
• Performance Optimization & Monitoring: Optimize query performance and data processing within Snowflake. Implement logging, monitoring, and alerting for pipeline reliability.

Required Skills & Qualifications
• 10+ years of experience in data architecture, engineering, or related roles.
• Strong expertise in Snowflake, including data modeling, performance tuning, and security best practices.
• Hands-on experience with dbt for data transformations and modeling.
• Proficiency in Apache Airflow for workflow orchestration.
• Strong knowledge of AWS services (S3, Glue, Lambda, Redshift, IAM, EC2, etc.).
• Experience with SQL, Python, or Spark for data processing.
• Familiarity with CI/CD pipelines and Infrastructure as Code (Terraform/CloudFormation) is a plus.
• Strong understanding of data governance, security, and compliance (GDPR, HIPAA, etc.).

Preferred Qualifications
• Certifications: AWS Certified Data Analytics – Specialty, Snowflake SnowPro Certification, or dbt Certification.
• Experience with streaming technologies (Kafka, Kinesis) is a plus.
• Knowledge of modern data stack tools (Looker, Power BI, etc.).
• Experience in OTT streaming would be an added advantage.

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


This role is for one of Weekday's clients.
Min Experience: 5 years
Location: Hyderabad
Job Type: Full-time

We are seeking a highly skilled and motivated Azure DevOps Engineer with 5 to 8 years of hands-on experience to join our growing engineering team. In this role, you will be responsible for designing, implementing, and maintaining scalable and reliable DevOps solutions within the Microsoft Azure ecosystem. You will play a key role in enabling seamless development, testing, and deployment pipelines that empower our development teams to deliver high-quality software efficiently.

This role demands deep expertise in Azure DevOps, GitHub, Infrastructure as Code (IaC), CI/CD pipelines, Docker, and Kubernetes. You will work closely with software engineers, architects, and product managers to streamline development workflows, ensure system reliability, and uphold industry-leading DevOps practices.

Requirements

Key Responsibilities:
• DevOps Implementation: Design, develop, and maintain end-to-end DevOps solutions within the Azure DevOps ecosystem, ensuring seamless integration with existing tools and environments.
• CI/CD Pipeline Management: Build and manage scalable CI/CD pipelines using Azure DevOps and GitHub Actions to enable rapid and secure delivery of applications across multiple environments.
• Infrastructure as Code (IaC): Implement and maintain infrastructure using tools like ARM templates, Terraform, or Bicep to ensure repeatability and consistency across environments.
• Containerization & Orchestration: Develop and manage Docker containers and orchestrate them using Kubernetes in Azure Kubernetes Service (AKS) to support microservices architecture.
• Source Control & Repository Management: Oversee and manage Git repositories, branching strategies, and access controls on GitHub and Azure Repos.
• Monitoring & Security: Implement monitoring, logging, and security best practices across CI/CD pipelines and infrastructure to ensure observability and compliance.
• Collaboration & Support: Collaborate with development and QA teams to troubleshoot build and deployment issues, provide DevOps expertise, and ensure high system availability.

Required Skills & Qualifications:
• 5-8 years of experience in DevOps, with at least 3 years focused on Azure DevOps and Azure cloud infrastructure.
• Strong proficiency in the Azure DevOps ecosystem, including Boards, Repos, Pipelines, Test Plans, and Artifacts.
• Solid experience with GitHub, Git workflows, and GitHub Actions.
• Deep understanding of CI/CD pipeline design, automation, and implementation.
• Proven experience in Infrastructure as Code (IaC) using tools such as Terraform, ARM templates, or Bicep.
• Strong knowledge of Docker and container orchestration tools like Kubernetes (preferably in Azure Kubernetes Service).
• Familiarity with agile methodologies and DevSecOps principles.
• Excellent problem-solving, communication, and collaboration skills.

Nice to Have:
• Azure certifications (e.g., AZ-400, AZ-104).
• Experience with monitoring tools such as Azure Monitor, Prometheus, or Grafana.
• Knowledge of scripting languages (PowerShell, Bash, or Python).

Posted 3 days ago

Apply

7.0 years

40 Lacs

India

Remote


Experience: 7.00+ years
Salary: INR 4000000.00 / year (based on experience)
Expected Notice Period: 15 Days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by MatchMove)
(Note: This is a requirement for one of Uplers' clients - MatchMove)

What do you need for this opportunity?
Must have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python

MatchMove is looking for: Technical Lead - Data Platform

As Technical Lead - Data Platform, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
• Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
• Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
• Structuring and evolving data into open table formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
• Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
• Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
• Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
• Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
• Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
• Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
• Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
• Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
• Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards).
• Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
• Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
• Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements
• At least 7 years of experience in data engineering.
• Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
• Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
• Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
• Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
• Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
• Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
• Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
• Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
• Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
• Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
• Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
• Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
• Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
• Experience building data platforms for ML/AI teams or integrating with model feature stores.

Engagement Model:
• Direct placement with client
• This is a remote role
• Shift timings: 10 AM to 7 PM

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload your updated resume.
Step 3: Increase your chances of getting shortlisted and meeting the client for the interview!

About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this one on the portal. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 3 days ago

Apply

0 years

0 Lacs

India

On-site


Back-End Engineer – Go + PostgreSQL (Contract)

Core Skills ("Must-Have")
• Golang expertise: Idiomatic Go 1.21+, goroutines/channels, standard-library HTTP and sql packages, context-aware code
• Relational-data mastery: Hands-on with PostgreSQL 13+, covering schema design, indexes, and migrations (Flyway, Goose, or pg-migrate); comfortable writing performant SQL and debugging query plans
• API craftsmanship: Design and version REST/JSON (or gRPC) endpoints; enforce contract tests and backward compatibility
• Quality & DevOps hygiene: Unit and integration tests (go test / Testcontainers), GitHub Actions or similar CI, Dockerised local setup, observability hooks (Prometheus metrics, structured logging, Sentry)
• Collaboration fluency: Pair daily with React front-end engineers and designers; discuss payloads, edge cases, and rollout plans up front

Day-to-Day Responsibilities
• Ship incremental data-model and API updates, e.g., add a column with default values, write safe up/down migrations, expose the field in existing endpoints, and coordinate UI changes
• Design small new features such as derived "metric-health" tables or aggregated views that power dashboards
• Guard performance and reliability: run load tests, add indexes, set query timeouts, and handle graceful fallbacks behind feature flags
• Keep the codebase clean: review PRs, refactor shared helpers, and prune dead code as the product evolves

Nice-to-Have Extras
• Production experience with a feature-flag SDK (LaunchDarkly, Split, etc.) to stage database changes safely
• Familiarity with event streaming (Kafka / NATS) or background job runners (Go workers, Sidekiq-like queues)
• Exposure to container orchestration (Kubernetes, ECS) and infrastructure-as-code (Terraform, Pulumi)

Sample Mini-Projects You Might Tackle
• Scenario: Add property to existing entity. Write a migration to add a source_type column to metrics, backfill it with a default, update the GET/POST /metrics handlers and Swagger docs, and unit-test both happy and error paths.
• Scenario: New aggregated view. Create a new table metric_health that rolls up pass/fail counts per metric, expose a /metrics/{id}/health endpoint returning red/amber/green status with pagination, and instrument it with Prometheus counters.

Posted 3 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.
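
For readers new to the tool, a Terraform configuration describes the desired infrastructure declaratively in HashiCorp Configuration Language (HCL), and the tool works out the steps needed to reach that state. A minimal sketch, assuming an AWS account is already configured; the region and bucket name below are illustrative placeholders, not recommendations:

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~> 5.0"
        }
      }
    }

    # Region chosen purely for illustration
    provider "aws" {
      region = "ap-south-1"
    }

    # Declares an S3 bucket; `terraform apply` creates it if it does not exist
    resource "aws_s3_bucket" "demo" {
      bucket = "example-terraform-demo-bucket"  # hypothetical name; must be globally unique
    }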

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.
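
One way these skills fit together: Terraform manages tools like Docker through providers, using the same plan/apply workflow it uses for cloud resources. A sketch assuming a local Docker daemon and the community kreuzwerker/docker provider; the image, container name, and ports are arbitrary examples rather than a recommended setup:

    terraform {
      required_providers {
        docker = {
          source  = "kreuzwerker/docker"
          version = "~> 3.0"
        }
      }
    }

    # Defaults to the local Docker daemon socket
    provider "docker" {}

    # Pull the image, then run a container from it
    resource "docker_image" "nginx" {
      name = "nginx:latest"
    }

    resource "docker_container" "web" {
      name  = "terraform-demo-web"  # hypothetical container name
      image = docker_image.nginx.image_id
      ports {
        internal = 80
        external = 8080
      }
    }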

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium; illustrated in the sketch after this list)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium; illustrated in the sketch after this list)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
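
For the sensitive-data and remote-state questions flagged above, one common pattern stores state in S3 with locking and marks secrets as sensitive variables. A sketch under the assumption that the bucket and DynamoDB table already exist; all names here are hypothetical:

    terraform {
      backend "s3" {
        bucket         = "example-tf-state-bucket"   # hypothetical, pre-created bucket
        key            = "prod/network/terraform.tfstate"
        region         = "ap-south-1"
        dynamodb_table = "example-tf-locks"          # enables state locking
        encrypt        = true
      }
    }

    # `sensitive = true` keeps the value out of plan/apply output
    variable "db_password" {
      type      = string
      sensitive = true
    }

Values for such variables are typically supplied through environment variables (for example, TF_VAR_db_password) or a secrets manager rather than committed to version control; terraform plan then previews changes without echoing the secret, and terraform apply executes them.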

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies