
7730 Terraform Jobs - Page 32

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 6.0 years

5 - 7 Lacs

Hyderābād

On-site


CORE BUSINESS OPERATIONS

The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.

ROLE

Level: Consultant

As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. As an AWS Infrastructure Engineer, you will play a crucial role in building and maintaining cloud infrastructure on Amazon Web Services (AWS). You will also own tasks assigned through SNOW, dashboards, order forms, etc.

The work you will do includes:
- Build and operate the cloud infrastructure on AWS
- Continuously monitor the health and performance of the infrastructure and resolve any issues
- Use tools like CloudFormation, Terraform, or Ansible to automate infrastructure provisioning and configuration
- Administer the EC2 instances' operating systems, such as Windows and Linux
- Work with other teams to deploy secure, scalable, and cost-effective cloud solutions based on AWS services
- Implement monitoring and logging for infrastructure and applications
- Keep the infrastructure up to date with the latest security patches and software versions
- Collaborate with development, operations, and security teams to establish best practices for software development, build, deployment, and infrastructure management
- Handle tasks related to IAM, monitoring, backup, and vulnerability remediation
- Participate in performance testing and capacity planning activities
- Documentation, weekly/bi-weekly deck preparation, and KB article updates
- Handover and on-call support during weekends on a rotational basis

QUALIFICATIONS

Skills / Project Experience:

Must have:
- 3-6 years of hands-on experience in AWS Cloud, CloudFormation templates, and Windows/Linux administration
- Understanding of 2-tier, 3-tier, or multi-tier architecture
- Experience with IaaS/PaaS/SaaS
- Understanding of disaster recovery
- Networking and security expertise
- Knowledge of PowerShell, Shell, and Python
- Associate/Professional-level certification in AWS solution architecture
- ITIL Foundation certification
- Good interpersonal and communication skills
- Flexibility to adapt and apply innovation to varied business domains and to apply technical solutioning and learnings to use cases across business domains and industries
- Knowledge of and experience working with Microsoft Office tools

Good to have:
- Understanding of container technologies such as Docker, Kubernetes, and OpenShift
- Understanding of application and other infrastructure monitoring tools
- Understanding of the end-to-end infrastructure landscape
- Experience with virtualization platforms
- Knowledge of Chef, Puppet, Bamboo, Concourse, etc.
- Knowledge of microservices, data lakes, machine learning, etc.

Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university

Prior Experience: 3-6 years of experience working with AWS, system administration, IaC, etc.

Location: Hyderabad/Pune

The team

Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com

For information on CBO, visit https://www.youtube.com/watch?v=L1cGlScLuX0
For information on the life of an Analyst at CBO, visit https://www.youtube.com/watch?v=CMe0DkmMQHI

Recruiting tips: From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302308
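The listing above calls for automating infrastructure provisioning with tools like Terraform. As a rough illustration only (not Deloitte's actual configuration; the region, AMI ID, CIDR, and tag values are placeholders), a minimal Terraform definition for a managed EC2 instance with a locked-down security group might look like this:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # placeholder region
}

# Admin-access security group; the CIDR below stands in for a corporate range.
resource "aws_security_group" "admin_access" {
  name_prefix = "admin-access-"

  ingress {
    description = "SSH from internal network only"
    from_port   = 22
    to_port     = 22
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/8"]
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}

# A Linux application server; the AMI ID is a dummy value, not a real image.
resource "aws_instance" "linux_app" {
  ami                    = "ami-0123456789abcdef0"
  instance_type          = "t3.medium"
  vpc_security_group_ids = [aws_security_group.admin_access.id]

  tags = {
    Name  = "app-server-01"
    Owner = "cloud-ops"
  }
}
```

Running terraform plan against a definition like this previews every change before it reaches the account, which is exactly the review step roles like this one typically own.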

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderābād

On-site


Be a part of a team that’s ensuring Dell Technologies' product integrity and customer satisfaction. Our IT Software Engineer team turns business requirements into technology solutions by designing, coding, and testing/debugging applications, as well as documenting procedures for use and constantly seeking quality improvements. Join us to do the best work of your career and make a profound social impact as a Software Engineer 2-IT on our Software Engineer-IT team in Hyderabad.

What you’ll achieve

As an IT Software Engineer, you will deliver products and improvements for a changing world. Working at the cutting edge, you will craft and develop software for platforms, peripherals, applications, and diagnostics — all with the most sophisticated technologies, tools, software engineering methodologies, and partnerships.

You will:
- Work with complicated business applications across functional areas
- Take design from concept to production, which may include design reviews, feature implementation, debugging, testing, issue resolution, and factory support
- Manage design and code reviews with a focus on the best user experience, performance, scalability, and future expansion

Take the first step towards your dream career. Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:

Essential Requirements
- Strong experience in scripting languages like Bash, Python, or Groovy for automation
- Working experience with Git-based workflows and understanding of CI/CD pipelines, runners, and YAML configuration
- Hands-on experience with Docker/Kaniko, Kubernetes, and microservice-based deployments
- Strong knowledge of GitLab, Ansible, Terraform, and monitoring tools like Prometheus and Grafana
- Experience troubleshooting deployment or integration issues and optimizing CI/CD pipelines efficiently

Desirable Requirements
- 3 to 5 years of experience in software/coding/IT software

Who we are: We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live, and play. Join us to build a future that works for everyone because Progress Takes All of Us.

Application closing date: 30 July 2025

Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. Job ID: R270155
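This listing pairs Terraform with Kubernetes and microservice-based deployments. One common pattern is managing Kubernetes workloads themselves from Terraform via the kubernetes provider. A minimal, hypothetical sketch follows; the service name, container image, and kubeconfig path are illustrative placeholders, not anything from the posting:

```hcl
# Assumes a kubeconfig is available locally and the cluster is reachable.
provider "kubernetes" {
  config_path = "~/.kube/config"
}

resource "kubernetes_deployment" "orders_api" {
  metadata {
    name   = "orders-api"
    labels = { app = "orders-api" }
  }

  spec {
    replicas = 3

    selector {
      match_labels = { app = "orders-api" }
    }

    template {
      metadata {
        labels = { app = "orders-api" }
      }

      spec {
        container {
          name  = "orders-api"
          image = "registry.example.com/orders-api:1.0.0" # hypothetical registry/image

          port {
            container_port = 8080
          }
        }
      }
    }
  }
}
```

Declaring the deployment in Terraform keeps replica counts and image versions in version control, so a CI/CD pipeline can roll them forward or back with an ordinary plan/apply cycle.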

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site


Job requisition ID: 84234
Date: Jun 15, 2025
Location: Delhi
Designation: Senior Consultant
Entity:

What impact will you make?

Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration, and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.

The Team

Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management, and next-generation analytics and technologies, including big data, cloud, cognitive, and machine learning. Learn more about the Analytics and Information Management Practice.

Work you’ll do

As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.

We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and Big Data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.

Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Experience: 2 to 7 years
Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor’s degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data warehousing and analytics
- ETL/ELT processes
- Data lake architectures
- Version control: GitHub

Your role as a leader

At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and society, and to make an impact that matters. In addition to living our purpose, Senior Consultants across our organization:
- Develop high-performing people and teams through challenging and meaningful opportunities
- Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders
- Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people
- Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction
- Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make

How you will grow

At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose

Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work — always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips

We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
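This posting pairs Terraform-managed infrastructure with Glue-based ETL over an S3 data lake. As a hedged sketch of how a Glue job, its IAM role, and the lake bucket might be declared together (the bucket name, role name, job name, and script path are illustrative placeholders):

```hcl
# Raw zone of a data lake; S3 bucket names must be globally unique.
resource "aws_s3_bucket" "data_lake" {
  bucket = "example-data-lake-raw"
}

# IAM role the Glue job assumes at run time.
resource "aws_iam_role" "glue_role" {
  name = "glue-etl-role" # hypothetical role name

  assume_role_policy = jsonencode({
    Version = "2012-10-17"
    Statement = [{
      Action    = "sts:AssumeRole"
      Effect    = "Allow"
      Principal = { Service = "glue.amazonaws.com" }
    }]
  })
}

# A Glue ETL job pointing at a PySpark script stored under the lake's scripts prefix.
resource "aws_glue_job" "nightly_etl" {
  name     = "nightly-etl" # hypothetical job name
  role_arn = aws_iam_role.glue_role.arn

  command {
    name            = "glueetl"
    script_location = "s3://${aws_s3_bucket.data_lake.bucket}/scripts/etl.py"
    python_version  = "3"
  }

  default_arguments = {
    "--job-language" = "python"
  }
}
```

In practice the role would also need a policy granting access to the bucket and Glue service permissions; that is omitted here to keep the sketch short.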

Posted 3 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company: Keka HR
Business Type: Startup
Company Type: Product
Business Model: B2B
Funding Stage: Series A
Industry: HRMS
Salary Range: ₹10-25 Lacs PA

Job Description

About the Role

We are looking for a highly skilled Site Reliability Engineer (SRE) to lead the implementation and management of our observability stack across Azure-hosted infrastructure and .NET Core applications. This role will focus on configuring and managing OpenTelemetry, Prometheus, Loki, and Tempo, along with setting up robust alerting systems across all services — including Azure infrastructure and MSSQL databases. You will work closely with developers, DevOps, and infrastructure teams to ensure the performance, reliability, and visibility of our .NET Core applications and cloud services.

Key Responsibilities

Observability Platform Implementation:
- Design and maintain distributed tracing, metrics, and logging using OpenTelemetry, Prometheus, Loki, and Tempo.
- Ensure complete instrumentation of .NET Core applications for end-to-end visibility.
- Implement telemetry pipelines for application logs, performance metrics, and traces.

Monitoring & Alerting:
- Develop and manage SLIs, SLOs, and error budgets.
- Create actionable, noise-free alerts using Prometheus Alertmanager and Azure Monitor.
- Monitor key infrastructure components, applications, and databases with a focus on reliability and performance.

Azure & Infrastructure Integration:
- Integrate Azure services (App Services, VMs, Storage, etc.) with the observability stack.
- Configure monitoring for MSSQL databases, including performance tuning metrics and health indicators.
- Use Azure Monitor, Log Analytics, and custom exporters where necessary.

Automation & DevOps:
- Automate observability configurations using Terraform, PowerShell, or other IaC tools.
- Integrate telemetry validation and health checks into CI/CD pipelines.
- Maintain observability as code for repeatable deployments and easy scaling.

Resilience & Reliability Engineering:
- Conduct capacity planning to anticipate scaling needs based on usage patterns and growth.
- Define and implement disaster recovery strategies for critical Azure-hosted services and databases.
- Perform load and stress testing to identify performance bottlenecks and validate infrastructure limits.
- Support release engineering by integrating observability checks and rollback strategies in CI/CD pipelines.
- Apply chaos engineering practices in lower environments to uncover potential reliability risks proactively.

Collaboration & Documentation:
- Partner with engineering teams to promote observability best practices in .NET Core development.
- Create dashboards (Grafana preferred) and runbooks for system insights and incident response.
- Document monitoring standards, troubleshooting guides, and onboarding materials.

Required Skills and Experience
- 4+ years of experience in SRE, DevOps, or infrastructure-focused roles.
- Deep experience with .NET Core application observability using OpenTelemetry.
- Proficiency with Prometheus, Loki, Tempo, and related observability tools.
- Strong background in Azure infrastructure monitoring, including App Services and VMs.
- Hands-on experience monitoring MSSQL databases (deadlocks, query performance, etc.).
- Familiarity with Infrastructure as Code (Terraform, Bicep) and scripting (PowerShell, Bash).
- Experience building and tuning alerts, dashboards, and metrics for production systems.

Preferred Qualifications
- Azure certifications (e.g., AZ-104, AZ-400).
- Experience with Grafana, Azure Monitor, and Log Analytics integration.
- Familiarity with distributed systems and microservice architectures.
- Prior experience in high-availability, regulated, or customer-facing environments.
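The responsibilities above include automating observability configuration with Terraform. A minimal sketch of an Azure Monitor metric alert managed as code, assuming the azurerm provider; the resource group, subscription ID, App Service name, and email address are placeholders:

```hcl
provider "azurerm" {
  features {}
}

# Who gets notified; all names and the address are placeholders.
resource "azurerm_monitor_action_group" "oncall" {
  name                = "sre-oncall"
  resource_group_name = "rg-observability"
  short_name          = "sreoncall"

  email_receiver {
    name          = "primary"
    email_address = "sre@example.com"
  }
}

# Alert on HTTP 5xx responses from an App Service; the scope ID is a dummy value.
resource "azurerm_monitor_metric_alert" "app_5xx" {
  name                = "app-service-5xx"
  resource_group_name = "rg-observability"
  scopes              = ["/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/rg-observability/providers/Microsoft.Web/sites/example-app"]
  severity            = 2

  criteria {
    metric_namespace = "Microsoft.Web/sites"
    metric_name      = "Http5xx"
    aggregation      = "Total"
    operator         = "GreaterThan"
    threshold        = 5
  }

  action {
    action_group_id = azurerm_monitor_action_group.oncall.id
  }
}
```

Keeping alerts in Terraform, rather than clicking them together in the portal, makes thresholds reviewable in pull requests and reproducible across environments.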

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site


Job requisition ID: 84245
Date: Jun 15, 2025
Location: Delhi
Designation: Consultant
Entity:

What impact will you make?

Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration, and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.

The Team

Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management, and next-generation analytics and technologies, including big data, cloud, cognitive, and machine learning. Learn more about the Analytics and Information Management Practice.

Work you’ll do

As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.

We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and Big Data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.

Overview: We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Experience: 2 to 7 years
Location: Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor’s degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data warehousing and analytics
- ETL/ELT processes
- Data lake architectures
- Version control: GitHub

Your role as a leader

At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and society, and to make an impact that matters. In addition to living our purpose, Senior Consultants across our organization:
- Develop high-performing people and teams through challenging and meaningful opportunities
- Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders
- Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people
- Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction
- Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make

How you will grow

At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose

Deloitte is led by a purpose: to make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work — always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips

We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.

Posted 3 days ago

Apply

3.0 years

8 - 15 Lacs

Mohali

On-site


Job Information

Date Opened: 06/16/2025
Job Type: Full time
Industry: IT Services
Work Experience: 3+ Years
Salary: 8-15 LPA
City: Mohali
State/Province: Punjab
Country: India
Zip/Postal Code: 160071

Job Description

ABOUT XENONSTACK

XenonStack is the fastest-growing data and AI foundry for agentic systems, enabling people and organizations to gain real-time and intelligent business insights.
- Building Agentic Systems for AI Agents with https://www.akira.ai
- Vision AI Platform with https://www.xenonstack.ai
- Inference AI Infrastructure for Agentic Systems: https://www.nexastack.ai

THE OPPORTUNITY

We are seeking an experienced DevOps Engineer with 3-6 years of experience in implementing and reviewing CI/CD pipelines, cloud deployments, and automation tasks. If you have a strong foundation in cloud technologies, containerization, and DevOps best practices, we would love to have you on our team.

JOB ROLES AND RESPONSIBILITIES
- Develop and maintain CI/CD pipelines to automate the deployment and testing of applications across AWS and private cloud.
- Assist in deploying applications and services to cloud environments while ensuring optimal configuration and security practices.
- Implement monitoring solutions to ensure infrastructure health and performance; troubleshoot issues as they arise in production environments.
- Automate repetitive tasks and manage cloud infrastructure using tools like Terraform, CloudFormation, and scripting languages (Python, Bash).
- Work closely with software engineers to integrate deployment pipelines with application codebases and streamline workflows.
- Ensure efficient resource management in the cloud, monitor costs, and optimize usage to reduce waste.
- Create detailed documentation for DevOps processes, deployment procedures, and troubleshooting steps to ensure clarity and consistency across the team.

Requirements

SKILLS REQUIREMENTS
- 2-4 years of experience in DevOps or cloud infrastructure engineering.
- Proficiency in cloud platforms on AWS and hands-on experience with their core services (EC2, S3, RDS, Lambda, etc.).
- Advanced knowledge of CI/CD tools such as Jenkins, GitLab CI, or CircleCI, and hands-on experience implementing and managing CI/CD pipelines.
- Experience with containerization technologies like Docker and Kubernetes for deploying applications at scale.
- Strong knowledge of Infrastructure-as-Code (IaC) using tools like Terraform or CloudFormation.
- Proficiency in scripting languages such as Python and Bash for automating infrastructure tasks and deployments.
- Understanding of monitoring and logging tools like Prometheus, Grafana, ELK Stack, or CloudWatch to ensure system performance and uptime.
- Strong understanding of Linux-based operating systems and cloud-based infrastructure management.
- Bachelor’s degree in Computer Science, Information Technology, or a related field.

Benefits

CAREER GROWTH AND BENEFITS

Continuous Learning & Growth: Access to training, certifications, and hands-on sessions to enhance your DevOps and cloud engineering skills, with opportunities for career advancement and leadership roles in DevOps engineering.

Recognition & Rewards: Performance-based incentives and regular feedback to help you grow in your career, plus special recognition for contributions towards streamlining and improving DevOps practices.

Work Benefits & Well-Being: Comprehensive health insurance and wellness programs to ensure a healthy work-life balance, cab facilities for women employees, and additional allowances for project-based tasks.

XENONSTACK CULTURE - JOIN US & MAKE AN IMPACT

Here at XenonStack, we have a culture of cultivation with bold, courageous, and human-centric leadership principles. We value obsession and deep work in everything we do. We are on a mission to disrupt and reshape the category, and we welcome people with that mindset and ambition. If you are energised by the idea of shaping the future of AI in business processes and enterprise systems, there's nowhere better for you than XenonStack.

Product Value and Outcome: simplifying the user experience with AI Agents and Agentic AI.
1) Obsessed with Adoption: We design everything with the goal of making AI more accessible and simplifying the business processes and enterprise systems essential to adoption.
2) Obsessed with Simplicity: We simplify even the most complex challenges to create seamless, intuitive experiences with AI agents and Agentic AI.

Be a part of XenonStack’s vision and mission of accelerating the world's transition to AI + Human Intelligence.
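Among the responsibilities in this listing is monitoring cloud costs and reducing waste. One way to codify that with Terraform is an AWS Budgets resource with a spend notification; a small sketch with placeholder name, amount, and address:

```hcl
# Monthly cost budget with an email alert at 80% of actual spend.
resource "aws_budgets_budget" "monthly_cost" {
  name         = "team-monthly-budget" # hypothetical budget name
  budget_type  = "COST"
  limit_amount = "500"
  limit_unit   = "USD"
  time_unit    = "MONTHLY"

  notification {
    comparison_operator        = "GREATER_THAN"
    threshold                  = 80
    threshold_type             = "PERCENTAGE"
    notification_type          = "ACTUAL"
    subscriber_email_addresses = ["devops@example.com"]
  }
}
```

Because the budget lives alongside the rest of the IaC, the spend threshold is versioned and reviewed like any other infrastructure change.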

Posted 3 days ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Mohali

On-site


The Role

As a DevOps Engineer, you will be an integral part of the product and service division, working closely with development teams to ensure seamless deployment, scalability, and reliability of our infrastructure. You'll help build and maintain CI/CD pipelines, manage cloud infrastructure, and contribute to system automation. Your work will directly impact the performance and uptime of our flagship product, BotPenguin.

What you need for this role

Education: Bachelor's degree in Computer Science, IT, or a related field.
Experience: 2-5 years in DevOps or similar roles.

Technical Skills:
- Proficiency in CI/CD tools like Jenkins, GitLab CI, or GitHub Actions.
- Experience with containerization and orchestration using Docker and Kubernetes.
- Strong understanding of cloud platforms, especially AWS and Azure.
- Familiarity with infrastructure as code tools such as Terraform or CloudFormation.
- Knowledge of monitoring and logging tools like Prometheus, Grafana, and the ELK Stack.
- Good scripting skills in Bash, Python, or similar languages.

Soft Skills:
- Detail-oriented with a focus on automation and efficiency.
- Strong problem-solving abilities and a proactive mindset.
- Effective communication and collaboration skills.

What you will be doing
- Build, maintain, and optimize CI/CD pipelines.
- Monitor and improve system performance, uptime, and scalability.
- Manage and automate cloud infrastructure deployments.
- Work closely with developers to support release processes and environments.
- Implement security best practices in deployment and infrastructure management.
- Ensure high availability and reliability of services.
- Document procedures and provide support for technical troubleshooting.
- Contribute to training junior team members, and assist HR and operations teams with tech-related concerns as required.

Top reasons to work with us
- Be part of a cutting-edge AI startup driving innovation in chatbot automation.
- Work with a passionate and talented team that values knowledge-sharing and problem-solving.
- Growth-oriented environment with ample learning opportunities.
- Exposure to top-tier global clients and projects with real-world impact.
- A culture that fosters creativity, ownership, and collaboration.

Job Type: Full-time
Pay: ₹400,000.00 - ₹800,000.00 per year
Benefits: Flexible schedule, health insurance, leave encashment, Provident Fund
Schedule: Day shift
Ability to commute/relocate: Mohali, Punjab: Reliably commute or planning to relocate before starting work (Required)
Experience: DevOps: 2 years (Required)
Work Location: In person
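For a team managing cloud infrastructure as code, as this role describes, a shared remote state backend is usually the first piece of Terraform plumbing, since it lets several engineers plan and apply safely against the same state. A sketch assuming an S3 bucket and a DynamoDB lock table, both of whose names are placeholders and which must already exist:

```hcl
# Remote state with locking; bucket and table names are placeholders.
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "platform/prod/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks" # DynamoDB table used for state locking
    encrypt        = true
  }
}
```

With this in place, concurrent applies fail fast on the lock instead of silently corrupting state, which matters as soon as more than one person (or a CI pipeline) runs Terraform.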

Posted 3 days ago

Apply

0 years

0 - 0 Lacs

India

On-site


DevOps Engineer – Intern

Location: KIIT TBI, Bhubaneswar
Duration: 3-4 months

About Us

We’re looking for a passionate and self-motivated DevOps Intern to assist our engineering team in automating infrastructure and improving deployment pipelines.

Key Responsibilities
- Assist in setting up and maintaining CI/CD pipelines using tools like GitHub Actions, GitLab CI, or Jenkins.
- Help manage infrastructure with tools like Docker, Kubernetes, and Terraform (under guidance).
- Support the automation of routine development, build, and deployment tasks.
- Work with developers to ensure smooth deployments and rollback strategies.
- Monitor applications and infrastructure with basic observability tools (Grafana, Prometheus, etc.).
- Learn and apply DevOps best practices, including version control, containerization, and scripting.

Who You Are
- Currently pursuing or recently completed a degree in Computer Science, IT, or a related field.
- Basic understanding of Linux, shell scripting, and version control (Git).
- Exposure to cloud platforms (AWS, Azure, or GCP) is a plus.
- Familiarity with Docker or Kubernetes is a bonus, not mandatory.
- Eager to learn and grow in a fast-paced DevOps/CloudOps environment.
- Good communication and collaboration skills.

Nice to Have (Not Mandatory)
- Experience with a personal or academic project using DevOps tools.
- Participation in open-source or hackathon projects.

What You’ll Gain
- Real-world exposure to DevOps practices in a production environment.
- Opportunity to convert to a full-time role based on performance.
- Experience with modern cloud-native tools and practices.
- Certificate and letter of recommendation upon successful completion.

How to Apply

Please share your resume, GitHub/portfolio links (if available), and a short note on why you’re interested in DevOps.

Job Types: Fresher, Internship
Contract length: 3 months
Pay: ₹5,000.00 - ₹6,000.00 per month
Benefits: Flexible schedule, leave encashment, paid time off, Provident Fund
Schedule: Day shift, fixed shift
Supplemental Pay: Performance bonus, quarterly bonus, yearly bonus
Ability to commute/relocate: Patia, Bhubaneswar, Orissa: Reliably commute or planning to relocate before starting work (Required)
Application Questions:
- When can you join us if selected? (This is an urgent opening.)
- What is DevOps in your own words?
- What operating systems have you worked with (Linux, Windows, etc.)?
Education: Bachelor's (Preferred)
Location: Patia, Bhubaneswar, Orissa (Preferred)
Work Location: In person
Application Deadline: 28/06/2025
Expected Start Date: 30/06/2025

Posted 3 days ago

Apply

8.0 years

28 - 30 Lacs

Pune

On-site


Experience: 8+ years
Budget: 30 LPA (including variable pay)
Location: Bangalore, Hyderabad, Chennai (hybrid)
Shift Timing: 2 PM - 11 PM

ETL Development Lead (8+ years)
- Experience leading and mentoring a team of Talend ETL developers.
- Provide technical direction and guidance on ETL/data integration development to the team.
- Design complex data integration solutions using Talend and AWS.
- Collaborate with stakeholders to define project scope, timelines, and deliverables.
- Contribute to project planning, risk assessment, and mitigation strategies.
- Ensure adherence to project timelines and quality standards.
- Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.
- Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components.
- Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
- Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality.
- Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes.
- Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions.
- Perform unit testing and participate in system integration testing of ETL processes.
- Monitor and maintain Talend environments, including job scheduling and performance tuning.
- Document technical specifications, data flow diagrams, and ETL processes.
- Stay up to date with the latest Talend features, best practices, and industry trends.
- Participate in code reviews and contribute to the establishment of development standards.
- Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components.
- Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT).
- Strong SQL skills for data querying and manipulation.
- Experience with data profiling, data quality checks, and error handling within ETL processes.
- Familiarity with job scheduling tools and monitoring frameworks.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and collaboratively within a team environment.
- Basic understanding of AWS services, i.e., EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, security groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, and Amazon DynamoDB.
- Understanding of AWS data integration services, i.e., Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, and Step Functions.

Preferred Qualifications:
- Experience leading and mentoring a team of 8+ Talend ETL developers.
- Experience working with US healthcare customers.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner/Data Engineer Associate.
- Experience with AWS data and infrastructure services.
- A basic working understanding of Terraform and GitLab is required.
- Experience with scripting languages such as Python or shell scripting.
- Experience with agile development methodologies.
- Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.

Job Type: Full-time
Pay: ₹2,800,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Work Location: In person
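Since this role expects familiarity with Amazon RDS alongside a basic working understanding of Terraform, here is a hedged sketch of a PostgreSQL RDS instance, such as an ETL staging target, declared in Terraform; the identifier, sizing, and credential handling are illustrative only:

```hcl
variable "db_password" {
  type      = string
  sensitive = true # supply via environment or a secrets manager, never in code
}

# A small PostgreSQL instance as a staging target for ETL loads.
resource "aws_db_instance" "etl_staging" {
  identifier          = "etl-staging-db" # hypothetical identifier
  engine              = "postgres"
  engine_version      = "15"
  instance_class      = "db.t3.medium"
  allocated_storage   = 50
  username            = "etladmin"
  password            = var.db_password
  skip_final_snapshot = true # acceptable for a non-production staging database
}
```

A production definition would add subnet groups, security groups, and backup settings; those are omitted here to keep the sketch focused.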

Posted 3 days ago

Apply

5.0 years

0 Lacs

India

On-site


Job Description

Job Title: DevOps Engineer
Company Name: Web Minds IT Solution, Pune
Employment Type: Full-time
Experience: 5-10 years

Job Description:

We are seeking an experienced DevOps Engineer to design, implement, and manage scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and operations teams to automate deployments, optimize cloud resources, and enhance system reliability. The ideal candidate has strong expertise in cloud platforms, containerization, and infrastructure as code. This role is key to driving DevOps best practices, improving delivery speed, and ensuring high system availability.

Qualification:
- Bachelor’s degree in Computer Science, IT, or a related field (required).
- Master’s degree or relevant certifications (e.g., AWS, Kubernetes, Terraform).
- 5 to 10 years of proven experience in DevOps, infrastructure automation, and cloud environments.
- Experience with CI/CD, containerization, and infrastructure as code.
- Relevant certifications (e.g., AWS Certified DevOps Engineer, CKA/CKAD, Terraform Associate).

Job Responsibilities:
- Design, implement, and maintain enterprise-grade CI/CD pipelines for efficient software delivery.
- Manage and automate cloud infrastructure (AWS, Azure, or GCP) with a strong emphasis on security, scalability, and cost-efficiency.
- Develop and maintain Infrastructure as Code (IaC) using tools like Terraform, Ansible, or CloudFormation.
- Orchestrate and manage containerized environments using Docker and Kubernetes.
- Implement and optimize monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK, Datadog).
- Ensure high availability, disaster recovery, and performance tuning of systems.
- Collaborate with development, QA, and security teams to enforce DevSecOps best practices.
- Lead troubleshooting of complex infrastructure and deployment issues in production environments.
- Mentor junior team members and contribute to DevOps strategy and architecture.

Required Skills:
- Strong hands-on experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.).
- Proficient with cloud platforms (AWS preferred; Azure or GCP acceptable).
- Expertise in Infrastructure as Code using Terraform, Ansible, or CloudFormation.
- Deep understanding of Docker and Kubernetes for containerization and orchestration.
- Strong scripting skills (Bash, Python, or Shell) for automation.
- Experience with monitoring, logging, and alerting tools (e.g., Prometheus, ELK, Grafana, CloudWatch).
- Solid grasp of Linux system administration, networking concepts, and security best practices.
- Familiarity with version control tools like Git and branching strategies.

Soft Skills:
- Strong problem-solving and analytical thinking.
- Excellent communication and collaboration skills.
- Ability to work in a fast-paced, dynamic environment.
- Proactive mindset with a focus on automation, reliability, and scalability.

Job Type: Full-time
Schedule: Day shift, fixed shift
Supplemental Pay: Performance bonus
Work Location: In person
Speak with the employer: +91 8080963983

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Description

Role Overview

A Data Engineer is responsible for designing, building, and maintaining robust data pipelines and infrastructure that facilitate the collection, storage, and processing of large datasets. They collaborate with data scientists and analysts to ensure data is accessible, reliable, and optimized for analysis. Key tasks include data integration, ETL (Extract, Transform, Load) processes, and managing databases and cloud-based systems. Data engineers play a crucial role in enabling data-driven decision-making and ensuring data quality across organizations.

What Will You Do In This Role
- Develop comprehensive high-level technical design and data mapping documents to meet specific business integration requirements.
- Own the data integration and ingestion solutions throughout the project lifecycle, delivering key artifacts such as data flow diagrams and source system inventories.
- Provide end-to-end delivery ownership for assigned data pipelines, performing cleansing, processing, and validation on the data to ensure its quality.
- Define and implement robust test strategies and test plans, ensuring end-to-end accountability for middleware testing and evidence management.
- Collaborate with the solutions architecture and business analyst teams to analyze system requirements and prototype innovative integration methods.
- Exhibit a hands-on leadership approach, ready to engage in coding, debugging, and all necessary actions to ensure the delivery of high-quality, scalable products.
- Influence and drive cross-product teams and collaboration while coordinating the execution of complex, technology-driven initiatives within distributed and remote teams.
- Work closely with various platforms and competencies to enrich the purpose of Enterprise Integration and guide their roadmaps to address current and emerging data integration and ingestion capabilities.
- Design ETL/ELT solutions, lead comprehensive system and integration testing, and outline standards and architectural toolkits to underpin our data integration efforts.
- Analyze data requirements and translate them into technical specifications for ETL processes.
- Develop and maintain ETL workflows, ensuring optimal performance and error handling mechanisms are in place.
- Monitor and troubleshoot ETL processes to ensure timely and successful data delivery.
- Collaborate with data analysts and other stakeholders to ensure alignment between data architecture and integration strategies.
- Document integration processes, data mappings, and ETL workflows to maintain clear communication and ensure knowledge transfer.

What Should You Have
- Bachelor’s degree in Information Technology, Computer Science, or any technology stream.
- 5+ years of working experience with enterprise data integration technologies: Informatica PowerCenter, Informatica Intelligent Data Management Cloud services (CDI, CAI, Mass Ingest, Orchestration).
- Integration experience utilizing REST and custom API integration.
- Experience with relational database technologies and cloud data stores from AWS, GCP, and Azure.
- Experience utilizing the AWS Well-Architected Framework, deployment and integration, and data engineering.
- Preferred: experience with CI/CD processes and related tools, including Terraform, GitHub Actions, Artifactory, etc.
- Proven expertise in Python and shell scripting, with a strong focus on leveraging these languages for data integration and orchestration to optimize workflows and enhance data processing efficiency.
- Extensive experience in the design of reusable integration patterns using cloud-native technologies.
- Extensive experience with process orchestration and scheduling integration jobs in AutoSys and Airflow.
- Experience in Agile development methodologies and release management techniques.
- Excellent analytical and problem-solving skills.
- Good understanding of data modeling and data architecture principles.

Search Firm Representatives Please Read Carefully

Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business, Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Management Process, Social Collaboration, Software Development, Software Development Life Cycle (SDLC), System Designs

Preferred Skills:

Job Posting End Date: 07/31/2025

A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R353285

Posted 3 days ago

Apply

0 years

12 - 20 Lacs

India

On-site


Backend & Frontend Expertise: Strong proficiency in Python and FastAPI for microservices; strong in TypeScript/Node.js for GraphQL/RESTful API interfaces.

Cloud & Infra Application: Hands-on AWS experience; proficient with existing Terraform; working knowledge of Kubernetes/Argo CD for deployment/troubleshooting.

CI/CD & Observability: Designs and maintains GitHub Actions pipelines; implements OpenTelemetry for effective monitoring and debugging.

System Design: Experience designing and owning specific microservices (APIs, data models, integrations).

Quality & Testing: Drives robust unit, integration, and E2E testing; leads code reviews.

Mentorship: Guides junior engineers; leads technical discussions for features.

Senior Engineers
- Python and FastAPI
- TypeScript and Node.js
- GraphQL/RESTful API interfaces
- AWS
- Terraform
- Working knowledge of Kubernetes/Argo CD for deployment/troubleshooting
- CI/CD via GitHub Actions pipelines
- OpenTelemetry
- Unit, integration, and E2E testing

Job Type: Full-time
Pay: ₹1,250,000.00 - ₹2,000,000.00 per year
Benefits: Paid time off
Schedule: Day shift, Monday to Friday
Work Location: In person
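The stack above pairs Terraform with Argo CD on Kubernetes. One plausible bootstrap pattern (a sketch under assumptions, not this team's actual setup) installs Argo CD itself through Terraform's helm provider, after which Argo CD takes over application delivery; this assumes helm provider 2.x block syntax and a locally reachable cluster:

```hcl
# Assumes a kubeconfig is available locally; all names are conventional defaults.
provider "helm" {
  kubernetes {
    config_path = "~/.kube/config"
  }
}

resource "helm_release" "argocd" {
  name             = "argocd"
  repository       = "https://argoproj.github.io/argo-helm"
  chart            = "argo-cd"
  namespace        = "argocd"
  create_namespace = true
}
```

The division of labor here is deliberate: Terraform owns the cluster-level platform pieces, while Argo CD reconciles application manifests from Git, which matches the GitHub Actions plus Argo CD workflow the listing implies.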

Posted 3 days ago

Apply

3.0 years

3 - 15 Lacs

Nāgpur

On-site


Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or Azure DevOps.
- Automate infrastructure deployment using tools such as Terraform, Ansible, or CloudFormation.
- Work with cloud platforms (AWS, Azure, GCP) to manage services, resources, and configurations.
- Develop and maintain Docker containers and manage Kubernetes clusters (EKS, AKS, GKE).
- Monitor application and infrastructure performance using tools like Prometheus, Grafana, ELK, or CloudWatch.
- Collaborate with developers, QA, and other teams to ensure smooth software delivery and operations.
- Troubleshoot and resolve infrastructure and deployment issues in development, staging, and production.
- Maintain security, backup, and redundancy strategies for critical infrastructure.

Required Skills & Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 3 to 5 years of experience in a DevOps role.
- Experience with one or more cloud platforms: AWS, Azure, or GCP.
- Proficiency in scripting languages: Bash, Python, or PowerShell.
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes).
- Experience with configuration management and Infrastructure as Code tools.
- Solid understanding of networking, firewalls, load balancing, and monitoring.
- Strong analytical and troubleshooting skills.
- Good communication and collaboration abilities.

Key skills: Azure, Docker, Kubernetes, Terraform, Jenkins, CI/CD pipelines, Linux, Git

Preferred Qualifications:
- Certifications in AWS, Azure, Kubernetes, or related DevOps tools.
- Familiarity with GitOps practices.
- Exposure to security best practices in DevOps.

Job Type: Full-time
Pay: ₹390,210.46 - ₹1,566,036.44 per year
Benefits: Health insurance, Provident Fund
Schedule: Rotational shift
Work Location: In person
Speak with the employer: +91 8369431086

Posted 3 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Summary Position Summary AWS DevSecOps Engineer – CL4 Role Overview : As a DevSecOps Engineer , you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions. Key Responsibilities : Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient and secure pipelines with low operating costs, meeting platform/technology KPIs. Technical Leadership and Advocacy: Serve as the technical advocate for DevSecOps modern practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation/integration/security practices—being responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.) and environments (sandboxes, dev, test, stage, production) through IaC, both for custom and package solutions, including identifying, assessing, and remediating vulnerabilities. Engineering Craftsmanship: Maintain accountability for the integrity and design of DevSecOps pipelines and environments while leading the implementation of deployment techniques like Blue-Green, Canary to minimize down-time and enable A/B testing. Be always hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented throughout the product development life cycle. Resolve any technical issues from implementation to production operations (e.g., leading triage and troubleshooting production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and drive application of those new technologies. Customer-Centric Engineering: Develop lean, and yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right automation, security, and deployment practices. Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a leaning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions. Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Support a collaborative environment that enhances team synergy and innovation. 
Advanced Technical Proficiency: Possess intermediary knowledge in modern software engineering practices and principles, including Agile methodologies, DevSecOps, Continuous Integration/Continuous Deployment. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate intermediate level understanding of the product development lifecycle, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning. Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business/user needs into technical requirements and automations. Learn to navigate various enterprise functions such as product, experience, engineering, compliance, and security to drive product value and feasibility. Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating technical concepts clearly and compellingly. Support teammates and product teams through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Learn to create a coherent narrative that align technical solutions with business objectives. Engagement and Collaborative Co-Creation: Able to engage and collaborate with product engineering teams, including customers as needed. Able to build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Support diverse perspectives and consensus to create feasible solutions. The team : US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value/outcomes by leveraging a progressive and responsive talent structure. As Deloitte’s primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte’s success. It is the engine that drives Deloitte, serving many of the world’s largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence. Key Qualifications : A bachelor’s degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required. Experience is the most relevant factor. Strong software engineering foundation with deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentations, etc. 5+ years proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred). 5+ years proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes. 5+ years of hands-on experience in security tools automation SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP. 5+ years of hands-on experience in using Infrastructure as Code (IaC) technologies like Terraform, Puppet, Azure Resource Manager (ARM), AWS Cloud Formation, and Google Cloud Deployment Manager. 
2+ years of hands-on experience with cloud-native services such as Data Lakes, CDN, API Gateways, Managed PaaS, and Security on multiple cloud providers (AWS, Azure, GCP) is preferred. A strong understanding of methodologies such as XP, Lean, and SAFe to deliver high-quality products rapidly. A general understanding of cloud providers' security practices and of database technologies and maintenance (e.g., RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL). General knowledge of networking, firewalls, and load balancers. Strong preference will be given to candidates with AI/ML and GenAI experience. Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.

How You Will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do.

Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302803
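As a concrete illustration of the Canary-style releases this posting mentions, here is a minimal, hypothetical Terraform sketch that splits traffic between two target groups on an AWS Application Load Balancer. It assumes aws_lb.web and the blue/green target groups are defined elsewhere; this is a sketch of the technique, not the employer's actual setup.

    resource "aws_lb_listener" "web" {
      load_balancer_arn = aws_lb.web.arn
      port              = 80
      protocol          = "HTTP"

      default_action {
        type = "forward"

        forward {
          target_group {
            arn    = aws_lb_target_group.blue.arn
            weight = 90   # current stable version keeps most traffic
          }
          target_group {
            arn    = aws_lb_target_group.green.arn
            weight = 10   # canary share; raise gradually as metrics stay healthy
          }
          stickiness {
            enabled  = true
            duration = 600   # pin each client to one version during the test window
          }
        }
      }
    }

Shifting the weights over successive applies (10/90, then 50/50, then 100/0) gives a gradual cutover with an instant rollback path.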

Posted 3 days ago

Apply

3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Summary Position Summary CORE BUSINESS OPERATIONS The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously. ROLE Level: Consultant As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high quality work products within due timelines in an agile framework. Need-basis consultants will be mentoring and/or directing junior team members/liaising with onsite/offshore teams to understand the functional requirements. As an AWS Infrastructure Engineer, you play a crucial role in building, and maintaining a cloud infrastructure on Amazon Web Services (AWS). You will also be responsible for the ownership of tasks assigned through SNOW, Dashboard, Order forms etc. The work you will do includes: Build and operate the Cloud infrastructure on AWS Continuously monitoring the health and performance of the infrastructure and resolving any issues. Using tools like CloudFormation, Terraform, or Ansible to automate infrastructure provisioning and configuration. Administer the EC2 instance’s OS such as Windows and Linux Working with other teams to deploy secure, scalable, and cost-effective cloud solutions based on AWS services. Implement monitoring and logging for Infra and Apps Keeping the infrastructure up-to-date with the latest security patches and software versions. Collaborate with development, operations and Security teams to establish best practices for software development, build, deployment, and infrastructure management Tasks related to IAM, Monitoring, Backup and Vulnerability Remediation Participating in performance testing and capacity planning activities Documentation, Weekly/Bi-Weekly Deck preparation, KB article update Handover and On call support during weekends on rotational basis Qualifications Skills / Project Experience: Must Have: 3 - 6 years of hands-on experience in AWS Cloud, Cloud Formation template, Windows/Linux administration Understanding of 2 tier, 3 tier or multi-tier architecture Experience on IaaS/PaaS/SaaS Understanding of Disaster recovery Networking and security expertise Knowledge on PowerShell, Shell and Python Associate/Professional level certification on AWS solution architecture ITIL Foundational certification Good interpersonal and communication skills Flexibility to adapt and apply innovation to varied business domain and apply technical solutioning and learnings to use cases across business domains and industries Knowledge and experience working with Microsoft Office tools Good to Have: Understanding of container technologies such as Docker, Kubernetes and OpenShift. Understanding of Application and other infrastructure monitoring tools Understanding of end-to-end infrastructure landscape Experience on virtualization platform Knowledge on Chef, Puppet, Bamboo, Concourse etc Knowledge on Microservices, DataLake, Machine learning etc Education: B.E./B. 
Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university.

Prior Experience: 3 – 6 years of experience working with AWS, system administration, IaC, etc.

Location: Hyderabad/Pune

The team: Deloitte Consulting LLP's Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today's complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients' business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com

For information on CBO visit - https://www.youtube.com/watch?v=L1cGlScLuX0

For information on life of an Analyst at CBO visit - https://www.youtube.com/watch?v=CMe0DkmMQHI

Recruiting tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302308

Posted 3 days ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Gurugram

Remote

Naukri logo

Candidates can share CV at aishwarya.joshi@espire.com

Role Description: The ideal candidate will have a strong background in the implementation, deployment, maintenance, and monitoring of Azure infrastructure. This role requires a hands-on expert who can lead complex projects, troubleshoot critical issues, and ensure the smooth operation of Azure-based environments.

Relevant Experience: Azure cloud, IaaS, PaaS, Azure DevOps, Terraform, Kubernetes

Key Responsibilities:

1. Azure Landing Zone
• Design and implement Azure Landing Zones to establish a scalable and secure foundation for cloud adoption.
• Configure Azure Resource Groups, Policies, and Role-Based Access Control (RBAC) to align with organizational governance.
• Deploy and manage networking components, including Virtual Networks, Subnets, and Network Security Groups (NSGs).
• Establish connectivity between on-premises and cloud environments using Azure ExpressRoute or VPN Gateway.
• Incorporate management groups and subscriptions to create a modular and consistent environment.

2. Automation
• Develop Infrastructure as Code (IaC) templates using Terraform, Azure Resource Manager (ARM), or Bicep (see the sketch after this listing).
• Automate routine maintenance tasks such as backups, patching, and scaling using Azure Automation or Logic Apps.
• Implement deployment pipelines for continuous integration and delivery (CI/CD) with Azure DevOps or GitHub Actions.
• Schedule and automate cost optimization tasks, including resource cleanup and tagging enforcement.
• Leverage Azure Functions to streamline serverless operations for event-driven workflows.

3. Monitoring and Log Management
• Configure Azure Monitor to collect and analyze metrics for performance and health monitoring.
• Implement Azure Log Analytics and Kusto Query Language (KQL) for centralized log aggregation and analysis.
• Set up Application Insights for end-to-end performance monitoring and diagnostics of applications.
• Establish alerting mechanisms for proactive identification and resolution of issues.
• Ensure compliance by implementing and maintaining audit logs with Azure Policy and Security Center.
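For illustration, a minimal Terraform sketch of the networking slice of a landing zone follows: a resource group, a spoke VNet, a subnet, and an NSG association. Names, CIDR ranges, and the region are hypothetical placeholders; a real landing zone would add policies, management groups, and hub peering.

    provider "azurerm" {
      features {}
    }

    resource "azurerm_resource_group" "landing_zone" {
      name     = "rg-landing-zone-dev"   # hypothetical naming convention
      location = "Central India"
    }

    resource "azurerm_virtual_network" "spoke" {
      name                = "vnet-spoke-dev"
      location            = azurerm_resource_group.landing_zone.location
      resource_group_name = azurerm_resource_group.landing_zone.name
      address_space       = ["10.10.0.0/16"]
    }

    resource "azurerm_subnet" "workload" {
      name                 = "snet-workload"
      resource_group_name  = azurerm_resource_group.landing_zone.name
      virtual_network_name = azurerm_virtual_network.spoke.name
      address_prefixes     = ["10.10.1.0/24"]
    }

    resource "azurerm_network_security_group" "workload" {
      name                = "nsg-workload"
      location            = azurerm_resource_group.landing_zone.location
      resource_group_name = azurerm_resource_group.landing_zone.name
    }

    # attach the NSG so every NIC in the subnet inherits its rules
    resource "azurerm_subnet_network_security_group_association" "workload" {
      subnet_id                 = azurerm_subnet.workload.id
      network_security_group_id = azurerm_network_security_group.workload.id
    }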

Posted 3 days ago

Apply

8.0 - 12.0 years

15 - 29 Lacs

India

On-site

GlassDoor logo

Job Title: Technical Lead
Experience: 8 to 12 Years
Location: Chennai
Domain: BFSI

Job Summary: We are seeking a versatile and highly skilled Senior Software Engineer with expertise in full stack development, mobile application development using Flutter, and backend systems using Java/Spring Boot. The ideal candidate will have strong experience across modern development stacks, cloud platforms (AWS), containerization, and CI/CD pipelines.

Key Responsibilities: Design and develop scalable web, mobile, and backend applications. Build high-quality, performant cross-platform mobile apps using Flutter and Dart. Develop RESTful APIs and services using Node.js/Express and Java/Spring Boot. Integrate frontend components with backend logic and databases (Oracle, PostgreSQL, MongoDB). Work with containerization tools like Docker and orchestration platforms like Kubernetes or ROSA. Leverage AWS cloud services for deployment, scalability, and monitoring (e.g., EC2, S3, RDS, Lambda). Collaborate with cross-functional teams including UI/UX, QA, DevOps, and product managers. Participate in Agile ceremonies, code reviews, unit/integration testing, and performance tuning. Maintain secure coding practices and ensure compliance with security standards.

Required Skills & Qualifications: Strong programming in Java (Spring Boot), Node.js, and React.js. Proficiency in Flutter & Dart for mobile development. Experience with REST APIs, JSON, and third-party integrations. Hands-on experience with cloud platforms (preferably AWS). Strong skills in databases such as Oracle, PostgreSQL, MongoDB. Experience with Git and CI/CD tools (Jenkins, GitLab CI, GitHub Actions). Familiarity with containerization using Docker and orchestration via Kubernetes. Knowledge of secure application development (OAuth, JWT, encryption). Solid understanding of Agile/Scrum methodologies.

Preferred Qualifications: Experience with Firebase, messaging queues (Kafka/RabbitMQ), and server-side rendering (Next.js). Familiarity with DevOps practices, infrastructure as code (Terraform/CloudFormation), and observability tools (Prometheus, ELK). Exposure to platform-specific integrations for Android/iOS through native channels. Understanding of App Store / Play Store deployment.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

Job Types: Full-time, Permanent
Pay: ₹1,575,371.85 - ₹2,989,972.99 per year
Benefits: Health insurance, Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus, Yearly bonus

Application Question(s): How many years of experience in Java? How many years of experience in Flutter? How many years of experience in CRM Model? What's your notice period?

Work Location: In person

Posted 3 days ago

Apply

0 years

0 Lacs

Chennai

On-site

GlassDoor logo

Here at Appian, our core values of Respect, Work to Impact, Ambition, and Constructive Dissent & Resolution define who we are. In short, this means we constantly seek to understand the best for our customers, we go beyond completion in our work, we strive for excellence with intensity, and we embrace candid communication. These values guide our actions and shape our culture every day. When you join Appian, you'll be part of a passionate team that's dedicated to accomplishing hard things.

As a DevOps & Test Infrastructure Engineer, your goal is to design, implement, and maintain a robust, scalable, and secure AWS infrastructure to support our growing testing needs. You will be instrumental in building and automating our DevOps pipeline, ensuring efficient and reliable testing processes. This role offers the opportunity to shape our performance testing environment and contribute directly to the quality and speed of our clients' Appian software delivery.

Responsibilities

Architecture Design: Design and architect a highly scalable and cost-effective AWS infrastructure tailored for testing purposes, considering security, performance, and maintainability.

DevOps Pipeline Design: Architect a secure and automated DevOps pipeline on AWS, integrating tools such as Jenkins for continuous integration/continuous delivery (CI/CD) and Locust for performance testing.

Infrastructure as Code (IaC): Implement infrastructure as code using tools like Terraform or AWS CloudFormation to enable automated deployment and scaling of the testing environment (see the sketch after this listing).

Security Implementation: Implement and enforce security best practices across the AWS infrastructure and DevOps pipeline, ensuring compliance and protecting sensitive data.

Jenkins (or similar CI/CD automation platform) Configuration & Administration: Install, configure, and administer Jenkins, including setting up build pipelines, managing plugins, and ensuring its scalability and reliability.

Locust Configuration & Administration: Install, configure, and administer Locust for performance and load testing.

Automation: Automate the deployment, scaling, and management of all infrastructure components and the DevOps pipeline.

Monitoring and Logging: Implement comprehensive monitoring and logging solutions to proactively identify and resolve issues within the testing environment, including exposing testing results for consumption.

Troubleshooting and Support: Provide expert-level troubleshooting and support for the testing infrastructure and DevOps pipeline.

Collaboration: Work closely with development, QA, and operations teams to understand their needs and provide effective solutions.

Documentation: Create and maintain clear and concise documentation for the infrastructure, pipeline, and processes.

Continuous Improvement: Stay up-to-date with the latest AWS services and DevOps best practices, and proactively identify opportunities for improvement.

Qualifications

Proven experience in designing and implementing scalable architectures on Amazon Web Services (AWS). Strong understanding of DevOps principles and practices. Hands-on experience with CI/CD tools such as Jenkins, including pipeline creation and administration. Experience with performance testing tools, preferably Locust, including test design and execution. Proficiency in infrastructure as code (IaC) tools such as Terraform or AWS CloudFormation. Solid understanding of security best practices in cloud environments.
Experience with containerization technologies like Docker and orchestration tools like Kubernetes or AWS ECS (preferred). Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack, CloudWatch). Excellent scripting skills (e.g., Python, Bash). Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work independently and as part of a team. AWS certifications (e.g., AWS Certified Solutions Architect – Associate/Professional, AWS Certified DevOps Engineer – Professional). Experience with other testing tools and frameworks. Experience with agile development methodologies. Education B.S. in Computer Science, Engineering, Information Systems, or related field. Working Conditions Opportunity to work on enterprise-scale applications across different industries. This role is based at our office at WTC 11th floor, Old Mahabalipuram Road, SH 49A, Kandhanchavadi, Kottivakkam, Chennai, Tamil Nadu 600041, India. Appian was built on a culture of in-person collaboration, which we believe is a key driver of our mission to be the best. Employees hired for this position are expected to be in the office 5 days a week to foster that culture and ensure we continue to thrive through shared ideas and teamwork. We believe being in the office provides more opportunities to come together and celebrate working with the exceptional people across Appian. Tools and Resources Training and Development: During onboarding, we focus on equipping new hires with the skills and knowledge for success through department-specific training. Continuous learning is a central focus at Appian, with dedicated mentorship and the First-Friend program being widely utilized resources for new hires. Growth Opportunities: Appian provides a diverse array of growth and development opportunities, including our leadership program tailored for new and aspiring managers, a comprehensive library of specialized department training through Appian University, skills based training, and tuition reimbursement for those aiming to advance their education. This commitment ensures that employees have access to a holistic range of development opportunities. Community: We'll immerse you into our community rooted in respect starting on day one. Appian fosters inclusivity through our 8 employee-led affinity groups. These groups help employees build stronger internal and external networks by planning social, educational, and outreach activities to connect with Appianites and larger initiatives throughout the company. About Appian Appian is a software company that automates business processes. The Appian AI-Powered Process Platform includes everything you need to design, automate, and optimize even the most complex processes, from start to finish. The world's most innovative organizations trust Appian to improve their workflows, unify data, and optimize operations—resulting in better growth and superior customer experiences. For more information, visit appian.com. [Nasdaq: APPN] Follow Appian: Twitter, LinkedIn. Appian is an equal opportunity employer that strives to attract and retain the best talent. All qualified applicants will receive consideration for employment without regard to any characteristic protected by applicable federal, state, or local law. Appian provides reasonable accommodations to applicants in accordance with all applicable laws. 
If you need a reasonable accommodation for any part of the employment process, please contact us by email at ReasonableAccommodations@appian.com . Please note that only inquiries concerning a request for reasonable accommodation will be responded to from this email address. Appian's Applicant & Candidate Privacy Notice
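As a rough illustration of provisioning test infrastructure as code, here is a hypothetical Terraform sketch that boots a single Locust master node on EC2. The AMI, subnet, and locustfile path are placeholder assumptions, not Appian's actual configuration; a production setup would add worker nodes, security groups, and IAM roles.

    resource "aws_instance" "locust_master" {
      ami           = var.ami_id        # hypothetical: an Amazon Linux 2023 AMI ID
      instance_type = "c5.large"
      subnet_id     = var.subnet_id     # hypothetical: a subnet in the test VPC

      # install Locust on first boot and start it in master mode
      user_data = <<-EOF
        #!/bin/bash
        dnf install -y python3-pip
        pip3 install locust
        locust -f /opt/tests/locustfile.py --master &
      EOF

      tags = {
        Name        = "locust-master"
        Environment = "perf-testing"
      }
    }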

Posted 3 days ago

Apply

4.0 years

0 Lacs

Chennai

On-site

GlassDoor logo

Job Highlights: We are seeking a skilled and passionate individual with a minimum of 4 years' experience as a Solution Architect, with expertise in full-stack development and a deep understanding of AWS cloud technologies. The ideal candidate will have hands-on experience across a range of backend and frontend technologies and be adept at designing secure, scalable, and high-performance cloud-native applications, having managed end-to-end hosting of both frontend and backend components and ensured 360-degree application support and performance optimization. If you're driven by innovation and eager to contribute to digital transformation initiatives, we invite you to apply.

Key Responsibilities: Design, develop, and deploy web and mobile applications using Spring Boot, Java, Python, Node.js (backend); Angular, Ionic, TypeScript, HTML5, CSS3, Bootstrap, React, React Native (frontend); MEAN and MERN stacks; MongoDB. Architect and manage cloud solutions using AWS and Microsoft Azure services such as EC2, Lambda, S3, RDS, ECS, etc.; experience with other cloud technologies will be an added advantage. Develop and manage CI/CD pipelines using GitLab to automate testing and deployment. Automate infrastructure provisioning and management using Terraform (IaC). Build modern, responsive UIs with Bootstrap and Ionic. Implement best practices for cloud security, performance optimization, and monitoring. Manage end-to-end database operations, including schema design, query optimization, indexing, and security. Work closely with stakeholders to shape cloud strategy, evaluate modernization opportunities, and present business cases. Create technical documentation and deliver effective presentations for internal and client-facing teams. Lead the integration of AI/ML models and Large Language Models (LLMs) into real-world applications. Collaborate with cross-functional teams, including UI/UX, DevOps, QA, and Data Science. Manage the team; must have handled a minimum of 5 projects.

Strong hands-on experience in: Spring Boot, Angular, Ionic, Bootstrap, Flutter; AWS core services (EC2, S3, Lambda, RDS, ECS, IAM, etc.) and Microsoft Azure; Terraform for infrastructure automation; CI/CD pipelines (preferably GitLab); database systems such as MySQL, PostgreSQL, DynamoDB, MongoDB; testing tools such as Swagger (OpenAPI) and Postman.

Preferred Qualifications: Bachelor's or Master's degree in Computer Science/Engineering or equivalent. Strong leadership skills and experience mentoring tech teams. Clear understanding of microservices, RESTful APIs, and security protocols. Experience with containerization (Docker, Kubernetes) is a plus. Certifications in AWS, Azure, or relevant fields are preferred.

Job Types: Full-time, Internship
Benefits: Paid time off
Schedule: Day shift
Education: Master's (Preferred)
Experience: Solution architect: 4 years (Preferred)
Work Location: In person

Posted 3 days ago

Apply

6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

Job Description: We are seeking a highly skilled and motivated Google Cloud Engineer to join our dynamic engineering team. In this role, you will be instrumental in designing, building, deploying, and maintaining our cloud infrastructure and applications on Google Cloud Platform (GCP). You will work closely with development, operations, and security teams to ensure our cloud environment is scalable, secure, highly available, and cost-optimized. If you are passionate about cloud-native technologies, automation, and solving complex infrastructure challenges, we encourage you to apply.

What You Will Do

Design, implement, and manage robust, scalable, and secure cloud infrastructure on Google Cloud Platform (GCP) using Infrastructure as Code (IaC) tools like Terraform (see the sketch after this listing). Deploy, configure, and manage core GCP services such as Compute Engine, Kubernetes Engine (GKE), Cloud SQL, Cloud Storage, Cloud Functions, BigQuery, Pub/Sub, and networking components (VPC, Cloud Load Balancing, Cloud CDN). Develop and maintain CI/CD pipelines for automated deployment and release management using tools like Cloud Build, GitLab CI/CD, GitHub Actions, or Jenkins. Implement and enforce security best practices within the GCP environment, including IAM, network security, data encryption, and compliance adherence. Monitor cloud infrastructure and application performance, identify bottlenecks, and implement solutions for optimization and reliability. Troubleshoot and resolve complex infrastructure and application issues in production and non-production environments. Collaborate with development teams to ensure applications are designed for cloud-native deployment, scalability, and resilience. Participate in on-call rotations for critical incident response and provide timely resolution to production issues. Create and maintain comprehensive documentation for cloud architecture, configurations, and operational procedures. Stay current with new GCP services, features, and industry best practices, proposing and implementing improvements as appropriate. Contribute to cost optimization efforts by identifying and implementing efficiencies in cloud resource utilization.

What Experience You Need

Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field. 6+ years of hands-on experience with C#, .NET Core, .NET Framework, MVC, Web API, Entity Framework, and SQL Server. 3+ years of experience with cloud platforms (GCP preferred), including designing and deploying cloud-native applications. 3+ years of experience with source code management (Git), CI/CD pipelines, and Infrastructure as Code. Strong experience with JavaScript and a modern JavaScript framework, Vue.js preferred. Proven ability to lead and mentor development teams. Strong understanding of microservices architecture and serverless computing. Experience with relational databases (SQL Server, PostgreSQL). Excellent problem-solving, analytical, and communication skills. Experience working in Agile/Scrum environments.

What Could Set You Apart

GCP Cloud Certification. UI development experience (e.g., HTML, JavaScript, Angular, Bootstrap). Experience in Agile environments (e.g., Scrum, XP). Relational database experience (e.g., SQL Server, PostgreSQL). Experience with Atlassian tooling (e.g., JIRA, Confluence) and GitHub. Working knowledge of Python. Excellent problem-solving and analytical skills and the ability to work well in a team.
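To give a flavor of the Terraform-on-GCP work described above, here is a minimal, hypothetical sketch creating a custom-mode VPC, a subnet, and a storage bucket. The project variable, resource names, and region are placeholder assumptions.

    provider "google" {
      project = var.project_id   # hypothetical project variable
      region  = "asia-south1"
    }

    resource "google_compute_network" "vpc" {
      name                    = "vpc-app"
      auto_create_subnetworks = false   # custom-mode network; define subnets explicitly
    }

    resource "google_compute_subnetwork" "app" {
      name          = "snet-app"
      ip_cidr_range = "10.20.0.0/20"
      region        = "asia-south1"
      network       = google_compute_network.vpc.id
    }

    resource "google_storage_bucket" "artifacts" {
      name                        = "example-build-artifacts"   # bucket names are globally unique
      location                    = "ASIA-SOUTH1"
      uniform_bucket_level_access = true
    }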

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Linkedin logo

You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain, and execute test plans to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration, and continuous integration). You will conduct quality control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend, and implement changes to enhance the effectiveness of QA strategies.

What You Will Do

Independently develop scalable and reliable automated tests and frameworks for testing software solutions. Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes, and environments. Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model. Proactively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.

What Experience You Need

Bachelor's degree in a STEM major or equivalent experience. 5-7 years of software testing experience. Able to create and review test automation according to specifications. Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS. Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others with respect to software validation. Created test strategies and plans. Led complex testing efforts or projects. Participated in Sprint Planning as the Test Lead. Collaborated with Product Owners, SREs, and Technical Architects to define testing strategies and plans. Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes. Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm charts, and Terraform constructs. Cloud certification strongly preferred.

What Could Set You Apart

An ability to demonstrate successful performance of our Success Profile skills, including:

Attention to Detail - Define test case candidates for automation that are outside of product specifications, i.e.
Negative Testing; create thorough and accurate documentation of all work, including status updates to summarize project highlights; validate that processes operate properly and conform to standards.

Automation - Automate defined test cases and test suites per project.

Collaboration - Collaborate with Product Owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads, and architects on functional and non-functional test strategies and plans.

Execution - Develop scalable and reliable automated tests; develop performance testing scripts to assure products are adhering to the documented SLO/SLI/SLAs; specify the need for test data types for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points.

Quality Control - Perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; analyze results of functional and non-functional tests and make recommendations for improvements.

Performance / Resilience - Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct performance and resilience testing to ensure the products meet SLAs/SLOs.

Quality Focus - Review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation, including status and project updates.

Risk Mitigation - Work with Product Owners, QE, and development team leads to track and determine prioritization of defect fixes.

Posted 3 days ago

Apply

8.0 years

28 - 30 Lacs

Chennai

On-site

GlassDoor logo

Experience - 8+ Years
Budget - 30 LPA (Including Variable Pay)
Location - Bangalore, Hyderabad, Chennai (Hybrid)
Shift Timing - 2 PM - 11 PM

ETL Development Lead (8+ years)

Experience leading and mentoring a team of Talend ETL developers, providing technical direction and guidance on ETL/data integration development. Designing complex data integration solutions using Talend and AWS. Collaborating with stakeholders to define project scope, timelines, and deliverables. Contributing to project planning, risk assessment, and mitigation strategies. Ensuring adherence to project timelines and quality standards. Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.

Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components. Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files). Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality. Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes. Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions. Perform unit testing and participate in system integration testing of ETL processes. Monitor and maintain Talend environments, including job scheduling and performance tuning. Document technical specifications, data flow diagrams, and ETL processes. Stay up-to-date with the latest Talend features, best practices, and industry trends. Participate in code reviews and contribute to the establishment of development standards.

Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components. Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT). Strong SQL skills for data querying and manipulation. Experience with data profiling, data quality checks, and error handling within ETL processes. Familiarity with job scheduling tools and monitoring frameworks. Excellent problem-solving, analytical, and communication skills. Ability to work independently and collaboratively within a team environment.

Basic understanding of AWS services, i.e., EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, Security Groups, Route 53, Network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB. Understanding of AWS data integration services, i.e., Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions.

Preferred Qualifications: Experience leading and mentoring a team of 8+ Talend ETL developers. Experience working with US healthcare customers. Bachelor's degree in Computer Science, Information Technology, or a related field. Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner/Data Engineer Associate. Experience with AWS data and infrastructure services. A basic working knowledge of Terraform and GitLab is required. Experience with scripting languages such as Python or shell scripting. Experience with agile development methodologies. Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time Pay: ₹2,800,000.00 - ₹3,000,000.00 per year Schedule: Day shift Work Location: In person

Posted 3 days ago

Apply

0 years

0 - 0 Lacs

Coimbatore

On-site

GlassDoor logo

We are seeking AWS Cloud DevOps Engineers to be part of the Engineering team, collaborating with software development, quality assurance, and IT operations teams to deploy and maintain production systems in the cloud. This role requires an engineer who is passionate about provisioning and maintaining reliable, secure, and scalable production systems. We are a small team of highly skilled engineers and look forward to adding a new member who wishes to advance their career through continuous learning. Selected candidates will be an integral part of a team of passionate and enthusiastic IT professionals, and will have tremendous opportunities to contribute to the success of the products.

What you will do

The ideal candidate will be responsible for deploying, automating, maintaining, managing, and monitoring an AWS production system, including software applications and cloud-based infrastructure. Monitor system performance and troubleshoot issues. Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS). Use DevOps principles and methodologies to enable the rapid deployment of software and services by coordinating software development, quality assurance, and IT operations. Make sure AWS production systems are reliable, secure, and scalable. Create and enforce policies related to AWS usage, including sample tagging, instance type usage, and data storage (see the sketch after this listing). Resolve problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques. Automate different operational processes by designing, maintaining, and managing tools. Provide primary operational support and engineering for all cloud and enterprise deployments. Lead the organisation's platform security efforts by collaborating with the core engineering team. Design, build, and maintain containerization using Docker, and manage container orchestration with Kubernetes. Set up monitoring, alerting, and logging tools (e.g., Zabbix) to ensure system reliability. Collaborate with development, QA, and operations teams to design and implement CI/CD pipelines with Jenkins. Develop policies, standards, and guidelines for IaC and CI/CD that teams can follow. Automate and optimize infrastructure tasks using tools like Terraform, Ansible, or CloudFormation. Support InfoSec scans and compliance audits. Ensure security best practices in the cloud environment, including IAM management, security groups, and network firewalls. Contribute to the optimization of system performance and cost. Promote knowledge-sharing activities within and across different product teams by creating and engaging in communities of practice and through documentation, training, and mentoring. Keep skills up to date through ongoing self-directed training.

What skills are required

Ability to learn new technologies quickly. Ability to work both independently and in collaborative teams to communicate design and build ideas effectively. Problem-solving and critical-thinking skills, including the ability to organize, analyze, interpret, and disseminate information.
Excellent spoken and written communication skills. Must be able to work as part of a diverse team, as well as independently. Ability to follow departmental and organizational processes and meet established goals and deadlines. Knowledge of EC2 (Auto Scaling, Security Groups), VPC, SQS, SNS, Route 53, RDS, S3, ElastiCache, IAM, and the CLI. Server setup/configuration (Tomcat, nginx). Experience with AWS, including EC2, S3, CloudTrail, and APIs. Solid understanding of EC2 On-Demand, Spot Market, and Reserved Instances. Knowledge of Infrastructure as Code tools, including Terraform, Ansible, or CloudFormation. Knowledge of scripting and automation using Python, Bash, or Perl to automate AWS tasks. Knowledge of code deployment tools such as Ansible and CloudFormation scripts. Support for InfoSec scans and compliance audits. Basic knowledge of network architecture, DNS, and load balancing. Knowledge of containerization technologies like Docker and orchestration platforms like Kubernetes. Understanding of monitoring and logging tools (e.g., Zabbix). Familiarity with version control systems (Git). Knowledge of microservices architecture and deployment. Bachelor's degree in Engineering or Master's degree in Computer Science.

Note: Only candidates who graduated in 2023 or 2024 can apply for this internship. This is an internship-to-hire position, and candidates who complete the internship will be offered a full-time position based on performance.

Job Types: Full-time, Permanent, Fresher, Internship
Contract length: 6 months
Pay: ₹5,500.00 - ₹7,000.00 per month
Schedule: Day shift, Monday to Friday, Morning shift
Expected Start Date: 01/07/2025
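One lightweight way to enforce the tagging policy this listing mentions is the AWS provider's default_tags block, which stamps every taggable resource the provider creates. A minimal sketch follows, with hypothetical tag values and bucket name:

    provider "aws" {
      region = "ap-south-1"

      # applied to every taggable resource this provider creates,
      # so individual resources cannot silently skip the tagging policy
      default_tags {
        tags = {
          Owner       = "platform-team"    # hypothetical values
          CostCenter  = "engineering"
          Environment = "production"
        }
      }
    }

    resource "aws_s3_bucket" "logs" {
      bucket = "example-prod-app-logs"     # hypothetical, must be globally unique
    }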

Posted 3 days ago

Apply

5.0 years

3 - 7 Lacs

Ahmedabad

On-site

GlassDoor logo

Location: Ahmedabad / Pune
Required Experience: 5+ Years (immediate joiners preferred)

We are looking for a highly skilled Lead Data Engineer (Snowflake) to join our team. The ideal candidate will have extensive experience with Snowflake and cloud platforms, along with a strong understanding of ETL processes, data warehousing concepts, and programming languages. If you have a passion for working with large datasets, designing scalable database schemas, and solving complex data problems, we would love to hear from you.

Key Responsibilities: Design, implement, and optimize data pipelines and workflows using Apache Airflow. Develop incremental and full-load strategies with monitoring, retries, and logging. Build scalable data models and transformations in dbt, ensuring modularity, documentation, and test coverage. Develop and maintain data warehouses in Snowflake. Ensure data quality, integrity, and reliability through validation frameworks and automated testing. Tune performance through clustering keys, warehouse scaling, materialized views, and query optimization. Monitor job performance and resolve data pipeline issues proactively. Build and maintain data quality frameworks (null checks, type checks, threshold alerts). Partner with data analysts, scientists, and business stakeholders to translate reporting and analytics requirements into technical specifications.

Required Skills & Qualifications: Snowflake (data modeling, performance tuning, access control, external tables, streams and tasks). Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling). dbt (Data Build Tool) (modular SQL development, Jinja templating, testing, documentation). Proficiency in SQL, Spark, and Python. Experience building data pipelines on cloud platforms like AWS, GCP, or Azure. Strong knowledge of data warehousing concepts and ELT best practices. Familiarity with version control systems (e.g., Git) and CI/CD practices. Familiarity with infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments (see the sketch after this listing). Excellent problem-solving skills and the ability to work independently.

Perks: Flexible timings. 5-day work week. Healthy environment. Celebrations. Learn and grow. Build the community. Medical insurance benefit.
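Since the listing mentions Terraform for provisioning Snowflake, here is a minimal, hypothetical sketch using the community Snowflake provider to create a database and a small auto-suspending warehouse. Object names are placeholders, and authentication (account, user, key) is assumed to come from provider configuration or environment variables.

    terraform {
      required_providers {
        snowflake = {
          source = "Snowflake-Labs/snowflake"   # assumes this registry source
        }
      }
    }

    resource "snowflake_database" "analytics" {
      name = "ANALYTICS"
    }

    resource "snowflake_warehouse" "transforming" {
      name           = "TRANSFORMING_WH"
      warehouse_size = "XSMALL"
      auto_suspend   = 60   # suspend after 60 idle seconds to control credit spend
    }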

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity

We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs and to mentoring junior engineers.

Your Key Responsibilities

Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure. Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures. Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm. Create and refine SOPs, automation scripts, and runbooks for efficient issue handling. Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions. Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers. Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health. Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments. Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success

Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline. Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates (see the sketch after this listing). Familiarity with scripting languages such as Bash and Python. Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD). Container orchestration and management using Kubernetes, Helm, and Docker. Experience with configuration management and automation tools such as Ansible. Strong understanding of cloud security best practices, IAM policies, and compliance standards. Experience with ITSM tools like ServiceNow for incident and change management. Strong documentation and communication skills.

To qualify for the role, you must have

3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation. Hands-on expertise in AWS and Azure environments. Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting. Experience in a 24x7 rotational support model. Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).
Technologies and Tools

Must-haves: Cloud Platforms: AWS, Azure. CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline. Infrastructure as Code: Terraform. Containerization: Kubernetes (EKS/AKS), Docker, Helm. Logging & Monitoring: AWS CloudWatch, Azure Monitor. Configuration & Automation: Ansible, Bash. Incident & ITSM: ServiceNow or equivalent. Certification: relevant AWS and Azure certifications.

Good to have: Cloud Infrastructure: CloudFormation, ARM Templates. Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub. Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure). Scripting: Python/Bash. Observability: OpenTelemetry, Datadog, Splunk. Compliance: AWS Well-Architected Framework, Azure Security Center.

What We Look For: Enthusiastic learners with a passion for cloud technologies and DevOps practices. Problem solvers with a proactive approach to troubleshooting and optimization. Team players who can collaborate effectively in a remote or hybrid work environment. Detail-oriented professionals with strong documentation skills.

What We Offer: EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We’ll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You’ll develop the mindset and skills to navigate whatever comes next. Success as defined by you: We’ll provide the tools and flexibility, so you can make a meaningful impact, your way. Transformative leadership: We’ll give you the insights, coaching and confidence to be the leader the world needs. Diverse and inclusive culture: You’ll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
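For a sense of the IaC work in scope, here is a minimal, hypothetical Terraform sketch of an AKS cluster with a system node pool and a managed identity. The name, sizing, and variables are placeholder assumptions, not EY's actual environment.

    resource "azurerm_kubernetes_cluster" "aks" {
      name                = "aks-support-dev"      # hypothetical name
      location            = var.location
      resource_group_name = var.resource_group_name
      dns_prefix          = "aks-support-dev"

      default_node_pool {
        name       = "system"
        node_count = 2
        vm_size    = "Standard_D2s_v3"
      }

      identity {
        type = "SystemAssigned"   # managed identity instead of a service principal
      }
    }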

Posted 3 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium) (see the sketch after this list)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
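To make a couple of these concrete, here is a minimal Terraform sketch touching two of the questions above: remote state storage and sensitive data handling. The bucket, table, and variable names are hypothetical placeholders; real projects would size and secure these to their own needs.

    # backend configuration: stores state in S3, with a DynamoDB table for locking
    terraform {
      backend "s3" {
        bucket         = "example-terraform-state"      # hypothetical, globally unique bucket
        key            = "prod/network/terraform.tfstate"
        region         = "ap-south-1"
        dynamodb_table = "terraform-locks"              # hypothetical table with a LockID partition key
        encrypt        = true
      }
    }

    # marking a variable sensitive keeps its value out of plan/apply output;
    # note it is still stored in state, so the state backend itself must be secured
    variable "db_password" {
      type      = string
      sensitive = true
    }

Run terraform init after adding or changing a backend block so Terraform can migrate existing state to the new location.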

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
