
7796 Terraform Jobs - Page 35

JobPe aggregates results for easy application access; you apply directly on the original job portal.

5.0 years

0 Lacs

India

On-site

Source: Glassdoor

Job Title: DevOps Engineer
Company Name: Web Minds IT Solution, Pune
Employment Type: Full-time
Experience: 5-10 years

Job Description: We are seeking an experienced DevOps Engineer to design, implement, and manage scalable infrastructure and CI/CD pipelines. You will work closely with development, QA, and operations teams to automate deployments, optimize cloud resources, and enhance system reliability. The ideal candidate has strong expertise in cloud platforms, containerization, and infrastructure as code. This role is key to driving DevOps best practices, improving delivery speed, and ensuring high system availability.

Qualifications:
- Bachelor's degree in Computer Science, IT, or a related field (required); Master's degree or relevant certifications (e.g., AWS, Kubernetes, Terraform) preferred
- 5 to 10 years of proven experience in DevOps, infrastructure automation, and cloud environments
- Experience with CI/CD, containerization, and infrastructure as code
- Relevant certifications (e.g., AWS Certified DevOps Engineer, CKA/CKAD, Terraform Associate)

Job Responsibilities:
- Design, implement, and maintain enterprise-grade CI/CD pipelines for efficient software delivery
- Manage and automate cloud infrastructure (AWS, Azure, or GCP) with a strong emphasis on security, scalability, and cost-efficiency
- Develop and maintain Infrastructure as Code (IaC) using tools like Terraform, Ansible, or CloudFormation (see the hedged sketch after this listing)
- Orchestrate and manage containerized environments using Docker and Kubernetes
- Implement and optimize monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK, Datadog)
- Ensure high availability, disaster recovery, and performance tuning of systems
- Collaborate with development, QA, and security teams to enforce DevSecOps best practices
- Lead troubleshooting of complex infrastructure and deployment issues in production environments
- Mentor junior team members and contribute to DevOps strategy and architecture

Required Skills:
- Strong hands-on experience with CI/CD tools (Jenkins, GitLab CI, GitHub Actions, etc.)
- Proficiency with cloud platforms (AWS preferred; Azure or GCP acceptable)
- Expertise in Infrastructure as Code using Terraform, Ansible, or CloudFormation
- Deep understanding of Docker and Kubernetes for containerization and orchestration
- Strong scripting skills (Bash, Python, or Shell) for automation
- Experience with monitoring, logging, and alerting tools (e.g., Prometheus, ELK, Grafana, CloudWatch)
- Solid grasp of Linux system administration, networking concepts, and security best practices
- Familiarity with version control tools like Git and branching strategies

Soft Skills:
- Strong problem-solving and analytical thinking
- Excellent communication and collaboration skills
- Ability to work in a fast-paced, dynamic environment
- Proactive mindset with a focus on automation, reliability, and scalability

Job Type: Full-time
Schedule: Day shift, fixed shift
Supplemental Pay: Performance bonus
Work Location: In person
Speak with the employer: +91 8080963983
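To make the IaC and scripting bullets above concrete, here is a minimal, hedged Python sketch (not from the employer) that scripts a Terraform drift check. It relies on the real `-detailed-exitcode` flag of `terraform plan` (exit code 0 = no changes, 2 = pending changes); the module path is a hypothetical placeholder.

```python
import subprocess
import sys

WORKDIR = "infra/prod"  # hypothetical Terraform root module


def drift_check() -> int:
    """Run `terraform plan`; -detailed-exitcode returns 2 when live state differs from code."""
    subprocess.run(["terraform", "init", "-input=false"], cwd=WORKDIR, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-detailed-exitcode", "-input=false", "-no-color"],
        cwd=WORKDIR,
    )
    if result.returncode == 2:
        print("drift detected: live infrastructure differs from code")
    elif result.returncode == 0:
        print("no drift")
    return result.returncode


if __name__ == "__main__":
    sys.exit(drift_check())
```

A script like this is typically wired into a scheduled CI job so drift surfaces as a failed pipeline rather than a surprise during the next apply.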

Posted 3 days ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

Role Overview: A Data Engineer is responsible for designing, building, and maintaining robust data pipelines and infrastructure that facilitate the collection, storage, and processing of large datasets. They collaborate with data scientists and analysts to ensure data is accessible, reliable, and optimized for analysis. Key tasks include data integration, ETL (Extract, Transform, Load) processes, and managing databases and cloud-based systems. Data engineers play a crucial role in enabling data-driven decision-making and ensuring data quality across organizations.

What Will You Do In This Role
- Develop comprehensive High-Level Technical Design and Data Mapping documents to meet specific business integration requirements.
- Own the data integration and ingestion solutions throughout the project lifecycle, delivering key artifacts such as data flow diagrams and source system inventories.
- Provide end-to-end delivery ownership for assigned data pipelines, performing cleansing, processing, and validation on the data to ensure its quality.
- Define and implement robust Test Strategies and Test Plans, ensuring end-to-end accountability for middleware testing and evidence management.
- Collaborate with the Solutions Architecture and Business Analyst teams to analyze system requirements and prototype innovative integration methods.
- Exhibit a hands-on leadership approach, ready to engage in coding, debugging, and all necessary actions to ensure the delivery of high-quality, scalable products.
- Influence and drive cross-product teams and collaboration while coordinating the execution of complex, technology-driven initiatives within distributed and remote teams.
- Work closely with various platforms and competencies to enrich the purpose of Enterprise Integration and guide their roadmaps to address current and emerging data integration and ingestion capabilities.
- Design ETL/ELT solutions, lead comprehensive system and integration testing, and outline standards and architectural toolkits to underpin our data integration efforts.
- Analyze data requirements and translate them into technical specifications for ETL processes.
- Develop and maintain ETL workflows, ensuring optimal performance and error-handling mechanisms are in place.
- Monitor and troubleshoot ETL processes to ensure timely and successful data delivery.
- Collaborate with data analysts and other stakeholders to ensure alignment between data architecture and integration strategies.
- Document integration processes, data mappings, and ETL workflows to maintain clear communication and ensure knowledge transfer.

What Should You Have
- Bachelor's degree in Information Technology, Computer Science, or any technology stream.
- 5+ years of working experience with enterprise data integration technologies: Informatica PowerCenter, Informatica Intelligent Data Management Cloud Services (CDI, CAI, Mass Ingest, Orchestration).
- Integration experience utilizing REST and custom API integration (see the hedged sketch after this listing).
- Experience with relational database technologies and cloud data stores from AWS, GCP, and Azure.
- Experience utilizing the AWS Well-Architected Framework, deployment and integration, and data engineering.
- Preferred: experience with CI/CD processes and related tools, including Terraform, GitHub Actions, Artifactory, etc.
- Proven expertise in Python and Shell scripting, with a strong focus on leveraging these languages for data integration and orchestration to optimize workflows and enhance data processing efficiency.
- Extensive experience designing reusable integration patterns using cloud-native technologies.
- Extensive experience with process orchestration and scheduling integration jobs in AutoSys and Airflow.
- Experience with Agile development methodologies and release management techniques.
- Excellent analytical and problem-solving skills.
- Good understanding of data modeling and data architecture principles.

Current Employees apply HERE. Current Contingent Workers apply HERE.

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business, Business Intelligence (BI), Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Design Applications, Information Management, Management Process, Social Collaboration, Software Development, Software Development Life Cycle (SDLC), System Designs

Preferred Skills:

Job Posting End Date: 07/31/2025. A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R353285
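As a hedged illustration of the Python-based REST integration skills this posting emphasizes, here is a minimal extract-and-validate sketch; the endpoint, field names, and pagination scheme are hypothetical, not from the posting.

```python
import requests

API_URL = "https://api.example.com/v1/orders"  # hypothetical source endpoint


def extract(page_size: int = 500) -> list[dict]:
    """Pull records from a REST source, following simple page-based pagination."""
    records, page = [], 1
    while True:
        resp = requests.get(API_URL, params={"page": page, "size": page_size}, timeout=30)
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return records
        records.extend(batch)
        page += 1


def validate(records: list[dict]) -> list[dict]:
    """Basic data-quality gate: drop rows missing a primary key or amount."""
    clean = [r for r in records if r.get("order_id") and r.get("amount") is not None]
    rejected = len(records) - len(clean)
    if rejected:
        print(f"rejected {rejected} rows failing null checks")
    return clean


if __name__ == "__main__":
    rows = validate(extract())
    print(f"staged {len(rows)} validated rows for load")
```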

Posted 3 days ago


0 years

12 - 20 Lacs

India

On-site

Source: Glassdoor

- Backend & Frontend Expertise: Strong proficiency in Python and FastAPI for microservices (see the hedged sketch after this listing); strong in TypeScript/Node.js for GraphQL/RESTful API interfaces.
- Cloud & Infra Application: Hands-on AWS experience; proficient with existing Terraform. Working knowledge of Kubernetes/Argo CD for deployment and troubleshooting.
- CI/CD & Observability: Designs and maintains GitHub Actions pipelines. Implements OpenTelemetry for effective monitoring and debugging.
- System Design: Experience designing and owning specific microservices (APIs, data models, integrations).
- Quality & Testing: Drives robust unit, integration, and E2E testing. Leads code reviews.
- Mentorship: Guides junior engineers, leads technical discussions for features.

Senior Engineers
- Python and FastAPI
- TypeScript and Node.js
- GraphQL/RESTful API interfaces
- AWS
- Terraform
- Working knowledge of Kubernetes/Argo CD for deployment/troubleshooting
- CI/CD via GitHub Actions pipelines
- OpenTelemetry
- Unit, integration, and E2E testing

Job Types: Full-time
Pay: ₹1,250,000.00 - ₹2,000,000.00 per year
Benefits: Paid time off
Schedule: Day shift, Monday to Friday
Work Location: In person
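For the Python/FastAPI microservice stack this listing centers on, a minimal, hypothetical service sketch might look like the following; the route and model names are illustrative, not from the employer.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="orders-service")  # hypothetical microservice


class Order(BaseModel):
    order_id: str
    amount: float


ORDERS: dict[str, Order] = {}  # in-memory store standing in for a real database


@app.get("/health")
def health() -> dict:
    """Liveness endpoint, e.g., for a Kubernetes readiness probe."""
    return {"status": "ok"}


@app.post("/orders")
def create_order(order: Order) -> Order:
    ORDERS[order.order_id] = order
    return order


@app.get("/orders/{order_id}")
def get_order(order_id: str) -> Order:
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]
```

Assuming the file is named main.py, it can be run locally with `uvicorn main:app --reload`.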

Posted 3 days ago


3.0 years

3 - 15 Lacs

Nāgpur

On-site

Source: Glassdoor

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or Azure DevOps.
- Automate infrastructure deployment using tools such as Terraform, Ansible, or CloudFormation.
- Work with cloud platforms (AWS, Azure, GCP) to manage services, resources, and configurations.
- Develop and maintain Docker containers and manage Kubernetes clusters (EKS, AKS, GKE); see the hedged sketch after this listing.
- Monitor application and infrastructure performance using tools like Prometheus, Grafana, ELK, or CloudWatch.
- Collaborate with developers, QA, and other teams to ensure smooth software delivery and operations.
- Troubleshoot and resolve infrastructure and deployment issues in development, staging, and production.
- Maintain security, backup, and redundancy strategies for critical infrastructure.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 3 to 5 years of experience in a DevOps role.
- Experience with one or more cloud platforms: AWS, Azure, or GCP.
- Proficiency in scripting languages: Bash, Python, or PowerShell.
- Hands-on experience with containerization (Docker) and orchestration (Kubernetes).
- Experience with configuration management and Infrastructure as Code tools.
- Solid understanding of networking, firewalls, load balancing, and monitoring.
- Strong analytical and troubleshooting skills.
- Good communication and collaboration abilities.

Key skills: Azure, Docker, Kubernetes, Terraform, Jenkins, CI/CD pipelines, Linux, Git

Preferred Qualifications:
- Certifications in AWS, Azure, Kubernetes, or related DevOps tools.
- Familiarity with GitOps practices.
- Exposure to security best practices in DevOps.

Job Type: Full-time
Pay: ₹390,210.46 - ₹1,566,036.44 per year
Benefits: Health insurance, Provident Fund
Schedule: Rotational shift
Work Location: In person
Speak with the employer: +91 8369431086
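As one hedged example pairing the Python scripting and Kubernetes skills above, this sketch lists pods that are not in a Running or Succeeded phase. It assumes the official kubernetes Python client and a reachable kubeconfig; it is an illustration, not the employer's tooling.

```python
from kubernetes import client, config


def report_unhealthy_pods() -> None:
    """Print pods across all namespaces whose phase suggests a problem."""
    config.load_kube_config()  # uses ~/.kube/config; use load_incluster_config() inside a cluster
    v1 = client.CoreV1Api()
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")


if __name__ == "__main__":
    report_unhealthy_pods()
```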

Posted 3 days ago


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position Summary: AWS DevSecOps Engineer – CL4

Role Overview: As a DevSecOps Engineer, you will actively engage in your engineering craft, taking a hands-on approach to multiple high-visibility projects. Your expertise will be pivotal in delivering solutions that delight customers and users, while also driving tangible value for Deloitte's business investments. You will leverage your extensive DevSecOps engineering craftsmanship and advanced proficiency across multiple programming languages, DevSecOps tools, and modern frameworks, consistently demonstrating your strong track record in delivering high-quality, outcome-focused CI/CD and automation solutions. The ideal candidate will be a dependable team player, collaborating with cross-functional teams to design, develop, and deploy advanced software solutions.

Key Responsibilities:
- Outcome-Driven Accountability: Embrace and drive a culture of accountability for customer and business outcomes. Develop DevSecOps engineering solutions that solve complex automation problems with valuable outcomes, ensuring high-quality, lean, resilient, and secure pipelines with low operating costs, meeting platform/technology KPIs.
- Technical Leadership and Advocacy: Serve as the technical advocate for modern DevSecOps practices, ensuring integrity, feasibility, and alignment with business and customer goals, NFRs, and applicable automation, integration, and security practices. You will be responsible for designing and maintaining code repos, CI/CD pipelines, integrations (code quality, QE automation, security, etc.), and environments (sandboxes, dev, test, stage, production) through IaC, for both custom and package solutions, including identifying, assessing, and remediating vulnerabilities.
- Engineering Craftsmanship: Maintain accountability for the integrity and design of DevSecOps pipelines and environments while leading the implementation of deployment techniques like blue-green and canary to minimize downtime and enable A/B testing. Be always hands-on and actively engage with engineers to ensure DevSecOps practices are understood and can be implemented throughout the product development life cycle. Resolve any technical issues from implementation to production operations (e.g., leading triage and troubleshooting of production issues). Be self-driven to learn new technologies, experiment with engineers, and inspire the team to learn and drive application of those new technologies.
- Customer-Centric Engineering: Develop lean, yet scalable and flexible, DevSecOps automations through rapid, inexpensive experimentation to solve customer needs, enabling version control, security, logging, feedback loops, continuous delivery, etc. Engage with customers and product teams to deliver the right automation, security, and deployment practices.
- Incremental and Iterative Delivery: Adopt a mindset that favors action and evidence over extensive planning. Utilize a leaning-forward approach to navigate complexity and uncertainty, delivering lean, supportable, and maintainable solutions.
- Cross-Functional Collaboration and Integration: Work collaboratively with empowered, cross-functional teams including product management, experience, engineering, delivery, infrastructure, and security. Integrate diverse perspectives to make well-informed decisions that balance feasibility, viability, usability, and value. Support a collaborative environment that enhances team synergy and innovation.
- Advanced Technical Proficiency: Possess intermediate knowledge of modern software engineering practices and principles, including Agile methodologies, DevSecOps, and continuous integration/continuous deployment. Strive to be a role model, leveraging these techniques to optimize solutioning and product delivery, ensuring high-quality outcomes with minimal waste. Demonstrate an intermediate-level understanding of the product development lifecycle, from conceptualization and design to implementation and scaling, with a focus on continuous improvement and learning.
- Domain Expertise: Quickly acquire domain-specific knowledge relevant to the business or product. Translate business and user needs into technical requirements and automations. Learn to navigate various enterprise functions such as product, experience, engineering, compliance, and security to drive product value and feasibility.
- Effective Communication and Influence: Exhibit exceptional communication skills, capable of articulating technical concepts clearly and compellingly. Support teammates and product teams through well-structured arguments and trade-offs supported by evidence, evaluations, and research. Learn to create a coherent narrative that aligns technical solutions with business objectives.
- Engagement and Collaborative Co-Creation: Engage and collaborate with product engineering teams, including customers as needed. Build and maintain constructive relationships, fostering a culture of co-creation and shared momentum towards achieving product goals. Support diverse perspectives and consensus to create feasible solutions.

The Team: US Deloitte Technology Product Engineering has modernized software and product delivery, creating a scalable, cost-effective model that focuses on value and outcomes by leveraging a progressive and responsive talent structure. As Deloitte's primary internal development team, Product Engineering delivers innovative digital solutions to businesses, service lines, and internal operations with proven bottom-line results and outcomes. It helps power Deloitte's success. It is the engine that drives Deloitte, serving many of the world's largest, most respected companies. We develop and deploy cutting-edge internal and go-to-market solutions that help Deloitte operate effectively and lead in the market. Our reputation is built on a tradition of delivering with excellence.

Key Qualifications:
- A bachelor's degree in computer science, software engineering, or a related discipline. An advanced degree (e.g., MS) is preferred but not required; experience is the most relevant factor.
- Strong software engineering foundation with a deep understanding of OOP/OOD, functional programming, data structures and algorithms, software design patterns, code instrumentation, etc.
- 5+ years of proven experience with Python, Bash, PowerShell, JavaScript, C#, and Golang (preferred).
- 5+ years of proven experience with CI/CD tools (Azure DevOps and GitHub Enterprise) and Git (version control, branching, merging, handling pull requests) to automate build, test, and deployment processes.
- 5+ years of hands-on experience with security-tool automation for SAST/DAST (SonarQube, Fortify, Mend), monitoring/logging (Prometheus, Grafana, Dynatrace), and other cloud-native tools on AWS, Azure, and GCP (see the hedged sketch after this listing).
- 5+ years of hands-on experience with Infrastructure as Code (IaC) technologies like Terraform, Puppet, Azure Resource Manager (ARM), AWS CloudFormation, and Google Cloud Deployment Manager.
- 2+ years of hands-on experience with cloud-native services like data lakes, CDNs, API gateways, managed PaaS, and security services on multiple cloud providers (AWS, Azure, GCP) is preferred.
- Strong understanding of methodologies like XP, Lean, and SAFe to deliver high-quality products rapidly.
- General understanding of cloud providers' security practices and database technologies and maintenance (e.g., RDS, DynamoDB, Redshift, Aurora, Azure SQL, Google Cloud SQL).
- General knowledge of networking, firewalls, and load balancers.
- Strong preference will be given to candidates with AI/ML and GenAI experience.
- Excellent interpersonal and organizational skills, with the ability to handle diverse situations, complex projects, and changing priorities, behaving with passion, empathy, and care.

How You Will Grow: At Deloitte, our professional development plans focus on helping people at every level of their career to identify and use their strengths to do their best work every day and excel in everything they do.

Recruiting Tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our People and Culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our Purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional Development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302803
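To ground the SAST/quality-gate automation this role describes, here is a hedged Python sketch that polls a SonarQube quality-gate status for a project and fails a pipeline step accordingly. The server URL, token variable, and project key are placeholders; the endpoint is SonarQube's documented /api/qualitygates/project_status Web API, used here under that assumption.

```python
import os
import sys

import requests

SONAR_URL = os.environ.get("SONAR_URL", "https://sonar.example.com")  # placeholder server
PROJECT_KEY = "my-service"  # hypothetical project key


def quality_gate_passed() -> bool:
    """Query the quality-gate status; SonarQube token auth uses the token as the username."""
    resp = requests.get(
        f"{SONAR_URL}/api/qualitygates/project_status",
        params={"projectKey": PROJECT_KEY},
        auth=(os.environ["SONAR_TOKEN"], ""),
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["projectStatus"]["status"] == "OK"


if __name__ == "__main__":
    sys.exit(0 if quality_gate_passed() else 1)  # non-zero exit fails the CI stage
```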

Posted 3 days ago


3.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Position Summary

CORE BUSINESS OPERATIONS: The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients' heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.

ROLE
Level: Consultant

As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high-quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. As an AWS Infrastructure Engineer, you will play a crucial role in building and maintaining a cloud infrastructure on Amazon Web Services (AWS). You will also be responsible for the ownership of tasks assigned through SNOW (ServiceNow), dashboards, order forms, etc.

The work you will do includes:
- Build and operate the cloud infrastructure on AWS
- Continuously monitor the health and performance of the infrastructure and resolve any issues (see the hedged sketch after this listing)
- Use tools like CloudFormation, Terraform, or Ansible to automate infrastructure provisioning and configuration
- Administer the EC2 instances' operating systems, such as Windows and Linux
- Work with other teams to deploy secure, scalable, and cost-effective cloud solutions based on AWS services
- Implement monitoring and logging for infrastructure and applications
- Keep the infrastructure up to date with the latest security patches and software versions
- Collaborate with development, operations, and security teams to establish best practices for software development, build, deployment, and infrastructure management
- Handle tasks related to IAM, monitoring, backup, and vulnerability remediation
- Participate in performance testing and capacity-planning activities
- Documentation, weekly/bi-weekly deck preparation, and KB article updates
- Handover and on-call support during weekends on a rotational basis

Qualifications

Skills / Project Experience (Must Have):
- 3-6 years of hands-on experience with AWS Cloud, CloudFormation templates, and Windows/Linux administration
- Understanding of 2-tier, 3-tier, or multi-tier architecture
- Experience with IaaS/PaaS/SaaS
- Understanding of disaster recovery
- Networking and security expertise
- Knowledge of PowerShell, Shell, and Python
- Associate/Professional-level certification in AWS solution architecture
- ITIL Foundation certification
- Good interpersonal and communication skills
- Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries
- Knowledge of and experience working with Microsoft Office tools

Good to Have:
- Understanding of container technologies such as Docker, Kubernetes, and OpenShift
- Understanding of application and other infrastructure monitoring tools
- Understanding of the end-to-end infrastructure landscape
- Experience with virtualization platforms
- Knowledge of Chef, Puppet, Bamboo, Concourse, etc.
- Knowledge of microservices, data lakes, machine learning, etc.

Education: B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university

Prior Experience: 3-6 years of experience working with AWS, system administration, IaC, etc.

Location: Hyderabad / Pune

The Team: Deloitte Consulting LLP's Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today's complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients' business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com. For information on CBO, visit https://www.youtube.com/watch?v=L1cGlScLuX0. For information on the life of an Analyst at CBO, visit https://www.youtube.com/watch?v=CMe0DkmMQHI.

Recruiting Tips: From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our People and Culture: Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our Purpose: Deloitte's purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional Development: From entry-level employees to senior leaders, we believe there's always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302308
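As a hedged sketch of the monitoring tasks this AWS role lists, the following uses boto3 to create a CPU-utilization alarm on an EC2 instance. The region, instance ID, SNS topic, and thresholds are hypothetical placeholders.

```python
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# All identifiers below are placeholders for illustration.
cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-web-01",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,               # five-minute datapoints
    EvaluationPeriods=2,      # two consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```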

Posted 3 days ago


6.0 - 11.0 years

15 - 30 Lacs

Gurugram

Remote

Source: Naukri

Candidates can share CVs at aishwarya.joshi@espire.com

Role Description: The ideal candidate will have a strong background in the implementation, deployment, maintenance, and monitoring of Azure infrastructure. This role requires a hands-on expert who can lead complex projects, troubleshoot critical issues, and ensure the smooth operation of Azure-based environments.

Relevant Experience: Azure cloud, IaaS, PaaS, Azure DevOps, Terraform, Kubernetes

Key Responsibilities:

1. Azure Landing Zone
• Design and implement Azure Landing Zones to establish a scalable and secure foundation for cloud adoption.
• Configure Azure Resource Groups, Policies, and Role-Based Access Control (RBAC) to align with organizational governance.
• Deploy and manage networking components, including Virtual Networks, Subnets, and Network Security Groups (NSGs).
• Establish connectivity between on-premises and cloud environments using Azure ExpressRoute or VPN Gateway.
• Incorporate management groups and subscriptions to create a modular and consistent environment.

2. Automation
• Develop Infrastructure as Code (IaC) templates using Terraform, Azure Resource Manager (ARM), or Bicep.
• Automate routine maintenance tasks such as backups, patching, and scaling using Azure Automation or Logic Apps.
• Implement deployment pipelines for continuous integration and delivery (CI/CD) with Azure DevOps or GitHub Actions.
• Schedule and automate cost-optimization tasks, including resource cleanup and tagging enforcement.
• Leverage Azure Functions to streamline serverless operations for event-driven workflows.

3. Monitoring and Log Management
• Configure Azure Monitor to collect and analyse metrics for performance and health monitoring.
• Implement Azure Log Analytics and Kusto Query Language (KQL) for centralized log aggregation and analysis (see the hedged sketch after this listing).
• Set up Application Insights for end-to-end performance monitoring and diagnostics of applications.
• Establish alerting mechanisms for proactive identification and resolution of issues.
• Ensure compliance by implementing and maintaining audit logs with Azure Policy and Security Center.
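To illustrate the Log Analytics/KQL monitoring work above, here is a hedged Python sketch using the azure-monitor-query SDK; the workspace ID, table name, and query are placeholders, and the SDK usage is a sketch rather than the team's actual tooling.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient, LogsQueryStatus

WORKSPACE_ID = "00000000-0000-0000-0000-000000000000"  # placeholder Log Analytics workspace

# Hypothetical KQL: error-level traces per hour over the last day.
QUERY = """
AppTraces
| where SeverityLevel >= 3
| summarize errors = count() by bin(TimeGenerated, 1h)
| order by TimeGenerated desc
"""

client = LogsQueryClient(DefaultAzureCredential())
response = client.query_workspace(WORKSPACE_ID, QUERY, timespan=timedelta(days=1))

if response.status == LogsQueryStatus.SUCCESS:
    for table in response.tables:
        for row in table.rows:
            print(list(row))
```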

Posted 3 days ago


8.0 - 12.0 years

15 - 29 Lacs

India

On-site

Source: Glassdoor

Job Title: Technical Lead
Experience: 8 to 12 years
Location: Chennai
Domain: BFSI

Job Summary: We are seeking a versatile and highly skilled Senior Software Engineer with expertise in full-stack development, mobile application development using Flutter, and backend systems using Java/Spring Boot. The ideal candidate will have strong experience across modern development stacks, cloud platforms (AWS), containerization, and CI/CD pipelines.

Key Responsibilities:
- Design and develop scalable web, mobile, and backend applications.
- Build high-quality, performant cross-platform mobile apps using Flutter and Dart.
- Develop RESTful APIs and services using Node.js/Express and Java/Spring Boot.
- Integrate frontend components with backend logic and databases (Oracle, PostgreSQL, MongoDB).
- Work with containerization tools like Docker and orchestration platforms like Kubernetes or ROSA.
- Leverage AWS cloud services for deployment, scalability, and monitoring (e.g., EC2, S3, RDS, Lambda).
- Collaborate with cross-functional teams including UI/UX, QA, DevOps, and product managers.
- Participate in Agile ceremonies, code reviews, unit/integration testing, and performance tuning.
- Maintain secure coding practices and ensure compliance with security standards.

Required Skills & Qualifications:
- Strong programming in Java (Spring Boot), Node.js, and React.js.
- Proficiency in Flutter and Dart for mobile development.
- Experience with REST APIs, JSON, and third-party integrations.
- Hands-on experience with cloud platforms (preferably AWS).
- Strong skills in databases such as Oracle, PostgreSQL, and MongoDB.
- Experience with Git and CI/CD tools (Jenkins, GitLab CI, GitHub Actions).
- Familiarity with containerization using Docker and orchestration via Kubernetes.
- Knowledge of secure application development (OAuth, JWT, encryption).
- Solid understanding of Agile/Scrum methodologies.

Preferred Qualifications:
- Experience with Firebase, messaging queues (Kafka/RabbitMQ), and server-side rendering (Next.js).
- Familiarity with DevOps practices, infrastructure as code (Terraform/CloudFormation), and observability tools (Prometheus, ELK).
- Exposure to platform-specific integrations for Android/iOS through native channels.
- Understanding of App Store / Play Store deployment.

Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field, or equivalent practical experience.

Job Types: Full-time, Permanent
Pay: ₹1,575,371.85 - ₹2,989,972.99 per year
Benefits: Health insurance, Provident Fund
Schedule: Morning shift
Supplemental Pay: Performance bonus, yearly bonus

Application Questions:
- How many years of experience in Java?
- How many years of experience in Flutter?
- How many years of experience in the CRM model?
- What is your notice period?

Work Location: In person

Posted 3 days ago


0 years

0 Lacs

Chennai

On-site

Source: Glassdoor

Here at Appian, our core values of Respect, Work to Impact, Ambition, and Constructive Dissent & Resolution define who we are. In short, this means we constantly seek to understand the best for our customers, we go beyond completion in our work, we strive for excellence with intensity, and we embrace candid communication. These values guide our actions and shape our culture every day. When you join Appian, you'll be part of a passionate team that's dedicated to accomplishing hard things.

As a DevOps & Test Infrastructure Engineer, your goal is to design, implement, and maintain a robust, scalable, and secure AWS infrastructure to support our growing testing needs. You will be instrumental in building and automating our DevOps pipeline, ensuring efficient and reliable testing processes. This role offers the opportunity to shape our performance testing environment and contribute directly to the quality and speed of our clients' Appian software delivery.

Responsibilities
- Architecture Design: Design and architect a highly scalable and cost-effective AWS infrastructure tailored for testing purposes, considering security, performance, and maintainability.
- DevOps Pipeline Design: Architect a secure and automated DevOps pipeline on AWS, integrating tools such as Jenkins for continuous integration/continuous delivery (CI/CD) and Locust for performance testing.
- Infrastructure as Code (IaC): Implement infrastructure as code using tools like Terraform or AWS CloudFormation to enable automated deployment and scaling of the testing environment.
- Security Implementation: Implement and enforce security best practices across the AWS infrastructure and DevOps pipeline, ensuring compliance and protecting sensitive data.
- Jenkins Configuration & Administration: Install, configure, and administer Jenkins (or similar CI/CD automation platforms), including setting up build pipelines, managing plugins, and ensuring its scalability and reliability.
- Locust Configuration & Administration: Install, configure, and administer Locust for performance and load testing (see the example locustfile after this listing).
- Automation: Automate the deployment, scaling, and management of all infrastructure components and the DevOps pipeline.
- Monitoring and Logging: Implement comprehensive monitoring and logging solutions to proactively identify and resolve issues within the testing environment, including exposing testing results for consumption.
- Troubleshooting and Support: Provide expert-level troubleshooting and support for the testing infrastructure and DevOps pipeline.
- Collaboration: Work closely with development, QA, and operations teams to understand their needs and provide effective solutions.
- Documentation: Create and maintain clear and concise documentation for the infrastructure, pipeline, and processes.
- Continuous Improvement: Stay up to date with the latest AWS services and DevOps best practices, and proactively identify opportunities for improvement.

Qualifications
- Proven experience in designing and implementing scalable architectures on Amazon Web Services (AWS).
- Strong understanding of DevOps principles and practices.
- Hands-on experience with CI/CD tools such as Jenkins, including pipeline creation and administration.
- Experience with performance testing tools, preferably Locust, including test design and execution.
- Proficiency in infrastructure as code (IaC) tools such as Terraform or AWS CloudFormation.
- Solid understanding of security best practices in cloud environments.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes or AWS ECS (preferred).
- Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack, CloudWatch).
- Excellent scripting skills (e.g., Python, Bash).
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- AWS certifications (e.g., AWS Certified Solutions Architect – Associate/Professional, AWS Certified DevOps Engineer – Professional).
- Experience with other testing tools and frameworks.
- Experience with agile development methodologies.

Education: B.S. in Computer Science, Engineering, Information Systems, or a related field.

Working Conditions: Opportunity to work on enterprise-scale applications across different industries. This role is based at our office at WTC 11th floor, Old Mahabalipuram Road, SH 49A, Kandhanchavadi, Kottivakkam, Chennai, Tamil Nadu 600041, India. Appian was built on a culture of in-person collaboration, which we believe is a key driver of our mission to be the best. Employees hired for this position are expected to be in the office 5 days a week to foster that culture and ensure we continue to thrive through shared ideas and teamwork. We believe being in the office provides more opportunities to come together and celebrate working with the exceptional people across Appian.

Tools and Resources
- Training and Development: During onboarding, we focus on equipping new hires with the skills and knowledge for success through department-specific training. Continuous learning is a central focus at Appian, with dedicated mentorship and the First-Friend program being widely utilized resources for new hires.
- Growth Opportunities: Appian provides a diverse array of growth and development opportunities, including our leadership program tailored for new and aspiring managers, a comprehensive library of specialized department training through Appian University, skills-based training, and tuition reimbursement for those aiming to advance their education. This commitment ensures that employees have access to a holistic range of development opportunities.
- Community: We'll immerse you into our community rooted in respect starting on day one. Appian fosters inclusivity through our 8 employee-led affinity groups. These groups help employees build stronger internal and external networks by planning social, educational, and outreach activities to connect with Appianites and larger initiatives throughout the company.

About Appian: Appian is a software company that automates business processes. The Appian AI-Powered Process Platform includes everything you need to design, automate, and optimize even the most complex processes, from start to finish. The world's most innovative organizations trust Appian to improve their workflows, unify data, and optimize operations, resulting in better growth and superior customer experiences. For more information, visit appian.com. [Nasdaq: APPN] Follow Appian: Twitter, LinkedIn.

Appian is an equal opportunity employer that strives to attract and retain the best talent. All qualified applicants will receive consideration for employment without regard to any characteristic protected by applicable federal, state, or local law. Appian provides reasonable accommodations to applicants in accordance with all applicable laws. If you need a reasonable accommodation for any part of the employment process, please contact us by email at ReasonableAccommodations@appian.com. Please note that only inquiries concerning a request for reasonable accommodation will be responded to from this email address. Appian's Applicant & Candidate Privacy Notice
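Since this role centers on Locust-based performance testing, here is a hedged, minimal locustfile; the host, endpoints, weights, and wait times are illustrative only, not Appian's actual test suite.

```python
from locust import HttpUser, task, between


class PortalUser(HttpUser):
    """Simulated user issuing the kind of traffic a performance suite might replay."""

    wait_time = between(1, 3)  # seconds of think time between tasks
    host = "https://app.example.com"  # placeholder target

    @task(3)
    def view_dashboard(self) -> None:
        self.client.get("/dashboard")

    @task(1)
    def submit_form(self) -> None:
        self.client.post("/api/forms", json={"field": "value"})
```

A run like `locust -f locustfile.py --headless -u 50 -r 5 --run-time 10m` would spawn 50 users at 5 users/second for ten minutes and report latency percentiles at the end.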

Posted 3 days ago


4.0 years

0 Lacs

Chennai

On-site

Source: Glassdoor

Job Highlights: We are seeking a skilled and passionate individual with a minimum of 4 years as a Solution Architect, with expertise in full-stack development and a deep understanding of AWS cloud technologies. The ideal candidate will have hands-on experience across a range of backend and frontend technologies and be adept at designing secure, scalable, and high-performance cloud-native applications, having managed end-to-end hosting of both frontend and backend components with 360-degree application support and performance optimization. If you're driven by innovation and eager to contribute to digital transformation initiatives, we invite you to apply.

Key Responsibilities:
- Design, develop, and deploy web and mobile applications using Spring Boot, Java, Python, Node.js (backend); Angular, Ionic, TypeScript, HTML5, CSS3, Bootstrap, React, React Native (frontend); MEAN and MERN stacks; MongoDB.
- Architect and manage cloud solutions using AWS and Microsoft Azure services such as EC2, Lambda, S3, RDS, ECS, etc.; other cloud technologies are an added advantage.
- Develop and manage CI/CD pipelines using GitLab to automate testing and deployment.
- Automate infrastructure provisioning and management using Terraform (IaC).
- Build modern, responsive UIs with Bootstrap and Ionic.
- Implement best practices for cloud security, performance optimization, and monitoring.
- Manage end-to-end database operations, including schema design, query optimization, indexing, and security.
- Work closely with stakeholders to shape cloud strategy, evaluate modernization opportunities, and present business cases.
- Create technical documentation and deliver effective presentations for internal and client-facing teams.
- Lead the integration of AI/ML models and Large Language Models (LLMs) into real-world applications.
- Collaborate with cross-functional teams, including UI/UX, DevOps, QA, and Data Science.
- Manage the team; must have handled a minimum of 5 projects.

Strong hands-on experience in:
- Spring Boot, Angular, Ionic, Bootstrap, Flutter
- AWS core services (EC2, S3, Lambda, RDS, ECS, IAM, etc.), Microsoft Azure
- Terraform for infrastructure automation
- CI/CD pipelines (preferably GitLab)
- Database systems: MySQL, PostgreSQL, DynamoDB, MongoDB, etc.
- Testing tools: Swagger (OpenAPI), Postman

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science/Engineering or equivalent.
- Strong leadership skills and experience mentoring tech teams.
- Clear understanding of microservices, RESTful APIs, and security protocols.
- Experience with containerization (Docker, Kubernetes) is a plus.
- Certifications in AWS, Azure, or relevant fields are preferred.

Job Types: Full-time, Internship
Benefits: Paid time off
Schedule: Day shift
Education: Master's (Preferred)
Experience: Solution architect: 4 years (Preferred)
Work Location: In person

Posted 3 days ago


6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Job Description: We are seeking a highly skilled and motivated Google Cloud Engineer to join our dynamic engineering team. In this role, you will be instrumental in designing, building, deploying, and maintaining our cloud infrastructure and applications on Google Cloud Platform (GCP). You will work closely with development, operations, and security teams to ensure our cloud environment is scalable, secure, highly available, and cost-optimized. If you are passionate about cloud-native technologies, automation, and solving complex infrastructure challenges, we encourage you to apply.

What You Will Do
- Design, implement, and manage robust, scalable, and secure cloud infrastructure on Google Cloud Platform (GCP) using Infrastructure as Code (IaC) tools like Terraform.
- Deploy, configure, and manage core GCP services such as Compute Engine, Kubernetes Engine (GKE), Cloud SQL, Cloud Storage, Cloud Functions, BigQuery, Pub/Sub, and networking components (VPC, Cloud Load Balancing, Cloud CDN); see the hedged sketch after this listing.
- Develop and maintain CI/CD pipelines for automated deployment and release management using tools like Cloud Build, GitLab CI/CD, GitHub Actions, or Jenkins.
- Implement and enforce security best practices within the GCP environment, including IAM, network security, data encryption, and compliance adherence.
- Monitor cloud infrastructure and application performance, identify bottlenecks, and implement solutions for optimization and reliability.
- Troubleshoot and resolve complex infrastructure and application issues in production and non-production environments.
- Collaborate with development teams to ensure applications are designed for cloud-native deployment, scalability, and resilience.
- Participate in on-call rotations for critical incident response and provide timely resolution of production issues.
- Create and maintain comprehensive documentation for cloud architecture, configurations, and operational procedures.
- Stay current with new GCP services, features, and industry best practices, proposing and implementing improvements as appropriate.
- Contribute to cost-optimization efforts by identifying and implementing efficiencies in cloud resource utilization.

What Experience You Need
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 6+ years of hands-on experience with C#, .NET Core, .NET Framework, MVC, Web API, Entity Framework, and SQL Server.
- 3+ years of experience with cloud platforms (GCP preferred), including designing and deploying cloud-native applications.
- 3+ years of experience with source code management (Git), CI/CD pipelines, and Infrastructure as Code.
- Strong experience with JavaScript and a modern JavaScript framework, VueJS preferred.
- Proven ability to lead and mentor development teams.
- Strong understanding of microservices architecture and serverless computing.
- Experience with relational databases (SQL Server, PostgreSQL).
- Excellent problem-solving, analytical, and communication skills.
- Experience working in Agile/Scrum environments.

What Could Set You Apart
- GCP Cloud Certification.
- UI development experience (e.g., HTML, JavaScript, Angular, Bootstrap).
- Experience in Agile environments (e.g., Scrum, XP).
- Relational database experience (e.g., SQL Server, PostgreSQL).
- Experience with Atlassian tooling (e.g., JIRA, Confluence) and GitHub.
- Working knowledge of Python.
- Excellent problem-solving and analytical skills and the ability to work well in a team.
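As a hedged example of the GCP services named above, this sketch publishes a message to a Pub/Sub topic using the official Python client; the project and topic IDs are placeholders, not values from the posting.

```python
from google.cloud import pubsub_v1

PROJECT_ID = "my-project"   # placeholder GCP project
TOPIC_ID = "deploy-events"  # placeholder topic

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path(PROJECT_ID, TOPIC_ID)

# Payloads are bytes; keyword arguments become string message attributes.
future = publisher.publish(topic_path, b'{"status": "deployed"}', service="orders")
print(f"published message id: {future.result(timeout=30)}")
```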

Posted 3 days ago


5.0 - 7.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain, and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration, and continuous integration). You will conduct quality-control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend, and implement changes to enhance the effectiveness of QA strategies.

What You Will Do
- Independently develop scalable and reliable automated tests and frameworks for testing software solutions.
- Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes, and environments.
- Develop regression suites, develop automation scenarios, and move automation to an agile continuous-testing model.
- Proactively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.

What Experience You Need
- Bachelor's degree in a STEM major or equivalent experience
- 5-7 years of software testing experience
- Able to create and review test automation according to specifications
- Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
- Creation and use of big-data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others with respect to software validation
- Created test strategies and plans
- Led complex testing efforts or projects
- Participated in sprint planning as the test lead
- Collaborated with Product Owners, SREs, and Technical Architects to define testing strategies and plans
- Design and development of microservices using Java, Spring Boot, GCP SDKs, and GKE/Kubernetes
- Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm charts, and Terraform constructs
- Cloud certification strongly preferred

What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
- Attention to Detail: Define test-case candidates for automation that are outside of product specifications (i.e., negative testing); create thorough and accurate documentation of all work, including status updates to summarize project highlights; validate that processes operate properly and conform to standards.
- Automation: Automate defined test cases and test suites per project.
- Collaboration: Collaborate with Product Owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads, and architects on functional and non-functional test strategies and plans.
- Execution: Develop scalable and reliable automated tests; develop performance-testing scripts to assure products adhere to the documented SLO/SLI/SLAs; specify the need for test-data types for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points.
- Quality Control: Perform defect analysis and in-depth technical root-cause analysis, identifying trends and recommendations to resolve complex functional issues and improve processes; analyze results of functional and non-functional tests and make recommendations for improvements.
- Performance / Resilience: Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct performance and resilience testing to ensure the products meet SLAs/SLOs.
- Quality Focus: Review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation, including status and project updates.
- Risk Mitigation: Work with Product Owners, QE, and development team leads to track and determine prioritization of defect fixes.

Posted 3 days ago


8.0 years

28 - 30 Lacs

Chennai

On-site

Source: Glassdoor

Experience: 8+ years
Budget: 30 LPA (including variable pay)
Location: Bangalore, Hyderabad, Chennai (hybrid)
Shift Timing: 2 PM - 11 PM

ETL Development Lead (8+ years)

Responsibilities and required experience:
- Lead and mentor a team of Talend ETL developers.
- Provide technical direction and guidance on ETL/data integration development to the team.
- Design complex data integration solutions using Talend and AWS.
- Collaborate with stakeholders to define project scope, timelines, and deliverables.
- Contribute to project planning, risk assessment, and mitigation strategies.
- Ensure adherence to project timelines and quality standards.
- Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.
- Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components.
- Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
- Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality.
- Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes.
- Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions.
- Perform unit testing and participate in system integration testing of ETL processes.
- Monitor and maintain Talend environments, including job scheduling and performance tuning.
- Document technical specifications, data-flow diagrams, and ETL processes.
- Stay up to date with the latest Talend features, best practices, and industry trends.
- Participate in code reviews and contribute to the establishment of development standards.
- Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components.
- Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT).
- Strong SQL skills for data querying and manipulation.
- Experience with data profiling, data-quality checks, and error handling within ETL processes.
- Familiarity with job-scheduling tools and monitoring frameworks.
- Excellent problem-solving, analytical, and communication skills.
- Ability to work independently and collaboratively within a team environment.
- Basic understanding of AWS services: EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, Security Groups, Route 53, network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB.
- Understanding of AWS data integration services: Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions (see the hedged sketch after this listing).

Preferred Qualifications:
- Experience leading and mentoring a team of 8+ Talend ETL developers.
- Experience working with US healthcare customers.
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Talend certifications (e.g., Talend Certified Developer); AWS Certified Cloud Practitioner/Data Engineer Associate.
- Experience with AWS data and infrastructure services.
- Basic working knowledge of Terraform and GitLab is required.
- Experience with scripting languages such as Python or shell scripting.
- Experience with agile development methodologies.
- Understanding of big-data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.

Job Type: Full-time
Pay: ₹2,800,000.00 - ₹3,000,000.00 per year
Schedule: Day shift
Work Location: In person
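To ground the AWS Glue integration noted above, here is a hedged boto3 sketch that starts a Glue job and polls its state until it finishes; the region and job name are hypothetical placeholders.

```python
import time

import boto3

glue = boto3.client("glue", region_name="us-east-1")
JOB_NAME = "nightly-claims-etl"  # placeholder Glue job


def run_and_wait() -> str:
    """Kick off a Glue job run and block until it reaches a terminal state."""
    run_id = glue.start_job_run(JobName=JOB_NAME)["JobRunId"]
    while True:
        state = glue.get_job_run(JobName=JOB_NAME, RunId=run_id)["JobRun"]["JobRunState"]
        print(f"run {run_id}: {state}")
        if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
            return state
        time.sleep(30)


if __name__ == "__main__":
    run_and_wait()
```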

Posted 3 days ago


0 years

0 - 0 Lacs

Coimbatore

On-site

Source: Glassdoor

We are seeking AWS Cloud DevOps Engineers who will be part of the Engineering team, collaborating with software development, quality assurance, and IT operations teams to deploy and maintain production systems in the cloud. This role requires an engineer who is passionate about provisioning and maintaining reliable, secure, and scalable production systems. We are a small team of highly skilled engineers and look forward to adding a new member who wishes to advance their career through continuous learning. Selected candidates will be an integral part of a team of passionate and enthusiastic IT professionals, and will have tremendous opportunities to contribute to the success of the products.

What you will do

Deploy, automate, maintain, manage, and monitor an AWS production system, including software applications and cloud-based infrastructure
Monitor system performance and troubleshoot issues
Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS)
Use DevOps principles and methodologies to enable the rapid deployment of software and services by coordinating software development, quality assurance, and IT operations
Make sure AWS production systems are reliable, secure, and scalable
Create and enforce policies related to AWS usage, including sample tagging, instance type usage, and data storage (see the Terraform sketch after this listing)
Resolve problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques
Automate different operational processes by designing, maintaining, and managing tools
Provide primary operational support and engineering for all Cloud and Enterprise deployments
Lead the organisation's platform security efforts by collaborating with the core engineering team
Design, build, and maintain containerization using Docker, and manage container orchestration with Kubernetes
Set up monitoring, alerting, and logging tools (e.g., Zabbix) to ensure system reliability
Collaborate with development, QA, and operations teams to design and implement CI/CD pipelines with Jenkins
Develop policies, standards, and guidelines for IaC and CI/CD that teams can follow
Automate and optimize infrastructure tasks using tools like Terraform, Ansible, or CloudFormation
Support InfoSec scans and compliance audits
Ensure security best practices in the cloud environment, including IAM management, security groups, and network firewalls
Contribute to the optimization of system performance and cost
Promote knowledge sharing activities within and across different product teams by creating and engaging in communities of practice and through documentation, training, and mentoring
Keep skills up to date through ongoing self-directed training

What skills are required

Ability to learn new technologies quickly
Ability to work both independently and in collaborative teams to communicate design and build ideas effectively
Problem-solving and critical-thinking skills, including the ability to organize, analyze, interpret, and disseminate information
Excellent spoken and written communication skills
Must be able to work as part of a diverse team, as well as independently
Ability to follow departmental and organizational processes and meet established goals and deadlines
Knowledge of EC2 (Auto Scaling, Security Groups), VPC, SQS, SNS, Route 53, RDS, S3, ElastiCache, IAM, CLI
Server setup/configuration (Tomcat, nginx)
Experience with AWS—including EC2, S3, CloudTrail, and APIs
Solid understanding of EC2 On-Demand, Spot Market, and Reserved Instances
Knowledge of Infrastructure as Code tools including Terraform, Ansible, or CloudFormation
Knowledge of scripting and automation using Python, Bash, or Perl to automate AWS tasks
Knowledge of code deployment tools such as Ansible and CloudFormation scripts
Basic knowledge of network architecture, DNS, and load balancing
Knowledge of containerization technologies like Docker and orchestration platforms like Kubernetes
Understanding of monitoring and logging tools (e.g., Zabbix)
Familiarity with version control systems (Git)
Knowledge of microservices architecture and deployment
Bachelor's degree in Engineering or Master's degree in Computer Science

Note: Only candidates who graduated in 2023 or 2024 may apply for this internship. This is an internship-to-hire position; candidates who complete the internship will be offered a full-time position based on performance.

Job Types: Full-time, Permanent, Fresher, Internship Contract length: 6 months Pay: ₹5,500.00 - ₹7,000.00 per month Schedule: Day shift Monday to Friday Morning shift Expected Start Date: 01/07/2025
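The tagging and instance-type policies this listing mentions can be encoded directly in Terraform rather than enforced by hand. The following is a minimal sketch under assumed names; the tag keys, allowed instance types, and AMI ID are illustrative, not taken from the posting:

```hcl
# Minimal sketch: enforcing a tagging and instance-type policy in Terraform.
# Tag keys, allowed instance types, and the AMI ID are illustrative assumptions.

variable "instance_type" {
  type        = string
  description = "EC2 instance type, restricted to an approved list"
  default     = "t3.micro"

  validation {
    condition     = contains(["t3.micro", "t3.small"], var.instance_type)
    error_message = "Instance type must be one of the approved types."
  }
}

provider "aws" {
  region = "ap-south-1"

  # default_tags applies these tags to every resource this provider creates,
  # which is one way to enforce a consistent tagging policy.
  default_tags {
    tags = {
      Team        = "devops"
      Environment = "production"
      CostCenter  = "cc-1234"
    }
  }
}

resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = var.instance_type
}
```

With this in place, a disallowed instance type fails at plan time, and missing tags never reach production in the first place.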

Posted 3 days ago

Apply

5.0 years

3 - 7 Lacs

Ahmedabad

On-site

GlassDoor logo

Location: Ahmedabad / Pune
Required Experience: 5+ Years (Immediate Joiner Preferred)

We are looking for a highly skilled Lead Data Engineer (Snowflake) to join our team. The ideal candidate will have extensive experience with Snowflake and cloud platforms, along with a strong understanding of ETL processes, data warehousing concepts, and programming languages. If you have a passion for working with large datasets, designing scalable database schemas, and solving complex data problems, we would love to hear from you.

Key Responsibilities:
Design, implement, and optimize data pipelines and workflows using Apache Airflow
Develop incremental and full-load strategies with monitoring, retries, and logging
Build scalable data models and transformations in dbt, ensuring modularity, documentation, and test coverage
Develop and maintain data warehouses in Snowflake
Ensure data quality, integrity, and reliability through validation frameworks and automated testing
Tune performance through clustering keys, warehouse scaling, materialized views, and query optimization
Monitor job performance and resolve data pipeline issues proactively
Build and maintain data quality frameworks (null checks, type checks, threshold alerts)
Partner with data analysts, scientists, and business stakeholders to translate reporting and analytics requirements into technical specifications

Required Skills & Qualifications:
Snowflake (data modeling, performance tuning, access control, external tables, streams & tasks)
Apache Airflow (DAG design, task dependencies, dynamic tasks, error handling)
dbt (Data Build Tool) (modular SQL development, Jinja templating, testing, documentation)
Proficiency in SQL, Spark, and Python
Experience building data pipelines on cloud platforms like AWS, GCP, or Azure
Strong knowledge of data warehousing concepts and ELT best practices
Familiarity with version control systems (e.g., Git) and CI/CD practices
Familiarity with infrastructure-as-code tools like Terraform for provisioning Snowflake or Airflow environments (see the sketch after this listing)
Excellent problem-solving skills and the ability to work independently

Perks: Flexible Timings | 5 Days Working | Healthy Environment | Celebration | Learn and Grow | Build the Community | Medical Insurance Benefit
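Provisioning Snowflake with Terraform, as the last qualification mentions, might look like the following minimal sketch. It assumes the community Snowflake provider; the object names, sizes, and version constraint are illustrative:

```hcl
# Minimal sketch: provisioning a Snowflake database and warehouse with Terraform.
# Assumes the community Snowflake provider; all names here are illustrative.

terraform {
  required_providers {
    snowflake = {
      source  = "Snowflake-Labs/snowflake"
      version = "~> 0.90"
    }
  }
}

provider "snowflake" {
  # Credentials are typically supplied via environment variables
  # (e.g., SNOWFLAKE_ACCOUNT, SNOWFLAKE_USER) rather than hard-coded here.
}

resource "snowflake_database" "analytics" {
  name    = "ANALYTICS"
  comment = "Managed by Terraform"
}

resource "snowflake_warehouse" "etl" {
  name           = "ETL_WH"
  warehouse_size = "XSMALL"
  auto_suspend   = 60 # seconds of inactivity before suspending, to control cost
}
```

Keeping warehouses and databases in code like this makes access-control and sizing changes reviewable, which pairs naturally with the CI/CD practices the listing asks for.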

Posted 3 days ago

Apply

3.0 - 5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

Remote

Linkedin logo

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

The opportunity

We are looking for a skilled Cloud DevOps Engineer with expertise in both AWS and Azure platforms. This role is responsible for end-to-end DevOps support, infrastructure automation, CI/CD pipeline troubleshooting, and incident resolution across cloud environments. The role will handle escalations, lead root cause analysis, and collaborate with engineering and infrastructure teams to deliver high-availability services. You will also contribute to enhancing runbooks and SOPs, and mentor junior engineers.

Your Key Responsibilities
Act as a primary escalation point for DevOps-related and infrastructure-related incidents across AWS and Azure.
Provide troubleshooting support for CI/CD pipeline issues, infrastructure provisioning, and automation failures.
Support containerized application environments using Kubernetes (EKS/AKS), Docker, and Helm.
Create and refine SOPs, automation scripts, and runbooks for efficient issue handling.
Perform deep-dive analysis and RCA for recurring issues and implement long-term solutions.
Handle access management, IAM policies, VNet/VPC setup, security group configurations, and load balancers.
Monitor and analyze logs using AWS CloudWatch, Azure Monitor, and other tools to ensure system health.
Collaborate with engineering, cloud platform, and security teams to maintain stable and secure environments.
Mentor junior team members and contribute to continuous process improvements.

Skills And Attributes For Success
Hands-on experience with CI/CD tools like GitHub Actions, Azure DevOps Pipelines, and AWS CodePipeline.
Expertise in Infrastructure as Code (IaC) using Terraform; good understanding of CloudFormation and ARM Templates (see the sketch after this listing).
Familiarity with scripting languages such as Bash and Python.
Deep understanding of AWS (EC2, S3, IAM, EKS) and Azure (VMs, Blob Storage, AKS, AAD).
Container orchestration and management using Kubernetes, Helm, and Docker.
Experience with configuration management and automation tools such as Ansible.
Strong understanding of cloud security best practices, IAM policies, and compliance standards.
Experience with ITSM tools like ServiceNow for incident and change management.
Strong documentation and communication skills.

To qualify for the role, you must have
3 to 5 years of experience in DevOps, cloud infrastructure operations, and automation.
Hands-on expertise in AWS and Azure environments.
Proficiency in Kubernetes, Terraform, CI/CD tooling, and automation scripting.
Experience in a 24x7 rotational support model.
Relevant certifications in AWS and Azure (e.g., AWS DevOps Engineer, Azure Administrator Associate).

Technologies and Tools

Must haves:
Cloud Platforms: AWS, Azure
CI/CD & Deployment: GitHub Actions, Azure DevOps Pipelines, AWS CodePipeline
Infrastructure as Code: Terraform
Containerization: Kubernetes (EKS/AKS), Docker, Helm
Logging & Monitoring: AWS CloudWatch, Azure Monitor
Configuration & Automation: Ansible, Bash
Incident & ITSM: ServiceNow or equivalent
Certification: AWS and Azure relevant certifications

Good to have:
Cloud Infrastructure: CloudFormation, ARM Templates
Security: IAM Policies, Role-Based Access Control (RBAC), Security Hub
Networking: VPC, Subnets, Load Balancers, Security Groups (AWS/Azure)
Scripting: Python/Bash
Observability: OpenTelemetry, Datadog, Splunk
Compliance: AWS Well-Architected Framework, Azure Security Center

What We Look For
Enthusiastic learners with a passion for cloud technologies and DevOps practices.
Problem solvers with a proactive approach to troubleshooting and optimization.
Team players who can collaborate effectively in a remote or hybrid work environment.
Detail-oriented professionals with strong documentation skills.

What We Offer
EY Global Delivery Services (GDS) is a dynamic and truly global delivery network. We work across six locations – Argentina, China, India, the Philippines, Poland and the UK – and with teams from all EY service lines, geographies and sectors, playing a vital role in the delivery of the EY growth strategy. From accountants to coders to advisory consultants, we offer a wide variety of fulfilling career opportunities that span all business disciplines. In GDS, you will collaborate with EY teams on exciting projects and work with well-known brands from across the globe. We'll introduce you to an ever-expanding ecosystem of people, learning, skills and insights that will stay with you throughout your career.

Continuous learning: You'll develop the mindset and skills to navigate whatever comes next.
Success as defined by you: We'll provide the tools and flexibility, so you can make a meaningful impact, your way.
Transformative leadership: We'll give you the insights, coaching and confidence to be the leader the world needs.
Diverse and inclusive culture: You'll be embraced for who you are and empowered to use your voice to help others find theirs.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
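As a concrete illustration of the Terraform IaC expertise this role asks for, an AKS cluster definition might look like the following minimal sketch; the resource group, cluster name, region, and node sizes are illustrative assumptions:

```hcl
# Minimal sketch: provisioning an AKS cluster with Terraform's azurerm provider.
# Resource group, names, region, and node sizes are illustrative assumptions.

provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "devops" {
  name     = "rg-devops-demo"
  location = "Central India"
}

resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-devops-demo"
  location            = azurerm_resource_group.devops.location
  resource_group_name = azurerm_resource_group.devops.name
  dns_prefix          = "aksdevopsdemo"

  default_node_pool {
    name       = "default"
    node_count = 2
    vm_size    = "Standard_DS2_v2"
  }

  # A system-assigned managed identity avoids handling service principal secrets.
  identity {
    type = "SystemAssigned"
  }
}
```

The same configuration pattern (provider block, resource group, cluster resource) carries over to EKS on AWS, which is what makes Terraform a practical common layer across the two clouds this role spans.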

Posted 3 days ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

About Our Client: Distinguished Founders | Team Encultured: Transparent, "No-Heroes", Shared Ownership, Continuous Learning | Unique Serial Entrepreneurs | Multiple High-Value World Famous Exits

The Role: You will lead the development of a high-scale AI Prediction Platform powering critical decisions. You will lead engineering for a data-intensive product, owning architecture, team growth, and platform scalability.

What You'll Own
End-to-end development of the AI Prediction Platform: architecture, code quality, performance, and system integration.
Direct management of a 10–15 member engineering team (scaling to ~20). Set direction, grow leaders, and foster a high-performance culture rooted in shared ownership and transparency.
Translate business priorities into robust technical execution across product, design, and data functions (North America + India).
Serve as the technical face of engineering internally and externally, owning escalations, technical positioning, and stakeholder trust.

Technical Scope
Tech Stack: React (TypeScript), FastAPI, Python, Databricks, Dagster, Terraform, AWS, dltHub, Nixtla, LangChain/LangGraph.
Tools & Standards: Jest, Playwright, Pytest, Azure DevOps, Docker, Checkov, SonarCloud.
Deep experience with full-stack engineering, distributed systems, and scalable data pipelines is essential.
Hands-on background with modern SaaS architecture, TDD, and infrastructure as code.

What We're Looking For
10+ years of engineering experience with 5+ years leading engineering teams or teams-of-teams.
Proven success building complex B2B or enterprise SaaS products at scale.
Strong recent hands-on experience (Python, SQL, React, etc.) with architectural and production ownership.
Experience managing and growing distributed engineering teams.
Deep understanding of system design, DevOps culture, and AI/ML-enabled platforms.
Strong cross-functional leadership with clarity in communication and execution alignment.

Write to sanish@careerxperts.com to get connected!

Posted 3 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

About American Airlines: To Care for People on Life's Journey®. Together with our American Eagle regional partners, we offer thousands of flights daily to more than 350 destinations in more than 60 countries. American Airlines is transforming the way it delivers technology to its customers and team members worldwide. American's Tech Hub in Hyderabad, India, is our latest technology office location and home to team members who drive technical innovation and engineer unrivalled digital products to best serve American's customers and team members. With U.S. tech hubs in Dallas-Fort Worth, Texas and Phoenix, Arizona, our new team in Hyderabad, India enables better support of our 24/7 operation and positions American to deliver industry-leading technology solutions that create a world-class customer experience.

Cloud Engineering

What you'll do: As noted above, this list is intended to reflect the current job, but there may be additional essential functions (and certainly non-essential job functions) that are not referenced. Management will modify the job or require other tasks be performed whenever it is deemed appropriate to do so, observing, of course, any legal obligations including any collective bargaining obligations.

Be a part of the Business Intelligence Platform team and ensure all our systems are up and running and performing optimally: Cognos, Power BI, Tableau, Alteryx, and Grafana.
Support automation of platform infrastructure related processes using PowerShell, Python, and other tools to help platform stability and scalability.
Perform troubleshooting of platform-related issues and other complex issues with cloud BI solutions: Windows & Linux servers, IIS, application gateways, firewalls and networks, complex SQL, etc.
Perform multiple aspects involved in the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), data modeling, testing, performance tuning, deployments, consumption, BI, alerting, and production support.
Provide technical leadership and collaborate within a team environment as well as work independently.
Be a part of a DevOps team that completely owns and supports their product.
Lead development of coding standards, best practices, and privacy and security guidelines.
Make sure the systems are security compliant and patched as per Cybersecurity guidelines.

All you'll need for success:

Minimum Qualifications - Education & Prior Job Experience:
Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering or related technical discipline, or equivalent experience/training
3 years of business intelligence development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions
3 years of data analytics experience using SQL
2 years of cloud development and data lake experience (prefer Microsoft Azure) including Azure EventHub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Data Lake, Azure Power Apps and Power BI

Combination of development, administration & support experience in several of the following tools/platforms required:
Scripting: Python, SQL, PowerShell
Basic Azure infrastructure experience: servers, networking, firewalls, storage accounts, app gateways, etc.
CI/CD: GitHub, Azure DevOps, Terraform
BI analytics tool administration on any one of the platforms: Cognos, Tableau, Power BI, Alteryx

Preferred Qualifications - Education & Prior Job Experience:
3+ years of data analytics experience, specifically in business intelligence development, requirements gathering, and training end users
3+ years administering data platforms (Tableau, Cognos, or Power BI) at scale
3+ years of analytics solution development using agile, DevOps, and a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions
Airline industry experience

Skills, Licenses & Certifications:
Certification in administration of any BI tool
Expertise with the Azure technology stack for data management, data ingestion, capture, processing, curation, and creating consumption layers
Expertise in providing practical direction within Azure-native cloud services

Posted 3 days ago

Apply

4.0 years

0 Lacs

Noida

On-site

GlassDoor logo

About the Role

HashiCorp is looking for a high-caliber customer-facing engineering professional to join its Support Engineering team in Noida, India. This is an exciting opportunity to join a small team and have a direct impact on HashiCorp's fast-growing business. This highly visible position will be an integral part of both the support engineering and Terraform Open Source/Enterprise teams. You are a fit if you thrive in a fast-paced culture that values essential communication, collaboration, and results. You are a self-motivated, detail-oriented individual with an eye for automation, process improvement, and problem solving.

Reporting to the Manager, Support Engineering, the Support Engineer will be a key member of the Customer Success organization and will directly impact customer satisfaction and success. The Support Engineer will troubleshoot complex issues related to Terraform Enterprise and independently work to find viable solutions. They will contribute to product growth and development via weekly product and marketing meetings. The Support Engineer will attend customer meetings as needed to help identify, debug, and resolve customer issues, and is expected to be a liaison between the customer and HashiCorp engineering. When possible, the Support Engineer will update and improve product documentation, guide feature development, and implement bug fixes based on customer feedback.

RESPONSIBILITIES
Triage and solve incoming support requests via Zendesk within SLA
Document and record all activity and communication with customers in accordance with both internal and external security standards
Reproduce and debug customer issues by building or using existing tooling or configurations
Collaborate with engineers, sales engineers, sales representatives, and technical account managers to schedule, coordinate, and lead customer installs or debugging calls
Contribute to creating knowledge base articles and best practices guides
Continuously improve process and tools for normal, repetitive support tasks
Periodic on-call rotation for production-down issues
Weekly days off scheduled every week on rotation on any day of the week

REQUIREMENTS
4+ years Support Engineering, Software Engineering, or System Administration experience
Expertise in Open Source and SaaS is a major advantage
Excellent presence; strong written and verbal communication skills
Upbeat, passionate, and unparalleled customer focus
Well-organized, with an excellent work ethic, attention to detail, and a self-starting attitude
Experience managing and influencing change in organizations
Working knowledge of Docker and Kubernetes
Familiarity with networking concepts
Experience developing a program, script, or tool that was released or used is an advantage
Strong understanding of Linux or Windows command line environments
Interest in cloud adoption and technology at scale

Goals:

30 days: you should be able to -
Write a simple TF configuration and apply it in TFE to deploy infrastructure (see the sketch after this listing)
Develop a holistic understanding of (P)TFE and its interaction with the TF ecosystem
Successfully perform all common workflows within Terraform Enterprise
Make one contribution to extend or improve product documentation or install guides
Answer Level 1 support inquiries with minimal assistance

60 days: you should be able to -
Effectively triage and respond to Level 1 & 2 inquiries independently
Provision and bootstrap a (P)TFE instance with low-touch from engineering
Ride along on 1-2 live customer install calls
Locate and unpack the customer log files and gain familiarity with their contents
Apply TF configurations to deploy infrastructure in AWS, Azure, and Google Cloud
Author one customer knowledge base article from an area of subject matter expertise

90 days: you should be able to -
Effectively triage and respond to a production-down issue with minimal assistance
Run point on a live customer install without assistance
Independently find points of error and identify root cause in the customer log files and report relevant details to engineering
Implement small bug fixes or feature improvements
Reproduce a TF bug or error by creating a suitable configuration

EDUCATION
Bachelor's degree in Computer Science, IT, Technical Writing, or equivalent professional experience

#LI-Hybrid #LI-SG1

"HashiCorp has been acquired by IBM and will be integrated into the IBM organization. HashiCorp will be the hiring entity. By proceeding with this application you understand that HashiCorp will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: link to IBM privacy statement."
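The 30-day goal above ("write a simple TF configuration and apply it in TFE") might look like the following minimal sketch. The TFE hostname, organization, and workspace names are illustrative assumptions:

```hcl
# Minimal sketch: a simple Terraform configuration that could be applied in
# Terraform Enterprise (TFE). Hostname, organization, and workspace names
# are illustrative assumptions.

terraform {
  cloud {
    hostname     = "tfe.example.com" # placeholder TFE instance
    organization = "acme"

    workspaces {
      name = "getting-started"
    }
  }

  required_providers {
    random = {
      source  = "hashicorp/random"
      version = "~> 3.6"
    }
  }
}

# A provider-less resource keeps the first apply free of cloud credentials.
resource "random_pet" "server_name" {
  length = 2
}

output "server_name" {
  value = random_pet.server_name.id
}
```

Running `terraform init` against this configuration registers the workspace, and `terraform apply` then executes remotely in TFE, which is exactly the workflow a support engineer walks customers through.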

Posted 3 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Job Description

The Global Data Insight & Analytics organization is looking for a top-notch Software Engineer who also has Machine Learning knowledge and experience to join our team and drive the next generation of the AI/ML (Mach1ML) platform. In this role you will work in a small, cross-functional team. The position will collaborate directly and continuously with other engineers, business partners, product managers and designers from distributed locations, and will release early and often.

The team you will be working on is focused on building the Mach1ML platform – an AI/ML enablement platform to democratize Machine Learning across the Ford enterprise (like OpenAI's GPT, Facebook's FBLearner, etc.) to deliver next-gen analytics innovation. We strongly believe that data has the power to help create great products and experiences which delight our customers. We believe that actionable and persistent insights, based on a high-quality data platform, help business and engineering make more impactful decisions. Our ambitions reach well beyond existing solutions, and we are in search of innovative individuals to join this Agile team. This is an exciting, fast-paced role which requires outstanding technical and organization skills combined with critical thinking, problem-solving and agile management tools to support team success.

Responsibilities

What you'll be able to do: As a Software Engineer, you will work on developing features for the Mach1ML platform and support customers in model deployment using the Mach1ML platform on GCP and on-prem. You will follow Rally to manage your work. You will incorporate an understanding of product functionality and customer perspective for model deployment. You will work on cutting-edge technologies such as GCP, Kubernetes, Docker, Seldon, Tekton, Airflow, Rally, etc.

Position Responsibilities:
Work closely with the Tech Anchor, Product Manager and Product Owner to deliver machine learning use cases using the Ford Agile Framework.
Work with Data Scientists and ML engineers to tackle challenging AI problems.
Work specifically on the Deploy team to drive model deployment and AI/ML adoption with other internal and external systems.
Help innovate by researching state-of-the-art deployment tools and share knowledge with the team.
Lead by example in use of Paired Programming for cross training/upskilling, problem solving, and speed to delivery.
Leverage the latest GCP, CI/CD, and ML technologies.
Critical Thinking: Able to influence the strategic direction of the company by finding opportunities in large, rich data sets and crafting and implementing data-driven strategies that fuel growth, including cost savings, revenue, and profit.
Modelling: Assess and evaluate the impacts of missing/unusable data; design and select features; develop and implement statistical/predictive models using advanced algorithms on diverse sources of data; and test and validate models for forecasting, natural language processing, pattern recognition, machine vision, supervised and unsupervised classification, decision trees, neural networks, etc.
Analytics: Leverage rigorous analytical and statistical techniques to identify trends and relationships between different components of data, draw appropriate conclusions, and translate analytical findings and recommendations into business strategies or engineering decisions - with statistical confidence.
Data Engineering: Experience with crafting ETL processes to source and link data in preparation for model/algorithm development. This includes domain expertise of data sets in the environment, third-party data evaluations, and data quality.
Visualization: Build visualizations to connect disparate data, find patterns and tell engaging stories. This includes both scientific and geographic visualization, using applications such as Seaborn, Qlik Sense/Power BI/Tableau/Looker Studio, etc.

Qualifications

Minimum Requirements we seek:
Bachelor's or master's degree in computer science engineering or a related field, or a combination of education and equivalent experience.
3+ years of experience in full stack software development.
3+ years of experience in cloud technologies & services, preferably GCP.
3+ years of experience practicing statistical methods and their accurate application, e.g., ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multi-variate analysis, neural networks, causal inference, Gaussian regression, etc.
3+ years of experience with Python, SQL, and BigQuery.
Experience with SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.
Experience in training, building, and deploying ML and DL models.
Experience with HuggingFace, Chainlit, Streamlit, and React.
Ability to understand technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end.
Ability to adapt quickly to open-source products & tools to integrate with ML platforms.
Building and deploying models (scikit-learn, DataRobot, TensorFlow, PyTorch, etc.).
Developing and deploying in on-prem and cloud environments using Kubernetes, Tekton, OpenShift, Terraform, and Vertex AI.

Our Preferred Requirements:
Master's degree in computer science engineering or a related field, or a combination of education and equivalent experience.
Demonstrated successful application of analytical methods and machine learning techniques with measurable impact on product/design/business/strategy.
Proficiency in programming languages such as Python with a strong emphasis on machine learning libraries, generative AI frameworks, and monitoring tools.
Utilize tools and technologies such as TensorFlow, PyTorch, scikit-learn, and other machine learning libraries to build and deploy machine learning solutions on cloud platforms.
Design and implement cloud infrastructure using technologies such as Kubernetes, Terraform, and Tekton to support scalable and reliable deployment of machine learning models, generative AI models, and applications.
Integrate machine learning and generative AI models into production systems on cloud platforms such as Google Cloud Platform (GCP) and ensure scalability, performance, and proactive monitoring.
Implement monitoring solutions to track the performance, health, and security of systems and applications, utilizing tools such as Prometheus, Grafana, and other relevant monitoring tools.
Conduct code reviews and provide constructive feedback to team members on machine learning-related projects.
Knowledge and experience in agentic workflow-based application development and DevOps.
Stay up to date with the latest trends and advancements in machine learning and data science.

Posted 3 days ago

Apply

5.0 - 7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

You are passionate about quality and how customers experience the products you test. You have the ability to create, maintain and execute test plans in order to verify requirements. As a Quality Engineer at Equifax, you will be a catalyst in both the development and the testing of high-priority initiatives. You will develop and test new products to support technology operations while maintaining exemplary standards. As a collaborative member of the team, you will deliver QA services (code quality, testing services, performance engineering, development collaboration and continuous integration). You will conduct quality control tests to ensure full compliance with specified standards and end-user requirements. You will execute tests using established plans and scripts, document problems in an issues log, and retest to ensure problems are resolved. You will create test files to thoroughly test program logic and verify system flow. You will identify, recommend and implement changes to enhance the effectiveness of QA strategies.

What You Will Do
Independently develop scalable and reliable automated tests and frameworks for testing software solutions.
Specify and automate test scenarios and test data for a highly complex business by analyzing integration points, data flows, personas, authorization schemes and environments.
Develop regression suites, develop automation scenarios, and move automation to an agile continuous testing model.
Pro-actively and collaboratively take part in all testing-related activities while establishing partnerships with key stakeholders in Product, Development/Engineering, and Technology Operations.

What Experience You Need
Bachelor's degree in a STEM major or equivalent experience
5-7 years of software testing experience
Able to create and review test automation according to specifications
Ability to write, debug, and troubleshoot code in Java, Spring Boot, TypeScript/JavaScript, HTML, CSS
Creation and use of big data processing solutions using Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others with respect to software validation
Created test strategies and plans
Led complex testing efforts or projects
Participated in Sprint Planning as the Test Lead
Collaborated with Product Owners, SREs, and Technical Architects to define testing strategies and plans
Design and development of microservices using Java, Spring Boot, GCP SDKs, GKE/Kubernetes
Deploy and release software using Jenkins CI/CD pipelines; understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs
Cloud Certification Strongly Preferred

What Could Set You Apart
An ability to demonstrate successful performance of our Success Profile skills, including:
Attention to Detail - Define test case candidates for automation that are outside of product specifications, i.e., negative testing; create thorough and accurate documentation of all work, including status updates to summarize project highlights; validate that processes operate properly and conform to standards
Automation - Automate defined test cases and test suites per project
Collaboration - Collaborate with Product Owners and the development team to plan and assist with user acceptance testing; collaborate with product owners, development leads and architects on functional and non-functional test strategies and plans
Execution - Develop scalable and reliable automated tests; develop performance testing scripts to assure products are adhering to the documented SLO/SLI/SLAs; specify the need for test data types for automated testing; create automated tests and test data for projects; develop automated regression suites; integrate automated regression tests into the CI/CD pipeline; work with teams on E2E testing strategies and plans against multiple product integration points
Quality Control - Perform defect analysis and in-depth technical root cause analysis, identifying trends and recommendations to resolve complex functional issues and process improvements; analyze results of functional and non-functional tests and make recommendations for improvements
Performance / Resilience - Understand application and network architecture as inputs to create performance and resilience test strategies and plans for each product and platform; conduct the performance and resilience testing to ensure the products meet SLAs/SLOs
Quality Focus - Review test cases for complete functional coverage; review the quality section of the Production Readiness Review for completeness; recommend changes to existing testing methodologies for effectiveness and efficiency of product validation; ensure communications are thorough and accurate for all work documentation, including status and project updates
Risk Mitigation - Work with Product Owners, QE and development team leads to track and determine prioritization of defect fixes

Posted 3 days ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

DevOps Engineer (3-5 Years)
Location: Lower Parel, Mumbai

Expectations:
Building and setting up new development tools and infrastructure.
Understanding the needs of stakeholders and conveying this to developers.
Working on ways to automate and improve development and release processes.

Experience required: 3-5+ years of professional experience

Responsibilities:
Build and set up new development tools and infrastructure.
Strong knowledge of AWS.
Strong Linux and Windows system administration background.
Understand the needs of stakeholders and convey this to developers.
Work on ways to automate and improve development and release processes.
Improve CI/CD tooling.
Implement, maintain and improve monitoring and alerting.
Build and maintain highly available systems.
Test and examine code written by others and analyse results.
Ensure that systems are safe and secure against cybersecurity threats.
Work with software developers and software engineers to ensure that development follows established processes and works as intended.
Assist Product Managers with DevOps planning, execution, and query resolution.
Optimise infrastructure; experience working with Docker or Kubernetes.
Database (MySQL, Postgres, MongoDB, etc.) installation and management.
Knowledge of network technologies such as TCP/IP, DNS and load balancing.
Must know at least one programming language.

Skills required:
Deploy updates and fixes
Proficiency with Git
Optimise infrastructure costs
Provide technical support
Perform root cause analysis for production errors
Investigate and resolve technical issues
Develop scripts to automate visualisation
Design procedures for system troubleshooting and maintenance
Document the architecture, software used, and process followed for projects
Proficiency with at least one Infrastructure as Code (IaC) tool like Ansible, Terraform, Chef, Puppet, etc.

Posted 3 days ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Job Description:
Assess and understand the application implementation while working with architects and business experts.
Analyse business and technology challenges and suggest solutions to meet strategic objectives.
Build cloud-native applications meeting 12/15-factor principles on OpenShift or Kubernetes.
Migrate Dot Net Core and/or Framework Web/API/Batch components deployed in PCF Cloud to OpenShift, working independently.
Analyse and understand the code, identify bottlenecks and bugs, and devise solutions to mitigate and address these issues.
Design and implement unit test scripts, and automate them using NUnit to achieve 80% code coverage.
Perform back-end code reviews and ensure compliance with Sonar scans, Checkmarx and Black Duck to maintain code quality.
Write functional automation test cases for system integration using Selenium.
Coordinate with architects and business experts across the application to translate key requirements.

Required Qualifications:
4+ years of experience in Dot Net Core (3.1 and above) and/or Framework (4.0 and above) development (coding, unit testing, functional automation) implementing microservices, REST APIs, batch/web components, reusable libraries, etc.
Proficiency in C# with a good knowledge of VB.NET.
Proficiency in cloud platforms (OpenShift, AWS, Google Cloud, Azure) and hybrid/multi-cloud strategies, with at least 3 years in OpenShift.
Familiarity with cloud-native patterns, microservices, and application modernization strategies.
Experience with monitoring and logging tools like Splunk, Log4j, Prometheus, Grafana, ELK Stack, AppDynamics, etc.
Familiarity with infrastructure automation tools (e.g., Ansible, Terraform) and CI/CD tools (e.g., Harness, Jenkins, uDeploy).
Proficiency in databases like MS SQL Server, Oracle 11g/12c, Mongo, DB2.
Experience in integrating front-end with back-end services.
Experience working with code versioning methodologies using Git and GitHub.
Familiarity with job scheduling through Autosys and PCF batch jobs.
Familiarity with scripting languages like shell and with Helm chart modules.

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. Exercises original thought and judgement, with the ability to supervise the technical and administrative work of other software engineers.
4. Builds skills and expertise in the software engineering discipline to reach standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Posted 3 days ago

Apply

5.0 - 8.0 years

0 Lacs

Bihar

On-site

GlassDoor logo

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description

Role Purpose
The purpose of this role is to work with application teams and developers to facilitate better coordination amongst operations, development and testing functions by automating and streamlining the integration and deployment processes.

Do
Align and focus on continuous integration (CI) and continuous deployment (CD) of technology in applications.
Plan and execute the DevOps pipeline that supports the application life cycle across the DevOps toolchain — from planning, coding and building, testing, staging, release, configuration and monitoring.
Manage the IT infrastructure as per the requirements of the supported software code.
On-board an application on the DevOps tool and configure it as per the client's needs.
Create user access workflows and provide user access as per the defined process.
Build and engineer the DevOps tool as per the customization suggested by the client.
Collaborate with development staff to tackle the coding and scripting needed to connect elements of the code that are required to run the software release with operating systems and production infrastructure.
Leverage and use tools to automate testing & deployment in a DevOps environment.
Provide customer support/service on the DevOps tools, with timely support for internal & external customers on multiple platforms.
Resolve tickets raised on these tools within the specified TAT, ensuring adequate resolution with customer satisfaction.
Follow the escalation matrix/process as soon as a resolution gets complicated or isn't resolved.
Troubleshoot and perform root cause analysis of critical/repeatable issues.

Deliver
No. | Performance Parameter | Measure
1 | Continuous Integration, Deployment & Monitoring | 100% error-free onboarding & implementation
2 | CSAT | Timely customer resolution as per TAT; zero escalations

Mandatory Skills: DevOps, CI/CD, Terraform and GitHub Actions (see the sketch below).
Experience: 5-8 Years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
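Since the mandatory skills pair Terraform with GitHub Actions, version pinning is worth illustrating: it is what keeps pipeline runs reproducible across CI runners. A minimal sketch follows; the provider choice and version constraints are illustrative assumptions, not taken from the posting:

```hcl
# Minimal sketch: pinning the Terraform CLI and provider versions so CI runs
# (e.g., from a GitHub Actions runner) resolve the same dependencies every time.
# Provider choice and version constraints here are illustrative assumptions.

terraform {
  required_version = ">= 1.5.0, < 2.0.0"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0" # any 5.x release, never 6.x
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}

# Committing the generated .terraform.lock.hcl file alongside this configuration
# ensures every pipeline run installs identical provider builds.
```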

Posted 3 days ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

As a Senior DevOps Engineer, you will be responsible for enhancing and integrating DevOps practices into our development and operational processes. You will work collaboratively with software development, quality assurance, and IT operations teams to implement CI/CD pipelines, automate workflows, and improve the deployment processes to ensure high-quality software delivery.

Requirements

Key Responsibilities:
Design and implement CI/CD pipelines for automation of build, test, and deployment processes.
Collaborate with development and operations teams to improve existing DevOps practices and workflows.
Deploy and manage container orchestration platforms such as Kubernetes and Docker.
Monitor system performance and troubleshoot issues to ensure high availability and reliability.
Implement infrastructure as code (IaC) using tools like Terraform or CloudFormation.
Participate in incident response and root cause analysis activities.
Establish best practices for DevOps processes, security, and compliance.

Qualifications and Experience:
Bachelor's degree with DevOps certification
7+ years of experience in a DevOps or related role.
Proficiency in cloud platforms such as AWS, Azure, or Google Cloud.
Experience with CI/CD tools such as Jenkins, GitLab, or CircleCI.
Development (Java or Python, etc.) - Advanced
Kubernetes usage and administration - Advanced
AI - Intermediate
CI/CD development - Advanced
Strong collaboration and communication skills.

Posted 3 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium; see the sketch after this list)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium; see the sketch after this list)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
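Two of the questions above, sensitive data handling and remote state storage, come up often enough that it helps to have a concrete configuration in mind. The following minimal sketch covers both; the bucket, lock table, and variable names are illustrative assumptions, not canonical answers:

```hcl
# Minimal sketch for two common interview topics: remote state storage and
# sensitive data handling. All names here are illustrative assumptions.

terraform {
  # Remote state: the state file lives in S3, and a DynamoDB table provides
  # state locking so concurrent applies cannot corrupt it.
  backend "s3" {
    bucket         = "example-terraform-state" # placeholder bucket name
    key            = "prod/network.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "terraform-locks" # placeholder lock table
  }
}

# Sensitive data: marking a variable sensitive redacts it from plan/apply
# output. The value should come from the environment (TF_VAR_db_password)
# or a secrets manager, never from files committed to version control.
variable "db_password" {
  type      = string
  sensitive = true
}

# Any output derived from a sensitive value must itself be marked sensitive,
# otherwise Terraform refuses to render it.
output "db_password" {
  value     = var.db_password
  sensitive = true
}
```

Being able to explain why the state file itself is sensitive (it can contain resource attributes in plain text, which is one reason remote, access-controlled state storage matters) is a strong follow-up point in interviews.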

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
