Home
Jobs

9125 Terraform Jobs - Page 49

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 years

3 - 7 Lacs

Chennai

On-site

Job Code: Cloud Infrastructure Engineer
Qualification (Educational): Graduation
Location: Chennai / Madurai, India
Years of Experience: 8 to 13 years

Skills Required:
- In-depth knowledge of AWS and Azure platforms.
- Expertise in Identity and Access Management.
- Exposure to Active Directory and Domain Controllers, Group Policies, and user authentication.
- Expertise in scripting tools such as CLI, PowerShell, Lambda, Python, and Terraform.
- Hands-on systems administration of Windows and Linux platforms, with experience in patch management.
- Exposure to native cloud monitoring tools.
- Experience configuring load balancing and auto-scale sets.
- Exposure to network fundamentals such as TCP/IP, DNS, DHCP, routing, and BGP, and experience configuring network tunnels (IPsec, P2S, AWS/Azure VPN, Direct Connect / ExpressRoute).
- Exposure to SFTP and SSL protocols and features such as WAF.
- Excellent verbal, written, and presentation skills.
- Problem-solving skills and the ability to analyse and resolve complex infrastructure issues.
- Certification in AWS or Azure is desirable.

Sourcing Guidance:
- Experience: 8 - 13 years
- Primary Skills: AWS OR Azure OR OCI; Cloud Infrastructure OR Infrastructure Management; Server Administration (Windows/Linux); Networking; Monitoring
- Automation / IaC Skills: Terraform, Ansible, CloudFormation, PowerShell, Bash, Python, "Infrastructure as Code" OR IaC, Automation, Scripting
- Operational Tasks: Patching, Backup, Security Hardening, Monitoring, Server Provisioning, Cloud Migration, DR Setup, Incident Management, Compliance Monitoring
- Exclude (Optional): NOT Developer, NOT DevOps Engineer, NOT Application Support (to avoid app-focused profiles)
- Industries: IT Services, Cloud Service Providers, Managed Service Providers (MSP), SaaS, Consulting
- Keyword String Example: (AWS OR Azure OR GCP) AND ("cloud infrastructure" OR "infra engineer" OR "cloud operations") AND (Terraform OR Ansible OR scripting OR automation OR IaC) AND (Windows OR Linux OR networking) AND (monitoring OR backup OR patching OR security)

Prioritize candidates with hands-on cloud platform infrastructure experience (not just ticketing/support). Profiles mentioning Terraform/Ansible plus monitoring/backup/patching indicate automation in infra. Avoid candidates primarily involved in DevOps pipelines or application deployment, unless infra-focused.
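The keyword string in the posting is a boolean screen that can be applied mechanically to profile text. A minimal sketch in Python: the keyword groups come straight from the listing's example string, while the function name and the simple substring matching are our own illustration.

```python
# Keyword groups taken from the posting's example search string;
# a profile must match at least one term from every group (AND of ORs).
KEYWORD_GROUPS = [
    ["aws", "azure", "gcp"],
    ["cloud infrastructure", "infra engineer", "cloud operations"],
    ["terraform", "ansible", "scripting", "automation", "iac"],
    ["windows", "linux", "networking"],
    ["monitoring", "backup", "patching", "security"],
]

def matches_profile(text: str) -> bool:
    """AND across groups, OR within each group (case-insensitive substring match)."""
    text = text.lower()
    return all(any(term in text for term in group) for group in KEYWORD_GROUPS)

profile = ("Cloud operations engineer: Azure infra, Terraform IaC, "
           "Linux patching and monitoring with Zabbix")
print(matches_profile(profile))
```

Real resume search engines tokenize and stem rather than substring-match, but the AND-of-ORs structure is the same.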

Posted 5 days ago

Apply

0 years

4 - 8 Lacs

Chennai

On-site

Management Level F

Equiniti is a leading international provider of shareholder, pension, remediation, and credit technology. With over 6,000 employees, it supports 37 million people in 120 countries. EQ India began operations in 2014 as a Global India Captive Centre for Equiniti, a leading fintech company specialising in shareholder management. Within a decade, EQ India strengthened its operations and transformed from a capability centre into a Global Competency Centre supporting EQ's growth story worldwide. Capitalising on India's strong reputation as a global talent hub for IT/ITES, EQ India has structured the organisation to be part of this growth story. Today, EQ India is an indispensable part of the EQ Group, providing critical fintech services to the US and UK.

EQ's vision is to be the leading global share registrar, offering complementary services to its client base, and our values set the core foundations of our success. We are TRUSTED to deliver on our commitments, COMMERCIAL in building long-term value, COLLABORATIVE in our approach, and we IMPROVE by continually enhancing our skills and services. There has never been a better time to join EQ.

Role Summary
EQ is looking for a Staff Server Operations Engineer to help us deploy and manage our global infrastructure hosted across multiple data centres and clouds. We are searching for an enthusiastic engineer ready to join our team to help us evaluate, implement, deploy, and support technologies that enable our users to collaborate with UK and US customers. The ideal candidate has a firm understanding of IT systems infrastructure, with good knowledge of server infrastructure (Windows and Linux), file servers, security fundamentals, and network devices such as load balancers and firewalls.

Responsibilities and Skills Required:
- Proactively maintain and develop all Linux infrastructure technology to maintain a 24x7x365 uptime service.
- Proactively monitor system performance and perform capacity planning.
- In-depth knowledge of Linux: RedHat, CentOS, Debian, etc.
- Manage, coordinate, and implement software upgrades, patches, and hotfixes on servers, workstations, and network hardware.
- Solid knowledge of protocols such as DNS, HTTP, LDAP, SMTP, and SNMP.
- Expert in Shell, Perl, and/or Python scripting, plus Terraform.
- Experience managing Linux servers.
- Experience with AWS and Azure, including EC2, FSx, AMIs, S3, Terraform, and Puppet.
- Additional Linux certifications (RHCT, RHCE, LPIC) will be considered an advantage.
- Strong grasp of configuration management tools such as Puppet, Chef, and Ansible.
- Experience with DevOps tools such as Jenkins, Git, etc.
- AWS cloud administration is an added advantage.

EQ India Benefits:
As a permanent member of the team at EQ working US/UK business hours, you will be rewarded with our company benefits; these are just a few of what is on offer:
- Comprehensive medical and life assurance cover
- Maternity leave of 6 months at full pay; 10 days paid paternity leave
- Long Term Incentive Plan (LTIP) for all colleagues
- Accident and life cover of 2x the concerned CTC

We are committed to equality of opportunity for all staff, and applications from individuals are encouraged regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief, and marriage and civil partnerships. Please note any offer of employment is subject to satisfactory pre-employment screening checks.
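A 24x7x365 uptime commitment is usually quantified as an availability percentage, and the downtime budget behind each tier is simple arithmetic. A small illustrative sketch (the function and the tier values are examples, not from the posting):

```python
HOURS_PER_YEAR = 365 * 24  # 8760, ignoring leap years

def downtime_budget_hours(availability_pct: float) -> float:
    """Maximum unplanned downtime per year for a given availability target."""
    return (1 - availability_pct / 100) * HOURS_PER_YEAR

# "Three nines" (99.9%) allows under nine hours of downtime per year
for pct in (99.0, 99.9, 99.99):
    print(f"{pct}% -> {downtime_budget_hours(pct):.2f} h/year")
```

This is why patching and maintenance windows on high-availability Linux fleets are typically handled with rolling upgrades rather than full outages.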

Posted 5 days ago

Apply

0 years

0 - 0 Lacs

Coimbatore

On-site

We are seeking AWS Cloud DevOps Engineers who will be part of the Engineering team, collaborating with software development, quality assurance, and IT operations teams to deploy and maintain production systems in the cloud. This role requires an engineer who is passionate about provisioning and maintaining reliable, secure, and scalable production systems. We are a small team of highly skilled engineers looking to add a new member who wishes to advance their career through continuous learning. Selected candidates will be an integral part of a team of passionate and enthusiastic IT professionals and will have tremendous opportunities to contribute to the success of the products.

What you will do:
- Deploy, automate, maintain, manage, and monitor an AWS production system, including software applications and cloud-based infrastructure.
- Monitor system performance and troubleshoot issues.
- Engineer solutions using AWS services (CloudFormation, EC2, Lambda, Route 53, ECS, EFS).
- Use DevOps principles and methodologies to enable the rapid deployment of software and services by coordinating software development, quality assurance, and IT operations.
- Make sure AWS production systems are reliable, secure, and scalable.
- Create and enforce policies related to AWS usage, including tagging, instance type usage, and data storage.
- Resolve problems across multiple application domains and platforms using system troubleshooting and problem-solving techniques.
- Automate operational processes by designing, maintaining, and managing tools.
- Provide primary operational support and engineering for all cloud and enterprise deployments.
- Lead the organisation's platform security efforts in collaboration with the core engineering team.
- Design, build, and maintain containerization using Docker, and manage container orchestration with Kubernetes.
- Set up monitoring, alerting, and logging tools (e.g., Zabbix) to ensure system reliability.
- Collaborate with development, QA, and operations teams to design and implement CI/CD pipelines with Jenkins.
- Develop policies, standards, and guidelines for IaC and CI/CD that teams can follow.
- Automate and optimize infrastructure tasks using tools like Terraform, Ansible, or CloudFormation.
- Support InfoSec scans and compliance audits.
- Ensure security best practices in the cloud environment, including IAM management, security groups, and network firewalls.
- Contribute to the optimization of system performance and cost.
- Promote knowledge-sharing activities within and across product teams by creating and engaging in communities of practice and through documentation, training, and mentoring.
- Keep skills up to date through ongoing self-directed training.

What skills are required:
- Ability to learn new technologies quickly.
- Ability to work both independently and in collaborative teams, communicating design and build ideas effectively.
- Problem-solving and critical-thinking skills, including the ability to organize, analyze, interpret, and disseminate information.
- Excellent spoken and written communication skills.
- Ability to work as part of a diverse team as well as independently.
- Ability to follow departmental and organizational processes and meet established goals and deadlines.
- Knowledge of EC2 (Auto Scaling, Security Groups), VPC, SQS, SNS, Route 53, RDS, S3, ElastiCache, IAM, and the CLI.
- Server setup/configuration (Tomcat, nginx).
- Experience with AWS, including EC2, S3, CloudTrail, and APIs.
- Solid understanding of EC2 On-Demand, Spot Market, and Reserved Instances.
- Knowledge of Infrastructure as Code tools, including Terraform, Ansible, or CloudFormation.
- Knowledge of scripting and automation using Python, Bash, or Perl to automate AWS tasks.
- Knowledge of code deployment tools: Ansible and CloudFormation scripts.
- Basic knowledge of network architecture, DNS, and load balancing.
- Knowledge of containerization technologies like Docker and orchestration platforms like Kubernetes.
- Understanding of monitoring and logging tools (e.g., Zabbix).
- Familiarity with version control systems (Git).
- Knowledge of microservices architecture and deployment.
- Bachelor's degree in Engineering or Master's degree in Computer Science.

Note: Only candidates who graduated in 2023 or 2024 may apply for this internship. This is an internship-to-hire position; candidates who complete the internship will be offered a full-time position based on performance.

Job Types: Full-time, Permanent, Fresher, Internship
Contract length: 6 months
Pay: ₹5,500.00 - ₹7,000.00 per month
Schedule: Day shift, Monday to Friday, morning shift
Expected Start Date: 01/07/2025
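Tag-policy enforcement of the kind this role describes often starts as a required-tags check against instance metadata. A minimal sketch, assuming a hypothetical required tag set and the EC2 API's `Tags` list shape (`[{"Key": ..., "Value": ...}]`):

```python
# Required tag keys are an assumption for illustration, not from the posting
REQUIRED_TAGS = {"Owner", "Environment", "CostCenter"}

def missing_tags(instance: dict) -> set:
    """Return the required tag keys absent from an instance description
    shaped like the EC2 API's Tags list: [{"Key": ..., "Value": ...}]."""
    present = {t["Key"] for t in instance.get("Tags", [])}
    return REQUIRED_TAGS - present

instance = {"InstanceId": "i-0abc", "Tags": [{"Key": "Owner", "Value": "infra"}]}
print(sorted(missing_tags(instance)))  # → ['CostCenter', 'Environment']
```

In production this check would be fed by `describe_instances` results (via boto3) or, more commonly, delegated to AWS Config rules or Service Control Policies.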

Posted 5 days ago

Apply

8.0 years

7 - 9 Lacs

Noida

On-site

We are looking for a Lead Software Engineer to join our Technology team at Clarivate. We have a great skill set in Java, Spring, and AWS, and we would love to speak with you if you have good analytical skills, are passionate about technology, and have the curiosity and drive to take on new possibilities. You will get the opportunity to work in a cross-cultural environment on the latest web technologies, with an emphasis on user-centric design.

About You (Skills & Experience Required):
- 8+ years of experience developing web-based applications using Java 8, Spring, Hibernate, MS SQL Server, and related technologies.
- Hands-on experience with Spring Boot, Java 8, Elasticsearch, AWS, and NoSQL databases such as MongoDB or Redis.
- Good knowledge of Oracle; proficient in writing SQL queries and stored procedures.
- Experience with AWS services such as EC2, S3, ELB, EKS, Terraform, and RDS.
- Good knowledge of Kubernetes and Docker.
- Experience writing JUnit test cases.
- Good understanding of software development principles, including distributed systems, object-oriented programming, design patterns, and SOLID principles.
- Strong experience with MVC, Spring, Hibernate, microservices, and application/web servers.
- Strong understanding of and expertise in building REST APIs and microservices architecture.
- Data modelling and database design experience, including NoSQL DBs.
- Experience with GitHub, Maven, Gradle, Jenkins, Docker, and other CI/CD platforms.
- Expertise in architectural styles and design patterns.

It would be great if you also have:
- Familiarity with Agile development methodologies.
- Exposure to cloud-native technologies and integration patterns.
- Familiarity with front-end web technologies like Angular and React.js.
- Experience with Linux is a plus.

What will you be doing in this role?
You will work closely with your agile team to create high-quality, innovative, and intuitive solutions to challenging and complex business problems. To be successful in this role, you will need strong technical and analytical skills, applied to overcome the challenges that arise when working across multiple technologies. You will harness your creativity to help develop solutions that consider our users and aim to significantly improve the productivity of their work day.

Some highlights of this role:
- Develop strong architecture and design using best practices, patterns, and business acumen.
- Provide leadership and technical guidance to coach, motivate, and lead team members to their optimum performance levels and career development.
- Participate in the full software development lifecycle, including requirements gathering, design, development, testing, and deployment.
- Write clean, efficient, and well-documented code that adheres to industry best practices and coding standards.
- Develop and maintain unit and integration tests that are part of an automated Continuous Integration pipeline.
- Work closely with other teams (e.g., QA, Product) to release high-quality software.
- Participate in group improvement activities and initiatives to improve process and product quality in pursuit of excellence.

About the Team
Our team is a group of highly motivated professionals who are passionate about using technology to make a real difference in the world. We specialize in developing cloud-native applications using a variety of technologies, including Java, Java-based tools such as Spring and Hibernate, and NoSQL DBs. With a diverse range of skills and backgrounds, we embrace new challenges and take pride in our ability to deliver high-quality work that drives meaningful results. As a member of our team, you will have the opportunity to learn from experienced mentors and contribute to world-class products and innovative solutions in the field of IP.

Hours of Work
This is a permanent position with Clarivate, 9 hours per day including a lunch break.
At Clarivate, we are committed to providing equal employment opportunities for all qualified persons with respect to hiring, compensation, promotion, training, and other terms, conditions, and privileges of employment. We comply with applicable laws and regulations governing non-discrimination in all locations.
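The SOLID principles this role calls out are language-agnostic; here is a dependency-inversion sketch, written in Python for brevity (in the role's Java/Spring stack, the same idea appears as constructor injection of an interface-typed bean). All class names are illustrative:

```python
from abc import ABC, abstractmethod

class Cache(ABC):
    """Abstraction the service depends on: high-level code depends on this
    interface, not on a concrete Redis or MongoDB client (the 'D' in SOLID)."""
    @abstractmethod
    def get(self, key: str): ...
    @abstractmethod
    def put(self, key: str, value) -> None: ...

class InMemoryCache(Cache):
    """Concrete implementation; trivially swappable for a Redis-backed one."""
    def __init__(self):
        self._data = {}
    def get(self, key):
        return self._data.get(key)
    def put(self, key, value):
        self._data[key] = value

class ArticleService:
    def __init__(self, cache: Cache):  # injected, so tests can pass a fake
        self._cache = cache
    def lookup(self, article_id: str):
        return self._cache.get(article_id)
```

Because `ArticleService` only knows the `Cache` interface, a JUnit-style unit test can inject an in-memory fake instead of a live datastore.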

Posted 5 days ago

Apply

12.0 years

5 - 6 Lacs

Noida

On-site

Description

Job Title: Solution Architect
Designation: Senior
Company: Hitachi Rail GTS India
Location: Noida, UP, India
Salary: As per industry

Company Overview:
Hitachi Rail is right at the forefront of the global mobility sector following the acquisition. The closing strengthens the company's strategic focus on helping current and potential Hitachi Rail and GTS customers through the sustainable mobility transition: the shift of people from private to sustainable public transport, driven by digitalization.

Position Overview:
We are looking for a Solution Architect who will be responsible for translating business requirements into technical solutions, ensuring the architecture is scalable, secure, and aligned with enterprise standards. The Solution Architect will play a crucial role in defining the architecture and technical direction of the existing system. You will be responsible for the design, implementation, and deployment of solutions that integrate with transit infrastructure, ensuring seamless fare collection, real-time transaction processing, and enhanced user experiences. You will collaborate with development teams, stakeholders, and external partners to create scalable, secure, and highly available software solutions.

Job Roles & Responsibilities:
- Architectural Design: Develop architectural documentation such as solution blueprints, high-level designs, and integration diagrams. Lead the design of the system's architecture, ensuring scalability, security, and high availability. Ensure the architecture aligns with the company's strategic goals and future vision for public transit technologies.
- Technology Strategy: Select the appropriate technology stack and tools to meet both functional and non-functional requirements, considering performance, cost, and long-term sustainability.
- System Integration: Work closely with teams to design and implement the integration of the AFC system with various third-party systems (e.g., payment gateways, backend services, cloud infrastructure).
- API Design & Management: Define standards for APIs to ensure easy integration with external systems, such as mobile applications, ticketing systems, and payment providers.
- Security & Compliance: Ensure that the AFC system meets the highest standards of data security, particularly for payment information, and complies with industry regulations (e.g., PCI-DSS, GDPR).
- Stakeholder Collaboration: Act as the technical lead during project planning and discussions, ensuring the design meets customer and business needs.
- Technical Leadership: Mentor and guide development teams through best practices in software development and architectural principles.
- Performance Optimization: Monitor and optimize system performance to ensure the AFC system can handle high volumes of transactions without compromise.
- Documentation & Quality Assurance: Maintain detailed architecture documentation, including design patterns, data flow, and integration points. Ensure the implementation follows best practices and quality standards.
- Research & Innovation: Stay up to date with the latest advancements in technology and propose innovative solutions to enhance the AFC system.

Skills:
1. Programming Languages: .NET (C#), C/C++, Java, Python
2. Web Development Frameworks: ASP.NET Core (C#), Angular
3. Microservices & Architecture: Spring Cloud, Docker, Kubernetes, Istio, Apache Kafka, RabbitMQ, Consul, GraphQL
4. Cloud Platforms: Amazon Web Services (AWS), Google Cloud Platform (GCP), Microsoft Azure, Kubernetes on cloud (e.g., AWS EKS, GCP GKE), Terraform (Infrastructure as Code)
5. Databases: Relational databases (SQL), NoSQL databases, data warehousing
6. API Technologies: SOAP/RESTful API design, GraphQL, gRPC, OpenAPI / Swagger (API documentation)
7. Security Technologies: OAuth2 / OpenID Connect (authentication & authorization), JWT (JSON Web Tokens), SSL/TLS encryption, OWASP Top 10 (security best practices), Vault (secret management), Keycloak (identity & access management)
8. Design & Architecture Tools: UML (Unified Modeling Language), Lucidchart / Draw.io (diagramming), PlantUML (text-based UML generation), C4 Model (software architecture model), Enterprise Architect (modeling)
9. Miscellaneous Tools & Frameworks: Apache Hadoop / Spark (big data), Elasticsearch (search engine), Apache Kafka (stream processing), TensorFlow / PyTorch (machine learning / AI), Redis (caching & pub/sub), DevOps & CI/CD tools

Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.

Experience Required:
- 12+ years of experience in solution architecture or software design.
- Proven experience with enterprise architecture frameworks (e.g., TOGAF, Zachman).
- Strong understanding of cloud platforms (AWS, Azure, or Google Cloud).
- Experience in system integration, API design, microservices, and SOA.
- Familiarity with data modeling and database technologies (SQL, NoSQL).
- Strong communication and stakeholder management skills.

Preferred:
- Certification in cloud architecture (e.g., AWS Certified Solutions Architect, Azure Solutions Architect Expert).
- Experience with DevOps tools and CI/CD pipelines.
- Knowledge of security frameworks and compliance standards (e.g., ISO 27001, GDPR).
- Experience in Agile/Scrum environments.
- Domain knowledge in [insert industry: e.g., finance, transportation, healthcare].

Soft Skills: Analytical and strategic thinking. Excellent problem-solving abilities. Ability to lead and mentor cross-functional teams. Strong verbal and written communication.
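Among the security technologies listed, a JWT is just three base64url-encoded segments (header.payload.signature). A sketch of reading claims from a token, using only the standard library; the demo token is built locally, and a real deployment must verify the signature with a proper library (e.g. PyJWT or a Keycloak adapter) before trusting any claim:

```python
import base64
import json

def b64url_decode(segment: str) -> bytes:
    # JWT segments use unpadded base64url; restore padding before decoding
    pad = "=" * (-len(segment) % 4)
    return base64.urlsafe_b64decode(segment + pad)

def decode_jwt_claims(token: str) -> dict:
    """Split header.payload.signature and decode the payload.
    NOTE: this only *reads* claims; it performs no signature verification."""
    _header, payload, _signature = token.split(".")
    return json.loads(b64url_decode(payload))

def make_demo_token(claims: dict) -> str:
    """Build a toy unsigned ('alg: none') token purely for illustration."""
    enc = lambda obj: base64.urlsafe_b64encode(
        json.dumps(obj, separators=(",", ":")).encode()
    ).rstrip(b"=").decode()
    return f'{enc({"alg": "none", "typ": "JWT"})}.{enc(claims)}.'

token = make_demo_token({"sub": "user-42", "scope": "afc:read"})
print(decode_jwt_claims(token))
```

This is why JWTs must never carry secrets in the payload: anyone holding the token can decode the claims without any key.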

Posted 5 days ago

Apply

0 years

4 - 8 Lacs

Noida

On-site

Req ID: 327889

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a .NET Technical Consultant to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).

- Programming languages: C# (.NET MVC, .NET Core, and .NET 6/7)
- UI: Angular, JavaScript, CSS, ASP.NET MVC
- API: REST, Azure Functions, or Azure Durable Functions
- Containerization: Azure Kubernetes Service, Kubernetes (open source), and Docker
- Cloud: Azure Application Insights, Azure Service Bus, Azure storage accounts, Azure API Management
- CI/CD: Azure Pipelines, Terraform
- Scripting: PowerShell, Bash
- Database: Microsoft SQL Server or NoSQL (e.g. Cosmos DB) and Oracle
- Test automation for APIs: Karate Labs and Postman
- Test automation for UI: Sauce Labs, Selenium, SpecFlow, xUnit or nUnit
- Test techniques: Behavior-Driven Development (BDD) or Test-Driven Development (TDD)

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users.
If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
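The stack above lists TDD among its test techniques; the cycle is to write a failing test first, then the smallest implementation that passes, then refactor. A toy red-green sketch (the function and its spec are hypothetical, and shown in Python rather than the role's C#/xUnit, where the idea is identical):

```python
# Step 1 (red): specify the behaviour as tests before any implementation exists.
def test_normalize_order_id():
    assert normalize_order_id(" ord-001 ") == "ORD-001"
    assert normalize_order_id("ord002") == "ORD002"

# Step 2 (green): the smallest implementation that makes the tests pass.
def normalize_order_id(raw: str) -> str:
    return raw.strip().upper()

test_normalize_order_id()  # re-run after every change (step 3: refactor safely)
```

In the xUnit/nUnit world, step 1 would be a `[Fact]`/`[Test]` method and the runner would report red/green instead of a plain assertion.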

Posted 5 days ago

Apply

10.0 - 14.0 years

0 Lacs

Noida

On-site

Azure Cloud Infra Consultant Architect
Full-time

Company Description
About Sopra Steria: Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it.

Job Description
What you'll be doing:
- Be the architecture lead, providing mentorship and guidance to technical resources.
- Create architectural standards to deliver Azure solutions to our end clients.
- Create deep subject matter expertise within the Practice and nurture talent across the grades.
- Work as part of the Practice leadership team to drive our strategic partnership with Microsoft to support and enable innovation, investment and growth.
- Cultivate and enable a professional services culture and discipline, where the teams influence, sell and deliver specialist solutions and take responsibility for self-learning, career management and opportunities.
- Work directly with clients to present and deliver Azure solutions.

What you'll bring:
- Demonstrable experience in Azure, with a technical background and experience in Azure migrations, architecture, and automation.
- Demonstrable experience leading delivery teams and developing and mentoring people.
- Demonstrable knowledge of Microsoft solutions and their application to client strategy.
- Strong communication and leadership, with experience in developing metrics around utilization, Great Place to Work, contribution, productivity and GPS scores.

Core Technical Knowledge Required:
- Azure IaaS (virtual machines, storage, networking, security).
- Azure Backup & Recovery Services.
- Azure Governance (Blueprints, policies, tagging, cost management).
- Azure SQL Databases (Managed Instances, PaaS, IaaS).
- Azure Security (Zero Trust, Defender for Cloud, Sentinel, Entra, AIP).
- Azure serverless and integration (Batch, Functions, Logic Apps, Event Grid).
- Azure containers (AKS, ACI, ACR).
- Active Directory / Entra ID (Azure AD, Azure AD DS, on-premises AD DS).
- On-premises infrastructure, virtualisation technologies or applications.
- Experience with Windows Server / Linux OS.
- Experience with Infrastructure as Code (ARM, Bicep, Terraform, PowerShell).

Total Experience Expected: 10-14 years

Qualifications
Certifications:
- Microsoft Azure Solutions Architect Expert
- Microsoft Cybersecurity Architect Expert (desirable)
- Microsoft DevOps Engineer Expert (desirable)

Additional Information
At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
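For the IaaS networking item above, carving a VNet address space into subnets is routine architecture work, and the arithmetic is easy to check with Python's standard ipaddress module. A small sketch (the address ranges and function name are illustrative):

```python
import ipaddress

def plan_subnets(vnet_cidr: str, new_prefix: int, count: int) -> list:
    """Carve `count` equal subnets out of a VNet address space.
    e.g. split a /16 into /24s for app, data, and management tiers."""
    vnet = ipaddress.ip_network(vnet_cidr)
    return [str(s) for s in list(vnet.subnets(new_prefix=new_prefix))[:count]]

print(plan_subnets("10.0.0.0/16", 24, 3))
# → ['10.0.0.0/24', '10.0.1.0/24', '10.0.2.0/24']
```

The same calculation applies whether the subnets end up in an ARM template, a Bicep file, or a Terraform `azurerm_subnet` block.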

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Summary:
The Help Desk Technician should be able to diagnose and resolve challenging problems quickly. You should be well-versed in all aspects of computer systems configuration and maintenance. Your goal will be to ensure that our technology runs smoothly for employees. This position will be in our Hyderabad office with a combination of in-person and remote work.

Job Responsibilities:
- Respond to queries by phone, email, in person, instant message, or through our ticketing system.
- Provide L2/L3 technical support for users, Windows-based systems, and IT infrastructure.
- User Management: Create, activate, and deactivate users; manage AD accounts and group policies.
- Active Directory & Network Services: Manage and troubleshoot Active Directory, DNS, DHCP, and GPOs.
- Firewall & Bandwidth Management: Monitor firewalls, set up and manage VPNs, and control bandwidth utilization.
- System Installation & Patching: Install and configure Windows/Linux OS, manage software installations, and ensure system patching via a WSUS server.
- Hardware Management: Identify, troubleshoot, and coordinate repair for hardware failures.
- Antivirus & Security: Monitor and maintain antivirus solutions across all endpoints.
- Ticket Management: Handle support tickets and maintain issue logs with timely resolutions.
- Vendor & ISP Coordination: Manage communication with IT vendors and escalate ISP issues when needed.
- Communication: Work closely with internal and external stakeholders for smooth IT operations.
- AWS Cloud Basics (Preferred): Good understanding of AWS core services such as EC2, S3, RDS, VPC, Security Groups, and Lambda.

Required Skills:
- Hands-on experience with Windows OS / Windows Server (2016–2022).
- Good knowledge of Active Directory, Group Policies, and DNS/DHCP.
- Working knowledge of AWS services like EC2, S3, IAM, and VPC.
- Understanding of CI/CD tools (e.g., Jenkins, GitHub Actions, GitLab CI).
- Familiarity with basic scripting (PowerShell/Bash).
- Awareness of IT security basics, patching, and user access control.
- Strong troubleshooting and communication skills.

Nice to Have:
- Exposure to Linux OS or hybrid environments.
- Tools: Docker, Ansible, Terraform (even basic hands-on is welcome).
- Experience with ticketing platforms (e.g., JIRA, ServiceNow, Freshservice).
- Certifications: AWS Cloud Practitioner, CompTIA Security+, or equivalent.

Company Overview:
Verisys transforms provider data, workforce data, and relationship management. More than 400 healthcare, life science, and background screening organizations depend on us to credential providers, improve data quality, publish compliant provider directories, and conduct employment verifications. Our comprehensive solutions deliver accurate and secure information. As a result, we're the largest outsourced credentials verification organization in the United States. Since we've partnered with the most complex institutions in healthcare for decades, we can help organizations of any size discover their true potential.

At Verisys, you can have a rewarding career on every level. In addition to challenging and meaningful work, you will have the chance to give back to your community, make a positive impact on the environment, participate in a range of diversity and inclusion initiatives, and find the support, coaching, and training it takes to advance your career. Our commitment to individual choice lets you customize aspects of your career path, your educational opportunities, and your benefits. And our culture of innovation means your ideas on how to improve our business and our clients will be heard.
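The "basic scripting (PowerShell/Bash)" line typically means small scheduled health checks; here is a minimal sketch in Python of the same idea, a disk-usage alert (the threshold, path, and function name are illustrative, and a PowerShell or Bash equivalent would be just as appropriate):

```python
import shutil

def disk_alert(path: str = "/", threshold_pct: float = 90.0) -> bool:
    """Return True when disk usage on `path` exceeds the threshold --
    the kind of check a helpdesk scheduled task might raise a ticket for."""
    usage = shutil.disk_usage(path)
    pct_used = usage.used / usage.total * 100
    return pct_used >= threshold_pct

if disk_alert("/", 90.0):
    print("disk usage above 90%: open a ticket")
```

A cron job (or Windows Task Scheduler entry) running such a check and posting to the ticketing system is often the first automation a helpdesk builds.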

Posted 5 days ago

Apply

1.0 years

6 - 9 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Team Player in an Agile team within a Release Team/Value Stream Develop and automate business solutions by creating new and modifying existing software applications Technically hands on and excellent in Design, Coding and Testing Collectively responsible for end to end product quality Participates and contributes in Sprint Ceremonies Promote and develop the culture of collaboration, accountability & quality Provides technical support to team. Helps team in resolving technical issues Closely working with Tech Lead, Onshore partners, deployment and infrastructure teams Independently drive some of the product and pillar level initiatives Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.Tech and/or MS in Computer Science or equivalent 1+ years of software development (and design) experience using programming and scripting languages (Java, JavaScript) 1+ years of software development with Microservices architecture and the Spring Boot Java framework 1+ years of experience in managing cloud-based infrastructure and container orchestration platforms (e.g., AWS, Azure, Google Cloud, Kubernetes) and automation tools (e.g., Ansible, Terraform) to automate tasks related to provisioning, configuration, and maintenance of GitHub Actions and runner farm environments Solid understanding of software development principles, version control systems (particularly Git and GitHub), continuous integration/continuous deployment (CI/CD) pipelines, and infrastructure as code (IaC) concepts In-depth knowledge of GitHub features, including Actions, workflows, repositories, branches, pull requests, and permissions management Testing using Data Quality Framework DevOps - Jenkins, GitHub, Docker, Redis, Sonar, Fortify Development Methodology / Engineering Practices - Agile (SCRUM / KANBAN / SAFe) At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
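Several of the qualifications above center on automating maintenance of GitHub Actions runner-farm environments through GitHub's REST API. As a hedged illustration (the org name and token below are placeholders; the endpoint is GitHub's documented `GET /orgs/{org}/actions/runners`), such a script might start by building an authenticated request:

```python
import urllib.request

# Sketch of listing an org's self-hosted GitHub Actions runners.
# "example-org" and the token value are placeholders, not real credentials.
API = "https://api.github.com"

def list_runners_request(org: str, token: str) -> urllib.request.Request:
    # Build the authenticated request; a caller would pass it to urlopen()
    # and parse the JSON body for runner names and statuses.
    return urllib.request.Request(
        f"{API}/orgs/{org}/actions/runners",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

req = list_runners_request("example-org", "ghp_example")
print(req.full_url)
```

In practice such a request would be wrapped with error handling and pagination before being scheduled as a recurring maintenance task.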

Posted 5 days ago

Apply

7.0 years

5 - 8 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: Manage and maintain infrastructure for development, testing, and deployment of applications Implement and manage CI/CD pipelines to automate build, test, and deployment processes Set up monitoring and logging systems to track performance and health of applications and infrastructure Collaborate with development teams to ensure infrastructure and deployment processes meet their needs Ensure infrastructure and deployment processes comply with security and regulatory requirements Automate repetitive tasks to improve efficiency and reduce risk of human error Optimize performance of applications and infrastructure to ensure efficiency and scalability Create and maintain documentation for infrastructure, deployment processes, and other relevant areas Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Technical Skills CI/CD Tools: Jenkins, GitHub Actions, Azure DevOps Configuration Management: Ansible, Puppet, Chef Containerization: Docker, Kubernetes Cloud Platforms: AWS, Azure, Google Cloud Monitoring and Logging: Prometheus, Grafana, ELK stack (Elasticsearch, Logstash, Kibana), Splunk Scripting Languages: Python, Bash, PowerShell Infrastructure as Code (IaC): Terraform, CloudFormation Version Control: Git Security Best Practices: Knowledge of security best practices and tools for securing the DevOps pipeline Collaboration Tools: Microsoft Teams, Slack Preferred Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, or related field Certifications in DevOps or related fields 7+ years of experience in DevOps roles with enterprise-scale impact Experience managing DevOps projects end-to-end Solid problem-solving and analytical skills Excellent communication skills for technical and non-technical audiences Knowledge of security and compliance standards At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 5 days ago

Apply

4.0 - 8.0 years

0 Lacs

Noida

On-site

Join our Team About this opportunity: Join Ericsson, a global leader in communications technology and services. As a Cloud Infrastructure Engineer, you will be instrumental in deploying and managing cutting-edge cloud solutions using OpenStack and OpenShift. Your role involves advising customer teams on best practices, optimizing cloud infrastructure, and ensuring high performance and security in dynamic environments. At Ericsson, you will contribute to transforming insights into innovation, enabling customers to overcome IT complexities and capitalize on market opportunities at speed. What you will do: Develop automation scripts using Bash and Python to enhance the efficiency and quality of cloud operations. Understanding of Software-Defined Networking (SDN) and Network Functions Virtualization (NFV). Experience with automation tools like Ansible, Puppet, and Terraform for infrastructure provisioning and configuration management. Practical experience with Red Hat OCP/Kubernetes for container orchestration and managing microservices-based applications. Hands-on experience and expertise with OpenStack and Linux platforms are required. Handle OpenStack cloud administration tasks such as performance tuning, troubleshooting, and resource management. Manage availability zones, host aggregation, tenant management, and virtual resources. Troubleshoot and resolve issues related to OpenStack services and underlying infrastructure. Monitor system health using tools like Prometheus, Grafana, and ELK. Should have good experience in Linux administration for both physical and virtual servers (OS installation, performance monitoring/optimization, kernel tuning, LVM management, file system management, security management). Manage and troubleshoot Red Hat Enterprise Linux (RHEL 6.x, 7.x, 8.x) and Ubuntu systems, ensuring operational efficiency and seamless system upgrades. 
Configure and troubleshoot network components including TCP/IP, IPv4/IPv6, VLAN, VXLAN, bridging, routing, iptables, DNS, and DHCP. Operate and manage hardware through interfaces like iLO, iDRAC, and CIMC. Implement robust security measures to safeguard customer workloads and data, adhering to industry best practices and compliance standards. Strong knowledge of Linux administration, cloud concepts, network protocols, and automation tools. Hands-on experience with Ceph storage operations such as OSD delete/recreate and setting/unsetting flags as per operational requirements. Ensure SLAs and operational standards are met, including ITSM guidelines. The Skills you bring Minimum 4-8 years of experience in any private cloud technology. Hands-on experience in OpenStack cloud administration tasks and Linux administration. Candidates with Red Hat OpenStack, RHCSA, and RHCE certifications will be given preference. Why join Ericsson? At Ericsson, you'll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what's possible. To build never-before-seen solutions to some of the world's toughest problems. You'll be challenged, but you won't be alone. You'll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply? Click Here to find all you need to know about what our typical hiring process looks like. Encouraging a diverse and inclusive organization is core to our values at Ericsson, which is why we champion it in everything we do. We truly believe that by collaborating with people with different experiences we drive innovation, which is essential for our future growth. We encourage people from all backgrounds to apply and realize their full potential as part of our Ericsson team. Ericsson is proud to be an Equal Opportunity Employer. Primary country and city: India (IN) || Noida Req ID: 768556

Posted 5 days ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote


Role Overview We are looking for a Junior Node.js Developer (Fresher) to join our growing team. This is an excellent opportunity for fresh graduates passionate about backend development and eager to learn. In this role, you will work on Node.js-based applications, assist in API development, and gain exposure to modern DevOps tools like Terraform, Prometheus, and Grafana. You will be involved in building scalable frameworks, writing clean code, and participating in release processes to deliver reliable software. This is a Hybrid position based in Bangalore. You must be within a commutable distance from the location. You will be required to be onsite on an as-needed basis; when not working onsite, you will work remotely from your home location. About The Role Assist in building scalable and modular backend frameworks Support API design and integration with front-end and back-end services Write unit tests to maintain performance and reliability Troubleshoot and resolve technical issues with the team Participate in CI/CD and release management processes About You We are looking for a Fresh/Junior candidate with 0 to 2 years of relevant experience who possesses strong critical thinking and analytical skills in backend development using Go/Node Basic understanding of TypeScript and Node.js Familiarity with backend frameworks like ExpressJS, KoaJS, or NestJS Awareness of testing tools like Jest or similar Understanding of databases and Pub/Sub technologies Basic knowledge of CI/CD pipelines Strong communication skills and a proactive, collaborative mindset Willingness to learn and grow in a fast-paced development environment Company Overview McAfee is a leader in personal security for consumers. 
Focused on protecting people, not just devices, McAfee consumer solutions adapt to users’ needs in an always online world, empowering them to live securely through integrated, intuitive solutions that protect their families and communities with the right security at the right moment. Company Benefits And Perks We work hard to embrace diversity and inclusion and encourage everyone at McAfee to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees. Bonus Program Pension and Retirement Plans Medical, Dental and Vision Coverage Paid Time Off Paid Parental Leave Support for Community Involvement We're serious about our commitment to diversity, which is why McAfee prohibits discrimination based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.

Posted 5 days ago

Apply

10.0 years

0 Lacs

India

Remote


Job Title: Senior Backend Engineer Location: Remote Employment Type: Full-time Experience: 10+ Years Industry: SaaS / Energy / Mobility / Cloud Infrastructure 🚀 Role Overview We are looking for a Senior Backend Engineer with deep expertise in Python and scalable system architecture. This is a hands-on individual contributor (IC) role where you’ll design and develop high-performance, cloud-native backend services for enterprise-scale platforms. You’ll work closely with cross-functional teams to deliver robust, production-grade solutions. 🛠️ Key Responsibilities Design and build distributed, microservices-based systems using Python Develop RESTful APIs, background workers, schedulers, and scalable data pipelines Lead architecture discussions, technical reviews, and proof-of-concept initiatives Model data using SQL and NoSQL technologies (PostgreSQL, MongoDB, DynamoDB, ClickHouse) Ensure high availability and observability using tools like CloudWatch, Grafana, and Datadog Automate infrastructure and CI/CD workflows using Terraform, GitHub Actions, or Jenkins Prioritize security, scalability, and fault-tolerance across all services Own the entire lifecycle of backend components—from development to production support Document system architecture and contribute to internal knowledge sharing ✅ Requirements 10+ years of backend development experience with strong Python proficiency Deep understanding of microservices, Docker, Kubernetes, and cloud-native development (AWS preferred) Expertise in API design, authentication (OAuth2), rate limiting, and best practices Experience with message queues and async systems (Kafka, SQS, RabbitMQ) Strong database knowledge—both relational and NoSQL Familiarity with DevOps tools: Terraform, CloudFormation, GitHub Actions, Jenkins Effective communicator with experience working in distributed, fast-paced teams
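The requirements above pair API design with rate limiting. One common approach is a token bucket; the sketch below is illustrative (the rate and capacity values are arbitrary), not a production limiter:

```python
import time

class TokenBucket:
    """Toy token-bucket rate limiter: roughly `rate` requests/second,
    with bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(rate=1, capacity=2)
results = [bucket.allow() for _ in range(3)]  # burst of 3 against capacity 2
print(results)  # third call is rejected until tokens refill
```

In a real service this state would live in a shared store (e.g., Redis) keyed per client, rather than in process memory.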

Posted 5 days ago

Apply

0.0 - 3.0 years

0 Lacs

Bengaluru, Karnataka

On-site


Job Title: Cloud Infrastructure & Security Engineer Location: Bangalore / Gurugram (Hybrid – 1–2 days a week in office as needed) Experience Required: 5+ years Compensation: ₹20 – ₹30 LPA Job Type: Full-Time Domain Preference: BFSI experience will be a strong value addition Job Summary We are looking for an experienced and technically strong Cloud Infrastructure & Security Engineer to join our Network Security and Information Security team. This role demands hands-on experience with cloud technologies (AWS, Azure, and GCP), a strong understanding of infrastructure automation, security policies including firewall ACLs, and a proven track record of managing support functions in complex cloud environments. Candidates with prior experience in the BFSI sector are highly preferred. This position offers the opportunity to work in a high-impact role where you’ll manage mission-critical infrastructure while ensuring performance, security, and reliability. Key Responsibilities Cloud Infrastructure Operations: Oversee and support cloud infrastructure across AWS, Azure, and GCP platforms. Manage provisioning, monitoring, tuning, and lifecycle management of cloud resources. Ensure system availability, incident response, and performance optimization. Respond to infrastructure and security incidents, identify root causes, and implement long-term fixes. Collaborate across DevOps, security, compliance, and engineering to improve system efficiency and performance. Infrastructure as Code (IaC) & Automation: Use Terraform to define and deploy infrastructure as code. Automate deployment and configuration tasks using CI/CD tools (GitHub Actions, Jenkins, Azure DevOps). Maintain and manage infrastructure in version-controlled repositories (GitHub). Enforce infrastructure policies, guardrails, and security configurations via code. Network Security & Compliance: Implement and manage ACLs, firewall rules, and network segmentation. 
Proactively identify and mitigate security risks and infrastructure vulnerabilities. Support internal audits, compliance checks, and data protection policies. Monitoring, Observability & Incident Response: Design and implement robust monitoring and alerting systems using leading observability tools. Define and track operational KPIs (availability, latency, throughput, etc.). Create and maintain runbooks, SOPs, and escalation protocols for cloud incidents. Required Skills & Qualifications 5+ years of experience in cloud infrastructure, engineering, or support roles. Hands-on expertise in AWS, GCP, and Azure cloud environments. Strong experience managing firewalls, ACLs, VPCs, VPNs, routing, and DNS in cloud. Deep knowledge of Terraform and infrastructure-as-code principles. Experience building CI/CD pipelines for cloud resource provisioning. Proficiency with source control (e.g., Git), GitHub workflows, and configuration management. Solid understanding of incident response, disaster recovery, and performance monitoring. Excellent troubleshooting, communication, and documentation skills. Preferred Certifications AWS Certified Solutions Architect / DevOps Engineer Google Cloud Professional Cloud Architect / Engineer Microsoft Certified: Azure Administrator / Solutions Architect Bonus Skills (Preferred but not mandatory) Experience working in or supporting BFSI or regulated enterprise environments. Familiarity with scripting languages like Python or Bash. Exposure to tools like Jenkins, GitLab CI, Prometheus, ELK stack, or Datadog. Experience working with hybrid cloud architecture and on-premises integration. Understanding of risk assessment, compliance, and cloud governance frameworks. 
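Since the role centers on firewall ACLs, a first-match ACL evaluation can be sketched briefly; the rule format below is invented for illustration and far simpler than real AWS NACLs or Azure NSGs:

```python
import ipaddress

# Hypothetical rule format: (action, source CIDR, destination port).
RULES = [
    ("deny",  "10.0.5.0/24", 22),   # block SSH from one subnet
    ("allow", "10.0.0.0/16", 22),   # allow SSH from the rest of the VPC
    ("allow", "0.0.0.0/0",   443),  # HTTPS open to all
]

def evaluate(src_ip: str, port: int) -> str:
    addr = ipaddress.ip_address(src_ip)
    for action, cidr, rule_port in RULES:
        if port == rule_port and addr in ipaddress.ip_network(cidr):
            return action  # first match wins, as in most firewall engines
    return "deny"          # implicit default-deny

print(evaluate("10.0.5.9", 22))   # deny
print(evaluate("10.0.9.9", 22))   # allow
print(evaluate("8.8.8.8", 443))   # allow
```

Rule ordering matters: swapping the first two rules would silently re-open SSH from the blocked subnet, which is why ACL changes are best managed as reviewed code.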
Job Types: Full-time, Permanent Pay: ₹2,000,000.00 - ₹3,000,000.00 per year Schedule: Day shift Evening shift Monday to Friday Ability to commute/relocate: Bangalore, Karnataka: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Cloud infrastructure: 5 years (Required) cloud environment: 3 years (Required) Firewall: 3 years (Required) Terraform: 3 years (Required) Work Location: In person

Posted 5 days ago

Apply

1.0 years

0 - 0 Lacs

Indore

On-site

Responsibilities: Develop and maintain infrastructure as code (IaC) to support scalable and secure infrastructure. Collaborate with the development team to streamline and optimize the continuous integration and deployment pipeline. Manage and administer Linux systems, ensuring reliability and security. Configure and provision cloud resources on AWS, Google Cloud, or Azure as required. Implement and maintain containerized environments using Docker and orchestration with Kubernetes. Monitor system performance and troubleshoot issues to ensure optimal application uptime. Stay updated with industry best practices, tools, and DevOps methodologies. Enhance software development processes through automation and continuous improvement initiatives. Requirements: Degree(s): B.Tech/BE (CS, IT, EC, EI) or MCA. Eligibility: Open to 2021, 2022, and 2023 graduates and postgraduates only. Expertise in Infrastructure as Code (IaC) with tools like Terraform and CloudFormation. Proficiency in software development using languages such as Python, Bash, and Go. Experience in Continuous Integration with tools such as Jenkins, Travis CI, and CircleCI. Strong Linux system administration skills. Experience in provisioning, configuring, and managing cloud resources (AWS, Google Cloud Platform, or Azure). Excellent verbal and written communication skills. Experience with containerization and orchestration tools such as Docker and Kubernetes. Job Type: Full-time Pay: ₹25,509.47 - ₹75,958.92 per month Benefits: Health insurance Schedule: Day shift Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: DevOps: 1 year (Required) Language: English (Preferred) Location: Indore, Madhya Pradesh (Preferred) Work Location: In person
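The IaC tools named above (Terraform, CloudFormation) share one core idea: compare desired state against current state and emit a plan of changes. A toy sketch of that idea, with hypothetical resource names and attributes:

```python
# Desired state (what the code declares) vs. current state (what exists).
# Names and sizes are invented for illustration.
desired = {"web-1": {"size": "t3.small"}, "web-2": {"size": "t3.small"}}
current = {"web-1": {"size": "t3.micro"}, "db-1": {"size": "t3.large"}}

def plan(desired: dict, current: dict) -> list:
    """Return a sorted list of (action, resource) pairs, like `terraform plan`."""
    actions = []
    for name, spec in desired.items():
        if name not in current:
            actions.append(("create", name))     # declared but missing
        elif current[name] != spec:
            actions.append(("update", name))     # exists but drifted
    for name in current:
        if name not in desired:
            actions.append(("destroy", name))    # exists but no longer declared
    return sorted(actions)

print(plan(desired, current))
```

Real tools add dependency graphs, providers, and state locking on top, but the plan/apply loop is this diff at heart.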

Posted 5 days ago

Apply

5.0 years

0 Lacs

Andhra Pradesh

On-site

Overview: We are seeking a skilled and proactive Support Engineer with deep expertise in Azure cloud services, Kubernetes, and DevOps practices, with 5+ years of industry experience in the same technologies. The ideal candidate will have experience working with Azure services, including Kubernetes, API management, monitoring tools, and various cloud infrastructure services. You will be responsible for providing technical support, managing cloud-based systems, troubleshooting complex issues, and ensuring smooth operation and optimization of services within the Azure ecosystem. Key Responsibilities: Provide technical support for Azure-based cloud services, including Azure Kubernetes Service (AKS), Azure API Management, Application Gateway, Web Application Firewall, and Azure Monitor with KQL queries. Manage and troubleshoot various Azure services such as Event Hub, Azure SQL, Application Insights, Virtual Networks and WAF. Work with Kubernetes environments: troubleshoot deployments using Helm charts, check resource utilization, and manage GitOps processes. Utilize Terraform to automate cloud infrastructure provisioning, configuration, and management. Troubleshoot and resolve issues in MongoDB and Microsoft SQL Server databases, ensuring high availability and performance. Monitor cloud infrastructure health using Grafana and Azure Monitor, providing insights and proactive alerts. Provide root-cause analysis for technical incidents, propose and implement corrective actions to prevent recurrence. Continuously optimize cloud services and infrastructure to improve performance, scalability, and security. Required Skills & Qualifications: Azure Certification (e.g., Azure Solutions Architect, Azure Administrator) with hands-on experience in Azure services such as AKS, API Management, Application Gateway, WAF, and others. 
Any Kubernetes certification (e.g., CKAD or CKA) with strong hands-on expertise in Kubernetes, Helm charts, and GitOps principles for managing/troubleshooting deployments. Hands-on experience with Terraform for infrastructure automation and configuration management. Proven experience in MongoDB and Microsoft SQL Server, including deployment, maintenance, performance tuning, and troubleshooting. Familiarity with Grafana for monitoring, alerting, and visualization of cloud-based services. Experience using Azure DevOps tools, including Repos and Pipelines for CI/CD automation and source code management. Strong knowledge of Azure Monitor, KQL queries, Event Hub, and Application Insights for troubleshooting and monitoring cloud infrastructure. Solid understanding of Virtual Networks, WAF, Firewalls, and other related Azure networking tools. Excellent troubleshooting, analytical, and problem-solving skills. Strong written and verbal communication skills, with the ability to explain complex technical issues to non-technical stakeholders. Ability to work in a fast-paced environment and manage multiple priorities effectively. Preferred Skills: Experience with cloud security best practices in Azure. Knowledge of infrastructure as code (IaC) concepts and tools. Familiarity with containerized applications and Docker. Education: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience. About Virtusa Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. 
We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.

Posted 5 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description Job Title: DevOps Engineer Candidate Specifications Candidate should have 5-8 years of experience. Job Description Candidates should have an in-depth understanding of the roles of specific DevOps tools and how to use them, and should design, develop, and maintain CI and CD pipelines. Candidate should have good experience in AWS, Terraform, Jenkins and Kubernetes. Candidates should have good experience in consolidation by transferring instances into other accounts and in implementing automated routines with Terraform. Candidates should have good knowledge of Linux, AIX, Java, Python, Helm and ArgoCD. Candidates should also have exposure to stakeholder management and team handling. Candidate should have excellent written and verbal communication skills. Skills Required Role: DevOps Engineer Industry Type: IT/Computers - Software Functional Area: IT-Software Required Education: Bachelor Degree Employment Type: Full Time, Permanent Key Skills: DevOps, Kubernetes, Terraform, Java, Python Other Information Job Code: GO/JC/249/2025 Recruiter Name: Sheena Rakesh

Posted 5 days ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: ETL Talend Lead Location: Bangalore, Hyderabad, Chennai, Pune Work Mode: Hybrid Job Type: Full-Time Shift Timings: 2:00 - 11:00 PM Years Of Experience: 8 - 15 years ETL Development Lead: Experience with Leading and mentoring a team of Talend ETL developers. Providing technical direction and guidance on ETL/Data Integration development to the team. Designing complex data integration solutions using Talend & AWS. Collaborating with stakeholders to define project scope, timelines, and deliverables. Contributing to project planning, risk assessment, and mitigation strategies. Ensuring adherence to project timelines and quality standards. Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies. Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components. Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files). Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality. Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes. Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions. Perform unit testing and participate in system integration testing of ETL processes. Monitor and maintain Talend environments, including job scheduling and performance tuning. Document technical specifications, data flow diagrams, and ETL processes. Stay up-to-date with the latest Talend features, best practices, and industry trends. Participate in code reviews and contribute to the establishment of development standards. Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components. 
Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT). Strong SQL skills for data querying and manipulation. Experience with data profiling, data quality checks, and error handling within ETL processes. Familiarity with job scheduling tools and monitoring frameworks. Excellent problem-solving, analytical, and communication skills. Ability to work independently and collaboratively within a team environment. Basic understanding of AWS services, e.g., EC2, S3, EFS, EBS, IAM, AWS Roles, CloudWatch Logs, VPC, Security Groups, Route 53, Network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB. Understanding of AWS data integration services, e.g., Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions. Preferred Qualifications: Experience leading and mentoring a team of 8+ Talend ETL developers. Experience working with US Healthcare customers. Bachelor's degree in Computer Science, Information Technology, or a related field. Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner/Data Engineer Associate. Experience with AWS Data & Infrastructure Services. Basic understanding of Terraform and GitLab is required. Experience with scripting languages such as Python or Shell scripting. Experience with agile development methodologies. Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
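The ETL stages this role describes (extract, transform with data quality checks, load) can be sketched in plain Python; Talend builds comparable jobs graphically, and the sample data below is invented:

```python
import csv
import io
import sqlite3

# Invented sample input with one deliberately bad row to exercise the
# quality check / error-flow path.
raw = "id,amount\n1, 100 \n2,not_a_number\n3,250\n"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER, amount INTEGER)")

loaded, rejected = 0, 0
for row in csv.DictReader(io.StringIO(raw)):                 # extract
    try:
        rec = (int(row["id"]), int(row["amount"].strip()))   # transform + validate
    except ValueError:
        rejected += 1                                        # route bad rows to an error flow
        continue
    conn.execute("INSERT INTO sales VALUES (?, ?)", rec)     # load
    loaded += 1

total = conn.execute("SELECT SUM(amount) FROM sales").fetchone()[0]
print(loaded, rejected, total)  # 2 loaded, 1 rejected, total 350
```

Rejecting and counting bad rows instead of failing the whole job mirrors the reject-flow pattern used in Talend components.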

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site


THIS IS A LONG TERM CONTRACT POSITION WITH ONE OF THE LARGEST GLOBAL TECHNOLOGY LEADERS. Our Client is a Fortune 350 company that engages in the design, manufacturing, marketing, and service of semiconductor processing equipment. We are seeking an experienced High Performance Computing platform consultant to provide support to India/Asia/EU region users and carry out platform enhancements and reliability improvement projects as aligned with the HPC architect. Minimum qualifications: Bachelor’s or Master’s degree in Computer Science or equivalent with 5+ years of experience in High Performance Computing technologies HPC Environment: Familiar with use of HPC – Ansys/Fluent over MPI, helping users tune their jobs in an HPC environment Linux administration Parallel file systems (e.g., Gluster, Lustre, ZFS, NFS, CIFS) MPI (OpenMPI, MPICH2, IntelMPI), InfiniBand parallel computing Monitoring tools – e.g., Nagios Programming skills such as Python would be nice to have, especially using MPI Experienced and hands-on with cloud technologies: prefer using Azure and Terraform for VM creation and maintenance Effective communication skills (the resource would independently engage and address user requests and resolve incidents for global regions – Asia, EU included) Ability to work independently with minimal supervision Preferred Qualifications: Experience with ANSYS Products
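The MPI-style scatter/compute/gather pattern central to this role can be illustrated without an MPI installation; in this hedged sketch, threads stand in for MPI ranks (on a real cluster, mpi4py's comm.scatter/comm.gather would play the analogous role across processes):

```python
from concurrent.futures import ThreadPoolExecutor

# Scatter: split the data into one chunk per "rank".
data = list(range(16))
chunks = [data[i::4] for i in range(4)]

def local_sum(chunk):
    # Each rank computes independently on its own chunk.
    return sum(chunk)

# Compute in parallel, then gather the partial results back.
with ThreadPoolExecutor(max_workers=4) as pool:
    partials = list(pool.map(local_sum, chunks))

print(partials, sum(partials))  # final reduce on the "root"
```

Tuning jobs in an HPC environment is largely about choosing chunk sizes and rank counts so that each worker's compute time dominates the scatter/gather communication cost.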

Posted 5 days ago

Apply

4.0 years

0 Lacs

India

On-site


• 4+ years in software development, with a focus on data-intensive applications, cloud solutions, and scalable data architectures. • Development experience in the Go Programming Language (GoLang). • Amazon AWS experience (EC2, S3, SQS, SNS, Kinesis, ELB, Lambda). • Experience implementing and using APIs and understanding of HTTP and REST architecture. • Experience implementing microservices and delivering to market. • Experience with both relational & NoSQL databases, writing Stored Procedures, Functions etc. • Experience with NoSQL databases like DynamoDB/DocumentDB/MongoDB. • Experience working in a CI/CD environment, and related tools/pipelines/processes. • Experience with Terraform and infrastructure as code (IaC). Also experience with Okta and OIDC.

Posted 5 days ago

Apply

4.0 years

0 Lacs

Ghaziabad, Uttar Pradesh, India

On-site


Responsibilities
As a Data Engineer, you will design, develop, and support data pipelines and related data products and platforms. Your primary responsibilities include designing and building data extraction, loading, and transformation pipelines across on-prem and cloud platforms. You will perform application impact assessments, requirements reviews, and work estimates. Additionally, you will develop test strategies and site reliability engineering measures for data products and solutions, participate in agile development and solution reviews, mentor junior Data Engineering Specialists, lead the resolution of critical operations issues, and perform technical data stewardship tasks, including metadata management, security, and privacy by design.

Required Skills:
● Design, develop, and support data pipelines and related data products and platforms.
● Design and build data extraction, loading, and transformation pipelines and data products across on-prem and cloud platforms.
● Perform application impact assessments, requirements reviews, and develop work estimates.
● Develop test strategies and site reliability engineering measures for data products and solutions.
● Participate in agile development and solution reviews.
● Mentor junior Data Engineers.
● Lead the resolution of critical operations issues, including post-implementation reviews.
● Perform technical data stewardship tasks, including metadata management, security, and privacy by design.
● Design and build data extraction, loading, and transformation pipelines using Python and other GCP data technologies.
● Demonstrate SQL and database proficiency in various data engineering tasks.
● Automate data workflows by setting up DAGs in tools like Control-M, Apache Airflow, and Prefect.
● Develop Unix scripts to support various data operations.
● Model data to support business intelligence and analytics initiatives.
● Utilize infrastructure-as-code tools such as Terraform, Puppet, and Ansible for deployment automation.
● Expertise in GCP data warehousing technologies, including BigQuery, Cloud SQL, Dataflow, Data Catalog, Cloud Composer, Google Cloud Storage, IAM, Compute Engine, Cloud Data Fusion, and Dataproc (good to have).

Qualifications:
● Bachelor's degree in Software Engineering, Computer Science, Business, Mathematics, or a related field.
● 4+ years of data engineering experience.
● 2 years of data solution architecture and design experience.
● GCP Certified Data Engineer (preferred).

Interested candidates can send their resumes to riyanshi@etelligens.in
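As a rough illustration of the Terraform-for-GCP deployment automation this listing asks for, a minimal sketch using the official google provider; the project ID, bucket, and dataset names are placeholders:

```hcl
provider "google" {
  project = "my-demo-project" # placeholder project ID
  region  = "asia-south1"
}

# Landing bucket for raw extracts; bucket names are globally unique,
# so this name is purely illustrative.
resource "google_storage_bucket" "raw" {
  name                        = "my-demo-raw-landing"
  location                    = "ASIA-SOUTH1"
  uniform_bucket_level_access = true
}

# BigQuery dataset serving as the warehouse target for the pipelines.
resource "google_bigquery_dataset" "analytics" {
  dataset_id = "analytics"
  location   = "asia-south1"
}
```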

Posted 5 days ago

Apply

4.0 years

0 Lacs

India

Remote


About Zapier
We're humans who simply think computers should do more work. At Zapier, we're not just making software—we're building a platform to help millions of businesses globally scale with automation and AI. Our mission is to make automation work for everyone by delivering products that delight our customers. You'll collaborate with brilliant people, use the latest tools, and leverage the flexibility of remote work. Your work will directly fuel our customers' success, and as they grow, so will you.

Job Posted: March 28th, 2025
Location: India

Hi there! We are seeking a talented Cloud Engineer (L3) to join our growing team at Zapier. As we continue to scale our product and grow our team, we're looking for an experienced engineer to help drive automation, performance, and reliability in our cloud-based infrastructure.

We know applying for and taking on a new job at any company requires a leap of faith. We want you to feel comfortable and excited to apply at Zapier. To help share a bit more about life at Zapier, here are a few resources, in addition to the job description, that can give you an inside look at what life is like at Zapier. Hopefully, you'll take the leap of faith and apply.

Our Commitment to Applicants
Culture and Values at Zapier
Zapier Guide to Remote Work
Zapier Code of Conduct
Diversity and Inclusivity at Zapier

About You
Experience: You have at least 4 years of experience in cloud engineering, systems administration, or a related field. For this role, we are specifically targeting folks with 4+ years of professional experience.
Cloud Knowledge: You have experience with cloud platforms such as AWS, GCP, or Azure. You understand how to leverage infrastructure as code tools and have learned best practices for reliability and observability.
Coding Skills: You are proficient in at least one programming language such as Python, Go, or similar, and have experience with automation tools.
Problem-Solving: You enjoy solving complex systems challenges and can improve performance and reliability.
Communication: You are an effective communicator, capable of documenting processes and sharing knowledge with the team.
Values: You align with Zapier's values and thrive in a collaborative, remote work environment.
AI Fluency: You've used AI tooling for work or personal use, or you are willing to dive in and learn fast. You explore new tools, workflows, and ideas to make things more efficient, and are eager to deepen your understanding of AI and use it regularly.

Responsibilities
Infrastructure Management: Design and deploy AWS infrastructure using infrastructure as code tools like Terraform and Helm.
Kubernetes and Serverless: Contribute to the management and governance of our Kubernetes clusters (EKS) and serverless functions (Lambda).
Tool Evaluation: Evaluate new tools and recommend technologies to improve our infrastructure.
Collaboration: Partner with teams to solve infrastructure and design problems, ensuring scalable solutions.
Service Integration: Build services to integrate systems, process high-traffic workloads, and perform critical migrations.
Reliability Engineering: Apply site reliability engineering principles to improve application reliability and develop internal tools.
Feature Development: Build new features and services to support our teams and understand customer needs.
Incident Response: Collaborate with the team to solve issues, learn from failures, and build resilient systems.
On-Call: Participate in business-hours on-call support.

What We're Doing
Routing service: We've developed an in-house routing service to create a single source of truth and optimize our routing management.
Terraform rework: We're going to clean up and optimize our Terraform setup to streamline infrastructure management and enable faster and more reliable changes.

Regarding the salary range for this role, we are currently hiring for an L3 SRE position.
The salary range for an L3 SRE is between ₹4,900,000 and ₹6,000,000 INR.

Application Review
Please note that to ensure our hiring team can review every application, we may remove this posting once we reach a number of applicants that allows us to respond in a timely manner. We will repost the position once our review cycles have concluded. Thank you so much for your understanding!

How To Apply
At Zapier, we believe that diverse perspectives and experiences make us better, which is why we have a non-standard application process designed to promote inclusion and equity. We're looking for the best fit for each of our roles, regardless of the type of companies in your background, so we encourage you to apply even if your skills and experiences don't exactly match the job description. All we ask is that you answer a few in-depth questions in our application that would typically be asked at the start of an interview process. This helps speed things up by letting us get to know you and your skillset a bit better right out of the gate. Please be sure to answer each question; the resume and CV fields are optional.

Education is not a requirement for our roles; however, if you receive an offer, you will need to include your most recent educational experience as part of our background check process.

After you apply, you are going to hear back from us—even if we don't see an immediate fit with our team. In fact, throughout the process, we strive to never go more than seven days without letting you know the status of your application. We know we'll make mistakes from time to time, so if you ever have questions about where you stand or about the process, just ask your recruiter!

Zapier is an equal-opportunity employer and we're excited to work with talented and empathetic people of all identities. Zapier does not discriminate based on someone's identity in any aspect of hiring or employment, as required by law and in line with our commitment to Diversity, Inclusion, Belonging and Equity. Our code of conduct provides a beacon for the kind of company we strive to be, and we celebrate our differences because those differences are what allow us to make a product that serves a global user base.

Zapier will consider all qualified applicants, including those with criminal histories, consistent with applicable laws.

Zapier prioritizes the security of our customers' information and is dedicated to adhering to all applicable data privacy laws. You can review our privacy policy here.

Zapier is committed to inclusion. As part of this commitment, Zapier welcomes applications from individuals with disabilities and will work to provide reasonable accommodations. If reasonable accommodations are needed to participate in the job application or interview process, please contact jobs@zapier.com.

Application Deadline
The anticipated application window is 30 days from the date the job is posted, unless the number of applicants requires it to close sooner or later, or the position is filled.

Even though we're an all-remote company, we still need to be thoughtful about where we have Zapiens working. Check out this resource for a list of countries where we currently cannot have Zapiens permanently working.

Posted 5 days ago

Apply

3.0 - 7.0 years

0 Lacs

India

Remote


Note: As part of the application process, send a quick 1-minute video recording of your experience building applications from 0 to 1 to akshay@ploy.club. Applicants with a video recording will have a significant advantage. (Only 5% submit a video.)

About you:
We are looking for an exceptional full-stack engineer. You will be expected to lead and take significant responsibility for building features. You should be able to break through barriers with research and out-of-the-box thinking. Experience building applications from 0 to 1 for web and mobile is essential.

Experience: 3-7 years (preferred) [Mid to Senior]
Salary: 16-26 Lakhs, based on experience

About the Role:
We are seeking a dynamic and talented Full-Stack Developer with expertise in React.js, Node.js, Express.js, DevOps, MySQL or Postgres, Android, and iOS. The ideal candidate will have a strong background in both frontend and backend development, experience deploying and managing cloud-based systems, and a passion for delivering high-quality, scalable, and robust applications. Having coded in Windsurf / Cursor is an advantage.

Key Responsibilities:
Frontend Development: Design and implement responsive user interfaces using React.js, ensuring a seamless user experience.
Backend Development: Build and maintain scalable server-side applications and APIs using Node.js.
Mobile App Development: Develop and maintain Android and iOS applications.
Database Management: Design and optimise database schemas, queries, and integrations using Postgres or MySQL.
DevOps: Automate deployment pipelines and manage CI/CD processes. Monitor and optimise application performance and cloud infrastructure. Ensure system reliability, scalability, and security.
Code Quality: Write clean, maintainable, and well-documented code under version control, adhering to best practices and industry standards.

Required Skills:
Frontend: Proficiency in React.js. Experience with HTML5, Tailwind CSS, and responsive design principles.
Backend: Strong knowledge of Node.js and Express.js, with experience building RESTful APIs and microservices. Familiarity with frameworks like Next.js.
Mobile Development: Experience developing progressive web applications or hybrid applications, preferably with React Native or Flutter.
Database: Expertise in MySQL or Postgres and their ecosystems (e.g., schema design, indexing, replication).
DevOps: Hands-on experience with tools like Docker, Kubernetes, Jenkins, and Terraform. Familiarity with cloud platforms (AWS, Azure, or Google Cloud). Experience with monitoring and logging tools (Prometheus, ELK stack, etc.).
Version Control: Proficiency with Git and collaboration platforms like GitHub or GitLab.

Preferred Qualifications:
Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
Familiarity with Agile methodologies and tools like JIRA.
Knowledge of security best practices for web and mobile applications.

What We Offer:
Salary of 16 to 26 Lakhs, based on experience.
Opportunity to work on cutting-edge technologies and challenging projects.
Collaborative and inclusive work environment.

Job Type: Full-time
Experience: 3-7 years (preferred)
Work Location: Fully remote; Hyderabad preferred

Posted 5 days ago

Apply

6.0 years

0 Lacs

India

Remote


Who we are
We're a leading, global security authority that's disrupting our own category. Our encryption is trusted by the major ecommerce brands, the world's largest companies, the major cloud providers, entire country financial systems, entire internets of things, and even down to the little things like surgically embedded pacemakers. We help companies put trust - an abstract idea - to work. That's digital trust for the real world.

Job summary
As a DevOps Engineer, you will play a pivotal role in designing, implementing, and maintaining our infrastructure and deployment processes. You will collaborate closely with our development, operations, and security teams to ensure seamless integration of code releases, infrastructure automation, and continuous improvement of our DevOps practices. This role places a strong emphasis on infrastructure as code with Terraform, including module design, remote state management, policy enforcement, and CI/CD integration. You will manage authentication via Auth0, maintain secure network and identity configurations using AWS IAM and Security Groups, and oversee the lifecycle and upgrade management of AWS RDS and MSK clusters. Additional responsibilities include managing vulnerability remediation, containerized deployments via Docker, and orchestrating production workloads using AWS ECS and Fargate.

What you will do
Design, build, and maintain scalable, reliable, and secure infrastructure solutions on cloud platforms such as AWS, Azure, or GCP.
Implement and manage continuous integration and continuous deployment (CI/CD) pipelines for efficient and automated software delivery.
Develop and maintain infrastructure as code (IaC) — with a primary focus on Terraform — including building reusable, modular, and parameterized modules for scalable infrastructure.
Securely manage Terraform state using remote backends (e.g., S3 with DynamoDB locks) and establish best practices for drift detection and resolution.
Integrate Terraform into CI/CD pipelines with automated plan, apply, and policy-check gating.
Conduct testing and validation of Terraform code using tools such as Terratest, Checkov, or equivalent frameworks.
Design and manage network infrastructure, including VPCs, subnets, routing, NAT gateways, and load balancers.
Configure and manage AWS IAM roles, policies, and Security Groups to enforce least-privilege access control and secure application environments.
Administer and maintain Auth0 for user authentication and authorization, including rule scripting, tenant settings, and integration with identity providers.
Build and manage containerized applications using Docker, deployed through AWS ECS and Fargate for scalable and cost-effective orchestration.
Implement vulnerability management workflows, including image scanning, patching, dependency management, and CI-integrated security controls.
Manage RDS and MSK infrastructure, including lifecycle and version upgrades, high availability setup, and performance tuning.
Monitor system health, performance, and capacity using tools like Prometheus, ELK, or Splunk; proactively resolve bottlenecks and incidents.

What you will have
Bachelor's degree in Computer Science, Engineering, or related field, or equivalent work experience.
6+ years in DevOps or a similar role, with strong experience in infrastructure architecture and automation.
Advanced proficiency in Terraform, including module creation, backend management, workspaces, and integration with version control and CI/CD.
Experience with remote state management using S3 and DynamoDB, and implementing Terraform policy-as-code with OPA/Sentinel.
Familiarity with Terraform testing/validation tools such as Terratest, InSpec, or Checkov.
Strong background in cloud networking, VPC design, DNS, and ingress/egress control.
Proficient with AWS IAM, Security Groups, EC2, RDS, S3, Lambda, MSK, and ECS/Fargate.
Hands-on experience with Auth0 or equivalent identity management platforms.
Proficient in container technologies like Docker, with production deployments via ECS/Fargate.
Solid experience in vulnerability and compliance management across the infrastructure lifecycle.
Skilled in scripting (Python, Bash, PowerShell) for automation and tooling development.
Experience in monitoring/logging using Prometheus, ELK stack, Grafana, or Splunk.
Excellent troubleshooting skills in cloud-native and distributed systems.
Effective communicator and cross-functional collaborator in Agile/Scrum environments.

Benefits
Generous time off policies
Top Shelf Benefits
Education, wellness and lifestyle support
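The remote-state pattern this listing describes (an S3 backend with DynamoDB locking) is conventionally configured like the following sketch; the bucket, table, and key names are placeholders:

```hcl
terraform {
  backend "s3" {
    bucket         = "example-tf-state"              # placeholder bucket name
    key            = "platform/prod/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "example-tf-locks"              # table with a LockID partition key
    encrypt        = true                            # server-side encryption of state
  }
}
```

With this in place, `terraform plan` and `apply` acquire a lock in DynamoDB before touching state, which prevents concurrent runs from corrupting it.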

Posted 5 days ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site


Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
Create Solution Outlines and Macro Designs describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation.
Contribute to reusable component / asset / accelerator development to support capability development.
Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies.
Participate in customer PoCs to deliver the outcomes.
Participate in delivery reviews / product reviews and quality assurance, and act as a design authority.

Preferred Education: Non-Degree Program

Required Technical And Professional Expertise
Experience in designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
Experience in data engineering and architecting data platforms.
Experience in architecting and implementing data platforms on the Azure Cloud Platform. Experience on Azure cloud is mandatory (ADLS Gen1 / Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), along with Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow.
Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred Technical And Professional Experience
Experience in architecting complex data platforms on the Azure Cloud Platform and on-prem.
Experience and exposure to implementations of Data Fabric and Data Mesh concepts, and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
Exposure to data cataloguing and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.

Posted 5 days ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
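Several of the basic questions above (state files, remote state storage, sensitive data, and versioning of configurations) can be illustrated in a single minimal configuration; every bucket, table, and region name below is a placeholder:

```hcl
terraform {
  required_version = ">= 1.5"

  # Remote state in S3 with DynamoDB-based locking: one common answer to the
  # "remote state storage" and "state files" questions.
  backend "s3" {
    bucket         = "example-team-tf-state"  # placeholder bucket name
    key            = "demo/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "example-tf-locks"       # table keyed on LockID
    encrypt        = true
  }

  # Provider version constraints: part of the answer to "how would you
  # handle versioning of Terraform configurations?"
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

# Sensitive input: Terraform redacts this value from plan/apply output.
# Note it is still written to the state file, which is one reason the
# state backend itself must be access-controlled and encrypted.
variable "db_password" {
  type      = string
  sensitive = true
}
```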

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!

cta

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies