
17543 Terraform Jobs - Page 38

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

You have an opportunity to join Altimetrik as an ML-Ops Team Lead based in Chennai. With 6 to 9 years of experience, you should have extensive knowledge of Python, AWS, and customer-facing operations. Your role will involve monitoring, cost analysis, and troubleshooting production issues. In addition, you should have at least three years of experience as a team lead, including interviewing and project management. Preference will be given to candidates with Terraform and Data Science experience, and you should be available for meetings with the US team from 9-11 AM MT. This role is an opportunity to lead a dynamic team in a fast-paced environment at Altimetrik.
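The monitoring and cost-analysis side of a role like this is easy to picture with a small example. A minimal, hypothetical Python sketch that aggregates per-service cloud spend from usage records; the service names, record fields, and costs are made up, not Altimetrik data:

```python
# Hypothetical sketch: aggregate per-service cloud spend from usage
# records, the kind of cost-analysis task an ML-Ops lead might automate.
from collections import defaultdict

def cost_by_service(records):
    """Sum cost per service and return the totals plus the top spender."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["service"]] += rec["cost"]
    top = max(totals, key=totals.get) if totals else None
    return dict(totals), top

usage = [
    {"service": "ec2", "cost": 120.50},
    {"service": "s3", "cost": 14.20},
    {"service": "ec2", "cost": 80.00},
]
totals, top = cost_by_service(usage)
print(totals, top)  # {'ec2': 200.5, 's3': 14.2} ec2
```

A real version would pull records from a billing export rather than an in-memory list, but the aggregation step is the same.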

Posted 1 week ago

Apply

5.0 - 12.0 years

0 Lacs

Karnataka

On-site

The ideal candidate should have 5-12 years of experience in setting up CI/CD pipelines using GitHub Actions, managing AKS or other Kubernetes clusters, and working with Terraform, Bicep, or ARM templates. The candidate will be responsible for streamlining deployment processes and ensuring the reliability of the infrastructure. The position is based in either Hyderabad or Bangalore and follows a hybrid work mode. The interview process includes a virtual first round and a face-to-face second round; the notice period for this role is up to 45 days.

If you are interested in applying for this position, please send your updated resume to meeta.padaya@ltimindtree.com and include the following details in your email:
- Total Experience:
- Relevant Experience in Azure DevOps:
- GitHub experience:
- Experience with AKS Cluster:
- Experience with Terraform or ARM templates:
- Availability for the 2nd face-to-face round (1st round will be Virtual):
- Current Company:
- Current CTC:
- Expected CTC:
- Notice Period (If serving, please mention Last Working Day):
- Current / Preferred Location:
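The pipeline work described above ultimately comes down to running stages in dependency order (lint before build, plan before deploy). A framework-agnostic sketch of that ordering logic, with hypothetical stage names; real GitHub Actions pipelines declare this in workflow YAML, and this sketch assumes the dependency graph is acyclic:

```python
# Hypothetical sketch: order CI/CD stages by dependency via a
# depth-first topological sort. Stage names are illustrative only.
def order_stages(deps):
    """deps maps each stage to the list of stages it needs first."""
    ordered, seen = [], set()

    def visit(stage):
        if stage in seen:
            return
        seen.add(stage)
        for prereq in deps.get(stage, []):
            visit(prereq)          # prerequisites run before the stage
        ordered.append(stage)

    for stage in deps:
        visit(stage)
    return ordered

pipeline = {
    "deploy_aks": ["terraform_plan", "build"],
    "terraform_plan": ["lint"],
    "build": ["lint"],
    "lint": [],
}
print(order_stages(pipeline))  # ['lint', 'terraform_plan', 'build', 'deploy_aks']
```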

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Cloud DevOps Engineer at Amdocs, you will be responsible for the design, development, modification, debugging, and maintenance of software systems. Your key responsibilities will include production support activities such as monitoring, triaging, root cause analysis, and reporting production incidents. You will also investigate issues reported by clients, manage Reportdb servers, provide access management to new users on Reportdb, and work with the stability team to enhance Watchtower alerts. Additionally, you will work with the cron jobs and scripts used to dump and restore from ProdDb, handle non-production deployments in Azure DevOps as requested, and create Kibana dashboards.

Your technical skills should include experience in AWS DevOps, EKS, and EMR; strong proficiency in Docker and Docker Hub; expertise in Terraform and Ansible; and good exposure to Git and Bitbucket. Knowledge and experience in Kubernetes, Docker, and other cloud-related technologies would be advantageous, and cloud experience working with VMs and Azure storage, as well as sound data engineering experience, would be considered a plus.

In terms of behavioral skills, you should possess good communication abilities, strong problem-solving skills, and the ability to build relationships with clients, operational managers, and colleagues. You should also be able to adapt, prioritize, work under pressure, and meet deadlines. Your innovative approach, presentation skills, and willingness to work shifts or extended hours will be valuable assets in this role. By joining our team, you will be challenged to design and develop new software applications and have the opportunity to work in a growing organization with vast opportunities for personal growth.
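To make the triage responsibility concrete, here is a minimal, hypothetical Python sketch that buckets incoming alerts by severity so the most urgent ones surface first; the field names and severity levels are assumptions, not Amdocs conventions:

```python
# Illustrative sketch of alert triage for production support:
# bucket raw alerts by severity. Field names are assumptions.
SEVERITY_ORDER = ("critical", "major", "minor")

def triage(alerts):
    """Group alert ids into severity buckets, defaulting to 'minor'."""
    buckets = {sev: [] for sev in SEVERITY_ORDER}
    for alert in alerts:
        sev = alert.get("severity", "minor")
        buckets.setdefault(sev, []).append(alert["id"])
    return buckets

alerts = [
    {"id": "A1", "severity": "critical"},
    {"id": "A2", "severity": "minor"},
    {"id": "A3", "severity": "critical"},
]
print(triage(alerts))  # {'critical': ['A1', 'A3'], 'major': [], 'minor': ['A2']}
```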

Posted 1 week ago

Apply

12.0 - 18.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have a minimum of 12 to 18 years of experience. Your technical expertise should include architecting solutions for large global enterprises; knowledge of virtualization technologies like VMware, Hyper-V, and Nutanix; familiarity with the software development cycle (DevOps), cloud technologies, and microservices; and excellent troubleshooting skills on computer systems in both Windows and Linux environments.

Your hands-on experience should cover various operating systems, such as SUSE Linux, Red Hat Linux, CentOS, and Ubuntu for Linux, and Windows Server 2012/2016/2019 for Windows. Experience in HPC cluster management, virtual machines, and other related areas is also required.

Working knowledge is expected in virtualization technologies like VMware, Nutanix, KVM, and Xen, as well as cloud service models such as IaaS, PaaS, and SaaS. You should be familiar with orchestration; containerization; private and public cloud services (AWS and Azure); automation tools like Terraform, Docker, Kubernetes, and CI/CD; self-service platforms (e.g., Morpheus); container registries and JFrog; infrastructure setup for tools like JFrog, Git, and GitHub; writing Ansible and Python scripts for deployment automation; IT application architecture and design methodologies; and networking concepts such as AD, DFS, NFS, DNS, DHCP, TCP/IP, VPN, etc.

Experience in automating the build and release process using Jenkins, Chef, and Ansible as part of the CI/CD pipeline is a plus. Project management skills, including core planning, scheduling, purchasing, and logistics, would also be advantageous for this role.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for authoring and maintaining Terraform scripts for infrastructure provisioning and management. Your role will involve designing, implementing, and managing CI/CD pipelines using GitHub Actions, and using the AWS CLI to manage AWS resources and services will be a key part of your daily tasks. Collaboration with development and operations teams is essential to ensure seamless integration and deployment of applications. Troubleshooting and resolving issues related to deployment pipelines and infrastructure will also be within your scope, as will ensuring that best practices in security, scalability, and performance are consistently followed. Documenting processes, configurations, and procedures for future reference will be part of your routine responsibilities.

Proficiency in Terraform for infrastructure as code, experience with GitHub Actions for CI/CD pipeline setup and management, and strong knowledge of the AWS CLI for managing AWS services are crucial requirements for this role. Working knowledge of at least one deployment framework such as GitHub Actions, Octopus Deploy, or Bamboo is preferred. Familiarity with version control systems, particularly Git, and an understanding of cloud infrastructure and services, particularly AWS, are necessary for success in this position. Strong problem-solving skills, attention to detail, and excellent communication and collaboration skills will help you excel in this role.

Preferred qualifications include AWS certification (e.g., AWS Certified Solutions Architect, AWS Certified DevOps Engineer), experience with other CI/CD tools and frameworks, knowledge of containerization technologies such as Docker and Kubernetes, and familiarity with monitoring and logging tools. With 8-11 years of experience, your primary skill in DevOps engineering will be put to good use in this role. Additional skills in Git/GitHub, AWS EKS, AWS CloudFormation, AWS Apps, AWS Infra, AWS CodeDeploy, AWS CodeBuild, AWS CodePipeline, AWS Config, and AWS CodeCommit will also be beneficial.
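One routine Terraform task a role like this implies is summarising what a plan will change before approving it. Terraform's `terraform show -json` output carries a `resource_changes` list whose entries hold `change.actions`; the parsing sketch below uses that shape, but the sample payload itself is illustrative, not real plan output:

```python
# Hedged sketch: summarise a Terraform JSON plan by action type.
# The `resource_changes` / `change.actions` shape matches Terraform's
# JSON plan representation; the sample data is made up.
def summarize_plan(plan):
    counts = {"create": 0, "update": 0, "delete": 0}
    for rc in plan.get("resource_changes", []):
        for action in rc["change"]["actions"]:
            if action in counts:
                counts[action] += 1
    return counts

sample = {"resource_changes": [
    {"address": "aws_s3_bucket.logs", "change": {"actions": ["create"]}},
    # a replacement shows up as delete + create
    {"address": "aws_instance.web", "change": {"actions": ["delete", "create"]}},
    {"address": "aws_iam_role.ci", "change": {"actions": ["update"]}},
]}
print(summarize_plan(sample))  # {'create': 2, 'update': 1, 'delete': 1}
```

A CI gate could fail the pipeline when `delete` is non-zero, forcing a human review of destructive plans.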

Posted 1 week ago

Apply

8.0 - 14.0 years

20 - 25 Lacs

Noida, Uttar Pradesh, India

On-site

Skills: Java, Spring Boot, Microservices, AWS, Kafka, PostgreSQL, MySQL

Profile: Principal Java Engineer
Location: Noida
Department: Engineering / Technology
Type: Full-Time (5.5 Days Working)
Experience: 8-14 Years

About The Role
We are seeking a Principal Java Engineer with deep technical expertise and leadership ability to architect, design, and drive the development of scalable, high-performance backend systems. You will play a key role in defining system architecture, mentoring engineering teams, and ensuring the delivery of robust and secure software platforms. In this role, you will collaborate closely with engineering leadership, product managers, and DevOps to implement best practices, modernize existing systems, and shape our technical roadmap.

Key Responsibilities
- Define and lead the architectural vision for Java-based backend platforms.
- Design and develop robust, secure, and scalable microservices and APIs.
- Provide technical leadership to backend engineering teams and mentor senior developers.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Lead modernization efforts such as monolith-to-microservices migration and cloud-native transformations.
- Conduct system performance tuning, code reviews, and design audits to maintain high code quality.
- Stay updated with emerging technologies and evaluate their potential adoption.
- Contribute to the technology roadmap and strategic planning.

Required Skills And Qualifications
- 8+ years of hands-on experience in Java-based software development.
- Expertise in Spring Boot, Spring Cloud, and RESTful API development.
- Strong knowledge of microservices architecture, event-driven systems, and system design principles.
- Experience with relational and NoSQL databases such as PostgreSQL, MySQL, MongoDB, and Redis.
- Hands-on experience with cloud platforms (AWS, Azure, or GCP) and containerization tools (Docker, Kubernetes).
- Proficiency in CI/CD, Git, Jenkins, and test automation practices.
- Solid understanding of performance tuning, system security, and high-availability architectures.
- Proven track record of leading engineering teams and managing large-scale backend projects.
- Strong communication, analytical, and leadership skills.

Preferred Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field (B.E., B.Tech, BCA, B.Sc-CS); a Master's degree (M.Tech, MCA, M.Sc) is preferred but not mandatory.
- Experience with Kafka, RabbitMQ, or similar messaging systems.
- Familiarity with DevOps tools and Infrastructure-as-Code (Terraform, Ansible).
- Background in fintech, e-commerce, or other high-transaction domains.
- Contributions to open-source projects, technical blogs, or tech talks are a strong plus.

Why Join Us?
- Work on cutting-edge technology in a fast-paced, product-driven environment.
- Lead innovation and influence system architecture at scale.
- Collaborate with experienced professionals in a culture of technical excellence.
- 5.5-day work week that promotes focus and delivery.
- Competitive compensation and growth opportunities.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Navi Mumbai, Maharashtra

On-site

The Lead Cloud Database Administrator (DBA) role involves overseeing the architecture, implementation, and maintenance of cloud-based database systems within a shared-service industry. Leading a team of DBAs, you will be responsible for ensuring the availability, performance, and security of databases while promoting the adoption of best practices in database management.

Your responsibilities will include designing, implementing, and managing cloud-based database solutions while ensuring alignment with business objectives and industry standards. You will lead the transition of on-premises databases to cloud platforms and provide leadership and mentorship to a team of database administrators. Task assignment, performance goal setting, and performance reviews will also fall under your purview as you foster a collaborative and innovative team environment.

Monitoring and optimizing database performance to ensure high availability and reliability will be a key focus area. You will implement strategies for database tuning and query optimization, conduct regular performance reviews, and engage in capacity planning activities. Furthermore, you will be responsible for ensuring database security and compliance with industry regulations, implementing security policies and procedures, and conducting security audits and vulnerability assessments.

In terms of backup and recovery, you will develop and maintain database backup and recovery plans, oversee regular backups, and test recovery procedures. Disaster recovery planning and execution will also be managed by you. Automation of routine database administration tasks and scripting for maintenance, monitoring, and management will be part of your responsibilities. Additionally, you will implement Infrastructure as Code (IaC) practices using Terraform for database deployments.

Collaboration and communication are crucial aspects of this role, as you will work closely with the Cloud Team, Security Team, and other stakeholders. You will participate in project planning, provide technical expertise, and communicate database-related issues and solutions to non-technical stakeholders. Maintaining comprehensive documentation for database systems and processes, generating regular reports on database performance, usage, and incidents, and keeping documentation up to date and accessible to relevant stakeholders are also key components of this role.

Qualifications:
Education: BE / B.Tech (Computer Science/IT)
Experience:
- 6+ years of experience administering multiple databases (Oracle, Postgres, MySQL, MS SQL, DynamoDB, Redis)
- 3+ years of experience with cloud databases (e.g., AWS RDS, Azure SQL Database, Google Cloud SQL)
- Proven experience in a leadership role
Technical Skills:
- Proficiency in SQL and NoSQL databases
- Experience with database performance tuning and optimization
- Strong understanding of database security and compliance requirements
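The backup-retention duty described above often reduces to a simple rule: keep the newest N snapshots and mark the rest for deletion. A minimal sketch of that rule with made-up snapshot names; a real job would call the cloud provider's snapshot API instead of operating on tuples:

```python
# Hypothetical sketch of a backup retention rule: keep the newest
# `keep` snapshots, return the rest as deletion candidates.
def prune(snapshots, keep=7):
    """snapshots: iterable of (name, iso_date). Returns (kept, to_delete)."""
    ordered = sorted(snapshots, key=lambda s: s[1], reverse=True)
    return ordered[:keep], ordered[keep:]

# Nine daily snapshots, 2024-01-01 through 2024-01-09 (made-up names).
snaps = [("db-0%d" % i, "2024-01-0%d" % i) for i in range(1, 10)]
kept, doomed = prune(snaps, keep=7)
print([name for name, _ in doomed])  # the two oldest: ['db-02', 'db-01']
```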

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

About The Company
TSC redefines connectivity with innovation and intelligence. Driving the next level of intelligence powered by Cloud, Mobility, Internet of Things, Collaboration, Security, Media services and Network services, we at Tata Communications are envisaging a New World of Communications.

We are seeking a highly skilled L3 Cloud Engineer specializing in AWS Cloud Service Provider (CSP) environments. This role requires deep expertise in AWS cloud infrastructure, automation, security, performance optimization, and troubleshooting. You will be responsible for designing, implementing, and maintaining scalable and highly available AWS solutions, as well as serving as the final escalation point for complex cloud-related incidents. As a senior cloud engineer, you will also work on cloud migration projects, automation strategies, and infrastructure-as-code (IaC) deployments while collaborating with cross-functional teams to ensure best practices and security compliance.

Major Duties & Responsibilities

AWS Cloud Infrastructure Design & Operations:
- Architect, deploy, and manage highly available AWS cloud environments.
- Optimize and maintain AWS services such as EC2, S3, RDS, Lambda, Route 53, VPC, ELB, and CloudFront.
- Implement AWS Well-Architected Framework best practices for performance, cost efficiency, and security.
- Manage multi-account AWS environments using AWS Organizations and Control Tower.
- Optimize networking and connectivity between AWS services and on-premise/hybrid infrastructure.

Automation & Infrastructure as Code (IaC):
- Automate cloud infrastructure deployment using Terraform, AWS CloudFormation, and Ansible.
- Utilize AWS Systems Manager (SSM), AWS Lambda, and Step Functions for automation and orchestration.
- Develop CI/CD pipelines using AWS CodePipeline, Jenkins, GitHub Actions, or GitLab CI/CD.
- Automate patch management, compliance enforcement, and resource provisioning in AWS.

Security, Compliance & Governance:
- Implement AWS security best practices, including IAM roles, least-privilege policies, security groups, and KMS encryption.
- Monitor security threats using AWS Security Hub, GuardDuty, and CloudTrail.
- Ensure compliance with ISO 27001, NIST, CIS, SOC 2, HIPAA, and GDPR standards.
- Implement AWS Backup, disaster recovery (DR), and business continuity strategies.

Monitoring, Troubleshooting & Performance Optimization:
- Act as the final escalation point for AWS cloud-related incidents.
- Monitor cloud infrastructure using AWS CloudWatch, CloudTrail, AWS Config, and third-party tools (Datadog, Splunk, Prometheus, Grafana, etc.).
- Troubleshoot network, compute, storage, and security issues in AWS environments.
- Perform root cause analysis (RCA) and implement permanent fixes for AWS-related outages.

Cloud Migration & Optimization:
- Lead cloud migration projects from on-premises or other cloud providers to AWS.
- Optimize AWS resource allocation and cost management using AWS Cost Explorer and Savings Plans.
- Implement hybrid cloud solutions with AWS Direct Connect, VPN, and AWS Outposts.

Collaboration & Technical Leadership:
- Work closely with DevOps, security, networking, and application teams to enhance AWS cloud solutions.
- Mentor and provide technical guidance to L1 and L2 engineers.
- Create and maintain technical documentation, SOPs, and knowledge bases.
- Participate in design and architecture reviews for AWS cloud environments.
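One of the least-privilege checks mentioned above can be illustrated in a few lines: flag IAM policy statements that allow wildcard actions. The policy below follows the standard IAM JSON shape (`Version` / `Statement` / `Effect` / `Action`) but is a made-up example; the sketch only inspects actions, not resource wildcards:

```python
# Hedged sketch of a least-privilege audit: return the indexes of
# Allow statements whose actions contain a wildcard. The policy is
# a fabricated example in the standard IAM JSON document shape.
def wildcard_statements(policy):
    flagged = []
    for i, stmt in enumerate(policy.get("Statement", [])):
        actions = stmt.get("Action", [])
        if isinstance(actions, str):   # IAM allows a single string here
            actions = [actions]
        if stmt.get("Effect") == "Allow" and any("*" in a for a in actions):
            flagged.append(i)
    return flagged

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": "s3:GetObject", "Resource": "*"},
        {"Effect": "Allow", "Action": ["s3:*"], "Resource": "*"},
        {"Effect": "Deny", "Action": "*", "Resource": "*"},  # Deny is fine
    ],
}
print(wildcard_statements(policy))  # [1]
```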

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About The Company
TSC redefines connectivity with innovation and intelligence. Driving the next level of intelligence powered by Cloud, Mobility, Internet of Things, Collaboration, Security, Media services and Network services, we at Tata Communications are envisaging a New World of Communications.

Linux Administrator L3 Engineer (IT Operations & Infrastructure)
Employment Type: On-roll

Role Purpose
We are seeking an experienced Linux Administrator Engineer (L3) to lead and manage Linux-based infrastructure across on-premises and cloud environments. This role requires expertise in advanced Linux system administration, performance tuning, security hardening, automation, high availability (HA) configurations, and troubleshooting complex issues. The ideal candidate should have deep knowledge of RHEL, CentOS, Ubuntu, SUSE, and Oracle Linux, along with cloud Linux workloads (AWS, GCP, Azure, OCI), containerization (Docker, Kubernetes, OpenShift), and automation (Ansible, Terraform, Python, Bash).

Major Duties & Responsibilities

Linux Infrastructure Design & Management:
- Architect, deploy, and maintain enterprise-grade Linux environments (RHEL, CentOS, Ubuntu, SUSE, Oracle Linux).
- Design and implement scalable, highly available, and secure Linux-based systems.
- Perform advanced troubleshooting, root cause analysis (RCA), and performance tuning.
- Ensure system reliability, patching, and security updates for production servers.

Cloud & Virtualization Administration:
- Optimize cloud-based Linux instances, auto-scaling, and cost management strategies.
- Work with VMware, KVM, Hyper-V, and OpenStack for on-prem virtualization.

Automation & Configuration Management:
- Automate Linux system administration tasks using Ansible, Terraform, Bash, Python, and PowerShell.
- Implement Infrastructure as Code (IaC) to automate provisioning and configuration.
- Develop cron jobs, systemd services, and log-rotation scripts.

Security & Compliance:
- Implement Linux system hardening (CIS benchmarks, SELinux, AppArmor, PAM, SSH security).
- Configure firewall rules (iptables, nftables, firewalld), VPN, and access control policies.
- Ensure compliance with ISO 27001, PCI-DSS, HIPAA, and NIST security standards.
- Conduct vulnerability scanning, penetration testing, and security audits.

Networking & High Availability (HA) Solutions:
- Configure and manage DNS, DHCP, NFS, iSCSI, SAN, CIFS, VLANs, and network bonding.
- Deploy Linux clusters, failover setups, and high-availability solutions (Pacemaker, Corosync, DRBD, Ceph, GlusterFS).
- Work with load-balancing solutions (HAProxy, Nginx, F5, cloud load balancers).

Monitoring & Performance Optimization:
- Set up real-time monitoring tools (Prometheus, Grafana, Nagios, Zabbix, ELK, Site24x7).
- Optimize CPU, memory, disk I/O, and network performance for Linux workloads.
- Analyze and resolve kernel panics, memory leaks, and slow system responses.

Backup & Disaster Recovery:
- Design and implement Linux backup and disaster recovery strategies (Commvault, Veeam, rsync, AWS Backup, GCP Backup & DR, OCI Vaults).
- Perform snapshot-based recovery, failover testing, and disaster recovery planning.

Collaboration & Documentation:
- Mentor L1 and L2 engineers and provide escalation support for critical incidents.
- Maintain technical documentation, SOPs, and knowledge-base articles.
- Assist in capacity planning, forecasting, and IT infrastructure roadmaps.

Required Knowledge, Skills And Abilities
- Expert-level knowledge of Linux OS administration, troubleshooting, and performance tuning.
- Strong hands-on expertise in server patching, automation, and security best practices.
- Deep understanding of cloud platforms (AWS, GCP, Azure, OCI) and virtualization (VMware, KVM, Hyper-V, OpenStack).
- Advanced networking skills in firewalls, VLANs, VPN, DNS, and routing.
- Proficiency in scripting (Bash, Python, Ansible, Terraform, PowerShell).
- Experience with high-availability architectures and clustering solutions.
- Strong problem-solving, analytical, and troubleshooting skills for mission-critical environments.

Preferred Additional Skills And Abilities
- Experience with Linux-based Kubernetes clusters (EKS, AKS, GKE, OpenShift, Rancher).
- Understanding of CI/CD pipelines and DevOps tools (Jenkins, Git, GitLab, ArgoCD, Helm).
- Knowledge of big data, logging, and analytics tools (Splunk, ELK Stack, Kafka, Hadoop).
- Familiarity with database management on Linux (MySQL, PostgreSQL, MariaDB, MongoDB, Redis).

Qualifications And Experience
- Bachelor's in Communications, Computer Science, Software Engineering, or a related technical degree.
- 7+ years of experience in Linux administration and enterprise infrastructure.
- Proven track record in designing, implementing, and optimizing Linux environments.
- Experience with multi-cloud Linux workloads, scripting, security, and high availability.

Certifications (Preferred But Not Mandatory)
- Red Hat Certified Engineer (RHCE) or RHCSA
- LPIC-3 (Linux Professional Institute Certification Level 3)
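The log-rotation scripting mentioned above follows a familiar shift pattern (app.log becomes app.log.1, app.log.1 becomes app.log.2, and so on). A sketch of that pattern as a pure function over file names, so the rename order is easy to verify; a real script would rename files on disk and delete the oldest copy:

```python
# Minimal sketch of numbered log rotation: compute the renames needed
# to shift app.log -> app.log.1 -> app.log.2, oldest first so nothing
# is overwritten. File names are hypothetical.
def rotate(base, existing, max_keep=3):
    """Return (src, dst) renames for one rotation pass."""
    renames = []
    for n in range(max_keep - 1, 0, -1):      # shift oldest numbers first
        src = "%s.%d" % (base, n)
        if src in existing:
            renames.append((src, "%s.%d" % (base, n + 1)))
    if base in existing:
        renames.append((base, base + ".1"))   # current log becomes .1
    return renames

print(rotate("app.log", {"app.log", "app.log.1", "app.log.2"}))
# [('app.log.2', 'app.log.3'), ('app.log.1', 'app.log.2'), ('app.log', 'app.log.1')]
```

In practice a tool like `logrotate` handles this, but the ordering logic (shift the highest number first) is the same.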

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Dehradun, Uttarakhand

On-site

As a Cloud Backend Developer Intern, you will join a dynamic and fast-paced technology team focused on building cutting-edge cloud-based solutions. You will work alongside experienced engineers to design, develop, and deploy backend services on AWS. This internship is an exciting chance for a motivated individual to gain hands-on experience in cloud computing and backend development.

You will collaborate with the development team to design, implement, and maintain backend services for web and mobile applications, and assist in developing and deploying cloud infrastructure using AWS services like Lambda, EC2, S3, API Gateway, and RDS. Your responsibilities will include writing, testing, and maintaining scalable, reusable, and efficient code, as well as working on APIs and microservices using technologies such as Node.js, Python, or Java. You will also help optimize the performance, scalability, and security of cloud-native applications and contribute to DevOps pipelines, including CI/CD, monitoring, and logging systems. Furthermore, you will help troubleshoot and resolve issues in development and production environments, learn and implement cloud best practices related to security, cost optimization, and performance tuning, and participate in team meetings, code reviews, and technical discussions.

To qualify for this internship, you should be currently pursuing or have recently completed a degree in Computer Science, Information Technology, Software Engineering, or a related field. You should have a basic understanding of core AWS services such as EC2, S3, Lambda, API Gateway, and RDS, along with knowledge of AWS IAM roles and security best practices. Proficiency in one or more backend programming languages such as Python, Node.js, Java, or Go is required, as is a basic understanding of RESTful APIs and web services. Familiarity with relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., DynamoDB), experience with Git and Git-based workflows, and basic knowledge of CI/CD pipelines and containerization technologies like Docker are also necessary. Strong analytical and problem-solving skills, along with the ability to work collaboratively in a team environment and communicate technical ideas effectively, are key attributes for this role.

Nice-to-have qualifications include hands-on experience with serverless architecture (e.g., AWS Lambda, API Gateway), knowledge of monitoring tools such as AWS CloudWatch and logging frameworks, familiarity with infrastructure-as-code tools like AWS CloudFormation or Terraform, awareness of cloud security practices, an understanding of microservices architecture, and basic knowledge of frontend technologies (e.g., React, Angular) for full-stack experience.

Benefits of this internship include hands-on experience with AWS cloud technologies and backend development, mentorship from experienced cloud engineers and developers, exposure to cutting-edge cloud infrastructure and best practices, the potential for a full-time role based on performance, and flexible work hours in a fast-paced tech environment.
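For the RESTful API work described here, offset-based pagination is a common first exercise. A minimal sketch with hypothetical field names; a real endpoint would return this dictionary as JSON and read the offset from a query parameter:

```python
# Illustrative sketch of offset pagination for a REST endpoint:
# slice the result set and hand back a next_offset token, or None
# when the data is exhausted. Field names are assumptions.
def paginate(items, offset=0, limit=2):
    page = items[offset:offset + limit]
    next_offset = offset + limit if offset + limit < len(items) else None
    return {"items": page, "next_offset": next_offset}

users = ["ana", "bo", "chen", "dev", "ena"]
first = paginate(users)                       # {'items': ['ana', 'bo'], 'next_offset': 2}
second = paginate(users, first["next_offset"])
print(second)                                  # {'items': ['chen', 'dev'], 'next_offset': 4}
```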

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for applying DevOps best practices to optimize the software development process. This includes system administration; the design, construction, and operation of container platforms such as Kubernetes; and expertise in container technologies like Docker and their management systems. Your role will also involve working with cloud-based monitoring, alerting, and observability solutions, and you should have in-depth knowledge of developer workflows with Git. Additionally, you will be expected to document processes, procedures, and best practices, and demonstrate strong troubleshooting and problem-solving skills. Your proficiency in network fundamentals, firewalls, and ingress/egress patterns, as well as experience in security configuration management and DevSecOps, will be crucial for this position. You should have hands-on experience with Linux; CI/CD tools (pipelines, GitHub, GitHub Actions/Jenkins); configuration management and Infrastructure as Code tools like CloudFormation and Terraform; and cloud technologies such as VMware, AWS, and Azure. Your responsibilities will also include build automation, deployment configuration, and enabling product automation scripts to run in CI. You will be required to design, develop, integrate, and deploy CI/CD pipelines, collaborate closely with developers, project managers, and other teams to analyze requirements, and resolve software issues. Moreover, your ability to lead the development of infrastructure using open-source technologies like Elasticsearch and Grafana, along with in-house tools built in React and Python, will be highly valued.
Minimum Qualifications:
- Graduate/Master's degree in Computer Science, Engineering, or a related discipline
- 5 to 10 years of overall DevOps/related experience
- Good written and verbal communication skills
- Ability to manage and prioritize multiple tasks while working both independently and within a team
- Knowledge of software test practices, software engineering, and cloud technologies
- Knowledge of / working experience with static code analysis, license-check tools, and other development-process improvement tools

Desired Qualifications:
- Minimum 4 years of working experience in AWS, Kubernetes, Helm, and Docker-related technologies
- Providing visibility into cloud spending and usage across the organization
- Generating and interpreting reports on cloud expenditure, resource utilization, and usage optimization
- Network fundamentals: AWS VPC, AWS VPN, firewalls, and ingress/egress patterns
- Knowledge of / experience with embedded Linux and RTOS (e.g. ThreadX, FreeRTOS) development on ARM-based projects
- Domain knowledge of cellular wireless and WiFi is an asset
- Knowledge of distributed systems, networking, AMQP/MQTT, Linux, cloud security, and Python

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Pune, Maharashtra

On-site

We are looking for a Python Developer with expertise in Django and StackStorm to join our team in Pune, India on a long-term contract. The ideal candidate should have a minimum of 6 years of experience in Python development with a focus on web applications. Your primary skills should include proficiency in Python, Oracle DB/MySQL/Postgres, Java, StackStorm, Django, Ansible, Terraform, and Docker & Kubernetes. Additionally, you should have a solid understanding of database systems such as PostgreSQL, MySQL, and MongoDB. Experience with DevOps practices and tools, as well as familiarity with containerization technologies like Kubernetes and Docker, is highly desired.

As a qualified candidate, you should hold a Bachelor's degree in Computer Science, Engineering, or a related field, and demonstrate excellent communication skills and the ability to work collaboratively in a team environment. If this opportunity excites you and you meet the qualifications above, we would love to hear from you. Please send your resume to prachi@iitjobs.com, and don't forget to ask about our referral award of INR 50,000. For global opportunities, visit www.iitjobs.com.
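StackStorm actions are, at heart, Python callables with declared parameters. The sketch below mimics that idea in plain Python with simple parameter validation; it is NOT the StackStorm API, and the action name, parameters, and return string are all hypothetical:

```python
# Framework-agnostic sketch of an automation "action" with required
# parameter validation, in the spirit of StackStorm-style work.
# This is plain Python, not the StackStorm SDK.
class RestartServiceAction:
    required = ("host", "service")

    def run(self, **params):
        missing = [p for p in self.required if p not in params]
        if missing:
            raise ValueError("missing parameters: %s" % ", ".join(missing))
        # A real action would SSH in or call an API; we just echo intent.
        return "restart %(service)s on %(host)s" % params

action = RestartServiceAction()
print(action.run(host="web-01", service="nginx"))  # restart nginx on web-01
```

Validating parameters up front, before any side effects, is the useful habit here: a half-run automation action is harder to clean up than one that refuses to start.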

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

We are seeking .NET Core developers in Hyderabad who possess expertise in SDLC, Unit testing, Design patterns, Multithreading, Microservices, CICD Pipelines, and Azure Cloud. The position is available for both Developers and Leads. As a .NET Core Architect, you will be responsible for migrating critical business functionality to a more robust design to ensure seamless integration with the platform. You will play a key role in a new engineering team, making technical and architectural decisions, and collaborating with remote teams. Your role will involve collaborating with technical product owners, architects, and software engineers to rapidly build, test, and deploy code for a redesigned set of foundational core models and services. Additionally, you will explore new technologies and industry trends to enhance our products. The ideal candidate should have a BS or MS in Computer Science or a related field, or equivalent practical experience. Proficiency in .NET Core, C# or similar object-oriented languages and frameworks is essential. Familiarity with microservice-based APIs (REST, GraphQL), data structures and management (SQL, Kafka, JSON, NoSQL, S3/Azure blob storage), modern CI/CD pipelines and tooling (Jenkins, CircleCI, Github Actions, terraform) are advantageous. Experience with HTML/CSS/JS, modern SPA frameworks (React, Vue.js), SCRUM, AGILE development process, and cloud-based environments (Azure, AWS, Google Cloud) is beneficial. Architectural experience and hands-on programming skills are also required. We offer a competitive salary and benefits package, with a focus on talent development, quarterly promotion cycles, company-sponsored higher education, and certifications. You will have the opportunity to work with cutting-edge technologies and participate in employee engagement initiatives. Our inclusive work environment values diversity and provides hybrid work options and flexible hours. 
We are committed to creating a supportive workplace for all employees, including those with disabilities. If you have specific requirements, please inform us during the application process or at any time during your employment. Persistent Ltd. is an Equal Opportunity Employer that promotes diversity and inclusion in the workplace. We encourage applications from qualified individuals of all backgrounds and abilities. Join us to accelerate your growth, make a positive impact using the latest technologies, enjoy collaborative innovation, and unlock global opportunities to work and learn with the best in the industry. Let's unleash your full potential at Persistent.

Posted 1 week ago

Apply

3.0 - 31.0 years

3 - 6 Lacs

Gaya

On-site

Now, we’re looking for a DevOps Engineer to help scale our infrastructure and optimize performance for millions of users. 🚀

What You’ll Do (Key Responsibilities)
🔹 CI/CD & Automation: Implement, manage, and optimize CI/CD pipelines using AWS CodePipeline, GitHub Actions, or Jenkins. Automate deployment processes to improve efficiency and reduce downtime.
🔹 Infrastructure Management: Use Terraform, Ansible, Chef, Puppet, or Pulumi to manage infrastructure as code. Deploy and maintain Dockerized applications on Kubernetes clusters for scalability.
🔹 Cloud & Security: Work extensively with AWS (preferred) or other cloud platforms to build and maintain cloud infrastructure. Optimize cloud costs and ensure security best practices are in place.
🔹 Monitoring & Troubleshooting: Set up and manage monitoring tools like CloudWatch, Prometheus, Datadog, New Relic, or Grafana to track system performance and uptime. Proactively identify and resolve infrastructure-related issues.
🔹 Scripting & Automation: Use Python or Bash scripting to automate repetitive DevOps tasks. Build internal tools for system health monitoring, logging, and debugging.

What We’re Looking For (Must-Have Skills)
✅ Version Control: Proficiency in Git (GitLab / GitHub / Bitbucket)
✅ CI/CD Tools: Hands-on experience with AWS CodePipeline, GitHub Actions, or Jenkins
✅ Infrastructure as Code: Strong knowledge of Terraform, Ansible, Chef, or Pulumi
✅ Containerization & Orchestration: Experience with Docker & Kubernetes
✅ Cloud Expertise: Hands-on experience with AWS (preferred) or other cloud providers
✅ Monitoring & Alerting: Familiarity with CloudWatch, Prometheus, Datadog, or Grafana
✅ Scripting Knowledge: Python or Bash for automation

Bonus Skills (Good to Have, Not Mandatory)
➕ AWS Certifications: Solutions Architect, DevOps Engineer, Security, Networking
➕ Experience with Microsoft/Linux/F5 technologies
➕ Hands-on knowledge of database servers
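As a rough sketch of the "internal tools for system health monitoring" responsibility above, a minimal Python log summarizer might look like this (the log format and the 5% alert threshold are illustrative assumptions, not part of the posting):

```python
import re
from collections import Counter

def summarize_log(lines, error_threshold=0.05):
    """Count log levels in a batch of lines and flag when the error rate
    crosses a threshold -- the kind of check an internal health tool might run."""
    levels = Counter()
    for line in lines:
        match = re.search(r"\b(DEBUG|INFO|WARN|ERROR)\b", line)
        if match:
            levels[match.group(1)] += 1
    total = sum(levels.values()) or 1  # avoid division by zero on empty input
    error_rate = levels["ERROR"] / total
    return {"levels": dict(levels), "error_rate": error_rate,
            "alert": error_rate > error_threshold}

sample = [
    "2024-01-01T10:00:00 INFO service started",
    "2024-01-01T10:00:05 ERROR upstream timeout",
    "2024-01-01T10:00:06 INFO request served",
    "2024-01-01T10:00:07 WARN slow response",
]
report = summarize_log(sample)  # error_rate 0.25, so alert is True
```

In a real deployment, a script like this would feed an alerting channel or a metrics backend (Prometheus, CloudWatch) rather than returning a dict.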

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

karnataka

On-site

About our Client: Our client is a global fintech disruptor revolutionizing online trading with state-of-the-art trading technology, multi-asset platforms, and a mission to democratize financial markets worldwide. With a presence across continents, including the Middle East, North America, Europe, Africa, and the Asia Pacific regions, and a team of 400+, they are redefining what is possible in the world of digital finance.

The Opportunity: Join our trailblazing team in Bangalore, India, as we expand our technological frontiers. We are looking for a hands-on Head of Engineering Platforms to lead our next phase of growth and innovation. You will have the opportunity to lead a cutting-edge team at the intersection of finance and technology, driving innovation in cloud infrastructure, on-premise solutions, and service management. This is your chance to make a lasting impact on a rapidly growing industry and empower millions with accessible financial tools.

Your Mission:
- Architect and optimize our Azure-based cloud environments and on-premise infrastructure.
- Champion operational excellence across our global trading platforms.
- Lead and mentor a diverse, international team of top-tier engineers.
- Drive continuous improvement in our tech stack and processes.
- Ensure uncompromising security and compliance in a high-stakes financial environment.

We're Looking For: We are seeking a visionary leader with 10+ years of experience in IT operations and infrastructure management. The ideal candidate will possess deep expertise in Azure services, hybrid cloud solutions, and on-premise systems. A proven track record in implementing ITIL-based service management processes, a strong background in financial or trading industry tech, and a passion for nurturing talent and building high-performing global teams are essential.

Team Leadership: Lead a global team of Cloud Engineers, SREs, and NOC teams. Foster continuous learning and development to keep the team aligned with cutting-edge technologies.

Required Skills:
- Mastery of cloud technologies such as Azure DevOps, IaaS, and PaaS.
- Proficiency with infrastructure automation tools like Terraform, Ansible, and monitoring tools.
- Expert-level knowledge of security protocols and compliance requirements.
- Strong vendor management and budgeting skills.
- Bachelor's or Master's degree in Computer Science or a related field.

Preferred Qualifications:
- ITIL certification.
- Microsoft Azure certifications.
- Experience scaling operations in emerging markets.

What We Offer:
- The opportunity to lead a tech revolution in the heart of India's Silicon Valley.
- Competitive salary and benefits package.
- Global exposure and career advancement opportunities.
- Cutting-edge tech stack and resources.
- Dynamic, innovative work culture.

Are you ready to unleash your potential and lead a world-class engineering team? Join us in our mission to democratize financial markets and redefine the future of digital finance. Apply now and let's reshape the global fintech landscape together. Please email sanish@careerxperts.com to get connected.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Are you ready to power the world's connections? If you don't think you meet all of the criteria below but are still interested in the job, please apply. Nobody checks every box - we're looking for candidates who are particularly strong in a few areas and have some interest and capabilities in others. Design, develop, and maintain microservices that power Kong Konnect, the Service Connectivity Platform. Working closely with Product Management and teams across Engineering, you will develop software that has a direct impact on our customers' business and Kong's success. This opportunity is hybrid (Bangalore based) with 3 days in the office and 2 days work from home. Implement and maintain services that power high-bandwidth logging and tracing for our cloud platform, such as indexing and searching logs and traces of API requests powered by Kong Gateway and Kuma Service Mesh. Implement efficient solutions at scale using distributed and multi-tenant cloud storage and streaming systems. Implement cloud systems that are resilient to regional and zonal outages. Participate in an on-call rotation to support services in production, ensuring high performance and reliability. Write and maintain automated tests to ensure code integrity and prevent regressions. Mentor other team members. Undertake additional tasks as assigned by the manager. 5+ years working in a team to develop, deliver, and maintain complex software solutions. Experience in log ingestion, indexing, and search at scale. Excellent verbal and written communication skills. Proficiency with OpenSearch/Elasticsearch and other full-text search engines. Experience with streaming platforms such as Kafka, AWS Kinesis, etc. Operational experience in running large-scale, high-performance internet services, including on-call responsibilities. Experience with the JVM and languages such as Java and Scala. Experience with AWS and cloud platforms for SaaS teams. 
Experience designing, prototyping, building, monitoring, and debugging microservices architectures and distributed systems. Understanding of cloud-native systems like Kubernetes, GitOps, and Terraform. Bachelor's or Master's degree in Computer Science. Bonus points if you have experience with columnar stores like Druid/ClickHouse/Pinot, working on new products/startups, contributing to Open Source Software projects, or working on or developing L4/L7 proxies such as Nginx, HAProxy, Envoy, etc. Kong is THE cloud native API platform with the fastest, most adopted API gateway in the world (over 300m downloads!). Loved by developers and trusted with enterprises' most critical traffic volumes, Kong helps startups and Fortune 500 companies build with confidence, allowing them to bring solutions to market faster with API and service connectivity that scales easily and securely. 83% of web traffic today is API calls! APIs are the connective tissue of the cloud and the underlying technology that allows software to talk and interact with one another. Therefore, we believe that APIs act as the nervous system of the cloud. Our audacious mission is to build the nervous system that will safely and reliably connect all of humankind! For more information about Kong, please visit konghq.com or follow @thekonginc on Twitter.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

haryana

On-site

You will be as unique as your background, experience, and point of view. Here, you will be encouraged, empowered, and challenged to be your best self. You will work with dynamic colleagues - experts in their fields - who are eager to share their knowledge with you. Your leaders will inspire and help you reach your potential and soar to new heights. Every day, you will have new and exciting opportunities to make life brighter for our Clients - who are at the heart of everything we do. Discover how you can make a difference in the lives of individuals, families, and communities around the world.

Role & Responsibilities: Excellent knowledge and experience in cloud architecture. Strong proficiency in cloud platforms such as AWS. Relevant certifications in AWS (e.g., AWS Cloud Developer/Architect). Excellent problem-solving skills. Strong ability to collaborate with cross-functional teams and stakeholders. Strong communication and collaboration skills, with experience working in cross-functional teams. Designing, deploying, and maintaining cloud infrastructure. Monitoring cloud resources for performance and cost optimization. Troubleshooting and resolving cloud-related issues. Develop, implement, and maintain configuration of underlying cloud services, including but not limited to EC2, EKS, S3, and RDS. Administer existing and deploy new AWS instances and related infrastructure.

Skills: Proficiency in AWS cloud. Scripting and automation skills - Terraform, CloudFormation templates. Strong problem-solving abilities. Excellent communication and teamwork skills. A commitment to continuous learning and adaptability.

Job Category: IT - Digital Development
Posting End Date: 29/05/2025

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

delhi

On-site

You have the exciting opportunity to join a fast-growing data and analytics company in Europe, which is redefining how real-time, high-volume data is modeled, predicted, and delivered at scale. Recently recognized for outstanding growth and innovation, the company is now expanding its global footprint with a newly formed engineering team in Dubai, where you could be one of the first key hires. As a Senior Golang Engineer, you will be instrumental in designing, building, and operating a platform capable of handling tens of thousands of real-time requests per minute. Collaborating closely with product teams, data scientists, and engineers, you will contribute to delivering robust, high-performance systems in a fast-paced, innovation-led environment. Your responsibilities will include designing and building a cloud-native microservices platform, operating high-throughput distributed systems using Golang and AWS, solving complex scalability, multi-threading, and real-time data problems, and designing and managing APIs (GraphQL, gRPC) to ensure low-latency performance. To excel in this role, you should have at least 5 years of professional experience working with Golang in production environments, a strong understanding of distributed systems, event-driven architecture, and microservices, hands-on experience with cloud infrastructure (preferably AWS), proficiency with databases like PostgreSQL, Redis, and/or DynamoDB, and familiarity with modern API design patterns like GraphQL and gRPC; knowledge of Docker or Terraform would be advantageous. In return, you will be offered a full-time role in Dubai with relocation support, visa sponsorship, and private healthcare insurance. You will work in a flat-structured, fast-paced work culture with global exposure and cutting-edge projects, providing a clear path for growth within a globally scaling company that prioritizes innovation and performance.

Posted 1 week ago

Apply

10.0 - 14.0 years

0 Lacs

hyderabad, telangana

On-site

As a Principal DevOps Engineer at our company, you will play a crucial role in leading our DevOps initiatives to ensure the reliability, scalability, and security of our high-value transactional systems. Your responsibilities will include providing on-call support, responding to critical incidents, and analyzing issues to drive problem resolution with minimal downtime. You will also be responsible for incident response and escalation management for DevOps-related incidents. Collaboration with various stakeholders, including Service Reliability & Transition teams, CloudOps & Observability teams, SecOps teams, and Engineering teams, will be a key aspect of your role. You will work effectively in a multi-vendor enterprise environment to ensure seamless integration and communication. Additionally, you will guide and mentor DevOps engineers, provide training support, and drive work division and prioritization to optimize team performance. In terms of technical expertise, you should have experience with AWS services such as S3, Lambda, IAM, and CloudWatch, as well as infrastructure-as-code tools like Terraform and CloudFormation. Knowledge of CI/CD pipelines, Kubernetes, and observability tools is essential. You should also have experience with containerization and orchestration technologies such as Kubernetes, along with managed services like RDS, and with security compliance and best practices in a regulated environment. To be successful in this role, you should have at least 10 years of experience in DevOps, Cloud, or Site Reliability Engineering roles, with a strong understanding of AWS cloud services and infrastructure. Excellent problem-solving, communication, and stakeholder management skills are crucial, along with experience in multi-vendor enterprise environments and regulated industries. At GlobalLogic, we prioritize a culture of caring and offer continuous learning and development opportunities. 
You will have the chance to work on interesting and meaningful projects while enjoying balance and flexibility in your work-life integration. Join us in our mission to engineer impact for and with clients around the world, shaping the future through innovative digital solutions.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

We empower our people to stay resilient and relevant in a constantly changing world. We are looking for individuals who are always seeking creative ways to grow and learn, individuals who aspire to make a real impact, both now and in the future. If this resonates with you, then you would be a valuable addition to our dynamic international team. We are currently seeking a Senior Software Engineer - Data Engineer (AI Solutions). In this role, you will have the opportunity to:
- Design, build, and maintain data pipelines to cater to the requirements of various stakeholders, including software developers, data scientists, analysts, and business teams.
- Ensure that the data pipelines are modular, resilient, and optimized for performance and low maintenance.
- Collaborate with AI/ML teams to support training, inference, and monitoring needs through structured data delivery.
- Implement ETL/ELT workflows for structured, semi-structured, and unstructured data using cloud-native tools.
- Work with large-scale data lakes, streaming platforms, and batch processing systems to ingest and transform data.
- Establish robust data validation, logging, and monitoring strategies to uphold data quality and lineage.
- Optimize data infrastructure for scalability, cost-efficiency, and observability in cloud-based environments.
- Ensure adherence to governance policies and data access controls across projects.

To excel in this role, you should possess the following qualifications and skills:
- A Bachelor's degree in Computer Science, Information Systems, or a related field.
- A minimum of 4 years of experience in designing and deploying scalable data pipelines in cloud environments.
- Proficiency in Python, SQL, and data manipulation tools and frameworks such as Apache Airflow, Spark, dbt, and Pandas.
- Practical experience with data lakes, data warehouses (e.g., Redshift, Snowflake, BigQuery), and streaming platforms (e.g., Kafka, Kinesis).
- A strong understanding of data modeling, schema design, and data transformation patterns.
- Experience with AWS (Glue, S3, Redshift, SageMaker) or Azure (Data Factory, Azure ML Studio, Azure Storage).
- Familiarity with CI/CD for data pipelines and infrastructure as code (e.g., Terraform, CloudFormation).
- Exposure to building data solutions that support AI/ML pipelines, including feature stores and real-time data ingestion.
- An understanding of observability, data versioning, and pipeline testing tools.
- Previous engagement with diverse stakeholders, data requirement gathering, and support for iterative development cycles.
- A background in or familiarity with the Power, Energy, or Electrification sector is advantageous.
- Knowledge of security best practices and data compliance policies for enterprise-grade systems.

This position is based in Bangalore, offering you the opportunity to collaborate with teams that impact entire cities and countries, and shape the future. Siemens is a global organization comprising over 312,000 individuals across more than 200 countries. We are committed to equality and encourage applications from diverse backgrounds that mirror the communities we serve. Employment decisions at Siemens are made based on qualifications, merit, and business requirements. Join us with your curiosity and creativity to help shape a better tomorrow. Learn more about Siemens careers at: www.siemens.com/careers Discover the digital world of Siemens here: www.siemens.com/careers/digitalminds
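The "robust data validation" responsibility described above can be sketched in a few lines of framework-free Python (the field names and rules are hypothetical; in a production pipeline a step like this would run inside Airflow, Spark, or dbt tests):

```python
def validate_and_transform(records):
    """Toy ETL step: reject records missing required fields,
    normalize types on the rest, and keep rejects for data-quality reporting."""
    required = {"id", "timestamp", "value"}
    clean, rejects = [], []
    for rec in records:
        if not required.issubset(rec) or rec["value"] is None:
            rejects.append(rec)  # surface these to monitoring, don't drop silently
            continue
        clean.append({
            "id": str(rec["id"]),
            "timestamp": rec["timestamp"],
            "value": float(rec["value"]),
        })
    return clean, rejects

raw = [
    {"id": 1, "timestamp": "2024-01-01T00:00:00Z", "value": "3.5"},
    {"id": 2, "timestamp": "2024-01-01T00:01:00Z"},             # missing value
    {"id": 3, "timestamp": "2024-01-01T00:02:00Z", "value": None},
]
clean, rejects = validate_and_transform(raw)
```

Tracking the reject count over time, rather than silently dropping bad rows, is what makes the "data quality and lineage" monitoring in the posting possible.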

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

karnataka

On-site

As an experienced Information Security professional with 8+ years of experience, you will be responsible for planning, implementing, managing, and maintaining security systems such as antimalware solutions, vulnerability management solutions, and SIEM solutions. Your role will involve monitoring and investigating security alerts from various sources, providing incident response, and identifying potential weaknesses within the organization's network and systems to recommend effective solutions. Additionally, you will take up security initiatives to enhance the overall security posture of the organization. You will be required to document Standard Operating Procedures (SOPs), metrics, and reports as necessary, provide Root Cause Analyses (RCAs) for security incidents, and collaborate with different teams and departments to address vulnerabilities, security incidents, and drive security initiatives. Moreover, researching and monitoring emerging threats and vulnerabilities, understanding current industry and technology trends, and assessing their impact on applications will be crucial aspects of your role. Your qualifications should include industry-recognized professional certifications such as CISSP, GCSA, CND, or similar certifications. Demonstrated experience in computer security with a focus on risk analysis, audit, and compliance objectives is essential. Proficiency in Network and Web Security tools like Palo Alto, ForeScout, and Zscaler, as well as experience in AWS Cloud Environment and Privileged Access Management solutions, will be advantageous. Familiarity with SIEM/SOAR, NDR, EDR, VM, and Data Security solutions and concepts is desired. The ideal candidate will possess strong decision-making and complex problem-solving skills under pressure, along with a high degree of creativity and "out-of-the-box" thinking. 
The ability to manage multiple projects simultaneously in fast-paced environments, a service-oriented approach, and excellent communication, presentation, and writing skills are key requirements for this role. You should also be adept at sharing knowledge, collaborating with team members and customers, and adapting to a fast-paced, ever-changing global environment. Strong organization, time management, and priority-setting skills are essential, along with a proactive approach to achieving results. In summary, this role offers an exciting opportunity for an experienced Information Security professional to contribute to the enhancement of the organization's security posture, collaborate with diverse teams, and stay abreast of emerging threats and industry trends.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As a Network Operations Center (NOC) Analyst at Inspire Brands, you will be responsible for overseeing all technology aspects of the organization. Your primary role will involve acting as the main technology expert for the NOC team, ensuring the detection and resolution of issues in production before they impact the large scale operations. It will be your duty to guarantee that the services provided by the Inspire Digital Platform (IDP) meet user needs in terms of reliability, uptime, and continuous improvement. Additionally, you will play a crucial role in ensuring an outstanding customer experience by establishing service level agreements that align with the business model. In the technical aspect of this role, you will be required to develop and monitor various monitoring dashboards to identify problems related to applications, infrastructure, and potential security incidents. Providing operational support for multiple large, distributed software applications will be a key responsibility. Your deep troubleshooting skills will be essential in enhancing availability, performance, and security to ensure 24/7 operational readiness. Conducting thorough postmortems on production incidents to evaluate business impact and facilitate learning for the Engineering team will also be part of your responsibilities. Moreover, you will create dashboards and alerts for monitoring the platform, define key metrics and service level indicators, and ensure the collection of relevant metric data to create actionable alerts for the responsible teams. Participation in the 24/7 on-call rotation and automation of tasks to streamline application deployment and third-party tool integration will be crucial. Analyzing major incidents, collaborating with other teams to find permanent solutions, and establishing and publishing regular KPIs and metrics for measuring performance, stability, and customer satisfaction will also be expected from you. 
In terms of qualifications, you should hold a 4-year degree in Computer Science, Information Technology, or a related field. You should have a minimum of 5 years of experience in a production support role, specifically supporting large-scale SaaS production B2C or B2B cloud platforms, with a strong background in problem-solving and troubleshooting. Additionally, you should possess knowledge and skills in various technologies such as Java, TypeScript, Python, Azure Cloud services, monitoring tools like Splunk and Prometheus, containers, Kubernetes, Helm, cloud networking, firewalls, and more. Overall, this role requires strong technical expertise, effective communication skills, and a proactive approach to ensuring the smooth operation of Inspire Brands' technology infrastructure.
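The "key metrics and service level indicators" mentioned in the role above can be sketched in a few lines. This is a hedged illustration only: the availability definition and the 99.9% SLO target are assumptions for the example, not Inspire Brands' actual targets.

```python
def availability_sli(success_count, total_count):
    """Availability SLI: fraction of requests that succeeded."""
    if total_count == 0:
        return 1.0  # no traffic, no failures
    return success_count / total_count

def error_budget_remaining(sli, slo_target=0.999):
    """Share of the error budget (1 - SLO) still unspent; 0.0 means exhausted."""
    budget = 1.0 - slo_target
    burned = 1.0 - sli
    return max(0.0, (budget - burned) / budget)

# 50 failures out of 100,000 requests against a 99.9% availability SLO:
sli = availability_sli(success_count=99_950, total_count=100_000)
remaining = error_budget_remaining(sli, slo_target=0.999)  # about half the budget left
```

Dashboards and alerts are then built on top of numbers like these: alert when the budget burn rate implies the SLO will be missed, rather than on every individual failure.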

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

hyderabad, telangana

On-site

The role of HV Product at Hitachi Vantara is pivotal in the development of the VSP 360 platform's on-premises solution, ensuring strict adherence to delivery objectives. The VSP 360 platform is the cornerstone of the organization's management solution strategy. As a member of our global team, you will play a key role in empowering businesses to automate, optimize, innovate, and wow their customers with high-performance data infrastructure. To excel in this role, you should possess a Bachelor's degree in computer science or a related field, along with 6+ years of experience in DevOps or a related field. Your strong experience with cloud-based services, running Kubernetes as a service, managing Kubernetes clusters, and infrastructure automation and deployment tools such as Terraform, Ansible, Docker, Jenkins, GitHub, and GitHub Actions will be vital in driving the success of the VSP 360 platform. Additionally, your familiarity with monitoring tools like Grafana, Nagios, ELK, OpenTelemetry, Prometheus, Anthos/Istio Service Mesh, Cloud Native Computing Foundation (CNCF) projects, Kubernetes Operators, KeyCloak, and Linux systems administration will be highly beneficial. It would be advantageous to have proficiency in Python, Django, AWS solution design, cloud-based storage (S3, Blob, Google Storage), and storage area networks (SANs). At Hitachi Vantara, we value diversity, equity, and inclusion, as they are integral to our culture and identity. We encourage individuals from all backgrounds to apply, as we believe that diverse thinking and a commitment to allyship lead to powerful results. As part of our team, you will be supported with industry-leading benefits, services, and flexible arrangements to ensure your holistic health and wellbeing. We champion life balance and offer autonomy, freedom, and ownership in your work. Join us in co-creating meaningful solutions to complex challenges and becoming a data-driven leader that positively impacts industries and society. 
If you are passionate about innovation and believe in inspiring the future, Hitachi Vantara is the place for you to fulfill your purpose and reach your full potential.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Backend Engineer on the API and Orchestration Services team at Hitachi Vantara, you will play a crucial role in designing, building, and maintaining backend APIs and services that power the intelligence layer of Hitachi IQ. Your responsibilities will include developing scalable, secure, and high-performance APIs using Java and Python, as well as building orchestration services for managing compute, storage, FC, and Ethernet infrastructure. You will support and extend platform capabilities for multi-tenancy, RBAC, lifecycle automation, and Day 0-N operations. Your role will involve collaborating across cross-functional teams to ensure seamless integration between hardware and software components. You will lead efforts to resolve performance bottlenecks, troubleshoot complex issues, and implement reliable, scalable fixes. Additionally, you will participate in Agile ceremonies, contribute to release cycles, POCs, prototypes, and customer issue resolutions. To excel in this role, you should have a Bachelor's degree in computer science or equivalent field, along with 5-10 years of hands-on software development experience in enterprise products or infrastructure platforms. You should be proficient in Java (Spring Boot) and have strong experience with Python. Experience building infrastructure-aware applications for both AWS cloud and on-premises datacenters is essential. Furthermore, you should have a deep understanding of multithreading, parallel programming, Java Streams, and design principles. It would be advantageous to have knowledge of hardware platforms such as servers, switches, FC SAN, or enterprise storage arrays, as well as experience with infrastructure-as-code tools like Terraform and Ansible. Familiarity with event-driven architectures, message queues, enterprise compliance frameworks, and operational monitoring is also a plus. 
At Hitachi Vantara, we value diversity, equity, and inclusion, and we encourage individuals from all backgrounds to apply and realize their full potential as part of our team. We offer industry-leading benefits, support, and services to look after your holistic health and wellbeing, as well as flexible arrangements that promote work-life balance. Join us in co-creating meaningful solutions to complex challenges and turning organizations into data-driven leaders that make a positive impact on industries and society.

Posted 1 week ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Role & Responsibilities
Collaborate with teams to design, build, and deliver solutions implementing serverless, microservices-based, IaaS, PaaS, and containerized architectures in AWS, Azure, and GCP cloud environments. Develop reusable and parameterized Infrastructure as Code (IaC) templates for the automated deployment of cloud resources using Terraform and/or CloudFormation. Build and manage CI/CD pipelines using AWS CodeBuild, CodeDeploy, CodePipeline, and equivalent tools in Azure DevOps and Google Cloud Build. Integrate 3rd-party tools with CI/CD processes (e.g., SonarQube). Perform configuration management using industry-standard DevOps tools (e.g., Ansible). Implement scripting to enhance build, deployment, and monitoring processes using PowerShell, Bash, or Python. Work with Windows and Linux systems in cloud environments. Install, configure, and manage DevOps tools across AWS, Azure, and GCP.

Required Skills
8+ years of senior-level experience in AWS, Azure, and GCP in a DevOps or Engineering role, with a proven track record in client communications and project management. Strong leadership and team management skills, including mentoring junior team members. Expertise in DevOps tools and concepts, including Docker, Kubernetes (K8s), Terraform, and multi-cloud environments. Ability to develop and lead CI/CD pipelines using AWS, Azure, and GCP-native tools. Strong understanding of cloud networking, including VPCs, NSGs, DNS, and routing. In-depth knowledge of Linux and Windows operating systems, including endpoint protection. Experience diagnosing and troubleshooting complex system and application issues, with the ability to provide strategic solutions. Exceptional problem-solving, analytical, and critical thinking skills. Excellent communication and interpersonal skills, with the ability to manage client relationships and explain technical concepts to non-technical stakeholders.

Experience: 8+ Years
Location: Ahmedabad (Work From Office)
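The "reusable and parameterized IaC templates" idea above can be illustrated with a toy Python renderer (the resource name, prefix, and environment list are made up; in practice this would be a Terraform module with input variables or a CloudFormation template with Parameters, not string rendering):

```python
from string import Template

# Hypothetical parameterized resource template; real work would use a
# Terraform module or CloudFormation Parameters rather than string rendering.
BUCKET_TEMPLATE = Template(
    'resource "aws_s3_bucket" "$name" {\n'
    '  bucket = "$prefix-$env-artifacts"\n'
    '}\n'
)

ALLOWED_ENVS = {"dev", "staging", "prod"}

def render_bucket(name: str, prefix: str, env: str) -> str:
    """Render one environment-specific resource block, rejecting unknown envs."""
    if env not in ALLOWED_ENVS:
        raise ValueError(f"unknown environment: {env}")
    return BUCKET_TEMPLATE.substitute(name=name, prefix=prefix, env=env)

print(render_bucket("artifacts", "acme", "dev"))
```

The payoff of parameterization is the same in any tool: one template, validated inputs, and identical resources stamped out per environment instead of copy-pasted definitions.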

Posted 1 week ago

Apply