
9125 Terraform Jobs - Page 11

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are hiring a DevOps Engineer with a minimum of 2 years of experience in deploying, automating, and maintaining applications in both cloud and on-premises environments. The ideal candidate will have hands-on expertise in CI/CD pipelines, containerization, monitoring, infrastructure automation, and multi-region deployments. A strong focus on high availability, performance tuning, and DevSecOps practices is essential. The core responsibilities for the job include the following:

Deployment Management: Deploy and manage applications on AWS, Azure, or GCP. Manage on-prem deployments using VMware ESXi, Proxmox, or KVM. Configure and manage load balancers like Nginx, HAProxy. Ensure high availability with multi-region deployment strategies.

Infrastructure Automation: Automate infrastructure using Terraform, Ansible. Provision virtual machines and containers using Packer and Docker. Use version-controlled IaC practices with Git.

CI/CD Pipelines: Build and manage pipelines with Jenkins, GitLab CI/CD, or GitHub Actions. Integrate testing, security scanning, and deployment into pipeline flows. Automate release management and rollback procedures.

Monitoring and Logging: Set up monitoring using Prometheus, Grafana, or Nagios. Configure log management using the ELK Stack (Elasticsearch, Logstash, Kibana) or Graylog. Analyze metrics to identify performance issues and prevent downtime.

Security and Compliance: Apply DevSecOps practices using Vault, Trivy, or SonarQube. Enforce security baselines with CIS benchmarks. Manage secrets and secure configuration in pipelines and runtime.

Collaboration and Documentation: Work with development, QA, and infrastructure teams to support delivery. Create and maintain system architecture and runbook documentation. Contribute to disaster recovery and business continuity planning.

Requirements: Minimum 2 years of experience in DevOps or infrastructure engineering. Strong understanding of Linux (Ubuntu, CentOS, or RHEL). Hands-on experience with cloud platforms like AWS, Azure, or GCP. Practical experience with Terraform, Ansible, Git, and Docker. Proficient in CI/CD with Jenkins, GitLab, or GitHub Actions. Experience with Kubernetes or Docker Swarm in cloud or on-prem environments. Good grasp of networking concepts like firewalls, DNS, VPN, and NAT.

Soft Skills: Excellent problem-solving and debugging skills. Effective communication and teamwork capabilities. Ability to document clearly and follow structured procedures.

Education: Bachelor's degree in Computer Science, Engineering, or equivalent work experience.

Preferred Qualifications: Experience with OpenStack or Rancher for on-prem private cloud. Familiarity with disaster recovery and backup solutions.

This job was posted by Anajli Kanojiya from iAI Solution.
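
As a flavour of the "version-controlled IaC" work this listing describes, here is a minimal Terraform sketch, assuming the AWS provider; the region, AMI ID, and tags are illustrative placeholders, not details from the posting:

```hcl
terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1" # hypothetical region
}

# Hypothetical application server; the AMI ID is a placeholder.
resource "aws_instance" "app" {
  ami           = "ami-0123456789abcdef0"
  instance_type = "t3.micro"

  tags = {
    Name      = "devops-demo"
    ManagedBy = "terraform"
  }
}
```

Committed to Git and applied through a pipeline, a configuration like this is what "version-controlled IaC practices" means day to day.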

Posted 1 day ago


15.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


We are seeking a highly skilled and experienced Application Architect with a strong background in designing and architecting both user interfaces and backend Java microservices, with significant exposure to Amazon Web Services (AWS). As an Application Architect, you will be responsible for defining the architectural vision, ensuring scalability, performance, and maintainability of our applications. You will collaborate closely with engineering teams, product managers, and other stakeholders to deliver robust and innovative solutions.

Responsibilities

Architectural Design and Vision: Define and communicate the architectural vision and principles for both frontend and backend systems. Design scalable, resilient, and secure application architectures leveraging Java microservices and cloud-native patterns on AWS. Develop and maintain architectural blueprints, guidelines, and standards. Evaluate and recommend technologies, frameworks, and tools for both UI and backend development. Ensure alignment of architectural decisions with business goals and technical strategy.

UI Architecture and Development Guidance: Provide architectural guidance and best practices for developing modern and responsive user interfaces (e.g., using React, Angular, Vue.js). Define UI architecture patterns, component design principles, and state management strategies. Ensure UI performance, accessibility, and user experience considerations are integrated into the architecture. Collaborate with UI developers and designers to ensure technical feasibility and optimal implementation of UI designs.

Backend Microservices Architecture and Development Guidance: Design and architect robust and scalable backend systems using Java and microservices architecture. Define API contracts, data models, and integration patterns for microservices. Ensure the security, reliability, and performance of backend services. Provide guidance to backend Java developers on best practices, coding standards, and architectural patterns.

AWS Cloud Architecture and Deployment: Design and implement cloud-native solutions on AWS, leveraging services such as EC2, ECS/EKS, Lambda, S3, RDS, DynamoDB, API Gateway, etc. Define infrastructure-as-code (IaC) strategies using tools like CloudFormation or Terraform. Architect for high availability, fault tolerance, and disaster recovery on AWS. Optimize cloud costs and ensure efficient resource utilization. Implement security best practices and compliance standards within the AWS environment.

Collaboration and Communication: Collaborate effectively with engineering managers, product managers, QA, DevOps, and other stakeholders. Communicate architectural decisions and trade-offs clearly and concisely to both technical and non-technical audiences. Facilitate technical discussions and resolve architectural challenges. Mentor and guide engineering teams on architectural best practices and technology adoption.

Technology Evaluation and Adoption: Research and evaluate new technologies and trends in UI frameworks, Java ecosystems, and AWS services. Conduct proof-of-concepts and feasibility studies for new technologies. Define adoption strategies for new technologies within the organization.

Performance and Scalability: Design systems with a focus on performance, scalability, and maintainability. Identify and address potential performance bottlenecks and scalability limitations. Define and implement monitoring and alerting strategies for applications and infrastructure.
Qualifications
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 15+ years of experience in software development with a strong focus on Java. 5+ years of experience in designing and architecting complex applications, including both UI and backend systems. Deep understanding of microservices architecture principles and best practices. Strong expertise in Java and related frameworks (e.g., Spring Boot, Jakarta EE). Solid experience with modern UI frameworks (e.g., React, Angular, Vue.js) and their architectural patterns. Significant hands-on experience with Amazon Web Services (AWS) and its core services. Experience with containerization technologies (e.g., Docker, Kubernetes) and orchestration on AWS (ECS/EKS). Proficiency in designing and implementing RESTful APIs and other integration patterns. Understanding of database technologies (both relational and NoSQL) and their integration with microservices on AWS. Experience with infrastructure-as-code (IaC) tools like CloudFormation or Terraform. Strong understanding of security best practices for both UI and backend applications in a cloud environment. Excellent communication, presentation, and interpersonal skills. Proven ability to lead technical discussions and influence architectural decisions.

Preferred Qualifications
Experience with event-driven architectures and messaging systems (e.g., Kafka, SQS). Familiarity with CI/CD pipelines and DevOps practices on AWS. Experience with performance testing and optimization techniques. Knowledge of different architectural patterns (e.g., CQRS, Event Sourcing). Experience in [Mention any specific domain or industry relevant to your company]. AWS certifications (e.g., AWS Certified Solutions Architect – Associate/Professional).

Posted 1 day ago


10.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: Cloud DevOps Architect
Location: Pune, India
Experience: 10 - 15 Years
Work Mode: Full-time, Office-based
Company: Smartavya Analytica Private Limited

Company Overview: Smartavya Analytica is a niche Data and AI company based in Mumbai, established in 2017. We specialize in data-driven innovation, transforming enterprise data into strategic insights. With expertise spanning 25+ Data Modernization projects and handling large datasets of up to 24 PB in a single implementation, we have successfully delivered data and AI projects across multiple industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are specialists in Cloud, Hadoop, Big Data, AI, and Analytics, with a strong focus on Data Modernization for On-Premises, Private, and Public Cloud Platforms. Visit us at: https://smart-analytica.com

Job Summary: We are looking for an accomplished Cloud DevOps Architect to design and implement robust DevOps and Infrastructure Automation frameworks across Azure, GCP, or AWS environments. The ideal candidate will have a deep understanding of CI/CD, IaC, VPC networking, security, and automation using Terraform or Ansible.

Key Responsibilities: Architect and build end-to-end DevOps pipelines using native cloud services (Azure DevOps, AWS CodePipeline, GCP Cloud Build) and third-party tools (Jenkins, GitLab, etc.). Define and implement foundation setup architecture (Azure, GCP, and AWS) as per recommended best practices. Design and deploy secure VPC architectures; manage networking, security groups, load balancers, and VPN gateways. Implement Infrastructure as Code (IaC) using Terraform or Ansible for scalable and repeatable deployments. Establish CI/CD frameworks integrating with Git, containers, and orchestration tools (e.g., Kubernetes, ECS, AKS, GKE). Define and enforce cloud security best practices including IAM, encryption, secrets management, and compliance standards. Collaborate with application, data, and security teams to optimize infrastructure, release cycles, and system performance. Drive continuous improvement in automation, observability, and incident response practices.

Must-Have Skills: 10-15 years of experience in DevOps, Infrastructure, or Cloud Architecture roles. Deep hands-on expertise in Azure, GCP, or AWS cloud platforms (any one is mandatory; more is a bonus). Strong knowledge of VPC architecture, cloud security, IAM, and networking principles. Expertise in Terraform or Ansible for Infrastructure as Code. Experience building resilient CI/CD pipelines and automating application deployments. Strong troubleshooting skills across networking, compute, storage, and containers.

Preferred Certifications: Azure DevOps Engineer Expert / AWS Certified DevOps Engineer Professional / Google Professional DevOps Engineer. HashiCorp Certified: Terraform Associate (preferred for Terraform users).
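
To make the "secure VPC architectures" item concrete, here is a minimal Terraform sketch of a VPC with one public subnet and an HTTPS-only security group; all CIDR ranges and names are illustrative assumptions:

```hcl
provider "aws" {
  region = "ap-south-1" # hypothetical region
}

# Minimal VPC with one public subnet and a locked-down security group.
resource "aws_vpc" "main" {
  cidr_block           = "10.0.0.0/16"
  enable_dns_support   = true
  enable_dns_hostnames = true
}

resource "aws_subnet" "public" {
  vpc_id                  = aws_vpc.main.id
  cidr_block              = "10.0.1.0/24"
  map_public_ip_on_launch = true
}

resource "aws_security_group" "web" {
  vpc_id = aws_vpc.main.id

  # Allow inbound HTTPS only.
  ingress {
    from_port   = 443
    to_port     = 443
    protocol    = "tcp"
    cidr_blocks = ["0.0.0.0/0"]
  }

  # Allow all outbound traffic.
  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

A real design would add private subnets per availability zone, with the VPN gateways and load balancers the listing mentions layered on top.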

Posted 1 day ago


0 years

0 Lacs

Gurugram, Haryana, India

On-site


Provide support for data production systems in Nielsen Technology International Media (television and radio audience measurement), playing a critical role in ensuring their reliability, scalability, and security. Configure, implement, and deploy audience measurement solutions. Provide expert-level support, leading infrastructure automation initiatives, driving continuous improvement across our DevOps practices, and supporting Agile processes.

Core Technologies: Linux, Airflow, Bash, CI/CD, AWS services (EC2, S3, RDS, EKS, VPC), PostgreSQL, Python, Kubernetes.

Responsibilities: Architect, manage, and optimize scalable and secure cloud infrastructure (AWS) using Infrastructure as Code (Terraform, CloudFormation, Ansible). Implement and maintain robust CI/CD pipelines to streamline software deployment and infrastructure changes. Identify and implement cost-optimization strategies for cloud resources. Ensure the smooth operation of production systems across 30+ countries, providing expert-level troubleshooting and incident response. Manage cloud-related migration changes and updates, supporting the secure implementation of changes/fixes. Participate in 24/7 on-call rotation for emergency support.

Key Skills: Proficiency in Linux, particularly Fedora- and Debian-based distributions (AlmaLinux, Amazon Linux, Ubuntu). Strong proficiency in scripting languages (Bash, Python) and SQL. Well versed in automation and DevOps principles, with an understanding of CI/CD concepts. Working knowledge of infrastructure-as-code tools like Terraform, CloudFormation, and Ansible. Solid experience with AWS core services (EC2, EKS, S3, RDS, VPC, IAM, Security Groups). Hands-on experience with Docker and Kubernetes for containerized workloads. Solid understanding of DevOps practices, including monitoring, security, and high-availability design. Hands-on experience with Apache Airflow for workflow automation and scheduling. Strong troubleshooting skills, with experience in resolving issues and handling incidents in production environments. Foundational understanding of modern networking principles and cloud network architectures.
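
One concrete example of the "cost-optimization strategies" mentioned above is tiering and expiring S3 objects via lifecycle rules. A minimal Terraform sketch; the bucket name and retention windows are illustrative assumptions:

```hcl
# Hypothetical log bucket with a lifecycle rule that tiers objects to
# cheaper storage classes and eventually expires them.
resource "aws_s3_bucket" "logs" {
  bucket = "example-measurement-logs" # placeholder name
}

resource "aws_s3_bucket_lifecycle_configuration" "logs" {
  bucket = aws_s3_bucket.logs.id

  rule {
    id     = "tier-then-expire"
    status = "Enabled"

    filter {} # apply to all objects

    transition {
      days          = 30
      storage_class = "STANDARD_IA" # infrequent access after 30 days
    }

    transition {
      days          = 90
      storage_class = "GLACIER" # archive after 90 days
    }

    expiration {
      days = 365 # delete after one year
    }
  }
}
```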

Posted 1 day ago


10.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Job Title: Senior DevOps Engineer – Azure Platform
Location: Delhi-NCR
Experience: 8 – 10 years (minimum 5 years hands-on with Azure DevOps and cloud infrastructure)
________________________________________
Job Description:
We are seeking an experienced and highly skilled Senior DevOps Engineer with a strong background in Microsoft Azure to support the implementation, automation, and management of our cloud-based infrastructure. The ideal candidate will have hands-on experience with Azure DevOps, ARM/Bicep/Terraform templates, CI/CD pipelines, and cloud-native services in Azure. This is a strategic role supporting a foreign client, and excellent communication (verbal & written) is essential.
________________________________________
Key Responsibilities:
• Design, implement, and manage secure, scalable infrastructure using Azure services.
• Build and maintain automated CI/CD pipelines using Azure DevOps.
• Automate infrastructure provisioning using Infrastructure as Code (IaC) with Bicep, ARM Templates, or Terraform.
• Manage and monitor Azure resources (AKS, App Services, Azure Functions, Storage, Key Vault, etc.).
• Configure and maintain Azure Monitor, Log Analytics, and Application Insights.
• Set up and maintain secure connectivity via VNETs, NSGs, Private Endpoints, and ExpressRoute/VPN.
• Implement robust RBAC, identity and access management (IAM), and secrets management via Azure Key Vault.
• Support release management, version control, and branching strategies with Git/Azure Repos.
• Collaborate with development teams to optimize build/test/release workflows.
• Ensure system reliability, availability, performance, and security through monitoring and proactive response.
• Participate in architecture reviews, disaster recovery planning, and security compliance audits.
________________________________________
Must-Have Skills:
• Strong hands-on experience in the Microsoft Azure ecosystem.
• Proficiency in Azure DevOps (Repos, Pipelines, Boards, Artifacts).
• Expert in Infrastructure as Code: Bicep, Terraform, or ARM Templates.
• Solid experience with containerization and orchestration: Docker & Azure Kubernetes Service (AKS).
• Experience with CI/CD pipelines, including multi-stage YAML pipelines in Azure DevOps.
• Deep understanding of networking, security, and identity in Azure.
• Monitoring, logging, and alerting using Azure Monitor and related tools.
• Scripting experience in PowerShell, Bash, or Python.
________________________________________
Preferred Qualifications:
• Azure Certifications (e.g., AZ-400, AZ-104, AZ-305).
• Experience with GitHub Actions or GitLab CI/CD (nice to have).
• Exposure to DevSecOps and automated compliance frameworks.
• Familiarity with cost optimization strategies on Azure.
• Understanding of ISO 27001, SOC 2, or other compliance standards is a plus.
________________________________________
Soft Skills:
• Strong written and verbal communication skills, especially for working with international stakeholders.
• Self-starter with the ability to work independently and in a cross-functional team.
• Strong problem-solving and analytical mindset.
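
Since the role centres on Azure IaC, here is a minimal Terraform (azurerm) sketch of the kind of AKS cluster listed under the responsibilities; the resource names, location, and node sizing are illustrative assumptions:

```hcl
provider "azurerm" {
  features {}
}

resource "azurerm_resource_group" "platform" {
  name     = "rg-platform-demo" # placeholder
  location = "Central India"
}

# Minimal AKS cluster with a system node pool and managed identity.
resource "azurerm_kubernetes_cluster" "aks" {
  name                = "aks-platform-demo"
  location            = azurerm_resource_group.platform.location
  resource_group_name = azurerm_resource_group.platform.name
  dns_prefix          = "aksdemo"

  default_node_pool {
    name       = "system"
    node_count = 2
    vm_size    = "Standard_D2s_v3"
  }

  identity {
    type = "SystemAssigned"
  }
}
```

The same cluster could equally be expressed in Bicep or ARM Templates, which the listing accepts as alternatives.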

Posted 1 day ago


3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Join us as a Software Engineer

This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority. It's a chance to hone your existing technical skills and advance your career. We're offering this role at associate level.

What you'll do
In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also: Design, deploy, and manage Kubernetes clusters using Amazon EKS. Develop and maintain Helm charts for deploying containerized applications. Build and manage Docker images and registries for microservices. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation). Monitor and troubleshoot Kubernetes workloads and cluster health. Support CI/CD pipelines for containerized applications. Collaborate with development and DevOps teams to ensure seamless application delivery. Ensure security best practices are followed in container orchestration and cloud environments. Optimize performance and cost of cloud infrastructure.

The skills you'll need
You'll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including Microservices, ReactJS, AWS, Spring, SpringBoot, SpringBatch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST API, API Gateway, Kafka, and API development.

You'll also need: 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch. Strong expertise in Kubernetes architecture, networking, and resource management. Proficiency in Docker and container lifecycle management. Experience in writing and maintaining Helm charts for complex applications. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions. Solid understanding of Linux systems, shell scripting, and networking concepts. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of security practices in cloud and container environments.

Preferred Qualifications: AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools like ArgoCD or Flux. Experience with logging and observability tools (e.g., ELK stack, Fluentd).
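
Helm releases can themselves be driven from Terraform, which ties this listing's EKS, Helm, and IaC requirements together. A minimal sketch using the Terraform helm provider (v2 block syntax); the kubeconfig path, chart, and values are illustrative assumptions:

```hcl
provider "helm" {
  kubernetes {
    # Assumes a kubeconfig that already points at the EKS cluster.
    config_path = "~/.kube/config"
  }
}

# Deploy a chart into the cluster; repo and chart names are illustrative.
resource "helm_release" "app" {
  name       = "demo-app"
  repository = "https://charts.bitnami.com/bitnami"
  chart      = "nginx"
  namespace  = "demo"

  create_namespace = true

  set {
    name  = "replicaCount"
    value = "2"
  }
}
```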

Posted 1 day ago


10.0 - 14.0 years

35 - 50 Lacs

Chennai

Work from Office


Job Summary: Implement workload identity solutions for containerized and serverless workloads (e.g., Kubernetes, Lambda) in alignment with the overall workload IAM strategy. Configure and manage workload identities within cloud-native platforms.

Responsibilities: Ensure that containerized applications and orchestration systems (e.g., Kubernetes) are configured to securely utilize workload identities. Implement best practices for managing workload identities in containerized deployments. Automate the provisioning and deprovisioning of workload identities in response to cloud-native workload lifecycle events (e.g., container creation, deletion, or scaling). Implement security best practices for workload identities in cloud-native environments. Integrate cloud-native workloads with enterprise identity providers using workload identity federation. Implement SPIFFE/SPIRE for workload identity management in cloud-native environments if required. Collaborate with security and operations teams to ensure that workload identity solutions meet the security and operational requirements of cloud-native applications.

Certifications Required: Azure, GCP
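
As an illustration of workload identity in practice, here is a minimal Terraform sketch of GKE Workload Identity, which binds a Kubernetes service account to a Google service account (one platform-native alternative to SPIFFE/SPIRE); the project, namespace, and account names are hypothetical:

```hcl
provider "google" {
  project = "my-project" # placeholder project ID
}

# Google service account that the workload will act as.
resource "google_service_account" "workload" {
  account_id   = "payments-workload"
  display_name = "Payments workload identity"
}

# Allow the Kubernetes service account payments/payments-ksa to
# impersonate the Google service account via Workload Identity.
resource "google_service_account_iam_member" "wi_binding" {
  service_account_id = google_service_account.workload.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:my-project.svc.id.goog[payments/payments-ksa]"
}
```

Pods annotated with this service account then obtain short-lived Google credentials without any long-lived keys, which is the point of workload identity.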

Posted 1 day ago


4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


About The Role
HashiCorp is looking for a high-caliber, customer-facing engineering professional to join its Support Engineering team in Noida, India. This is an exciting opportunity to join a small team and have a direct impact on HashiCorp's fast-growing business. This highly visible position will be an integral part of both the support engineering and Terraform Open Source/Enterprise teams. You are a fit if you thrive in a fast-paced culture that values essential communication, collaboration, and results. You are a self-motivated, detail-oriented individual with an eye for automation, process improvement, and problem solving.

Reporting to the Manager, Support Engineering, the Support Engineer will be a key member of the Customer Success organization and will directly impact customer satisfaction and success. The Support Engineer will troubleshoot complex issues related to Terraform Enterprise and independently work to find viable solutions. They will contribute to product growth and development via weekly product and marketing meetings. The Support Engineer will attend customer meetings as needed to help identify, debug, and resolve customer issues, and is expected to be a liaison between the customer and HashiCorp engineering. When possible, the Support Engineer will update and improve product documentation, guide feature development, and implement bug fixes based on customer feedback.

Responsibilities
Triage and solve incoming support requests via Zendesk within SLA. Document and record all activity and communication with customers in accordance with both internal and external security standards. Reproduce and debug customer issues by building or using existing tooling or configurations. Collaborate with engineers, sales engineers, sales representatives, and technical account managers to schedule, coordinate, and lead customer installs or debugging calls. Contribute to knowledge base articles and best practices guides. Continuously improve processes and tools for normal, repetitive support tasks. Periodic on-call rotation for production-down issues. Weekly days off scheduled every week on rotation, on any day of the week.

Requirements
4+ years of Support Engineering, Software Engineering, or System Administration experience. Expertise in Open Source and SaaS is a major advantage. Excellent presence; strong written and verbal communication skills. Upbeat, passionate, and unparalleled customer focus. Well-organized, with an excellent work ethic, attention to detail, and a self-starting attitude. Experience managing and influencing change in organizations. Working knowledge of Docker and Kubernetes. Familiarity with networking concepts. Experience developing a program, script, or tool that was released or used is an advantage. Strong understanding of Linux or Windows command line environments. Interest in cloud adoption and technology at scale.

Goals
30 days, you should be able to: Write a simple TF configuration and apply it in TFE to deploy infrastructure. Gain a holistic understanding of (P)TFE and its interaction with the TF ecosystem. Successfully perform all common workflows within Terraform Enterprise. Make one contribution to extend or improve product documentation or install guides. Answer Level 1 support inquiries with minimal assistance.

60 days, you should be able to: Effectively triage and respond to Level 1 & 2 inquiries independently. Provision and bootstrap a (P)TFE instance with low touch from engineering. Ride along on 1-2 live customer install calls. Locate and unpack the customer log files and be familiar with their contents. Apply TF configurations to deploy infrastructure in AWS, Azure, and Google Cloud. Author one customer knowledge base article from an area of subject matter expertise.

90 days, you should be able to: Effectively triage and respond to a production-down issue with minimal assistance. Run point on a live customer install without assistance. Independently find points of error, identify root cause in the customer log files, and report relevant details to engineering. Implement small bug fixes or feature improvements. Reproduce a TF bug or error by creating a suitable configuration.

Education
Bachelor's degree in Computer Science, IT, Technical Writing, or equivalent professional experience.

"HashiCorp is an IBM subsidiary which has been acquired by IBM and will be integrated into the IBM organization. HashiCorp will be the hiring entity. By proceeding with this application you understand that HashiCorp will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here: link to IBM privacy statement."
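
The 30-day goal of writing a simple TF configuration and applying it in TFE could look something like the following minimal sketch; the hostname, organization, and workspace name are hypothetical placeholders:

```hcl
terraform {
  # Remote backend pointing at a (hypothetical) Terraform Enterprise
  # organization and workspace; state and runs live in TFE.
  backend "remote" {
    hostname     = "tfe.example.com"
    organization = "example-org"

    workspaces {
      name = "getting-started"
    }
  }

  required_providers {
    random = {
      source = "hashicorp/random"
    }
  }
}

# Trivial resource so the first apply has something to create.
resource "random_pet" "demo" {
  length = 2
}

output "pet_name" {
  value = random_pet.demo.id
}
```

After `terraform init` and `terraform apply`, the run executes in the TFE workspace rather than locally, which is the core (P)TFE workflow the goals describe.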

Posted 1 day ago


10.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Role Overview
We are looking for a Senior Backend Engineer with deep expertise in Python and scalable system architecture. This is a hands-on individual contributor (IC) role where you'll design and develop high-performance, cloud-native backend services for enterprise-scale platforms. You'll work closely with cross-functional teams to deliver robust, production-grade solutions.

Key Responsibilities
Design and build distributed, microservices-based systems using Python. Develop RESTful APIs, background workers, schedulers, and scalable data pipelines. Lead architecture discussions, technical reviews, and proof-of-concept initiatives. Model data using SQL and NoSQL technologies (PostgreSQL, MongoDB, DynamoDB, ClickHouse). Ensure high availability and observability using tools like CloudWatch, Grafana, and Datadog. Automate infrastructure and CI/CD workflows using Terraform, GitHub Actions, or Jenkins. Prioritize security, scalability, and fault tolerance across all services. Own the entire lifecycle of backend components, from development to production support. Document system architecture and contribute to internal knowledge sharing.

Requirements
10+ years of backend development experience with strong Python proficiency. Deep understanding of microservices, Docker, Kubernetes, and cloud-native development (AWS preferred). Expertise in API design, authentication (OAuth2), rate limiting, and best practices. Experience with message queues and async systems (Kafka, SQS, RabbitMQ). Strong database knowledge, both relational and NoSQL. Familiarity with DevOps tools: Terraform, CloudFormation, GitHub Actions, Jenkins. Effective communicator with experience working in distributed, fast-paced teams.

Posted 1 day ago


0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Here at Appian, our core values of Respect, Work to Impact, Ambition, and Constructive Dissent & Resolution define who we are. In short, this means we constantly seek to understand the best for our customers, we go beyond completion in our work, we strive for excellence with intensity, and we embrace candid communication. These values guide our actions and shape our culture every day. When you join Appian, you'll be part of a passionate team that's dedicated to accomplishing hard things.

As a DevOps & Test Infrastructure Engineer, your goal is to design, implement, and maintain a robust, scalable, and secure AWS infrastructure to support our growing testing needs. You will be instrumental in building and automating our DevOps pipeline, ensuring efficient and reliable testing processes. This role offers the opportunity to shape our performance testing environment and contribute directly to the quality and speed of our clients' Appian software delivery.

Responsibilities
Architecture Design: Design and architect a highly scalable and cost-effective AWS infrastructure tailored for testing purposes, considering security, performance, and maintainability.
DevOps Pipeline Design: Architect a secure and automated DevOps pipeline on AWS, integrating tools such as Jenkins for continuous integration/continuous delivery (CI/CD) and Locust for performance testing.
Infrastructure as Code (IaC): Implement infrastructure as code (IaC) using tools like Terraform or AWS CloudFormation to enable automated deployment and scaling of the testing environment.
Security Implementation: Implement and enforce security best practices across the AWS infrastructure and DevOps pipeline, ensuring compliance and protecting sensitive data.
Jenkins Configuration & Administration: Install, configure, and administer Jenkins (or similar CI/CD automation platforms), including setting up build pipelines, managing plugins, and ensuring scalability and reliability.
Locust Configuration & Administration: Install, configure, and administer Locust for performance and load testing.
Automation: Automate the deployment, scaling, and management of all infrastructure components and the DevOps pipeline.
Monitoring and Logging: Implement comprehensive monitoring and logging solutions to proactively identify and resolve issues within the testing environment, including exposing testing results for consumption.
Troubleshooting and Support: Provide expert-level troubleshooting and support for the testing infrastructure and DevOps pipeline.
Collaboration: Work closely with development, QA, and operations teams to understand their needs and provide effective solutions.
Documentation: Create and maintain clear and concise documentation for the infrastructure, pipeline, and processes.
Continuous Improvement: Stay up-to-date with the latest AWS services and DevOps best practices, and proactively identify opportunities for improvement.

Qualifications
Proven experience in designing and implementing scalable architectures on Amazon Web Services (AWS). Strong understanding of DevOps principles and practices. Hands-on experience with CI/CD tools, for example Jenkins, including pipeline creation and administration. Experience with performance testing tools, preferably Locust, including test design and execution. Proficiency in infrastructure as code (IaC) tools such as Terraform or AWS CloudFormation. Solid understanding of security best practices in cloud environments. Experience with containerization technologies like Docker and orchestration tools like Kubernetes or AWS ECS (preferred). Familiarity with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack, CloudWatch). Excellent scripting skills (e.g., Python, Bash). Strong problem-solving and analytical skills. Excellent communication and collaboration skills. Ability to work independently and as part of a team. AWS certifications (e.g., AWS Certified Solutions Architect – Associate/Professional, AWS Certified DevOps Engineer – Professional). Experience with other testing tools and frameworks. Experience with agile development methodologies.

Education
B.S. in Computer Science, Engineering, Information Systems, or related field.

Working Conditions
Opportunity to work on enterprise-scale applications across different industries. This role is based at our office at WTC 11th floor, Old Mahabalipuram Road, SH 49A, Kandhanchavadi, Kottivakkam, Chennai, Tamil Nadu 600041, India. Appian was built on a culture of in-person collaboration, which we believe is a key driver of our mission to be the best. Employees hired for this position are expected to be in the office 5 days a week to foster that culture and ensure we continue to thrive through shared ideas and teamwork. We believe being in the office provides more opportunities to come together and celebrate working with the exceptional people across Appian.

Tools and Resources
Training and Development: During onboarding, we focus on equipping new hires with the skills and knowledge for success through department-specific training. Continuous learning is a central focus at Appian, with dedicated mentorship and the First-Friend program being widely utilized resources for new hires.
Growth Opportunities: Appian provides a diverse array of growth and development opportunities, including our leadership program tailored for new and aspiring managers, a comprehensive library of specialized department training through Appian University, skills-based training, and tuition reimbursement for those aiming to advance their education. This commitment ensures that employees have access to a holistic range of development opportunities.
Community: We'll immerse you into our community rooted in respect starting on day one. Appian fosters inclusivity through our 8 employee-led affinity groups. These groups help employees build stronger internal and external networks by planning social, educational, and outreach activities to connect with Appianites and larger initiatives throughout the company.

About Appian
Appian is a software company that automates business processes. The Appian AI-Powered Process Platform includes everything you need to design, automate, and optimize even the most complex processes, from start to finish. The world's most innovative organizations trust Appian to improve their workflows, unify data, and optimize operations, resulting in better growth and superior customer experiences. For more information, visit appian.com. [Nasdaq: APPN] Follow Appian: Twitter, LinkedIn.

Appian is an equal opportunity employer that strives to attract and retain the best talent. All qualified applicants will receive consideration for employment without regard to any characteristic protected by applicable federal, state, or local law. Appian provides reasonable accommodations to applicants in accordance with all applicable laws. If you need a reasonable accommodation for any part of the employment process, please contact us by email at ReasonableAccommodations@appian.com. Please note that only inquiries concerning a request for reasonable accommodation will be responded to from this email address. Appian's Applicant & Candidate Privacy Notice

Posted 1 day ago


5.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site


Everything we do is powered by our customers! Featured on Deloitte's Technology Fast 500 list and G2's leaderboard, Maropost offers a connected experience that our customers anticipate, transforming marketing, merchandising, and operations with commerce tools designed to scale with fast-growing businesses. With a relentless focus on our customers' success, we are motivated by curiosity, creativity, and collaboration to power 5,000+ global brands. Driven by a customer-first mentality, we empower businesses to achieve their goals and grow alongside us. If you're ready to make a significant impact and be part of our transformative journey, Maropost is the place for you. Become a part of Maropost today and help shape the future of commerce!

What You'll Be Responsible For
Build and manage a REST API stack for Maropost Web Apps. Given the architecture strategy related to our big data, analytics, and cloud-native product vision, work on the concrete architecture design and, when necessary, prototype it. Understand systems architecture and design scalable, performance-driven solutions. Drive innovation within the engineering team, identifying opportunities to improve processes, tools, and technologies. Drive architecture and design governance for systems and products under scope, as well as code and design reviews. Provide technical leadership to the development team and ensure that they follow industry-standard best practices. Evaluate and improve the tools and frameworks used in software development. Design, develop, and architect complex web applications. Integrate with ML and NLP engines. DevOps, DBMS, and scaling on Azure or GCP.

What You'll Bring To Maropost
B.E./B.Tech. 5+ years of experience with building, designing, and architecting backend applications, web apps, and analytics, preferably in the commerce cloud or marketing automation domain. Experience in deploying applications at scale in production systems. Experience with platform security capabilities (TLS, SSL, etc.). Experience with high-performance, web-scale, and real-time response systems. Experience in building and managing API endpoints for multimodal clients. Enthusiasm to learn and contribute to a challenging and fun-filled startup. A knack for problem-solving and following efficient coding practices. Very strong interpersonal communication and collaboration skills. Advanced HLD, LLD, and Design Patterns knowledge is a must. Hands-on experience with the RoR and PostgreSQL tech stack.

Hands-on Experience (Advantageous)
Open-source databases and caching: Redis, Memcache, MySQL.
Cloud services: Managing infrastructure with basic services from GCP or AWS, such as VMs, Kubernetes clusters, and load balancers.
Monitoring and observability tools: Prometheus, Grafana, Loki, OpenTelemetry.
Open-source reverse proxies/API gateways: HAProxy, Nginx, Traefik, Caddy, KrakenD.
Open-source WAF tools and firewalls: Fail2ban, ModSecurity, Coraza.
Frontend technologies: HTML, CSS, JavaScript, React JS, Vue JS.
Network protocols and libraries: HTTP, WebSocket, Socket.IO.
Version control and CI/CD: Git, Jenkins, Argo CD, Spinnaker, Terraform.

What's in it for you?
You will have the autonomy to take ownership of your role and contribute to the growth and success of our brand. If you are driven to make an immediate impact, achieve results, thrive in a high-performing team and want to grow in a dynamic and rewarding environment – you belong at Maropost!

Posted 1 day ago


5.0 years

0 - 0 Lacs

Panaji

On-site

Education: Bachelor's or master's degree in Computer Science, Software Engineering, or a related field (or equivalent practical experience).

About the Role
We're creating an internal platform that turns data-heavy engineering workflows—currently spread across spreadsheets, PDFs, e-mail, and third-party portals—into streamlined, AI-assisted services. You'll own large pieces of that build: bringing data in, automating analysis with domain-specific engines, integrating everyday business tools, and partnering with a data analyst to fine-tune custom language models. The work is hands-on and highly autonomous; you'll design, code, deploy, and iterate features that remove manual effort for our engineering and project-management teams.

What You'll Do
AI & LLM Workflows – prototype and deploy large-language-model services for document parsing, validation, and natural-language Q&A.
Automation Services – build Python micro-services that convert unstructured project files into structured stores and trigger downstream calculation tools through their APIs.
Enterprise Integrations – connect calendars, project-tracking portals, and document libraries via REST / Graph APIs and event streams.
DevOps & Cloud – containerize workloads, write CI/CD pipelines, codify infrastructure (Terraform/CloudFormation) and keep runtime costs in check.
Quality & Security – maintain tests, logging, RBAC, encryption, and safe-prompt patterns.
Collaboration – document designs clearly, demo working proofs to stakeholders, and coach colleagues on AI-assisted development practices.

You'll Need
5+ years professional software-engineering experience, including 3+ years Python. Proven track record shipping AI / NLP / LLM solutions (OpenAI, Azure OpenAI, Hugging Face, or similar). Practical DevOps skills: Docker, Git, CI/CD pipelines, and at least one major cloud platform. Experience integrating external SDKs or vendor APIs (engineering, GIS, or document-management domains preferred). Strong written / verbal communication and the discipline to work independently from loosely defined requirements.

Nice-to-Have
Exposure to engineering or construction data (drawings, 3-D models, load calculations, etc.). Modern front-end skills (React / TypeScript) for dashboard or viewer components. Familiarity with Power Automate, Graph API, or comparable workflow tools.

How We Work
Autonomy + Ownership – plan your own sprints, defend technical trade-offs, own deliverables end-to-end.
AI-Augmented Development – we encourage daily use of coding copilots and chat-based problem solving for speed and clarity.
If you enjoy blending practical software engineering with cutting-edge AI tooling to eliminate repetitive work, we'd like to meet you.

Job Types: Full-time, Permanent
Pay: ₹80,000.00 - ₹90,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Yearly bonus
Work Location: In person
Application Deadline: 30/06/2025
Expected Start Date: 30/06/2025

Posted 1 day ago


10.0 years

3 - 5 Lacs

Cochin

On-site

Introduction
We are looking for candidates with 10+ years of experience in a data architect role.

Responsibilities include: Design and implement scalable, secure, and cost-effective data architectures using GCP. Lead the design and development of data pipelines with BigQuery, Dataflow, and Cloud Storage. Architect and implement data lakes, data warehouses, and real-time data processing solutions on GCP. Ensure data architecture aligns with business goals, governance, and compliance requirements. Collaborate with stakeholders to define data strategy and roadmap. Design and deploy BigQuery solutions for optimized performance and cost efficiency. Build and maintain ETL/ELT pipelines for large-scale data processing. Leverage Cloud Pub/Sub, Dataflow, and Cloud Functions for real-time data integration. Implement best practices for data security, privacy, and compliance in cloud environments. Integrate machine learning workflows with data pipelines and analytics tools. Define data governance frameworks and manage data lineage. Lead data modeling efforts to ensure consistency, accuracy, and performance across systems. Optimize cloud infrastructure for scalability, performance, and reliability. Mentor junior team members and ensure adherence to architectural standards. Collaborate with DevOps teams to implement Infrastructure as Code (Terraform, Cloud Deployment Manager). Ensure high availability and disaster recovery solutions are built into data systems. Conduct technical reviews, audits, and performance tuning for data solutions. Design solutions for multi-region and multi-cloud data architecture. Stay updated on emerging technologies and trends in data engineering and GCP. Drive innovation in data architecture, recommending new tools and services on GCP.

Certifications: Google Cloud Certification is preferred.

Primary Skills: 7+ years of experience in data architecture, with at least 3 years in GCP environments. Expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services. Strong experience in data warehousing, data lakes, and real-time data pipelines. Proficiency in SQL, Python, or other data processing languages. Experience with cloud security, data governance, and compliance frameworks. Strong problem-solving skills and ability to architect solutions for complex data environments. Google Cloud Certification (Professional Data Engineer, Professional Cloud Architect) preferred. Leadership experience and ability to mentor technical teams. Excellent communication and collaboration skills.
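
For a concrete feel of the GCP IaC work this role mentions, here is a minimal Terraform sketch of a BigQuery dataset with a partitioned table plus a Pub/Sub topic for ingestion; the project, dataset, and schema are illustrative assumptions:

```hcl
provider "google" {
  project = "my-project" # placeholder project ID
}

resource "google_bigquery_dataset" "analytics" {
  dataset_id = "analytics_demo" # placeholder
  location   = "asia-south1"
}

resource "google_pubsub_topic" "events" {
  name = "ingest-events"
}

# Day-partitioned table: partition pruning keeps scan costs down,
# which is one common BigQuery cost-efficiency lever.
resource "google_bigquery_table" "events" {
  dataset_id = google_bigquery_dataset.analytics.dataset_id
  table_id   = "events"

  time_partitioning {
    type  = "DAY"
    field = "event_ts"
  }

  schema = jsonencode([
    { name = "event_ts", type = "TIMESTAMP", mode = "REQUIRED" },
    { name = "payload",  type = "STRING",    mode = "NULLABLE" },
  ])
}
```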

Posted 1 day ago


5.0 years

6 - 7 Lacs

Thiruvananthapuram

On-site

5 - 7 Years
1 Opening
Trivandrum

Role description
Role Proficiency: Acts under the guidance of a Lead II/Architect; understands customer requirements and translates them into the design of new DevOps (CI/CD) components. Capable of managing at least one Agile team.

Outcomes: Interprets the DevOps tool/feature/component design and develops/supports it in accordance with specifications. Adapts existing DevOps solutions and creates own DevOps solutions for new contexts. Codes, debugs, tests, documents, and communicates DevOps development stages and the status of development/support issues. Selects appropriate technical options for development, such as reusing, improving, or reconfiguring existing components. Optimises the efficiency, cost, and quality of DevOps processes, tools, and technology development. Validates results with user representatives; integrates and commissions the overall solution. Helps engineers troubleshoot issues that are novel/complex and not covered by SOPs. Designs, installs, configures, and troubleshoots CI/CD pipelines and software. Able to automate infrastructure provisioning in the cloud or on premises with the guidance of architects. Provides guidance to DevOps engineers so that they can support existing components. Works with diverse teams using Agile methodologies. Facilitates cost-saving measures through automation. Mentors A1 and A2 resources. Involved in code reviews for the team.

Measures of Outcomes: Quality of deliverables. Error rate/completion rate at various stages of the SDLC/PDLC. Number of components reused. Number of domain/technology/product certifications obtained. SLA for onboarding and supporting users and tickets.

Outputs Expected:
Automated components: Deliver components that automate installation and configuration of software/tools on premises and in the cloud. Deliver components that automate parts of the build/deploy process for applications.
Configured components: Configure a CI/CD pipeline that can be used by application development/support teams.
Scripts: Develop/support scripts (such as PowerShell/Shell/Python scripts) that automate installation, configuration, build, and deployment tasks.
Onboard users: Onboard and extend existing tools to new app dev/support teams.
Mentoring: Mentor and provide guidance to peers.
Stakeholder Management: Guide the team in preparing status updates, keeping management informed, and share status reports with senior stakeholders.
Training/SOPs: Create training plans and SOPs to help DevOps engineers with DevOps activities and with onboarding users.
Measure Process Efficiency/Effectiveness: Measure the efficiency and effectiveness of the current process and make changes to improve both.

Skill Examples: Experience in the design, installation, configuration, and troubleshooting of CI/CD pipelines and software using Jenkins/Bamboo/Ansible/Puppet/Chef/PowerShell/Docker/Kubernetes. Experience integrating with code quality/test analysis tools like SonarQube/Cobertura/Clover. Experience integrating build/deploy pipelines with test automation tools like Selenium/JUnit/NUnit. Scripting skills (Python/Linux Shell/Perl/Groovy/PowerShell). Infrastructure automation skills (Ansible/Puppet/Chef/PowerShell). Repository management/migration automation – Git/Bitbucket/GitHub/ClearCase. Build automation scripts – Maven/Ant. Artefact repository management – Nexus/Artifactory. Dashboard management and automation – ELK/Splunk. Experience in configuration of cloud infrastructure (AWS/Azure/Google). Experience in migration of applications from on-premises to cloud infrastructure. Experience working on Azure DevOps/ARM (Azure Resource Manager)/DSC (Desired State Configuration); strong debugging skills in C#/C Sharp and Dotnet. Setting up and managing Jira projects and Git/Bitbucket repositories. Skilled in containerization tools like Docker/Kubernetes.

Knowledge Examples: Knowledge of installation/config/build/deploy processes and tools. Knowledge of IaaS cloud providers (AWS/Azure/Google etc.) and their tool sets. Knowledge of the application development lifecycle. Knowledge of Quality Assurance processes. Knowledge of quality automation processes and tools. Knowledge of multiple tool stacks, not just one. Knowledge of build branching/merging. Knowledge about containerization. Knowledge of security policies and tools. Knowledge of Agile methodologies.

Additional Comments:
Experience preferred: 5+ years.
Language: Must have expert knowledge of either Go or Java and some knowledge of two others among Go, Java, Python, and C programming & Golang (basic knowledge).
Infra: Brokers – must have some experience, and preferably mastery, in at least one product. We use RabbitMQ and MQTT (Mosquitto). Prefer experience with edge deployments of brokers, because the design perspective is different when it comes to persistence, hardware, and telemetry. Linux shell/scripting. Docker. Kubernetes (k8s) – prefer experience with edge deployments; must have some mastery in this area or in Docker. K3s (nice-to-have).
Tooling: GitLab CI/CD automation. Dashboard building – in any system; someone who can take raw data and make something presentable and usable for production support.
Nice to have: Ansible, Terraform.
Responsibilities: KTLO activities for existing RabbitMQ and MQTT instances, including annual PCI, patching and upgrades, monitoring library upgrades of applications, production support, etc. Project work for RabbitMQ and MQTT instances, including: library enhancements in multiple languages; security enhancements – right now, we are setting up the hardened cluster including all of the requested security changes; telemetry, monitoring, dashboarding, and reporting.

Skills: Java, DevOps, RabbitMQ

About UST
UST is a global digital transformation solutions provider. For more than 20 years, UST has worked side by side with the world's best companies to make a real impact through transformation. Powered by technology, inspired by people and led by purpose, UST partners with their clients from design to operation. With deep domain expertise and a future-proof philosophy, UST embeds innovation and agility into their clients' organizations. With over 30,000 employees in 30 countries, UST builds for boundless impact—touching billions of lives in the process.

Posted 1 day ago


8.0 years

0 Lacs

India

On-site

Job Summary: We are looking for an experienced Cloud Platform Lead to spearhead the design, implementation, and governance of scalable, secure, and resilient cloud-native platforms on Azure. This role requires deep technical expertise in Azure services, Kubernetes (AKS), containers, Application Gateway, Front Door, WAF, and API management, along with the ability to lead cross-functional initiatives and define cloud platform strategy and best practices.

Key Responsibilities:
● Lead the architecture, development, and operations of Azure-based cloud platforms across environments (dev, staging, production).
● Design and manage Azure Front Door, Application Gateway, and WAF to ensure global performance, availability, and security.
● Design and implement the Kubernetes platform (AKS), ensuring reliability, observability, and governance of containerized workloads.
● Drive adoption and standardization of Azure API Management for secure and scalable API delivery.
● Collaborate with security and DevOps teams to implement secure-by-design cloud practices, including WAF rules, RBAC, and network isolation.
● Guide and mentor engineers in Kubernetes, container orchestration, CI/CD pipelines, and Infrastructure as Code (IaC).
● Define and implement monitoring, logging, and alerting best practices using tools like Azure Monitor, ELK, and SigNoz.
● Evaluate and introduce tools, frameworks, and standards to continuously evolve the cloud platform.
● Participate in cost optimization and performance tuning initiatives for cloud services.

Required Skills & Qualifications:
● 8+ years of experience in cloud infrastructure or platform engineering, including at least 4+ years in a leadership or ownership role.
● Deep hands-on expertise with Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management.
● Strong experience with Kubernetes and Azure Kubernetes Service (AKS), including networking, autoscaling, and security.
● Proficient with Docker and container orchestration principles.
● Infrastructure-as-Code experience with Terraform, ARM Templates, or Bicep.
● Excellent understanding of cloud security, identity (AAD, RBAC), and compliance.
● Experience building and guiding CI/CD workflows using tools like Azure DevOps, Bitbucket CI/CD, or similar.

Education: B Tech / BE / M Tech / MCA

Job Type: Full-time
Schedule: Day shift

Application Questions:
What is your total years of experience?
What is your relevant years of experience?
What is your current CTC?
What is your expected CTC?
How long is your notice period?
How many years of experience do you have with Azure Front Door, Application Gateway, Web Application Firewall (WAF), and Azure API Management?
How many years of experience do you have with Terraform, ARM Templates, or Bicep?
How many years of experience do you have with Kubernetes and Azure Kubernetes Service (AKS)?
How many years of experience do you have designing and implementing Azure architecture for production-grade applications on Kubernetes?
How many years of experience do you have with Docker and container orchestration principles?

Work Location: In person
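
For illustration of the Front Door stack this role centres on, here is a minimal Terraform (azurerm) sketch of a Standard-tier Front Door profile, endpoint, and WAF policy; all names are placeholders, and wiring the WAF policy to routes via a security policy is omitted for brevity:

```hcl
provider "azurerm" {
  features {}
}

# Standard-tier Azure Front Door profile.
resource "azurerm_cdn_frontdoor_profile" "edge" {
  name                = "afd-platform-demo"
  resource_group_name = "rg-platform-demo" # assumed to exist
  sku_name            = "Standard_AzureFrontDoor"
}

# Globally routed endpoint under the profile.
resource "azurerm_cdn_frontdoor_endpoint" "web" {
  name                     = "web-endpoint-demo"
  cdn_frontdoor_profile_id = azurerm_cdn_frontdoor_profile.edge.id
}

# WAF policy in Prevention mode, matching the profile's SKU.
resource "azurerm_cdn_frontdoor_firewall_policy" "waf" {
  name                = "wafpolicydemo"
  resource_group_name = "rg-platform-demo"
  sku_name            = "Standard_AzureFrontDoor"
  mode                = "Prevention"
}
```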

Posted 1 day ago


1.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


Who are we and what do we do? BrowserStack is the world’s leading cloud-based software testing platform, empowering over 50,000 customers—including Amazon, Microsoft, Meta, and Google—to deliver high-quality software at speed. Founded in 2011 by Ritesh Arora and Nakul Aggarwal, the company has grown to support more than two million tests daily across 21 global data centers, providing instant access to 35,000+ real devices and browsers. With over 1,200 employees and a remote-first approach, BrowserStack operates at the intersection of scale, reliability, and innovation. Its suite of products spans manual and automated testing, visual regression, accessibility, and test management—all designed to simplify the testing process for modern development teams. Behind the scenes, BrowserStack continues to push the boundaries with AI capabilities like smart test case generation and design, flakiness detection, auto-healing and more —helping teams reduce maintenance overhead, debug faster, and catch issues earlier in the development lifecycle. Recognized for its innovation and growth, BrowserStack has been named to the Forbes Cloud 100 list for four consecutive years. With backing from investors like Accel, Bond, and Insight Partners, the company continues to expand its product offerings and global footprint. Joining BrowserStack means being part of a mission-driven team dedicated to shaping the future of software testing. NOTE : This position is for Mumbai (Remote), please apply only if are from Mumbai or open to relocate to Mumbai. Desired Experience Experience of 1 - 3 years Strong knowledge in: Python and Bash (or similar Unix shell) Working experience with: Ansible, Terraform, Docker, Kubernetes, Prometheus and Cloud platforms like AWS, GCP, Nagios, Jenkins and CI/CD pipelines Good to have: Virtualisation tools like KVM, ESXi Good knowledge of Linux operating systems and networking concepts The drive and self-motivation to understand the intricate details of a complex infrastructure environment. Aggressive problem diagnosis and creative problem-solving skills Startup mentality, high willingness to learn, and hardworking What will you do? Work on AWS Kubernetes to manage our growing fleet of clusters globally Identify areas of improvement in our frameworks, tools, processes and strive to make them better. Evaluate our success metrics and evolve our reporting systems Lead incident response efforts, working closely with cross-functional teams to resolve issues quickly and minimize downtime. Implement effective incident management processes and post-incident reviews Participate in on-call rotation responsibilities, ensuring timely identification and resolution of infrastructure issues Collaborate with the internal team, stakeholders, and partners to implement effective solutions. Provide daily support to customers as they onboard and use our platforms, helping them optimize value, performance, and reliability for their workloads. Contribute to enhancing our platforms' capabilities, prioritizing reliability and scalability. Exhibit strong communication skills and maintain a support-oriented approach when interacting with both technical and non-technical audiences. 
Benefits

In addition to your total compensation, you will be eligible for the following benefits, which will be governed by Company policy:

Medical insurance for self, spouse, up to 2 dependent children, and parents or parents-in-law, up to INR 5,00,000
Gratuity as per the Payment of Gratuity Act, 1972
Unlimited Time Off to ensure our people invest in their wellbeing, to rest and rejuvenate, and to spend quality time with family and friends
Remote-First work environment in India
Remote-First Benefit for home office setup, connectivity, accessories, co-working spaces, and wellbeing to ensure an amazing remote work experience
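A role like this centres on provisioning Kubernetes clusters with IaC. As a rough illustration (not BrowserStack's actual infrastructure), the hedged Terraform sketch below stands up one EKS cluster with the IAM role it needs; the cluster name, role name, and subnet variable are invented for the example.

```hcl
# Assumed inputs: pre-existing private subnets passed in by the caller.
variable "private_subnet_ids" {
  type        = list(string)
  description = "Private subnet IDs for the cluster (assumed to exist)"
}

# Trust policy letting the EKS service assume the cluster role.
data "aws_iam_policy_document" "eks_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["eks.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "eks_cluster" {
  name               = "eks-cluster-role"   # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.eks_assume.json
}

resource "aws_iam_role_policy_attachment" "eks_cluster" {
  role       = aws_iam_role.eks_cluster.name
  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSClusterPolicy"
}

# One member of a hypothetical global fleet of clusters.
resource "aws_eks_cluster" "fleet_member" {
  name     = "prod-ap-south-1"   # hypothetical naming scheme
  role_arn = aws_iam_role.eks_cluster.arn

  vpc_config {
    subnet_ids = var.private_subnet_ids
  }

  depends_on = [aws_iam_role_policy_attachment.eks_cluster]
}
```

Managing a fleet would typically wrap this in a reusable module instantiated once per region, which is exactly the kind of module design that Terraform-focused interviews probe.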

Posted 1 day ago

Apply

7.0 years

4 - 7 Lacs

Hyderābād

On-site

Job Description

Senior Manager - DevSecOps

The Opportunity

Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centres focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centres are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Centre helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centres.

Role Overview

The products are the ecosystem enabling data and analytics for the Manufacturing division within the company’s product line. The enabling capabilities are managed as three distinct “products” – Data Platform, Data Engineering, and Digital Products – utilizing agile methodologies and DevSecOps practices. As a Senior Manager, DevSecOps, you will be responsible for establishing and managing the required DevSecOps processes and practices for this ecosystem in the context of the product model.

What will you do in this role?

Automate and optimize the end-to-end development, testing, deployment, and operational processes utilizing standard DevSecOps tools, collaborating with the technology organization.
Collaborate with the Risk Management organization to incorporate appropriate security and compliance focus and requirements in all capabilities.
Conduct periodic reviews and escalation of operational risks associated with product delivery and the adoption of capabilities.
Identify opportunities to improve operational efficiency and cost of ownership for the platform; coordinate and drive the implementation of improvement opportunities by working with product managers, delivery, and operations squads.
Design and monitor the operational KPIs.
Serve as the point of contact with the company’s Cloud Services for operational activities and coordination of AWS configuration management related items.
Support product delivery with deployment of infrastructure roll-outs and upgrades in coordination with other departments.
Support service management processes by collaborating with the compliance chapter.
Take on the role of a CI/CD administrator, responsible for enforcing the usage of the CI/CD stack within established processes and ensuring the implementation of least-privileged access.
When required, coordinate the resolution of complex issues and problems as they arise across the platform.
What should you have?

Bachelor's degree in Information Technology, Computer Science, or any technology stream.
7+ years of overall work experience in IT
2+ years of experience related to data and analytics products and platforms
Knowledge of, and experience with, DevSecOps concepts
Knowledge of, and experience with, Agile practices and tools
Experience in the technical areas of:
Python - intermediate
CI/CD stack - intermediate
SQL - intermediate
Terminal/CLI - intermediate
Terraform - intermediate
Config management - intermediate
Container management (EKS/Docker) - intermediate
ITIL - intermediate
Infrastructure paradigms - SaaS/PaaS/IaaS

Preferred:
Prior working knowledge and experience with implementing analytics applications utilizing big data platforms in AWS and related tools
Prior experience with applications and technology support and operations

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are

We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for

Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.

#HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Asset Management, Benefits Management, Business, Business Administration, Business Management, Human Resource Management, Management Process, Management System Development, Product Management, Requirements Management, Social Collaboration, Stakeholder Relationship Management, Strategic Planning, System Designs
Preferred Skills:
Job Posting End Date: 09/3/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date.
Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R351488
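The posting's emphasis on a CI/CD administrator enforcing least-privileged access can be expressed directly in Terraform. Below is a hedged sketch of a narrowly scoped deploy role a pipeline runner could assume; every account ID, role name, and bucket name is invented for illustration and is not this employer's setup.

```hcl
# Trust policy: only a specific (hypothetical) runner role may assume the deployer.
data "aws_iam_policy_document" "ci_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "AWS"
      identifiers = ["arn:aws:iam::111122223333:role/ci-runner"]  # hypothetical principal
    }
  }
}

resource "aws_iam_role" "deployer" {
  name               = "cicd-deployer"   # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.ci_assume.json
}

# Least privilege: the deployer may only read/write one artifact prefix.
data "aws_iam_policy_document" "deploy_scope" {
  statement {
    actions   = ["s3:PutObject", "s3:GetObject"]
    resources = ["arn:aws:s3:::example-artifacts/*"]   # hypothetical bucket
  }
}

resource "aws_iam_role_policy" "deploy_scope" {
  name   = "deploy-scope"
  role   = aws_iam_role.deployer.id
  policy = data.aws_iam_policy_document.deploy_scope.json
}
```

The design choice here is the one the posting hints at: grant the pipeline a role it must assume, scoped to exactly the resources a deployment touches, rather than long-lived broad credentials.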

Posted 1 day ago

Apply

8.0 years

0 Lacs

Telangana

On-site

About Chubb

Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India

At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow.

With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Position Details

Job Description

Enterprise Infrastructure Services (EIS) at Chubb is focused on delivering services across multiple disciplines at Chubb. Cloud Engineering is one of the key services responsible for delivering cloud-based services on-prem as well as off-prem. As part of the continued transformation, Chubb is increasing the pace of application transformation into containers and cloud adoption. As such, we are seeking an experienced Cloud Engineer who can be part of this exciting journey at Chubb. As an experienced, hands-on cloud engineer, you will be responsible for both infrastructure automation and container platform adoption at Chubb. A successful candidate would have hands-on experience of container platforms (Kubernetes), cloud platforms (Azure), and experience with software development and DevOps enablement through automation and Infrastructure as Code. The successful candidate will also have the opportunity to build and innovate solutions around various infrastructure problems, from developer experience to operational excellence, across the services provided by the cloud engineering team.
Responsibilities

Work on cloud transformation projects across cloud engineering to provide automation and self-service
Implement automation and self-service capabilities using CI/CD pipelines for infrastructure
Write and maintain Terraform-based Infrastructure as Code
Build operational capabilities around the cloud platform for handing over to Operations after release
Document and design controls and governance policies around the Azure platform and automate deployments of the policies
Manage end-user collaboration; conduct regular sessions to educate end users on services and automation capabilities
Find opportunities for automating away manual tasks
Attend escalations from support teams and provide assistance during major production issues from an engineering perspective

Key Requirements

Experience with large cloud transformation projects, preferably in Azure
Extensive experience with cloud platforms, mainly Azure; strong understanding of Azure services, with demonstrated experience in AKS, App Services, Logic Apps, IAM, Load Balancers, Application Gateway, NSGs, storage and Azure Key Vault
Knowledge of networking concepts and protocols, including VNet, DNS, and load balancing
Writing Infrastructure as Code and pipelines, preferably using Terraform, Ansible, Bash, Python and Jenkins
Has written and executed Terraform-based Infrastructure as Code
Ability to work in both Windows and Linux environments with container platforms such as Kubernetes, AKS, GKE
DevOps experience with the ability to use GitHub, Jenkins and Nexus for pipeline automation and artifact management
Implementation experience of secure transports using TLS and encryption, along with authentication/authorization flows
Experience in certificate management for containerized applications
Experience with Jenkins and similar CI/CD tools; experience in GitOps would be an added advantage
Good to have: Python coding experience in automation or any area

Education and Qualification

Bachelor's degree in Computer Science, Computer Engineering, Information Technology or a relevant field
Minimum of 8 years of experience in IT automation, with 2 years supporting Azure-based cloud automation and 2 years of Kubernetes and Docker
Relevant Azure certifications

Why Chubb?

Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience, along with a start-up-like culture, empowers you to achieve impactful results.

Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence
A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026
Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness, where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results
Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter
Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment

Employee Benefits

Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth.
Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment.

Our benefits include:

Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits and car lease that help employees optimally plan their finances
Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like education reimbursement programs, certification programs and access to global learning programs
Health and welfare benefits: We care about our employees’ well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns and comprehensive insurance benefits

Application Process

Our recruitment process is designed to be transparent and inclusive.

Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us

With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey.

Apply Now: Chubb External Careers
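Several of the responsibilities above (Terraform IaC, Azure Key Vault, secure configuration) can be illustrated with a small example. This is a generic, hedged sketch under invented names, not Chubb's environment: a Key Vault with purge protection and one secret supplied as a sensitive input.

```hcl
# Current tenant/client context of whoever runs Terraform.
data "azurerm_client_config" "current" {}

resource "azurerm_resource_group" "secrets" {
  name     = "rg-secrets-example"   # hypothetical name
  location = "Central India"
}

resource "azurerm_key_vault" "app" {
  name                      = "kvappexample001"   # hypothetical; must be globally unique
  location                  = azurerm_resource_group.secrets.location
  resource_group_name       = azurerm_resource_group.secrets.name
  tenant_id                 = data.azurerm_client_config.current.tenant_id
  sku_name                  = "standard"
  purge_protection_enabled  = true
  enable_rbac_authorization = true   # the caller needs a Key Vault data-plane RBAC role to write secrets
}

# Sensitive input: never stored in plain text in the configuration.
variable "db_connection_string" {
  type      = string
  sensitive = true
}

resource "azurerm_key_vault_secret" "db_conn" {
  name         = "db-connection-string"
  value        = var.db_connection_string
  key_vault_id = azurerm_key_vault.app.id
}
```

Note that the secret value still lands in Terraform state, which is why remote, encrypted state storage usually accompanies a setup like this.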

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Summary

Location: Hyderabad

To work on the AWS platform managing SAP workloads and develop automation scripts using AWS services. Support a 24*7 environment and be ready to learn newer technologies.

About the Role

Major Accountabilities

Solve incidents and perform changes in the AWS Cloud environment.
Own and drive incidents and their resolution.
Must have extensive knowledge of performance troubleshooting and capacity management.
Champion the standardization and simplification of AWS operations involving various services including S3, EC2, EBS, Lambda, networking, NACLs, Security Groups and others.
Prepare and run internal AWS projects, identify critical integration points and dependencies, propose solutions for key gaps, and provide effort estimations while ensuring alignment with business and other teams.
Assure consistency and traceability between user requirements, functional specifications, Agile ways of working and adapting to DevSecOps, architectural roadmaps, regulatory/control requirements, and smooth transition of solutions to operations.
Deliver assigned project work as per agreed timelines, within budget and on quality, adhering to the release calendars.
Able to work in a dynamic environment supporting users across the globe. Should be a team player. Weekend on-call duties would be applicable as needed.

Minimum Requirements

Bachelor's degree in business/technical domains
AWS Cloud certifications/trainings
Able to handle OS security vulnerabilities and administer patches and upgrades
> 5 years of relevant professional IT experience in the related technical area
Proven experience in handling AWS Cloud workloads, preparing Terraform scripts and running pipelines.
Excellent troubleshooting skills; independently able to solve P1/P2 incidents.
Working knowledge of DR, clustering, SUSE Linux and tools associated with the AWS ecosystem
Knowledge of handling SAP workloads would be an added advantage.
Extensive monitoring experience; should have worked in a 24*7 environment in the past.
Experience with installing and setting up SAP environments in AWS Cloud: EC2 instance setup, EBS and EFS setup, S3 configuration, alert configuration in CloudWatch, and management of extending filesystems and adding new HANA instances.
Capacity/consumption management; manage AWS Cloud accounts along with VPCs, subnets and NAT.
Good knowledge of NACLs and Security Groups, usage of CloudFormation and automation pipelines, and Identity and Access Management. Create and manage Multi-Factor Authentication.
Good understanding of ITIL v4 principles; able to work in a complex 24*7 environment.
Proven track record of broad industry experience and excellent understanding of complex enterprise IT landscapes and relationships

Why consider Novartis?

Our purpose is to reimagine medicine to improve and extend people’s lives and our vision is to become the most valued and trusted medicines company in the world. How can we achieve this? With our people. It is our associates that drive us each day to reach our ambitions. Be a part of this mission and join us! Learn more here: https://www.novartis.com/about/strategy/people-and-culture

Commitment to Diversity and Inclusion: Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.
Join our Novartis Network: If this role is not suitable to your experience or career goals but you wish to stay connected to hear more about Novartis and our career opportunities, join the Novartis Network here: https://talentnetwork.novartis.com/network

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients’ lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Benefits and Rewards: Read our handbook to learn about all the ways we’ll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Division: Operations
Business Unit: CTS
Location: India
Site: Hyderabad (Office)
Company / Legal Entity: IN10 (FCRS = IN010) Novartis Healthcare Private Limited
Functional Area: Technology Transformation
Job Type: Full time
Employment Type: Regular
Shift Work: No
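For flavour, here is a hedged Terraform sketch of the kind of SAP-on-AWS building blocks this role describes: a HANA-class EC2 instance with an attached gp3 data volume and a CloudWatch CPU alarm. The AMI ID, instance size, names, and thresholds are placeholders, not Novartis values.

```hcl
resource "aws_instance" "hana" {
  ami           = "ami-0123456789abcdef0"   # placeholder: a SUSE Linux AMI ID would go here
  instance_type = "r5.8xlarge"              # illustrative memory-optimized size

  tags = { Name = "sap-hana-dev" }          # hypothetical name
}

# Separate data volume, as SAP filesystems are typically extended independently.
resource "aws_ebs_volume" "hana_data" {
  availability_zone = aws_instance.hana.availability_zone
  size              = 512
  type              = "gp3"
}

resource "aws_volume_attachment" "hana_data" {
  device_name = "/dev/sdf"
  volume_id   = aws_ebs_volume.hana_data.id
  instance_id = aws_instance.hana.id
}

# Alert configuration in CloudWatch, as the posting mentions.
resource "aws_cloudwatch_metric_alarm" "cpu_high" {
  alarm_name          = "sap-hana-cpu-high"
  namespace           = "AWS/EC2"
  metric_name         = "CPUUtilization"
  statistic           = "Average"
  period              = 300
  evaluation_periods  = 3
  threshold           = 90
  comparison_operator = "GreaterThanThreshold"
  dimensions          = { InstanceId = aws_instance.hana.id }
}
```

A production SAP landscape adds EFS shares, placement, backup policies, and DR pairing; the sketch only mirrors the items the posting lists (EC2, EBS, CloudWatch alerts).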

Posted 1 day ago

Apply

8.0 years

28 - 30 Lacs

Hyderābād

On-site

Experience - 8+ Years
Budget - 30 LPA (Including Variable Pay)
Location - Bangalore, Hyderabad, Chennai (Hybrid)
Shift Timing - 2 PM - 11 PM

ETL Development Lead (8+ years)

Experience leading and mentoring a team of Talend ETL developers, providing technical direction and guidance on ETL/data integration development to the team.
Designing complex data integration solutions using Talend & AWS.
Collaborating with stakeholders to define project scope, timelines, and deliverables.
Contributing to project planning, risk assessment, and mitigation strategies.
Ensuring adherence to project timelines and quality standards.
Strong understanding of ETL/ELT concepts, data warehousing principles, and database technologies.
Design, develop, and implement ETL (Extract, Transform, Load) processes using Talend Studio and other Talend components.
Build and maintain robust and scalable data integration solutions to move and transform data between various source and target systems (e.g., databases, data warehouses, cloud applications, APIs, flat files).
Develop and optimize Talend jobs, workflows, and data mappings to ensure high performance and data quality.
Troubleshoot and resolve issues related to Talend jobs, data pipelines, and integration processes.
Collaborate with data analysts, data engineers, and other stakeholders to understand data requirements and translate them into technical solutions.
Perform unit testing and participate in system integration testing of ETL processes.
Monitor and maintain Talend environments, including job scheduling and performance tuning.
Document technical specifications, data flow diagrams, and ETL processes.
Stay up-to-date with the latest Talend features, best practices, and industry trends.
Participate in code reviews and contribute to the establishment of development standards.
Proficiency in using Talend Studio, Talend Administration Center/TMC, and other Talend components.
Experience working with various data sources and targets, including relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL), NoSQL databases, the AWS cloud platform, APIs (REST, SOAP), and flat files (CSV, TXT).
Strong SQL skills for data querying and manipulation.
Experience with data profiling, data quality checks, and error handling within ETL processes.
Familiarity with job scheduling tools and monitoring frameworks.
Excellent problem-solving, analytical, and communication skills.
Ability to work independently and collaboratively within a team environment.
Basic understanding of AWS services, i.e. EC2, S3, EFS, EBS, IAM, AWS roles, CloudWatch Logs, VPC, Security Groups, Route 53, Network ACLs, Amazon Redshift, Amazon RDS, Amazon Aurora, Amazon DynamoDB.
Understanding of AWS data integration services, i.e. Glue, Data Pipeline, Amazon Athena, AWS Lake Formation, AppFlow, Step Functions.

Preferred Qualifications:
Experience leading and mentoring a team of 8+ Talend ETL developers.
Experience working with US healthcare customers.
Bachelor's degree in Computer Science, Information Technology, or a related field.
Talend certifications (e.g., Talend Certified Developer), AWS Certified Cloud Practitioner/Data Engineer Associate.
Experience with AWS data & infrastructure services.
A basic understanding of Terraform and GitLab is required.
Experience with scripting languages such as Python or shell scripting.
Experience with agile development methodologies.
Understanding of big data technologies (e.g., Hadoop, Spark) and the Talend Big Data platform.
Job Type: Full-time Pay: ₹2,800,000.00 - ₹3,000,000.00 per year Schedule: Day shift Work Location: In person
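Since the posting pairs AWS Glue with basic Terraform knowledge, here is a hedged sketch of provisioning a Glue ETL job with IaC. The bucket, role, script path, and job name are all invented for illustration.

```hcl
# Service role that Glue assumes to run the job.
data "aws_iam_policy_document" "glue_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["glue.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "glue" {
  name               = "glue-etl-role"   # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.glue_assume.json
}

resource "aws_iam_role_policy_attachment" "glue_service" {
  role       = aws_iam_role.glue.name
  policy_arn = "arn:aws:iam::aws:policy/service-role/AWSGlueServiceRole"
}

# Bucket holding the (hypothetical) PySpark job script.
resource "aws_s3_bucket" "etl_scripts" {
  bucket = "example-etl-scripts-0001"   # hypothetical; bucket names are globally unique
}

resource "aws_glue_job" "nightly_load" {
  name     = "nightly-load"
  role_arn = aws_iam_role.glue.arn

  command {
    script_location = "s3://${aws_s3_bucket.etl_scripts.bucket}/jobs/nightly_load.py"
    python_version  = "3"
  }

  glue_version      = "4.0"
  worker_type       = "G.1X"
  number_of_workers = 5
}
```

In a Talend-plus-AWS shop the same pattern (role, bucket, job resource) recurs for each pipeline, which is why the posting expects the lead to read and review Terraform even if Talend does the transformation work.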

Posted 1 day ago

Apply

0 years

2 - 9 Lacs

Hyderābād

On-site

Job description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will:

Design, build, and maintain scalable, reliable, and secure infrastructure using tools like Terraform and Ansible.
Automate deployment pipelines and infrastructure provisioning.
Develop and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines using tools like Jenkins.
Ensure smooth and efficient code deployment processes.
Manage and optimize Google Cloud infrastructure.
Implement cost optimization strategies and monitor cloud resource usage.
Work closely with development teams to ensure smooth integration of new features and services.
Troubleshoot and resolve infrastructure and application issues.
Implement and maintain secrets management tools like HashiCorp Vault.
Work hands-on with version control systems like Git.

Requirements

To be successful in this role, you should meet the following requirements:

Proficiency in scripting languages like Python, PySpark, Shell/Bash and YAML.
Experience with containerization tools like Docker and orchestration platforms like Kubernetes.
Proven experience with GCP, specifically with BigQuery.
Strong working knowledge of Google Cloud, Python, BigQuery, Kubeflow.
Strong SQL skills and experience querying large datasets in BigQuery.
Good working knowledge of GitHub.
DevOps principles & automation tools (Jenkins, Ansible, Nexus, CI/CD).
Terraform working experience to set up the infrastructure.
Agile development principles (Scrum, Jira, Confluence).

ESSENTIAL SKILLS (non-technical)

Excellent communication skills
Ability to explain complex ideas
Ability to work as part of a team
Ability to work in a team that is located across multiple regions/time zones
Willingness to adapt and learn new things
Willingness to take ownership of tasks
Strong collaboration skills and experience working in diverse, global teams
Excellent problem-solving skills and ability to work independently and as part of a team

You’ll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
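Given the GCP/BigQuery plus Terraform combination this role asks for, a minimal hedged sketch of provisioning a BigQuery dataset and a partitioned table follows. The project ID, dataset, table, and schema are illustrative assumptions only.

```hcl
provider "google" {
  project = "example-project"   # hypothetical project ID
  region  = "asia-south1"
}

resource "google_bigquery_dataset" "analytics" {
  dataset_id = "analytics"      # hypothetical name
  location   = "asia-south1"
}

# Day-partitioned table: queries filtered on ts scan only the relevant partitions,
# which is the usual cost/performance lever for large BigQuery datasets.
resource "google_bigquery_table" "events" {
  dataset_id = google_bigquery_dataset.analytics.dataset_id
  table_id   = "events"

  time_partitioning {
    type = "DAY"
  }

  schema = jsonencode([
    { name = "event_id", type = "STRING", mode = "REQUIRED" },
    { name = "ts", type = "TIMESTAMP", mode = "REQUIRED" },
  ])
}
```

Keeping the dataset schema in Terraform gives the same review/versioning workflow for data infrastructure that the posting expects for compute infrastructure.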

Posted 1 day ago

Apply

3.0 - 6.0 years

5 - 7 Lacs

Hyderābād

On-site

CORE BUSINESS OPERATIONS

The Core Business Operations (CBO) portfolio is an integrated set of offerings that addresses our clients’ heart-of-the-business issues. This portfolio combines our functional and technical capabilities to help clients transform, modernize, and run their existing technology platforms across industries. As our clients navigate dynamic and disruptive markets, these solutions are designed to help them drive product and service innovation, improve financial performance, accelerate speed to market, and operate their platforms to innovate continuously.

ROLE

Level: Consultant

As a Consultant at Deloitte Consulting, you will be responsible for individually delivering high quality work products within due timelines in an agile framework. On a need basis, consultants will mentor and/or direct junior team members and liaise with onsite/offshore teams to understand the functional requirements. As an AWS Infrastructure Engineer, you play a crucial role in building and maintaining a cloud infrastructure on Amazon Web Services (AWS). You will also be responsible for the ownership of tasks assigned through SNOW, dashboards, order forms, etc.

The work you will do includes:
Build and operate the cloud infrastructure on AWS
Continuously monitor the health and performance of the infrastructure and resolve any issues
Use tools like CloudFormation, Terraform, or Ansible to automate infrastructure provisioning and configuration
Administer EC2 instance operating systems such as Windows and Linux
Work with other teams to deploy secure, scalable, and cost-effective cloud solutions based on AWS services
Implement monitoring and logging for infrastructure and applications
Keep the infrastructure up to date with the latest security patches and software versions
Collaborate with development, operations and security teams to establish best practices for software development, build, deployment, and infrastructure management
Handle tasks related to IAM, monitoring, backup and vulnerability remediation
Participate in performance testing and capacity planning activities
Documentation, weekly/bi-weekly deck preparation, KB article updates
Handover and on-call support during weekends on a rotational basis

QUALIFICATIONS

Skills / Project Experience:

Must Have:
3 - 6 years of hands-on experience in AWS Cloud, CloudFormation templates, Windows/Linux administration
Understanding of 2-tier, 3-tier or multi-tier architecture
Experience with IaaS/PaaS/SaaS
Understanding of disaster recovery
Networking and security expertise
Knowledge of PowerShell, Shell and Python
Associate/Professional level certification in AWS solution architecture
ITIL Foundation certification
Good interpersonal and communication skills
Flexibility to adapt and apply innovation to varied business domains and apply technical solutioning and learnings to use cases across business domains and industries
Knowledge of and experience working with Microsoft Office tools

Good to Have:
Understanding of container technologies such as Docker, Kubernetes and OpenShift
Understanding of application and other infrastructure monitoring tools
Understanding of the end-to-end infrastructure landscape
Experience with virtualization platforms
Knowledge of Chef, Puppet, Bamboo, Concourse etc.
Knowledge of microservices, data lakes, machine learning etc.

Education:
B.E./B.Tech/M.C.A./M.Sc (CS) degree or equivalent from an accredited university

Prior Experience:
3 – 6 years of experience working with AWS, system administration, IaC etc.

Location: Hyderabad/Pune

The team

Deloitte Consulting LLP’s Technology Consulting practice is dedicated to helping our clients build tomorrow by solving today’s complex business problems involving strategy, procurement, design, delivery, and assurance of technology solutions. Our service areas include analytics and information management, delivery, cyber risk services, and technical strategy and architecture, as well as the spectrum of digital strategy, design, and development services. The Core Business Operations practice optimizes clients’ business operations and helps them take advantage of new technologies. It drives product and service innovation, improves financial performance, accelerates speed to market, and operates client platforms to innovate continuously. Learn more about our Technology Consulting practice on www.deloitte.com.

For information on CBO visit - https://www.youtube.com/watch?v=L1cGlScLuX0
For information on life of an Analyst at CBO visit - https://www.youtube.com/watch?v=CMe0DkmMQHI

Recruiting tips

From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture

Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose

Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development

From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 302308

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Hyderābād

On-site

Be a part of a team that’s ensuring Dell Technologies' product integrity and customer satisfaction. Our IT Software Engineer team turns business requirements into technology solutions by designing, coding and testing/debugging applications, as well as documenting procedures for use and constantly seeking quality improvements. Join us to do the best work of your career and make a profound social impact as a Software Engineer 2-IT on our Software Engineer-IT Team in Hyderabad.

What you’ll achieve

As an IT Software Engineer, you will deliver products and improvements for a changing world. Working at the cutting edge, you will craft and develop software for platforms, peripherals, applications and diagnostics — all with the most sophisticated technologies, tools, software engineering methodologies and partnerships.

You will:
Work with complicated business applications across functional areas
Take design from concept to production, which may include design reviews, feature implementation, debugging, testing, issue resolution and factory support
Manage design and code reviews with a focus on the best user experience, performance, scalability and future expansion

Take the first step towards your dream career

Every Dell Technologies team member brings something unique to the table. Here’s what we are looking for with this role:

Essential Requirements
Strong experience in scripting languages like Bash, Python, or Groovy for automation
Working experience of Git-based workflows and understanding of CI/CD pipelines, runners, and YAML configuration
Hands-on experience with Docker/Kaniko, Kubernetes and microservice-based deployments
Strong knowledge of GitLab, Ansible, Terraform, and monitoring tools like Prometheus and Grafana
Experience troubleshooting deployment or integration issues and optimizing CI/CD pipelines efficiently

Desirable Requirements
3 to 5 years of experience in software/coding/IT software

Who we are

We believe that each of us has the power to make an impact. That’s why we put our team members at the center of everything we do. If you’re looking for an opportunity to grow your career with some of the best minds and most advanced tech in the industry, we’re looking for you. Dell Technologies is a unique family of businesses that helps individuals and organizations transform how they work, live and play. Join us to build a future that works for everyone because Progress Takes All of Us.

Application closing date: 30-July-25

Dell Technologies is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment.

Job ID: R270155

Posted 1 day ago

Apply

6.0 - 10.0 years

0 Lacs

Delhi

On-site

Job requisition ID :: 84234
Date: Jun 15, 2025
Location: Delhi
Designation: Senior Consultant
Entity:

What impact will you make?

Every day, your work will make an impact that matters, while you thrive in a dynamic culture of inclusion, collaboration and high performance. As the undisputed leader in professional services, Deloitte is where you will find unrivaled opportunities to succeed and realize your full potential.

The Team

Deloitte’s Technology & Transformation practice can help you uncover and unlock the value buried deep inside vast amounts of data. Our global network provides strategic guidance and implementation services to help companies manage data from disparate sources and convert it into accurate, actionable information that can support fact-driven decision-making and generate an insight-driven advantage. Our practice addresses the continuum of opportunities in business intelligence & visualization, data management, performance management and next-generation analytics and technologies, including big data, cloud, cognitive and machine learning. Learn more about the Analytics and Information Management practice.

Work you’ll do

As a Senior Consultant in our Consulting team, you’ll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations.

We are seeking a highly skilled Senior AWS DevOps Engineer with 6-10 years of experience to lead the design, implementation, and optimization of AWS cloud infrastructure, CI/CD pipelines, and automation processes. The ideal candidate will have in-depth expertise in Terraform, Docker, Kubernetes, and big data technologies such as Hadoop and Spark. You will be responsible for overseeing the end-to-end deployment process, ensuring the scalability, security, and performance of cloud systems, and mentoring junior engineers.

Overview:

We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Exp - 2 to 7 years
Location - Bangalore, Chennai, Coimbatore, Delhi, Mumbai, Bhubaneswar

Key Responsibilities:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Required Qualifications:
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
AWS services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
Big data: Hadoop, Spark, Delta Lake
Programming: Python, PySpark
Databases: SQL, PostgreSQL, NoSQL
Data warehousing and analytics
ETL/ELT processes
Data lake architectures
Version control: GitHub

Your role as a leader

At Deloitte India, we believe in the importance of leadership at all levels. We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society and make an impact that matters. In addition to living our purpose, Senior Consultants across our organization:

Develop high-performing people and teams through challenging and meaningful opportunities
Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and borders
Influence clients, teams, and individuals positively, leading by example and establishing confident relationships with increasingly senior people
Understand key objectives for clients and Deloitte; align people to objectives and set priorities and direction
Act as a role model, embracing and living our purpose and values, and recognizing others for the impact they make

How you will grow

At Deloitte, our professional development plan focuses on helping people at every level of their career to identify and use their strengths to do their best work every day. From entry-level employees to senior leaders, we believe there is always room to learn. We offer opportunities to help build excellent skills in addition to hands-on experience in the global, fast-changing business world. From on-the-job learning experiences to formal development programs at Deloitte University, our professionals have a variety of opportunities to continue to grow throughout their career. Explore Deloitte University, The Leadership Centre.

Benefits

At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our purpose

Deloitte is led by a purpose: To make an impact that matters. Every day, Deloitte people are making a real impact in the places they live and work. We pride ourselves on doing not only what is good for clients, but also what is good for our people and the communities in which we live and work—always striving to be an organization that is held up as a role model of quality, integrity, and positive change. Learn more about Deloitte's impact on the world.

Recruiter tips

We want job seekers exploring opportunities at Deloitte to feel prepared and confident. To help you with your interview, we suggest that you do your research: know some background about the organization and the business area you are applying to. Check out recruiting tips from Deloitte professionals.
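Since the posting highlights serverless orchestration with AWS Step Functions, here is a hedged Terraform sketch of a one-step state machine that starts a Glue job synchronously. The role name, state machine name, and job name are invented for the example.

```hcl
# Trust policy letting Step Functions assume the execution role.
data "aws_iam_policy_document" "sfn_assume" {
  statement {
    actions = ["sts:AssumeRole"]
    principals {
      type        = "Service"
      identifiers = ["states.amazonaws.com"]
    }
  }
}

resource "aws_iam_role" "sfn" {
  name               = "etl-orchestrator-role"   # hypothetical name
  assume_role_policy = data.aws_iam_policy_document.sfn_assume.json
}

resource "aws_sfn_state_machine" "etl" {
  name     = "etl-orchestrator"   # hypothetical name
  role_arn = aws_iam_role.sfn.arn

  # Single task that runs a pre-existing (hypothetical) Glue job and waits
  # for completion via the .sync service integration.
  definition = jsonencode({
    StartAt = "RunGlueJob"
    States = {
      RunGlueJob = {
        Type     = "Task"
        Resource = "arn:aws:states:::glue:startJobRun.sync"
        Parameters = {
          JobName = "nightly-load"   # hypothetical Glue job name
        }
        End = true
      }
    }
  })
}
```

Real pipelines chain additional states (crawlers, quality checks, Redshift loads) and attach an IAM policy granting the role `glue:StartJobRun`; the sketch shows only the orchestration skeleton.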

Posted 1 day ago

Apply

4.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company: Keka HR
Website: Visit Website
Business Type: Startup
Company Type: Product
Business Model: B2B
Funding Stage: Series A
Industry: HRMS
Salary Range: ₹ 10-25 Lacs PA

Job Description

About the Role

We are looking for a highly skilled Site Reliability Engineer (SRE) to lead the implementation and management of our observability stack across Azure-hosted infrastructure and .NET Core applications. This role will focus on configuring and managing OpenTelemetry, Prometheus, Loki, and Tempo, along with setting up robust alerting systems across all services — including Azure infrastructure and MSSQL databases. You will work closely with developers, DevOps, and infrastructure teams to ensure the performance, reliability, and visibility of our .NET Core applications and cloud services.

Key Responsibilities

Observability Platform Implementation:
Design and maintain distributed tracing, metrics, and logging using OpenTelemetry, Prometheus, Loki, and Tempo.
Ensure complete instrumentation of .NET Core applications for end-to-end visibility.
Implement telemetry pipelines for application logs, performance metrics, and traces.

Monitoring & Alerting:
Develop and manage SLIs, SLOs, and error budgets.
Create actionable, noise-free alerts using Prometheus Alertmanager and Azure Monitor.
Monitor key infrastructure components, applications, and databases with a focus on reliability and performance.

Azure & Infrastructure Integration:
Integrate Azure services (App Services, VMs, Storage, etc.) with the observability stack.
Configure monitoring for MSSQL databases, including performance tuning metrics and health indicators.
Use Azure Monitor, Log Analytics, and custom exporters where necessary.

Automation & DevOps:
Automate observability configurations using Terraform, PowerShell, or other IaC tools.
Integrate telemetry validation and health checks into CI/CD pipelines.
Maintain observability as code for repeatable deployments and easy scaling.

Resilience & Reliability Engineering:
Conduct capacity planning to anticipate scaling needs based on usage patterns and growth.
Define and implement disaster recovery strategies for critical Azure-hosted services and databases.
Perform load and stress testing to identify performance bottlenecks and validate infrastructure limits.
Support release engineering by integrating observability checks and rollback strategies in CI/CD pipelines.
Apply chaos engineering practices in lower environments to uncover potential reliability risks proactively.

Collaboration & Documentation:
Partner with engineering teams to promote observability best practices in .NET Core development.
Create dashboards (Grafana preferred) and runbooks for system insights and incident response.
Document monitoring standards, troubleshooting guides, and onboarding materials.

Required Skills and Experience
4+ years of experience in SRE, DevOps, or infrastructure-focused roles.
Deep experience with .NET Core application observability using OpenTelemetry.
Proficiency with Prometheus, Loki, Tempo, and related observability tools.
Strong background in Azure infrastructure monitoring, including App Services and VMs.
Hands-on experience monitoring MSSQL databases (deadlocks, query performance, etc.).
Familiarity with Infrastructure as Code (Terraform, Bicep) and scripting (PowerShell, Bash).
Experience building and tuning alerts, dashboards, and metrics for production systems.

Preferred Qualifications
Azure certifications (e.g., AZ-104, AZ-400).
Experience with Grafana, Azure Monitor, and Log Analytics integration.
Familiarity with distributed systems and microservice architectures.
Prior experience in high-availability, regulated, or customer-facing environments.
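"Observability as code", as this posting calls it, typically means alert rules live in Terraform next to the infrastructure they watch. A hedged sketch of one Azure Monitor metric alert on an App Service's HTTP 5xx rate follows; the resource group, action group, e-mail, and threshold are all illustrative assumptions.

```hcl
# Input: resource ID of the (pre-existing) App Service to monitor.
variable "app_service_id" {
  type        = string
  description = "Resource ID of the monitored App Service"
}

resource "azurerm_monitor_action_group" "oncall" {
  name                = "ag-oncall"          # hypothetical name
  resource_group_name = "rg-observability"   # assumed to exist
  short_name          = "oncall"

  email_receiver {
    name          = "sre"
    email_address = "sre@example.com"        # hypothetical address
  }
}

# Fire when more than 5 server errors occur within a 5-minute window,
# evaluated every minute.
resource "azurerm_monitor_metric_alert" "http_5xx" {
  name                = "app-http-5xx"
  resource_group_name = "rg-observability"
  scopes              = [var.app_service_id]
  severity            = 2
  frequency           = "PT1M"
  window_size         = "PT5M"

  criteria {
    metric_namespace = "Microsoft.Web/sites"
    metric_name      = "Http5xx"
    aggregation      = "Total"
    operator         = "GreaterThan"
    threshold        = 5
  }

  action {
    action_group_id = azurerm_monitor_action_group.oncall.id
  }
}
```

Keeping the alert in code means the SLO threshold is reviewed in the same pull request as the service change that might affect it.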

Posted 1 day ago

Apply

Exploring Terraform Jobs in India

Terraform, an infrastructure as code tool developed by HashiCorp, is gaining popularity in the tech industry, especially in the field of DevOps and cloud computing. In India, the demand for professionals skilled in Terraform is on the rise, with many companies actively hiring for roles related to infrastructure automation and cloud management using this tool.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities are known for their strong tech presence and have a high demand for Terraform professionals.

Average Salary Range

The salary range for Terraform professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 5-8 lakhs per annum, while professionals with several years of experience can earn upwards of INR 15 lakhs per annum.

Career Path

In the Terraform job market, a typical career progression can include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually, Architect. As professionals gain experience and expertise in Terraform, they can take on more challenging and leadership roles within organizations.

Related Skills

Alongside Terraform, professionals in this field are often expected to have knowledge of related tools and technologies such as AWS, Azure, Docker, Kubernetes, scripting languages like Python or Bash, and infrastructure monitoring tools.

Interview Questions

  • What is Terraform and how does it differ from other infrastructure as code tools? (basic)
  • What are the key components of a Terraform configuration? (basic)
  • How do you handle sensitive data in Terraform? (medium)
  • Explain the difference between Terraform plan and apply commands. (medium)
  • How would you troubleshoot issues with a Terraform deployment? (medium)
  • What is the purpose of Terraform state files? (basic)
  • How do you manage Terraform modules in a project? (medium)
  • Explain the concept of Terraform providers. (medium)
  • How would you set up remote state storage in Terraform? (medium; see the sketch after this list)
  • What are the advantages of using Terraform for infrastructure automation? (basic)
  • How does Terraform support infrastructure drift detection? (medium)
  • Explain the role of Terraform workspaces. (medium)
  • How would you handle versioning of Terraform configurations? (medium)
  • Describe a complex Terraform project you have worked on and the challenges you faced. (advanced)
  • How does Terraform ensure idempotence in infrastructure deployments? (medium)
  • What are the key features of Terraform Enterprise? (advanced)
  • How do you integrate Terraform with CI/CD pipelines? (medium)
  • Explain the concept of Terraform backends. (medium)
  • How does Terraform manage dependencies between resources? (medium)
  • What are the best practices for organizing Terraform configurations? (basic)
  • How would you implement infrastructure as code using Terraform for a multi-cloud environment? (advanced)
  • How does Terraform handle rollbacks in case of failed deployments? (medium)
  • Describe a scenario where you had to refactor Terraform code for improved performance. (advanced)
  • How do you ensure security compliance in Terraform configurations? (medium)
  • What are the limitations of Terraform? (basic)
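Several of the questions above (remote state, backends, providers, sensitive data) can be grounded in one minimal configuration. The sketch below is illustrative only: the bucket, DynamoDB table, region, and parameter path are invented, and a real project would parameterize them per environment.

```hcl
terraform {
  # Remote state in S3 with DynamoDB-based locking: answers the remote-state
  # and backend questions. Note backend blocks cannot reference variables.
  backend "s3" {
    bucket         = "example-tf-state"          # hypothetical bucket
    key            = "envs/dev/terraform.tfstate"
    region         = "ap-south-1"
    dynamodb_table = "tf-locks"                  # hypothetical lock table
    encrypt        = true
  }

  # Providers are the plugins that translate resources into API calls.
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 5.0"
    }
  }
}

provider "aws" {
  region = "ap-south-1"
}

# Sensitive data handling: the value is redacted from plan/apply output,
# though it still ends up in the (encrypted) state file.
variable "db_password" {
  type      = string
  sensitive = true
}

resource "aws_ssm_parameter" "db_password" {
  name  = "/app/db_password"   # hypothetical parameter path
  type  = "SecureString"
  value = var.db_password
}
```

Running `terraform init` configures the backend, `terraform plan` shows the proposed changes against the recorded state, and `terraform apply` executes them; the state file is what lets Terraform detect drift and keep repeated applies idempotent.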

Closing Remark

As you explore opportunities in the Terraform job market in India, remember to continuously upskill, stay updated on industry trends, and practice for interviews to stand out among the competition. With dedication and preparation, you can secure a rewarding career in Terraform and contribute to the growing demand for skilled professionals in this field. Good luck!
