105 Linkerd Jobs - Page 3

Set up a job alert
JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

8.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

On-site

Job Requirements
We are looking for a seasoned DevOps Architect to lead the design and implementation of automated, scalable, and secure DevOps pipelines and infrastructure. The ideal candidate will bridge development and operations by architecting robust CI/CD processes, ensuring infrastructure as code (IaC) adoption, promoting a culture of automation, and enabling rapid software delivery across cloud and hybrid environments.

Key Roles and Responsibilities
- Design end-to-end DevOps architecture and tooling that supports development, testing, and deployment workflows.
- Define best practices for source control, build processes, code quality, and artifact repositories.
- Collaborate with stakeholders to align DevOps initiatives with business and technical goals.
- Architect and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, Azure DevOps, CircleCI, or ArgoCD.
- Ensure pipelines are scalable, efficient, and support multi-environment deployments.
- Integrate automated testing, security scanning, and deployment verifications into pipelines.
- Lead the implementation of IaC using tools like Terraform, AWS CloudFormation, Azure ARM, or Pulumi.
- Enforce version-controlled infrastructure and promote immutable infrastructure principles.
- Manage infrastructure changes through GitOps practices and reviews.
- Design and support containerized workloads using Docker and orchestration platforms like Kubernetes or OpenShift.
- Implement Helm charts, Operators, and auto-scaling strategies.
- Architect cloud-native infrastructure in AWS, Azure, or GCP for microservices applications.
- Set up observability frameworks using tools like Prometheus, Grafana, ELK/EFK, Splunk, or Datadog.
- Implement alerting mechanisms and dashboards for system health and performance.
- Participate in incident response, root cause analysis, and postmortem reviews.
- Integrate security practices (DevSecOps) into all phases of the delivery pipeline.
- Enforce policies for secrets management, access controls, and software supply chain integrity.
- Ensure compliance with regulations like SOC 2, HIPAA, or ISO 27001.
- Automate repetitive tasks such as provisioning, deployments, and environment setups.
- Integrate tools across the DevOps lifecycle, including Jira, ServiceNow, SonarQube, Nexus, etc.
- Promote the use of APIs and scripting to streamline DevOps workflows.
- Act as a DevOps evangelist, mentoring engineering teams on best practices.
- Drive adoption of Agile and Lean principles within infrastructure and operations.
- Facilitate knowledge sharing through documentation, brown-bag sessions, and training.

Work Experience
- Bachelor's/Master's degree in Computer Science, Engineering, or related discipline.
- 8+ years of IT experience, with at least 3 in a senior DevOps role.
- Deep experience with CI/CD tools and DevOps automation frameworks.
- Proficiency in scripting (Bash, Python, Go, or PowerShell).
- Hands-on experience with one or more public cloud platforms: AWS, Azure, or GCP.
- Strong understanding of GitOps, configuration management (Ansible, Chef, Puppet), and observability tools.
- Experience managing infrastructure and deploying applications in Kubernetes-based environments.
- Knowledge of software development lifecycle and Agile/Scrum methodologies.
- Certifications such as: AWS Certified DevOps Engineer – Professional, Azure DevOps Engineer Expert, Certified Kubernetes Administrator (CKA) or Developer (CKAD), Terraform Associate.
- Experience implementing FinOps practices and managing cloud cost optimization.
- Familiarity with service mesh (Istio, Linkerd) and serverless architectures.
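The "deployment verifications" step mentioned in the pipeline responsibilities above can be illustrated with a small post-deploy smoke test. This is a minimal sketch in Python (standard library only); the health-check URL, retry count, and delay are placeholder assumptions, not values from the posting.

```python
# Hypothetical post-deploy smoke test: poll a health endpoint until it reports
# healthy, or fail the pipeline stage. URL, attempts, and delay are assumptions.
import json
import sys
import time
import urllib.error
import urllib.request

HEALTH_URL = "https://staging.example.com/healthz"  # placeholder endpoint
ATTEMPTS = 10
DELAY_SECONDS = 15


def is_healthy(url: str) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=5) as resp:
            if resp.status != 200:
                return False
            body = json.loads(resp.read().decode())
            return body.get("status") == "ok"
    except (urllib.error.URLError, ValueError):
        return False


def main() -> int:
    for attempt in range(1, ATTEMPTS + 1):
        if is_healthy(HEALTH_URL):
            print(f"healthy after {attempt} attempt(s)")
            return 0
        time.sleep(DELAY_SECONDS)
    print("deployment verification failed", file=sys.stderr)
    return 1


if __name__ == "__main__":
    sys.exit(main())
```

Run as the final stage of a pipeline so a failed health check blocks promotion to the next environment.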

Posted 1 month ago

Apply

3.0 years

0 Lacs

Kanpur, Uttar Pradesh, India

On-site

Description
We are looking for an experienced DevOps Engineer with a strong background in on-premise infrastructure management and hands-on expertise in Kubernetes, GitLab CI/CD, and Docker. The ideal candidate will be responsible for managing and optimizing infrastructure pipelines, enhancing automation in deployments, and ensuring system reliability, security, and high availability.

Responsibilities
- Design, deploy, and administer on-premises Kubernetes clusters using tools such as kubeadm, k3s, or k0s to support scalable application infrastructure.
- Configure and maintain robust GitLab CI/CD pipelines to streamline continuous integration, testing, and deployment processes.
- Develop and maintain Docker-based environments across development, testing, and production stages to ensure consistency and efficiency.
- Implement and manage infrastructure using Ansible or Terraform, with a focus on tailored solutions for on-premises environments.
- Architect and maintain secure, resilient infrastructure supporting microservices and internal platforms to ensure high availability and performance.
- Diagnose and resolve complex environment-related issues, including container orchestration, networking, and deployment failures.
- Create and maintain detailed internal documentation and operational playbooks to support standardized DevOps practices.
- Participate in a rotating on-call schedule to provide 24/7 operational support for critical production systems.
- Identify repetitive tasks and implement automation to minimize manual intervention and improve operational efficiency.
- Design and manage infrastructure using tools like Terraform and Kubernetes Composite Resource Definitions (XRDs) to support dynamic scaling and management.
- Implement robust security measures to safeguard infrastructure, applications, and data, ensuring compliance with internal and industry standards.
- Act as a bridge between development and operations teams to facilitate seamless software releases and quick resolution of production issues.
- Conduct root cause analysis (RCA) for production incidents and implement measures to prevent recurrence.
- Continuously monitor and optimize system performance, proactively identifying and resolving bottlenecks and implementing capacity planning strategies.
- Maintain comprehensive records of systems architecture, configurations, processes, and incident resolutions.
- Drive ongoing improvements in infrastructure, tools, and processes to enhance system reliability, scalability, and performance.

Eligibility
- Minimum of 3 years of hands-on experience as a DevOps Engineer.
- Strong expertise in Kubernetes administration for on-premises (non-cloud) environments.
- Proficient in designing and optimizing GitLab CI/CD pipelines.
- In-depth understanding of Docker and its container lifecycle management.
- Solid working knowledge of Linux-based systems, system administration, and core networking concepts.
- Experience with monitoring and logging tools such as Prometheus, Grafana, or the ELK Stack.
- Scripting proficiency in Bash, Python, or Go.

Desired Eligibility
- Experience with self-hosted GitLab runners and artifact registries.
- Knowledge of service mesh architectures (e.g., Istio, Linkerd).
- Familiarity with security best practices in containerized and orchestrated environments.
- Exposure to load balancing tools such as HAProxy, Nginx, or Traefik.

Travel
As and when required, across the country for project execution and monitoring, as well as for coordination with geographically distributed teams.
Communication
Submit a cover letter summarising your experience in relevant technologies and software, along with a resume and a recent passport-size photograph.
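As an illustration of the kind of pipeline gate a role like this typically builds for an on-prem cluster, here is a small Python sketch that fails a CI job if any pod in a namespace is not Ready. It assumes kubectl is on the runner's PATH with a valid kubeconfig; the namespace name is a placeholder.

```python
# Sketch of a CI/CD gate: exit non-zero if any pod in the target namespace
# is not Ready. Note that completed Job pods will also show as not Ready.
import json
import subprocess
import sys

NAMESPACE = "example-app"  # placeholder namespace


def pods_not_ready(namespace: str) -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    failing = []
    for pod in json.loads(out)["items"]:
        conditions = pod["status"].get("conditions", [])
        ready = any(c["type"] == "Ready" and c["status"] == "True" for c in conditions)
        if not ready:
            failing.append(pod["metadata"]["name"])
    return failing


if __name__ == "__main__":
    failing = pods_not_ready(NAMESPACE)
    if failing:
        print("not ready:", ", ".join(failing), file=sys.stderr)
        sys.exit(1)
    print(f"all pods in {NAMESPACE} are Ready")
```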

Posted 1 month ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Matillion is The Data Productivity Cloud. We are on a mission to power the data productivity of our customers and the world, by helping teams get data business ready, faster. Our technology allows customers to load, transform, sync and orchestrate their data. We are looking for passionate, high-integrity individuals to help us scale up our growing business. Together, we can make a dent in the universe bigger than ourselves. With offices in the UK, US and Spain, we are now thrilled to announce the opening of our new office in Hyderabad, India. This marks an exciting milestone in our global expansion, and we are now looking for talented professionals to join us as part of our founding team.

About the Role
Are you ready to shape the future of reliability at scale? At Matillion, we're looking for a Principal Engineer - Reliability to lead our cloud architecture and observability strategy across mission-critical systems. This high-impact role puts you at the heart of our cloud-native engineering team, designing resilient distributed systems that power data workloads across the globe. You'll work cross-functionally with engineering, product, and leadership, helping to scale our platform as we continue our journey of global growth. We value in-person collaboration here at Matillion, therefore this role will work from our central Hyderabad office.

What you'll be doing
- Leading the design and architecture of scalable, cloud-native systems that prioritise reliability and performance
- Owning observability and infrastructure strategy to ensure global uptime and rapid incident response
- Driving automation, sustainable incident practices, and blameless postmortems across teams
- Collaborating with engineering and product to shape scalable solutions from ideation to delivery
- Coaching and mentoring engineers, fostering a culture of technical excellence and innovation

What we are looking for
- Deep expertise in Kubernetes and modern tooling like Linkerd, ArgoCD, or Traefik
- Pro-level programming skills (Go, Java or Python preferred) and familiarity with the broader ecosystem
- Proven experience building large-scale distributed systems in public cloud (AWS or Azure)
- Hands-on knowledge of observability tools like Prometheus, Grafana, OpenTelemetry, or Datadog
- Experience with messaging systems (e.g., Kafka) and secrets management (Vault, AWS Secrets Manager)
- A collaborative leader with strong communication skills and a passion for scalability, availability, and innovation

Matillion has fostered a culture that is collaborative, fast-paced, ambitious, and transparent, and an environment where people genuinely care about their colleagues and communities. Our 6 core values guide how we work together and with our customers and partners. We operate a truly flexible and hybrid working culture that promotes work-life balance, and are proud to be able to offer the following benefits:
- Company Equity
- 27 days paid time off
- 12 days of Company Holiday
- 5 days paid volunteering leave
- Group Mediclaim (GMC)
- Enhanced parental leave policies
- MacBook Pro
- Access to various tools to aid your career development

More about Matillion
Thousands of enterprises including Cisco, DocuSign, Slack, and TUI trust Matillion technology to load, transform, sync, and orchestrate their data for a wide range of use cases from insights and operational analytics, to data science, machine learning, and AI. With over $300M raised from top Silicon Valley investors, we are on a mission to power the data productivity of our customers and the world.
We are passionate about doing things in a smart, considerate way. We're honoured to be named a great place to work for several years running by multiple industry research firms. We are dual headquartered in Manchester, UK and Denver, Colorado. We are keen to hear from prospective Matillioners, so even if you don't feel you match all the criteria please apply and a member of our Talent Acquisition team will be in touch. Alternatively, if you are interested in Matillion but don't see a suitable role, please email talent@matillion.com. Matillion is an equal opportunity employer. We celebrate diversity and we are committed to creating an inclusive environment for all of our team. Matillion prohibits discrimination and harassment of any type. Matillion does not discriminate on the basis of race, colour, religion, age, sex, national origin, disability status, genetics, sexual orientation, gender identity or expression, or any other characteristic protected by law.
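For the observability side of a reliability role like the one above, a common building block is querying Prometheus for down scrape targets. This is a hedged sketch using the standard Prometheus HTTP API; the server URL is a placeholder assumption.

```python
# Minimal check against the Prometheus HTTP API (/api/v1/query): evaluate
# `up == 0` and report scrape targets that are currently down.
import json
import urllib.parse
import urllib.request

PROMETHEUS_URL = "http://prometheus.example.internal:9090"  # placeholder


def instant_query(expr: str) -> list[dict]:
    qs = urllib.parse.urlencode({"query": expr})
    with urllib.request.urlopen(f"{PROMETHEUS_URL}/api/v1/query?{qs}", timeout=10) as resp:
        payload = json.loads(resp.read().decode())
    if payload.get("status") != "success":
        raise RuntimeError(f"query failed: {payload}")
    return payload["data"]["result"]


if __name__ == "__main__":
    for sample in instant_query("up == 0"):
        labels = sample["metric"]
        print(f"DOWN: job={labels.get('job')} instance={labels.get('instance')}")
```

The same pattern can feed an alerting or reporting job; in practice Alertmanager rules would usually own this check.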

Posted 1 month ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Summary
We are seeking a highly skilled and motivated Security, Compliance, Service Governance, and FinOps Engineer to join our Platform Engineering Team. This role is critical in ensuring that our developer platform adheres to security, compliance, and governance standards for Europe and North America while also managing FinOps practices to optimize cloud cost efficiency. The engineer will work closely with engineering teams to implement security best practices, ensure regulatory compliance, enforce service governance policies, and drive cost optimization.

Key Responsibilities

Security & Compliance:
- Implement security best practices within the platform, ensuring alignment with industry standards (ISO 27001, SOC 2, NIST, etc.).
- Enforce regulatory compliance with GDPR, CCPA, and other region-specific privacy regulations.
- Conduct risk assessments and vulnerability management within the platform.
- Collaborate with security teams to design and integrate zero-trust architectures and IAM policies.

Service Governance:
- Define and enforce governance policies for service publishing and consumption.
- Ensure API and microservices security compliance (OAuth, OpenID Connect, API gateways).
- Monitor service reliability, availability, and SLA compliance.

FinOps & Cloud Cost Optimization:
- Develop and implement FinOps strategies to optimize cloud usage and reduce costs.
- Monitor and analyze cloud expenditures to provide insights and recommendations for cost savings.
- Collaborate with finance and engineering teams to establish budget controls and forecasting for cloud resources.
- Implement automation for cost management, including auto-scaling, resource tagging, and anomaly detection.

Automation & Monitoring:
- Automate compliance and governance checks using tools like OPA, Terraform, Kubernetes policies (Kyverno, Gatekeeper), and CI/CD security scanning tools.
- Implement observability tools for audit logging, security monitoring, and anomaly detection.

Collaboration & Stakeholder Engagement:
- Work closely with engineering, DevOps, and security teams to embed compliance into the software development lifecycle.
- Provide training and best-practice guidelines to developers on security, governance, and FinOps.

Required Skills & Qualifications
- 5+ years of experience in security, compliance, governance, or FinOps within a cloud-based platform environment.
- Strong understanding of cloud security principles (AWS, Azure, or GCP).
- Hands-on experience with CI/CD security tools (e.g., Snyk, SonarQube, Aqua Security, Prisma Cloud).
- Proficiency in infrastructure as code (IaC) (Terraform, CloudFormation) and security automation.
- Familiarity with Kubernetes security (Pod Security Policies, RBAC, network policies).
- Knowledge of regulatory compliance standards (GDPR, SOC 2, ISO 27001, NIST 800-53).
- Experience with IAM, RBAC, and policy-based security controls.
- Strong scripting skills (Python, Bash, or similar) for automation.
- Experience with FinOps tools (AWS Cost Explorer, Azure Cost Management, GCP Cost Analysis) and cloud financial management best practices.
- Excellent problem-solving and communication skills.

Desired Skills & Qualifications
- Certifications such as CISSP, CISM, AWS Security Specialty, CKS, or FinOps Certified Practitioner.
- Experience with service mesh technologies (Istio, Linkerd) for governance.
- Exposure to DevSecOps methodologies and security-as-code principles.
- Prior experience working in regulated industries (finance, healthcare, etc.).
Justification for the Role
- Ensuring Compliance: With evolving privacy laws (GDPR, CCPA, etc.) in Europe and North America, a dedicated role is essential to maintain compliance.
- Security Risk Mitigation: As the platform scales, ensuring secure CI/CD pipelines and service publishing reduces vulnerabilities.
- Service Governance: Standardized governance enhances interoperability, security, and reliability of published services.
- FinOps Efficiency: Optimizing cloud costs and ensuring financial governance is crucial to managing infrastructure expenditures effectively.
- Developer Enablement: Providing automated security, compliance, and cost governance frameworks allows developers to focus on innovation while adhering to best practices.

About Trimble
Trimble is a leading provider of advanced positioning solutions that maximize productivity and enhance profitability for our customers. We are an exciting, entrepreneurial company, with a history of exceptional growth coupled with a disciplined and strategic focus on being the best. While GPS is at our core, we have grown beyond this technology to embrace other sophisticated positioning technologies and, in doing so, we are changing the way the world works. Those who successfully lead others to meet our objectives are vital to our organization. Leadership at Trimble is much more than simply exercising assigned authority; we expect our leaders to embrace a mission-focused leadership style, demonstrating the strength of character, intellect and the ability to convert ideas to reality. www.trimble.com
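One concrete flavour of the resource-tagging and cost-governance work described in this role is checking for untagged resources. This is a small Python/boto3 sketch, not Trimble's tooling; the region and tag key are assumptions, and AWS credentials are required.

```python
# FinOps-style hygiene check: list running EC2 instances missing a
# cost-allocation tag. Region and tag key are placeholders.
import boto3

REQUIRED_TAG = "cost-center"   # placeholder tag key
REGION = "eu-west-1"           # placeholder region


def untagged_instances(region: str, tag_key: str) -> list[str]:
    ec2 = boto3.client("ec2", region_name=region)
    missing = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if tag_key not in tags:
                    missing.append(instance["InstanceId"])
    return missing


if __name__ == "__main__":
    for instance_id in untagged_instances(REGION, REQUIRED_TAG):
        print(f"missing '{REQUIRED_TAG}' tag: {instance_id}")
```

Reports like this are typically wired into a scheduled job so untagged spend is caught before the monthly cost review.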

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

Kong Mesh Engineers

Work Location: Work from home (preferred locations: Bengaluru, Chennai, Hyderabad). Candidates must be based in one of these locations and be available to come to the office when required.

About The Role
We are looking for a skilled and proactive Kong Mesh Engineer to join our Platform Engineering team. In this role, you will help design, implement, and manage our Kong Mesh (based on Kuma) service mesh infrastructure, supporting secure, scalable, and observable communication between microservices. You'll work closely with DevOps, infrastructure, and application teams to enable service discovery, zero-trust security, and traffic control across distributed systems.

Key Responsibilities
- Implement and manage Kong Mesh to support service-to-service communication within a Kubernetes and/or hybrid environment.
- Configure and maintain features such as mTLS, traffic routing, circuit breakers, rate limiting, and observability.
- Collaborate with developers and SREs to onboard applications onto the service mesh.
- Automate service mesh deployments and configurations using CI/CD pipelines and Infrastructure as Code (IaC) tools.
- Monitor mesh performance, troubleshoot issues, and tune configurations for optimal performance.
- Ensure secure and consistent communication between microservices using mesh policies.
- Document configurations, workflows, and best practices for team and organizational use.

Requirements (Must-Have)
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of experience in DevOps, SRE, or platform/infrastructure roles.
- Hands-on experience with Kong Mesh (or other service mesh solutions like Istio, Linkerd, Kuma).
- Solid understanding of Kubernetes, Envoy proxy, and container orchestration concepts.
- Experience with microservices architecture and network security (mTLS, RBAC).
- Familiarity with CI/CD pipelines, Git, and Infrastructure as Code tools like Terraform or Helm.
- Proficiency in scripting languages (e.g., Bash, Python, or Go).

Skills: communication, RBAC, Kong Mesh, mTLS, microservices, microservices architecture, Python, CI, Go, Helm, CI/CD pipelines, infrastructure, Kubernetes, Bash, Git, Terraform, Infrastructure as Code, mesh
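A simple onboarding sanity check for a Kuma-based mesh such as Kong Mesh is verifying that workloads actually received the sidecar. This is a hedged Python sketch: the namespace is a placeholder, and the sidecar container name ("kuma-sidecar") is assumed from Kuma defaults and may differ by installation or version.

```python
# Check that every pod in a namespace carries the mesh sidecar container.
# Assumes kubectl on PATH and a valid kubeconfig.
import json
import subprocess
import sys

NAMESPACE = "payments"          # placeholder namespace
SIDECAR_NAME = "kuma-sidecar"   # assumed default; confirm against your mesh


def pods_without_sidecar(namespace: str) -> list[str]:
    out = subprocess.run(
        ["kubectl", "get", "pods", "-n", namespace, "-o", "json"],
        check=True, capture_output=True, text=True,
    ).stdout
    missing = []
    for pod in json.loads(out)["items"]:
        containers = {c["name"] for c in pod["spec"]["containers"]}
        if SIDECAR_NAME not in containers:
            missing.append(pod["metadata"]["name"])
    return missing


if __name__ == "__main__":
    missing = pods_without_sidecar(NAMESPACE)
    if missing:
        print("pods without mesh sidecar:", ", ".join(missing), file=sys.stderr)
        sys.exit(1)
    print("all pods are meshed")
```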

Posted 1 month ago

Apply

0.0 - 6.0 years

0 Lacs

Technopark, Thiruvananthapuram, Kerala

On-site

DevOps Tech Lead (Delivery)

About Us
We are a Platform Engineering company specializing in DevOps, CloudOps, and AIOps, working across various cloud platforms and services. Our team designs and delivers scalable, automated, and resilient cloud-native solutions for enterprises globally. As we continue to expand, we are looking for a Tech Lead to drive delivery excellence in our client projects. The ideal candidate will have strong technical expertise, leadership capabilities, and hands-on experience in DevOps, CI/CD, Infrastructure as Code (IaC), Kubernetes, observability, and cloud automation.

Key Responsibilities
- Technical Leadership: Lead the delivery team, providing guidance on DevOps, CloudOps, and AIOps best practices.
- Solution Architecture: Design, implement, and optimize cloud-native architectures on AWS, Azure, or GCP.
- CI/CD & Automation: Oversee and enhance CI/CD pipelines, ensuring seamless deployment and rollback mechanisms.
- Infrastructure as Code (IaC): Implement and maintain automated infrastructure provisioning using Terraform, Ansible, or CloudFormation.
- Kubernetes & Containerization: Manage Kubernetes clusters, containerized workloads, and microservices architectures.
- Observability & AIOps: Integrate monitoring, logging, and observability tools (e.g., Prometheus, Grafana, ELK, Datadog) and implement AIOps-driven solutions for anomaly detection.
- Cloud Security & Compliance: Ensure security best practices, IAM policies, and compliance standards are met.
- Customer Engagement: Work closely with clients to understand requirements, propose solutions, and lead technical discussions.
- Team Mentorship: Coach and upskill junior engineers, ensuring delivery quality and efficiency.

Key Skills & Experience Required

Technical Expertise:
- Deep understanding of DevOps, CloudOps, and AIOps concepts.
- Strong experience in AWS, Azure, or GCP cloud services.
- Hands-on expertise with Kubernetes, Docker, Helm, and service mesh (Istio/Linkerd).
- Experience in Terraform, Ansible, or CloudFormation for Infrastructure as Code (IaC).
- Proficiency in CI/CD tools (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Spinnaker).
- Strong scripting skills in Python, Bash, or Golang for automation.

Observability & Security:
- Experience with monitoring and logging tools like Prometheus, Grafana, ELK, Datadog, New Relic.
- Knowledge of security best practices, IAM, RBAC, and cloud governance.

Leadership & Delivery:
- Experience leading delivery teams and working in agile environments.
- Strong problem-solving, communication, and client-handling skills.
- Ability to work in a fast-paced, highly technical environment with multiple projects.

Preferred Qualifications
- Certifications: AWS/GCP/Azure Certified Solutions Architect, CKAD/CKA, Terraform Associate.
- Experience with multi-cloud or hybrid cloud environments.
- Exposure to serverless computing and cloud-native patterns.

Job Type: Full-time
Pay: ₹1,000,000.00 - ₹1,600,000.00 per year
Benefits: Health insurance, Provident Fund
Schedule: Day shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Technopark, Thiruvananthapuram, Kerala: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): Are you an Immediate Joiner?
Education: Bachelor's (Required)
Experience: DevOps: 6 years (Required)
Language: English (Required)
Work Location: In person
Expected Start Date: 02/07/2025
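To make the IaC provisioning responsibility above concrete, here is a minimal Python wrapper around `terraform plan -detailed-exitcode`, which Terraform documents as returning 0 (no changes), 1 (error), or 2 (changes present). The working directory is a placeholder assumption.

```python
# Illustrative change/drift gate around Terraform for an IaC pipeline.
import subprocess
import sys

WORKDIR = "infra/environments/staging"  # placeholder path


def plan_exit_code(workdir: str) -> int:
    subprocess.run(["terraform", "init", "-input=false"], cwd=workdir, check=True)
    result = subprocess.run(
        ["terraform", "plan", "-input=false", "-detailed-exitcode"],
        cwd=workdir,
    )
    return result.returncode


if __name__ == "__main__":
    code = plan_exit_code(WORKDIR)
    if code == 0:
        print("no infrastructure changes")
    elif code == 2:
        print("changes detected - review the plan before apply")
    else:
        print("terraform plan failed", file=sys.stderr)
    sys.exit(0 if code in (0, 2) else 1)
```

In a delivery pipeline, exit code 2 would typically gate a manual approval step before `terraform apply`.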

Posted 1 month ago

Apply

10.0 years

4 - 6 Lacs

Hyderābād

On-site

About the Role: Grade Level (for internal use): 11

About the Role
We are looking for a highly driven Senior Platform & Full Stack Engineer who brings passion, innovation, and deep technical experience to join our high-performing DevOps and SRE team. In this role, you'll help us define, build, and scale the next generation of cloud-native, cloud-agnostic CI/CD pipelines, Infrastructure as Code (IaC) reusable workflows, and AI-driven autonomous deployments.

Key Responsibilities
- Lead the design and implementation of reusable IaC workflows and standardized CI/CD blueprints across multiple teams.
- Architect and maintain cloud-agnostic deployment solutions with deep expertise in AWS and Kubernetes (EKS).
- Implement and optimize configuration as code practices using tools like Terraform and GitHub Actions.
- Partner with developers and SREs to define end-to-end infrastructure workflows — covering compute, network, and storage automation.
- Contribute as a hands-on developer to internal tools, platforms, and APIs (Java, Go, or similar).
- Collaborate on cutting-edge initiatives such as Agentic AI workflows and autonomous chat-based deployments using MCP and LLM orchestration.
- Foster a culture of continuous innovation, high energy, and performance excellence.

Required Skills & Experience
- 10+ years of experience in DevOps, Platform Engineering, or Full Stack Development with platform ownership.
- Proven experience designing Infrastructure as Code using Terraform at scale.
- Solid programming skills — Java, Python, JavaScript, and Go preferred.
- Expertise in CI/CD pipeline design and orchestration using GitHub Actions (and optionally ArgoCD, GitLab, Jenkins, etc.).
- Strong knowledge of AWS services, with hands-on experience in EKS, IAM, networking (VPCs, Route53, ALBs), storage (EBS, S3), and compute.
- End-to-end understanding of modern cloud infrastructure, DevSecOps, observability, and release practices.
- Ability to translate product/platform needs into reliable, secure, scalable infrastructure solutions.
- Excellent problem-solving skills and a mindset for performance, scalability, and resilience.
- Passion for innovation, high energy, and eagerness to experiment with emerging tech like LLMs and Agentic AI.

Additional Skills
- Experience with multi-cloud environments (Azure, GCP).
- Knowledge of Agentic AI systems, LLMs, or AI Ops use cases.
- Exposure to platform-as-product or internal developer platforms.
- Familiarity with Kubernetes Operators, Helm charts, and service mesh (Istio, Linkerd).

Why Join Us?
- Be part of a forward-thinking DevOps and SRE team pushing the boundaries of platform automation.
- Work on AI-powered workflows and define how infrastructure can be deployed through intelligent assistants.
- Build developer-centric platforms that make a real impact on engineering productivity and product reliability.
- Enjoy a culture of innovation, energy, and excellence where your ideas will be heard and executed.

What's In It For You?

Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.
Our People:
We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

-----------------------------------------------------------

Equal Opportunity Employer
S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

-----------------------------------------------------------

IFTECH202.2 - Middle Professional Tier II (EEO Job Group)
Job ID: 317047
Posted On: 2025-06-10
Location: Hyderabad, Telangana, India
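Reusable CI/CD workflows like those described in this role are often triggered programmatically. Below is a small sketch using the GitHub REST API's workflow_dispatch endpoint; the repository, workflow file name, and inputs are placeholders, and a token with appropriate scope is read from the environment.

```python
# Trigger a reusable deployment workflow via the GitHub REST API
# (POST /repos/{owner}/{repo}/actions/workflows/{workflow_file}/dispatches).
import json
import os
import urllib.request

OWNER = "example-org"              # placeholder
REPO = "platform-workflows"        # placeholder
WORKFLOW_FILE = "deploy.yml"       # placeholder workflow file name


def dispatch_workflow(ref: str, inputs: dict) -> None:
    url = (f"https://api.github.com/repos/{OWNER}/{REPO}"
           f"/actions/workflows/{WORKFLOW_FILE}/dispatches")
    body = json.dumps({"ref": ref, "inputs": inputs}).encode()
    req = urllib.request.Request(url, data=body, method="POST", headers={
        "Accept": "application/vnd.github+json",
        "Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
        "Content-Type": "application/json",
    })
    with urllib.request.urlopen(req) as resp:
        # GitHub responds 204 No Content on success.
        print("dispatch accepted" if resp.status == 204 else f"status {resp.status}")


if __name__ == "__main__":
    dispatch_workflow("main", {"environment": "staging"})
```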

Posted 1 month ago

Apply

15.0 years

0 Lacs

Thane, Maharashtra, India

On-site

Hiring a Senior DevOps Leader for a High-Scale, Multi-Cloud Environment

Finding the right Senior DevOps Leader for your organization, especially one with over 15 years of experience and a background in high-scale operations leveraging GitLab, Kubernetes, GCP, and AWS, is a critical undertaking. This role demands a unique blend of deep technical expertise, strategic thinking, and proven leadership capabilities. Here's a comprehensive guide to what you should be looking for:

Key Responsibilities to Expect
A Senior DevOps Leader in this context will be responsible for more than just managing infrastructure; they will be a strategic partner driving efficiency, innovation, and reliability across the organization.

Strategic Leadership & Vision:
- Defining and executing a long-term DevOps strategy aligned with business objectives, particularly for high-scale and resilient systems.
- Driving the adoption of DevOps best practices, tools, and culture across engineering and operations teams.
- Leading architectural decisions for CI/CD, containerization, cloud infrastructure, and automation, ensuring scalability, security, and cost-effectiveness.
- Evaluating and integrating new and emerging technologies (e.g., AI in DevOps, advanced monitoring solutions) to enhance operational efficiency and system performance.

Team Leadership & Development:
- Building, mentoring, and leading a high-performing team of DevOps engineers.
- Fostering a collaborative, innovative, and continuous improvement culture within the DevOps team and its interactions with other departments.
- Managing resource allocation, project prioritization, and performance management for the DevOps team.

Technical Oversight & Execution:
- Overseeing the design, implementation, and management of robust CI/CD pipelines using GitLab CI.
- Leading the strategy and governance for Kubernetes deployments at scale, including cluster management, networking, security, and resource optimization across GCP (GKE) and AWS (EKS).
- Architecting and managing multi-cloud infrastructure (GCP and AWS), focusing on high availability, disaster recovery, security, and cost optimization.
- Championing Infrastructure as Code (IaC) practices using tools like Terraform or CloudFormation.
- Implementing and refining comprehensive monitoring, logging, and alerting strategies (e.g., using Prometheus, Grafana, ELK Stack, CloudWatch, Google Cloud's operations suite) to ensure system health and proactive issue resolution.
- Driving automation initiatives across all stages of the software development lifecycle.

Collaboration & Communication:
- Working closely with development, operations, security, and product teams to streamline workflows and ensure seamless delivery of software.
- Communicating effectively with executive leadership, stakeholders, and technical teams regarding DevOps strategy, project status, risks, and performance metrics.
- Championing and enforcing security best practices (DevSecOps) throughout the development lifecycle.

Operational Excellence & Governance:
- Establishing and tracking key DevOps metrics (e.g., deployment frequency, lead time for changes, mean time to recovery (MTTR), change failure rate).
- Ensuring compliance with industry standards and internal policies.
- Managing budgets and vendor relationships related to DevOps tools and cloud services.

Essential Technical Leadership Skills
Beyond hands-on proficiency, a leader must demonstrate strategic application and governance of these technologies.
GitLab:
- Strategic Implementation: Deep understanding of GitLab's full suite (beyond just CI/CD) for source code management, pipeline orchestration, security scanning, and package management in a large enterprise.
- Scalability & Performance: Experience in scaling GitLab infrastructure and optimizing its performance for a large number of users and projects.
- Automation & Integration: Proven ability to automate complex workflows and integrate GitLab with other development and operations tools.

Kubernetes (K8s):
- Large-Scale Cluster Management: Expertise in designing, deploying, and managing multiple large-scale Kubernetes clusters on both GCP (GKE) and AWS (EKS). This includes experience with cluster upgrades, multi-tenancy, and resource quotas.
- Advanced Networking & Security: In-depth knowledge of Kubernetes networking (e.g., CNI, service mesh like Istio or Linkerd) and security best practices (e.g., pod security policies, network policies, secrets management, RBAC) in a high-scale, multi-cloud environment.
- Ecosystem & Tooling: Familiarity with the broader Kubernetes ecosystem, including Helm for package management, Prometheus/Grafana for monitoring, and tools for logging and tracing.
- GitOps: Experience implementing GitOps principles for managing Kubernetes configurations and applications.

Google Cloud Platform (GCP) & Amazon Web Services (AWS):
- Multi-Cloud Strategy & Governance: Proven experience in developing and implementing multi-cloud strategies, including workload placement, data management, and consistent governance across GCP and AWS.
- Core Services Expertise: Deep understanding and experience with core compute, storage, networking, database, and security services on both platforms (e.g., AWS EC2, S3, VPC, RDS; GCP Compute Engine, Cloud Storage, VPC, Cloud SQL).
- Infrastructure as Code (IaC): Mastery of IaC tools like Terraform (preferred for multi-cloud) or CloudFormation (AWS-specific) for provisioning and managing infrastructure in both clouds.
- Cost Optimization & Management: Demonstrable experience in implementing cost optimization strategies and managing budgets effectively across both GCP and AWS at scale.
- Security & Compliance: Expertise in designing and implementing secure cloud architectures, adhering to compliance standards (e.g., SOC 2, ISO 27001, HIPAA if applicable) on both platforms.
- Migration Experience: Experience leading large-scale migrations to or between cloud platforms is highly desirable.

General DevOps & SRE Principles:
- Automation: A strong automation mindset with proficiency in scripting languages (e.g., Python, Bash, PowerShell).
- Monitoring, Logging, and Observability: Experience designing and implementing comprehensive observability solutions for large-scale distributed systems.
- Site Reliability Engineering (SRE): Understanding and application of SRE principles for availability, reliability, performance, and incident response.
- DevSecOps: Proven ability to integrate security into all phases of the DevOps lifecycle.

Why Netcore?
Being first is in our nature. Netcore Cloud is the first and leading AI/ML-powered customer engagement and experience platform (CEE) that helps B2C brands increase engagement, conversions, revenue, and retention. Our cutting-edge SaaS products enable personalized engagement across the entire customer journey and build amazing digital experiences for businesses of all sizes. Netcore's Engineering team focuses on adoption, scalability, complex challenges, and fastest processing.
We use versatile tech stacks like streaming technologies and queue management systems such as Kafka, Storm, RabbitMQ, Celery, and RedisQ. Netcore strikes a perfect balance between experience and agility. We currently work with 5000+ enterprise brands across 18 countries, serving over 70% of India's Unicorns, positioning us among the top-rated customer engagement & experience platforms. Headquartered in Mumbai, we have a global footprint across 10 countries, including the United States and Germany. Being certified as a Great Place to Work for three consecutive years reinforces Netcore's principle of being a people-centric company — where you're not just an employee but part of a family. A career at Netcore is more than just a job — it's an opportunity to shape the future. Learn more at netcorecloud.com.

What's in it for You?
- Immense growth and continuous learning.
- Solve complex engineering problems at scale.
- Work with top industry talent and global brands.
- An open, entrepreneurial culture that values innovation.
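The delivery metrics called out in this role (change failure rate, MTTR) are straightforward to compute once deployments and incidents are recorded. A worked Python example with illustrative sample data only:

```python
# Two DORA-style metrics from plain records: change failure rate and MTTR.
from datetime import datetime, timedelta

deployments = [
    {"id": "d1", "failed": False},
    {"id": "d2", "failed": True},
    {"id": "d3", "failed": False},
    {"id": "d4", "failed": False},
]

incidents = [
    {"opened": datetime(2025, 6, 1, 10, 0), "resolved": datetime(2025, 6, 1, 11, 30)},
    {"opened": datetime(2025, 6, 3, 22, 15), "resolved": datetime(2025, 6, 4, 0, 15)},
]


def change_failure_rate(deploys: list[dict]) -> float:
    # Share of deployments that caused a failure in production.
    return sum(d["failed"] for d in deploys) / len(deploys)


def mttr(records: list[dict]) -> timedelta:
    # Mean time to recovery: average of (resolved - opened) across incidents.
    total = sum((r["resolved"] - r["opened"] for r in records), timedelta())
    return total / len(records)


if __name__ == "__main__":
    print(f"change failure rate: {change_failure_rate(deployments):.0%}")  # 25%
    print(f"MTTR: {mttr(incidents)}")                                      # 1:45:00
```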

Posted 1 month ago

Apply

7.0 - 12.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Role Description
Work Location: Thiruvananthapuram & Kochi

We are seeking a skilled Senior DevOps Engineer with 7 to 12 years of experience to join our team in Trivandrum. The ideal candidate will possess deep expertise in container orchestration, scripting, automation, and monitoring. You will play a critical role in designing, implementing, and optimizing CI/CD pipelines, maintaining infrastructure, and ensuring system scalability and reliability.

Responsibilities
- Design, deploy, and maintain scalable and resilient containerized environments using Docker and Kubernetes.
- Develop and maintain Python scripts for automation, monitoring, and deployment tasks.
- Set up, manage, and optimize CI/CD pipelines using Git and related tools.
- Automate infrastructure provisioning and configuration using Ansible.
- Collaborate with cross-functional teams to identify and resolve issues, ensuring optimal system performance.
- Utilize Splunk for log analysis and monitoring, and create dashboards in Grafana for system health visualization.
- Write and maintain infrastructure documentation, technical guidelines, and best practices.
- Implement and maintain security practices in the DevOps pipeline and across the infrastructure.
- Troubleshoot complex system and deployment issues to ensure continuous operations.
- Mentor junior team members and drive continuous improvement initiatives.

Mandatory Skills
- Strong hands-on experience with Docker and Kubernetes.
- Proficiency in Python scripting for automation and system management.
- Solid knowledge of Git and experience in setting up and managing deployment pipelines.
- Working knowledge of Ansible for configuration management and automation.
- Experience with monitoring tools, specifically Splunk (for log management) and Grafana (for creating dashboards).
- Excellent troubleshooting skills across complex distributed systems.
- Strong communication skills, with the ability to convey technical concepts clearly and effectively.

Good To Have Skills
- Basic knowledge of Golang.
- Exposure to cloud platforms (AWS, Azure, or GCP).
- Experience with infrastructure-as-code (IaC) tools like Terraform.
- Knowledge of service mesh technologies (e.g., Istio, Linkerd).
- Experience in security practices within DevOps workflows (DevSecOps).

Soft Skills
- Strong analytical and problem-solving abilities.
- Ability to work both independently and collaboratively in a fast-paced environment.
- Excellent verbal and written communication skills.
- Proactive mindset with a focus on continuous improvement.
- Strong organizational skills and attention to detail.

Skills: Python, Docker, Kubernetes, Continuous Integration
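As an example of the Python deployment automation this role describes, here is a short sketch that restarts a Kubernetes Deployment and waits for the rollout to complete; the deployment name, namespace, and timeout are placeholders, and kubectl with a valid kubeconfig is assumed.

```python
# Rolling restart of a Deployment, then block until the rollout finishes.
import subprocess
import sys

NAMESPACE = "web"        # placeholder
DEPLOYMENT = "frontend"  # placeholder


def rolling_restart(namespace: str, deployment: str, timeout: str = "180s") -> None:
    subprocess.run(
        ["kubectl", "rollout", "restart", f"deployment/{deployment}", "-n", namespace],
        check=True,
    )
    subprocess.run(
        ["kubectl", "rollout", "status", f"deployment/{deployment}",
         "-n", namespace, f"--timeout={timeout}"],
        check=True,
    )


if __name__ == "__main__":
    try:
        rolling_restart(NAMESPACE, DEPLOYMENT)
        print("rollout complete")
    except subprocess.CalledProcessError as exc:
        print(f"rollout failed: {exc}", file=sys.stderr)
        sys.exit(1)
```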

Posted 1 month ago

Apply

3.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description: DevOps Engineer (Onsite – Mumbai)
Location: Onsite – Mumbai, India
Experience: 3+ years

About the Role:
We are looking for a skilled and proactive DevOps Engineer with 3+ years of hands-on experience to join our engineering team onsite in Mumbai. The ideal candidate will have a strong background in CI/CD pipelines, cloud platforms (AWS, Azure, or GCP), infrastructure as code, and containerization technologies like Docker and Kubernetes. This role involves working closely with development, QA, and operations teams to automate, optimize, and scale our infrastructure.

Key Responsibilities:
- Design, implement, and maintain CI/CD pipelines for efficient and reliable deployment processes
- Manage and monitor cloud infrastructure (preferably AWS, Azure, or GCP)
- Build and manage Docker containers, and orchestrate with Kubernetes or similar tools
- Implement and manage Infrastructure as Code using tools like Terraform, CloudFormation, or Ansible
- Automate configuration management and system provisioning tasks
- Monitor system health and performance using tools like Prometheus, Grafana, ELK, etc.
- Ensure system security through best practices and proactive monitoring
- Collaborate with developers to ensure smooth integration and deployment

Must-Have Skills:
- 3+ years of DevOps or SRE experience in a production environment
- Experience with cloud services (AWS, GCP, Azure)
- Strong knowledge of CI/CD tools such as Jenkins, GitLab CI, CircleCI, or similar
- Proficiency with Docker and container orchestration (Kubernetes preferred)
- Hands-on with Terraform, Ansible, or other infrastructure-as-code tools
- Good understanding of Linux/Unix system administration
- Familiar with version control systems (Git) and branching strategies
- Knowledge of scripting languages (Bash, Python, or Go)

Good-to-Have (Optional):
- Exposure to monitoring/logging stacks: ELK, Prometheus, Grafana
- Experience in securing cloud environments
- Knowledge of Agile and DevOps culture
- Understanding of microservices and service mesh tools (Istio, Linkerd)

Posted 1 month ago

Apply

15.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

We are seeking a Senior Full Stack Engineer with deep expertise in modern JavaScript ecosystems and cloud architecture. You'll be working on complex application modernization initiatives, focusing on transforming legacy systems into scalable, cloud-native applications.

Core Technical Stack
- Frontend: React.js (with Hooks, Context API), Next.js 14+, Redux/RTK, TypeScript, Tailwind CSS, Material-UI/Chakra UI
- Backend: Node.js, NestJS, Express.js, GraphQL (Apollo Server), WebSocket

Cloud & Infrastructure
- AWS Services: ECS, Lambda, API Gateway, S3, CloudFront, RDS, DynamoDB, SQS/SNS, ElastiCache
- Infrastructure as Code: Terraform, CloudFormation
- Containerization: Docker, Kubernetes, ECS

Databases & Caching
- MongoDB, PostgreSQL, Redis, Elasticsearch

Authentication & Security
- OAuth2.0/OIDC, JWT, AWS Cognito, SAML2.0

Testing & Quality
- Jest, React Testing Library, Cypress

CI/CD & Monitoring
- GitHub Actions, Jenkins, AWS CloudWatch, DataDog

Key Technical Responsibilities

System Architecture & Development (70%):
- Design and implement microservices architectures using Node.js/NestJS, focusing on scalability and performance
- Build reusable component libraries and establish frontend architecture patterns using React.js and Next.js
- Implement real-time features using WebSocket/Socket.io for live data updates and notifications
- Design and optimize database schemas, write complex queries, and implement caching strategies
- Develop CI/CD pipelines with automated testing, deployment, and monitoring
- Create and maintain infrastructure as code
- Implement security best practices and compliance requirements (SOC2, GDPR)

Examples Of Current Projects
- Modernizing a monolithic PHP application into microservices using NestJS and React
- Implementing event-driven architecture using AWS EventBridge and SQS
- Building a real-time analytics dashboard using WebSocket and time-series databases
- Optimizing application performance through caching strategies and CDN implementation
- Developing custom hooks and components for shared functionality across applications

Technical Leadership (30%):
- Conduct code reviews and provide technical mentorship
- Contribute to technical decision-making and architecture discussions
- Document technical designs and maintain development standards
- Collaborate with product teams to define technical requirements
- Guide junior developers through complex technical challenges

Required Technical Experience
- Expert-level proficiency in JavaScript/TypeScript and full-stack development
- Deep understanding of React.js internals, hooks, and performance optimization
- Extensive experience with Node.js backend development and microservices
- Strong background in cloud architecture and AWS services
- Hands-on experience with container orchestration and infrastructure automation
- Proven track record of implementing authentication and authorization systems
- Experience with monitoring, logging, and observability tools

Preferred Qualifications

Technical Expertise:
- Advanced degree in Computer Science, Engineering, or related field
- Experience with cloud-native development and distributed systems patterns
- Proficiency in additional programming languages (Rust, Go, Python)
- Deep understanding of browser internals and web performance optimization
- Experience with streaming data processing and real-time analytics

Architecture & System Design:
- Experience designing event-driven architectures at scale
- Knowledge of DDD (Domain-Driven Design) principles
- Background in implementing CQRS and Event Sourcing patterns
- Experience with high-throughput, low-latency systems
- Understanding of distributed caching strategies and implementation

Cloud & DevOps:
- AWS Professional certifications (Solutions Architect, DevOps)
- Experience with multi-region deployments and disaster recovery
- Knowledge of service mesh implementations (Istio, Linkerd)
- Familiarity with GitOps practices and tools (ArgoCD, Flux)
- Experience with chaos engineering practices

Security & Compliance:
- Understanding of OWASP security principles
- Experience with PCI-DSS compliance requirements
- Knowledge of cryptography and secure communication protocols
- Background in implementing Zero Trust architectures
- Experience with security automation and DevSecOps practices

Development & Testing:
- Experience with TDD/BDD methodologies
- Knowledge of performance testing tools (k6, JMeter)
- Background in implementing continuous testing strategies
- Experience with contract testing (Pact, Spring Cloud Contract)
- Familiarity with mutation testing concepts

About Us
TechAhead is a global digital transformation company with a strong presence in the USA and India. We specialize in AI-first product design thinking and bespoke development solutions. With over 15 years of proven expertise, we have partnered with Fortune 500 companies and leading global brands to drive digital innovation and deliver excellence. At TechAhead, we are committed to continuous learning, growth and crafting tailored solutions that meet the unique needs of our clients. Join us to shape the future of digital innovation worldwide and drive impactful results with cutting-edge AI tools and strategies! (ref:hirist.tech)

Posted 1 month ago

Apply

5.0 years

0 Lacs

Andaman and Nicobar Islands, India

On-site

Rockwell Automation is a global technology leader focused on helping the world's manufacturers be more productive, sustainable, and agile. With more than 28,000 employees who make the world better every day, we know we have something special. Behind our customers - amazing companies that help feed the world, provide life-saving medicine on a global scale, and focus on clean water and green mobility - our people are energized problem solvers that take pride in how the work we do changes the world for the better. We welcome all makers, forward thinkers, and problem solvers who are looking for a place to do their best work. And if that's you we would love to have you join us!

Summary

Job Description
We're looking for a Senior Cloud Reliability Engineer to reimagine how we deliver secure, enterprise-ready CIAM solutions. You'll shape the foundation of our commercial SaaS platform by designing infrastructure that transcends cloud boundaries while maintaining the thoughtfulness of a security-first mindset. This role is for engineers who see vendor lock-in as an architectural challenge to solve, not an inevitability. You'll lead the complete lifecycle of our deployment systems – from creating cloud-agnostic pipelines to implementing tenant isolation patterns worthy of a Zero Trust product.

Your Responsibilities
- Liberate our platform from cloud-specific dependencies without sacrificing velocity
- Embed security observability into every layer of the deployment process
- Empower developers to ship multi-cloud compatible services by default
- Transform internal tools into marketable SaaS capabilities

You will report to: Manager

Key Responsibilities
- Eliminate Cloud Lock-in: Redesign the current Azure App Services implementation into a vendor-agnostic architecture; implement CNCF standards for portable multi-cloud deployments (Crossplane, KCP, Cluster API).
- Future-Proof CI/CD Foundation: Build Dagger-based pipelines with OCI artifact compatibility; design self-contained workflows executable on any compute (GitHub Actions/Azure DevOps/GitLab).
- Tenant-Aware Platform Engineering: Create an isolated tenant provisioning system using Cloud Native Building Blocks; implement zero-trust networking for multi-customer deployments.
- Cloud Re-architecture: Replace app service dependencies with Kubernetes operators; design using cloud-portable services (Backstage for developer portal, OpenFGA for relationship-based access control, NATS/JetStream for cloud-agnostic messaging); convert existing workflows to Dagger modules with CUE unification.
- Pipeline Ownership: Create pipeline execution environments supporting air-gapped deployments, hybrid cloud testing matrices, and WASM-based testing tools.
- Tenant Operations: Build a multi-tenant control plane with Capsule for namespace isolation, Vcluster for tenant-specific control planes, and Paralus for zero-trust access.

The Essentials - You Will Have
- Systems thinkers who prototype their way to elegant abstractions
- Engineers who document through code-fundamentals
- Architects who treat compliance as a feature, not a constraint
- 5+ years production experience with: multi-cloud Kubernetes (AKS/EKS/GKE + Rancher/KubeSpray), pipeline abstraction tools (Dagger, Earthly, Tekton), and secret zero patterns (SPIRE/SPIFFE/Vault)

The Preferred - You Might Also Have
- Proven success in: migrating vendor-locked systems to OSS equivalents, building commercial SaaS from internal platforms, and implementing policy-as-code for tenant isolation
- Deep expertise with: service meshes (Linkerd/Istio) for multi-tenant networking, cloud cost attribution across tenants, and OCI-compliant artifact registries

What We Offer
Our benefits package includes:
- Comprehensive mindfulness programs with a premium membership to Calm
- Volunteer Paid Time off available after 6 months of employment for eligible employees
- Company volunteer and donation matching program – Your volunteer hours or personal cash donations to an eligible charity can be matched with a charitable donation.
- Employee Assistance Program
- Personalized wellbeing programs through our OnTrack program
- On-demand digital course library for professional development and other local benefits!

At Rockwell Automation we are dedicated to building a diverse, inclusive and authentic workplace, so if you're excited about this role but your experience doesn't align perfectly with every qualification in the job description, we encourage you to apply anyway. You may be just the right person for this or other roles.

Rockwell Automation's hybrid policy aligns that employees are expected to work at a Rockwell location at least Mondays, Tuesdays, and Thursdays unless they have a business obligation out of the office.
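One building block of the zero-trust, multi-tenant networking mentioned in this role is a default-deny NetworkPolicy per tenant namespace. This is a hedged Python sketch that applies one via kubectl; the namespace name is a placeholder, and enforcement depends on a CNI that supports NetworkPolicy.

```python
# Apply a default-deny-all NetworkPolicy to a tenant namespace by piping a
# manifest to `kubectl apply -f -`. Assumes kubectl and a valid kubeconfig.
import subprocess

TENANT_NAMESPACE = "tenant-acme"  # placeholder tenant namespace

DEFAULT_DENY = """
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: {namespace}
spec:
  podSelector: {{}}
  policyTypes:
    - Ingress
    - Egress
"""


def apply_default_deny(namespace: str) -> None:
    manifest = DEFAULT_DENY.format(namespace=namespace)
    subprocess.run(["kubectl", "apply", "-f", "-"], input=manifest, text=True, check=True)


if __name__ == "__main__":
    apply_default_deny(TENANT_NAMESPACE)
    print(f"default-deny policy applied to {TENANT_NAMESPACE}")
```

Explicit allow policies for each tenant's own services would then be layered on top of this baseline.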

Posted 1 month ago

Apply

6.0 years

0 Lacs

Tirunelveli, Tamil Nadu, India

On-site

Company Description
Pro17Analytics is a specialist IT solution company with esteemed global clientele. They focus on customer-centric services and solutions, leveraging deep IT consulting expertise, domain knowledge, and best practices to help companies achieve their business objectives. The company is backed by a strong techno-functional management team and talented IT consultants, providing high-end consulting services to global clients across industries.

Job Title: DevOps Engineer
Experience: 4–6 Years
Location: Tirunelveli
Employment Type: Full-Time

Job Summary:
We are seeking an experienced and proactive DevOps Engineer with 4 to 6 years of hands-on experience to join our growing team. The ideal candidate will have a strong background in automation, CI/CD pipeline management, cloud platforms, containerization, and infrastructure as code (IaC). You will play a key role in ensuring the availability, performance, and security of our development and production environments.

Key Responsibilities:
· Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI/CD, Azure DevOps, or similar.
· Manage and automate infrastructure provisioning and configuration using tools like Terraform, Ansible, or CloudFormation.
· Monitor system performance, availability, and scalability through logging and monitoring tools (e.g., Prometheus, Grafana, ELK, CloudWatch).
· Collaborate with development, QA, and operations teams to ensure seamless code deployments and environment stability.
· Implement and manage containerization using Docker and orchestration platforms like Kubernetes or ECS.
· Manage cloud infrastructure on AWS, Azure, or GCP.
· Ensure security, compliance, and best practices across DevOps processes.
· Troubleshoot and resolve issues in development, test, and production environments.
· Participate in on-call rotation for production support and incident management.

Required Skills:
· CI/CD Tools: Jenkins, GitLab CI/CD, Azure DevOps, or similar.
· Containerization: Docker, Kubernetes, Helm.
· Cloud Platforms: AWS, Azure, or GCP (at least one is mandatory).
· Infrastructure as Code (IaC): Terraform, CloudFormation, or Ansible.
· Scripting Languages: Shell, Python, Bash, or PowerShell.
· Monitoring & Logging: Prometheus, Grafana, ELK Stack, Datadog, or cloud-native tools.
· Version Control Systems: Git, GitHub, GitLab.
· Security Practices: Understanding of DevSecOps, vulnerability scanning, secrets management.

Preferred Qualifications:
· Experience with service mesh (e.g., Istio, Linkerd).
· Exposure to serverless architectures and microservices deployment.
· Knowledge of networking concepts and performance tuning.
· Certification in AWS/Azure/GCP DevOps or related fields is a plus.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana

On-site

About the Role: Grade Level (for internal use): 11 About the Role We are looking for a highly driven Senior Platform & Full Stack Engineer who brings passion, innovation, and deep technical experience to join our high-performing DevOps and SRE team. In this role, you’ll help us define, build, and scale the next generation of cloud-native, cloud-agnostic CI/CD pipelines , Infrastructure as Code (IaC) reusable workflows , and AI-driven autonomous deployments . Key Responsibilities Lead the design and implementation of reusable IaC workflows and standardized CI/CD blueprints across multiple teams. Architect and maintain cloud-agnostic deployment solutions with deep expertise in AWS and Kubernetes (EKS). Implement and optimize configuration as code practices using tools like Terraform and GitHub Actions. Partner with developers and SREs to define end-to-end infrastructure workflows — covering compute, network, and storage automation. Contribute as a hands-on developer to internal tools, platforms, and APIs (Java, Go, or similar). Collaborate on cutting-edge initiatives such as Agentic AI workflows and autonomous chat-based deployments using MCP and LLM orchestration. Foster a culture of continuous innovation, high energy, and performance excellence. Required Skills & Experience 10+ years of experience in DevOps, Platform Engineering, or Full Stack Development with platform ownership. Proven experience designing Infrastructure as Code using Terraform at scale. Solid programming skills — J ava ,Python, Javascript and Go preferred Expertise in CI/CD pipeline design and orchestration using GitHub Actions (and optionally ArgoCD, GitLab, Jenkins, etc.). Strong knowledge of AWS services, with hands-on experience in EKS , IAM, networking (VPCs, Route53, ALBs), storage (EBS, S3), and compute. End-to-end understanding of modern cloud infrastructure , DevSecOps, observability, and release practices. Ability to translate product/platform needs into reliable, secure, scalable infrastructure solutions . Excellent problem-solving skills and a mindset for performance, scalability, and resilience. Passion for innovation, high energy, and eagerness to experiment with emerging tech like LLMs and Agentic AI Additional Skills Experience with multi-cloud environments (Azure, GCP). Knowledge of Agentic AI systems , LLMs , or AI Ops use cases. Exposure to platform-as-product or internal developer platforms. Familiarity with Kubernetes Operators, Helm charts, and service mesh (Istio, Linkerd). Why Join Us? Be part of a forward-thinking DevOps and SRE team pushing the boundaries of platform automation. Work on AI-powered workflows and define how infrastructure can be deployed through intelligent assistants. Build developer-centric platforms that make a real impact on engineering productivity and product reliability. Enjoy a culture of innovation, energy, and excellence where your ideas will be heard and executed. What’s In It For You? Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology–the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. 
Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to: EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. 
US Candidates Only: The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH202.2 - Middle Professional Tier II (EEO Job Group) Job ID: 317047 Posted On: 2025-06-10 Location: Hyderabad, Telangana, India
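The listing above centers on reusable IaC workflows driven by Terraform and GitHub Actions. As a rough, non-authoritative illustration of the glue such a workflow step might contain, here is a minimal Python sketch that runs a Terraform init/plan/apply for a named environment via the Terraform CLI. The directory layout, workspace names, and tfvars paths are assumptions for the example, not anything specified by the employer.

```python
#!/usr/bin/env python3
"""Minimal sketch of a reusable Terraform deploy step (hypothetical stack layout)."""
import subprocess
import sys


def run(cmd: list[str], cwd: str) -> None:
    # Echo the command so CI logs show exactly what ran.
    print("+", " ".join(cmd), flush=True)
    subprocess.run(cmd, cwd=cwd, check=True)


def deploy(environment: str, stack_dir: str = "infra/eks") -> None:
    """Init, plan, and apply one Terraform stack for one environment."""
    var_file = f"envs/{environment}.tfvars"  # assumed file layout
    run(["terraform", "init", "-input=false"], stack_dir)
    # Select the workspace, creating it on first use.
    try:
        run(["terraform", "workspace", "select", environment], stack_dir)
    except subprocess.CalledProcessError:
        run(["terraform", "workspace", "new", environment], stack_dir)
    run(["terraform", "plan", "-input=false", f"-var-file={var_file}", "-out=tfplan"], stack_dir)
    run(["terraform", "apply", "-input=false", "tfplan"], stack_dir)


if __name__ == "__main__":
    deploy(sys.argv[1] if len(sys.argv) > 1 else "dev")
```

In a GitHub Actions reusable workflow, a step like this would typically sit alongside checkout, cloud-credential setup, and policy checks.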

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

Job Title: Senior DevOps Engineer – GCP (6+ Years Experience) Location: Remote Employment Type: Full-Time Experience Level: Mid to Senior Level (6+ Years) Job Summary: We are looking for a highly skilled DevOps Engineer with 6+ years of experience, specializing in Google Cloud Platform (GCP), to join our growing team. The ideal candidate will have a strong background in cloud infrastructure, CI/CD, automation, and system reliability. You will work closely with development, QA, and IT teams to design and maintain scalable, reliable, and secure environments. Key Responsibilities: Design, implement, and manage scalable infrastructure on GCP using best practices. Develop and maintain CI/CD pipelines using tools such as Jenkins, GitLab CI, Cloud Build, etc. Automate infrastructure provisioning using Terraform / Deployment Manager / Ansible. Manage containerized workloads using Kubernetes (GKE) and Docker. Implement and monitor system performance, availability, and reliability with tools like Prometheus, Grafana, Stackdriver (Cloud Monitoring). Ensure security best practices in cloud architecture including IAM, networking, secrets management, and compliance. Perform regular system updates, patching, and backup tasks. Participate in on-call rotations and incident response as needed. Collaborate with development teams to streamline code releases and reduce deployment friction. Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Technology, or related field. 6+ years of experience in DevOps, Site Reliability Engineering, or Infrastructure roles. 3+ years of hands-on experience with Google Cloud Platform (GCP). Proficiency in infrastructure as code tools like Terraform. Strong experience in CI/CD pipeline setup and automation. Solid experience with Kubernetes (GKE) and containerization technologies. Scripting experience in Python, Bash, or Go. Experience with cloud monitoring and logging tools. Strong understanding of networking, security, and system administration in cloud environments. Preferred Qualifications: GCP Certifications (e.g., Professional Cloud DevOps Engineer or Cloud Architect). Experience with multi-cloud environments (AWS, Azure). Experience implementing DevSecOps practices. Familiarity with service mesh technologies like Istio or Linkerd. Exposure to Agile/Scrum methodologies. Soft Skills: Excellent problem-solving and troubleshooting abilities. Strong communication and collaboration skills. Ability to work in fast-paced, dynamic environments. Team-oriented and proactive attitude.
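Since this role leans on GKE plus CI/CD gating, here is a small, hedged Python sketch of a post-deploy verification step: it fetches cluster credentials with gcloud and then waits for a Deployment rollout with kubectl. The project, region, cluster, namespace, and deployment names are placeholders.

```python
#!/usr/bin/env python3
"""Sketch of a GKE post-deployment gate for a CI/CD pipeline (all names are placeholders)."""
import subprocess


def sh(cmd: list[str]) -> None:
    print("+", " ".join(cmd), flush=True)
    subprocess.run(cmd, check=True)


def verify_rollout(project: str, region: str, cluster: str,
                   namespace: str, deployment: str, timeout: str = "180s") -> None:
    # Point kubectl at the target GKE cluster.
    sh(["gcloud", "container", "clusters", "get-credentials", cluster,
        "--region", region, "--project", project])
    # Block until the new ReplicaSet is fully rolled out, or fail the pipeline.
    sh(["kubectl", "rollout", "status", f"deployment/{deployment}",
        "-n", namespace, f"--timeout={timeout}"])


if __name__ == "__main__":
    verify_rollout("my-gcp-project", "us-central1", "prod-gke", "payments", "payments-api")
```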

Posted 1 month ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

Remote

At Armor, we are committed to making a meaningful difference in securing cyberspace. Our vision is to be the trusted protector and de facto standard that cloud-centric customers entrust with their risk. We strive to continuously evolve to be the best partner of choice, breaking norms and tirelessly innovating to stay ahead of evolving cyber threats and reshaping how we deliver customer outcomes. We are passionate about making a positive impact in the world, and we’re looking for a highly skilled and experienced talent to join our dynamic team. Armor has unique offerings to the market so customers can a) understand their risk b) leverage Armor to co-manage their risk or c) completely outsource their risk to Armor. Learn more at: https://www.armor.com Summary We are seeking a highly skilled Cloud Engineer with expertise in Oracle Cloud Infrastructure (OCI), Microsoft Azure, and VMware, along with strong Linux experience and hands-on proficiency in Kubernetes and Microservices architecture. The ideal candidate will design, deploy, and manage cloud environments, ensuring high availability, security, and scalability for mission-critical applications. ESSENTIAL DUTIES AND RESPONSIBILITIES (Additional duties may be assigned as required) Design, implement, and manage cloud solutions on Oracle Cloud Infrastructure (OCI) and Azure. Configure and optimize VMware-based virtualized environments for cloud and on-premises deployments. Architect scalable, high-performance, and cost-effective cloud-based solutions. Deploy, configure, and maintain Kubernetes clusters for containerized applications. Design, build, and support Microservices architectures using containers and orchestration tools. Implement service mesh technologies (e.g., Istio, Linkerd) for microservices networking and security. Manage and optimize Linux-based systems, ensuring reliability and security. Automate infrastructure provisioning and configuration using Terraform, Ansible, or other Infrastructure-as-Code (IaC) tools. Implement CI/CD pipelines for automated application deployments and infrastructure updates. Ensure cloud security best practices, including identity and access management (IAM), encryption, and compliance standards. Monitor and enhance network security policies across cloud platforms. Proactively monitor cloud infrastructure performance, ensuring optimal uptime and responsiveness. Troubleshoot complex cloud, networking, and containerization issues. Work closely with DevOps, security, and development teams to optimize cloud-based deployments. Document architectures, processes, and troubleshooting guides for cloud environments. Required Skills 7+ years of experience in cloud engineering or a related field. Hands-on experience with Oracle Cloud Infrastructure (OCI) and Microsoft Azure. Expertise in VMware virtualization technologies. Strong Linux system administration skills. Proficiency in Kubernetes (EKS, AKS, OKE, or self-managed clusters). Experience with Microservices architecture and containerization (Docker, Helm, Istio). Knowledge of Infrastructure as Code (IaC) tools like Terraform or Ansible. Strong scripting skills (Bash, Python, or PowerShell). Familiarity with networking concepts (VPC, subnets, firewalls, DNS, VPN). Preferred Qualifications Certifications in OCI, Azure, Kubernetes (CKA/CKAD), or VMware. Experience with serverless computing and event-driven architectures. Knowledge of logging and monitoring tools (Prometheus, Grafana, ELK, Azure Monitor). 
WHY ARMOR Join Armor if you want to be part of a company that is redefining cybersecurity. Here, you will have the opportunity to shape the future, disrupt the status quo, and be a part of a team that celebrates energy, passion, and fresh thinking. We are not looking for someone who simply fills a role – we want talent who will help us write the next chapter of our growth story. Armor Core Values Commitment to Growth: A growth mindset that encourages continuous learning and improvement with adaptability in the face of challenges. Integrity Always: Sustain trust through transparency + honesty in all actions and interactions regardless of circumstances. Empathy In Action: Active understanding, compassion and support to the needs of others through genuine connection. Immediate Impact: Taking initiative with swift, informed actions to deliver positive outcomes. Follow-Through: Dedication to delivering finished results with attention to quality and detail to achieve the desired outcomes. WORK ENVIRONMENT The work environment characteristics described here are representative of those an employee encounters while performing the essential functions of this job. The noise level in the work environment is usually low to moderate. The work environment can be either in an office setting or remotely from anywhere. Equal opportunity employer - it is the policy of the company to comply with all employment laws and to afford equal employment opportunity to individuals in all aspects of employment, including in selection for job opportunities, without regard to race, color, religion, sex, national origin, age, disability, genetic information, veteran status, or any other consideration protected by federal, state or local laws.
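The Armor listing above includes proactive infrastructure monitoring with Prometheus and Grafana. As one hedged illustration, this Python sketch uses the third-party prometheus_client library to expose a custom health gauge that a Prometheus server could scrape; the metric name and the underlying check are invented for the example.

```python
"""Tiny custom Prometheus exporter sketch (metric and check are illustrative only).

Requires: pip install prometheus-client
"""
import shutil
import time

from prometheus_client import Gauge, start_http_server

# Gauge scraped by Prometheus; alert rules in Alertmanager/Grafana would key off it.
DISK_FREE_RATIO = Gauge(
    "node_root_disk_free_ratio",
    "Free space on / as a fraction of total capacity",
)


def collect() -> None:
    usage = shutil.disk_usage("/")
    DISK_FREE_RATIO.set(usage.free / usage.total)


if __name__ == "__main__":
    start_http_server(9105)  # scrape target: http://host:9105/metrics
    while True:
        collect()
        time.sleep(15)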

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

On-site

Shape the Future of Secure Microservice Communication! Are you a hands-on Platform Engineer passionate about cutting-edge technology and eager to build the next generation of resilient and secure applications? Join our dynamic Platform Engineering team and become our Kong Mesh Maestro! The Opportunity: We're seeking a talented and driven Kong Mesh Engineer to spearhead the design, implementation, and management of our service mesh infrastructure, powered by Kong Mesh (based on Kuma). You'll be at the forefront of enabling secure, scalable, and observable communication between our mission-critical microservices, all within a fast-paced banking domain environment. If you thrive on solving complex challenges and have a knack for creating robust and efficient systems, this is your chance to make a significant impact. What You'll Be Doing: Architect, deploy, and maintain our Kong Mesh infrastructure, ensuring seamless service-to-service communication across Kubernetes and hybrid environments. Dive deep into configuring and optimizing mesh features, including mutual TLS (mTLS), intelligent traffic routing, robust circuit breakers, precise rate limiting, and comprehensive observability. Become a trusted partner to our development teams and Site Reliability Engineers (SREs), guiding them through the journey of onboarding their applications onto the service mesh. Embrace automation! You'll build and refine our CI/CD pipelines and Infrastructure as Code (IaC) practices to automate service mesh deployments and configurations. Keep a watchful eye on the mesh performance, proactively troubleshoot any hiccups, and fine-tune configurations to achieve peak efficiency. Champion security by implementing and enforcing mesh policies to ensure consistently secure communication between all microservices. Share your expertise by creating clear and comprehensive documentation for configurations, workflows, and best practices, empowering the entire organization. What You'll Bring: A Bachelor's degree in Computer Science, Engineering, or a related discipline. 3+ years of hands-on experience in DevOps, SRE, or platform/infrastructure engineering roles. Proven experience implementing and managing Kong Mesh (or similar service mesh technologies like Istio, Linkerd, Kuma). A strong grasp of Kubernetes, Envoy proxy, and the intricacies of container orchestration. Deep understanding of microservices architecture and the critical aspects of network security, including mTLS and Role-Based Access Control (RBAC). Familiarity with CI/CD pipelines, Git version control, and Infrastructure as Code (IaC) tools such as Terraform or Helm. Fluency in at least one scripting language (e.g., Bash, Python, or Go). If you are ready to take on this exciting challenge and contribute to building a world-class platform, we encourage you to apply!
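For context on the mTLS responsibility above, the sketch below pipes a Kuma-style Mesh resource (Kong Mesh is built on Kuma) into kubectl to enable a builtin certificate authority on the default mesh. The manifest mirrors Kuma's documented builtin-CA example, but treat the exact fields and the mesh/backend names as assumptions to verify against the Kong Mesh version you actually run.

```python
"""Sketch: enable mTLS on a Kuma/Kong Mesh 'default' mesh by applying a Mesh resource."""
import subprocess

# Schema follows Kuma's builtin-CA example; verify against your Kong Mesh docs.
MESH_MANIFEST = """\
apiVersion: kuma.io/v1alpha1
kind: Mesh
metadata:
  name: default
spec:
  mtls:
    enabledBackend: ca-1
    backends:
      - name: ca-1
        type: builtin
"""


def apply_manifest(manifest: str) -> None:
    # Equivalent to: echo "$manifest" | kubectl apply -f -
    subprocess.run(["kubectl", "apply", "-f", "-"], input=manifest, text=True, check=True)


if __name__ == "__main__":
    apply_manifest(MESH_MANIFEST)
```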

Posted 1 month ago

Apply

7.0 years

0 Lacs

India

On-site

Senior DevOps Consultant About the Role We are seeking an experienced Senior DevOps Consultant to join our team of technology professionals. The ideal candidate will bring extensive expertise in DevOps practices, cloud platforms, and infrastructure automation, with a strong focus on implementing continuous integration/continuous delivery (CI/CD) pipelines and cloud-native solutions. This role requires an individual who can lead complex transformation initiatives, provide technical mentorship, and deliver high-quality solutions for our enterprise clients. Position Overview As a Senior DevOps Consultant, you will serve as a technical leader and subject matter expert for our DevOps practice. You will be responsible for designing, implementing, and optimizing DevOps strategies and toolchains, with particular emphasis on automation, infrastructure as code, and cloud-native architectures. You will work directly with clients to understand their business requirements and translate them into effective technical solutions that enhance development velocity, operational efficiency, and system reliability. Key Responsibilities Lead complex DevOps transformation initiatives and implementation projects for enterprise clients Design and implement comprehensive CI/CD pipelines across various platforms and technologies Develop and implement infrastructure as code solutions using tools like Terraform, Ansible, or ARM templates Create cloud-native architectures leveraging container technologies and orchestration platforms Provide technical mentorship and guidance to junior team members and client development teams Develop automation solutions to streamline build, test, and deployment processes Conduct technical assessments of existing environments and provide strategic recommendations Create detailed technical documentation and knowledge transfer materials Implement monitoring, observability, and security solutions within DevOps practices Present technical concepts and solutions to client stakeholders at all levels Stay current with DevOps methodologies, tools, and emerging technologies Required Qualifications 7+ years of hands-on experience in IT, with at least 5 years specifically focused on DevOps practices Advanced expertise in CI/CD tools and methodologies (e.g., Azure DevOps, Jenkins, GitLab CI, GitHub Actions) Strong experience with at least one major cloud platform (Azure, AWS, or GCP), preferably Azure In-depth knowledge of infrastructure as code using tools like Terraform, ARM templates, or CloudFormation Expertise in containerization technologies (Docker) and orchestration platforms (Kubernetes) Deep understanding of scripting and automation using PowerShell, Bash, Python, or similar languages Experience implementing monitoring and observability solutions Strong understanding of networking concepts and security best practices Experience with version control systems, particularly Git-based workflows Demonstrated experience leading DevOps transformation initiatives at enterprise scale Excellent documentation, communication, and presentation skills Ability to translate complex technical concepts for non-technical stakeholders Required Certifications Microsoft Certified: DevOps Engineer Expert Microsoft Certified: Azure Administrator Associate Microsoft Certified: Azure Solutions Architect Expert or AWS Certified Solutions Architect Preferred Qualifications Experience with multiple cloud platforms (Azure, AWS, GCP) Knowledge of database management and data pipeline implementations Experience with 
service mesh technologies (e.g., Istio, Linkerd) Understanding of microservices architectures and design patterns Experience with security scanning and DevSecOps implementation Background in mentoring junior consultants and developing team capabilities Experience with Site Reliability Engineering (SRE) practices Preferred Certifications Certified Kubernetes Administrator (CKA) or Certified Kubernetes Application Developer (CKAD) HashiCorp Certified: Terraform Associate AWS Certified DevOps Engineer - Professional (if working with AWS) Google Professional Cloud DevOps Engineer (if working with GCP) Professional Skills Exceptional problem-solving and troubleshooting abilities Strong project management and organizational skills Excellent verbal and written communication Client-focused mindset with strong consulting capabilities Ability to work both independently and as part of a team Adaptability and willingness to learn new technologies Strong time management and prioritization skills
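The consultant role above emphasizes automating build, test, and deployment processes, which usually ends with some form of deployment verification. Below is a small, generic Python smoke-test sketch of the kind that can be dropped into any of the CI/CD tools the posting lists; the endpoint URL and retry budget are arbitrary placeholders.

```python
"""Generic post-deployment smoke test sketch (URL and thresholds are placeholders)."""
import sys
import time
import urllib.error
import urllib.request


def wait_until_healthy(url: str, attempts: int = 20, delay_s: float = 5.0) -> bool:
    """Poll a health endpoint until it returns HTTP 200 or the retry budget runs out."""
    for attempt in range(1, attempts + 1):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    print(f"healthy after {attempt} attempt(s)")
                    return True
        except (urllib.error.URLError, OSError) as exc:
            print(f"attempt {attempt}: not ready ({exc})")
        time.sleep(delay_s)
    return False


if __name__ == "__main__":
    ok = wait_until_healthy("https://app.example.internal/healthz")
    sys.exit(0 if ok else 1)
```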

Posted 1 month ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview PepsiCo eCommerce has an opportunity for a Cloud Infrastructure security or DevSecOps engineer focused on our applications running in Azure and AWS. You will be part of the DevOps and cloud infrastructure team that is responsible for cloud security, infrastructure provisioning, and maintaining existing platforms, and that provides our partner teams with guidance for building, maintaining and optimizing integration and deployment pipelines as code for deploying our applications to run in AWS & Azure. This role offers many exciting challenges and opportunities; some of the major duties are: Work with engineering teams to develop and improve our CI/CD pipelines that enforce proper versioning and branching practices using technologies like GitHub, GitHub Actions, ArgoCD, Kubernetes, Docker and Terraform. Create, deploy & maintain Kubernetes-based platforms for a variety of different workloads in AWS and Azure. Responsibilities Deploy infrastructure in Azure & AWS cloud using Terraform and infrastructure-as-code best practices. Participate in the development of CI/CD workflows to take applications from build to deployment using modern DevOps tools like Kubernetes, ArgoCD/Flux, Terraform, and Helm. Ensure the highest possible uptime for our Kubernetes-based developer productivity platforms. Partner with development teams to recommend best practices for application uptime and recommend best practices for cloud native infrastructure architecture. Collaborate in infra & application architecture discussions and decision making that is part of continually improving and expanding these platforms. Automate everything. Focus on creating tools that make your life easy and benefit the entire org and business. Evaluate and support onboarding of 3rd party SaaS applications or work with teams to integrate new tools and services into existing apps. Create documentation, runbooks, disaster recovery plans and processes. Collaborate with application development teams to perform RCA. Implement and manage threat detection protocols, processes and systems. Conduct regular vulnerability assessments and ensure timely remediation of flagged incidents (a small scan-gate sketch follows this listing). Ensure compliance with internal security policies and external regulations like PCI. Lead the integration of security tools such as Wiz, Snyk, DataDog and others within the PepsiCo infrastructure. Coordinate with PepsiCo's broader security teams to align Digital Commerce security practices with corporate standards. Provide security expertise and support to various teams within the organization. Advocate and enforce security best practices, such as RBAC and the principle of least privilege. Continuously review, improve and document security policies and procedures. Participate in on-call rotation to support our NOC and incident management teams. Qualifications 8+ years of IT experience. 5+ years of Kubernetes experience, ideally running workloads in a production environment on AKS or EKS platforms. 4+ years of creating CI/CD pipelines in any templatized format in GitHub, GitLab or Azure ADO. 3+ years of Python, Bash, and any other OOP language. (Please be prepared for a coding assessment in your language of choice.) 5+ years of experience deploying infrastructure to Azure platforms. 3+ years of experience using Terraform or writing Terraform modules. 3+ years of experience with Git, GitLab or GitHub. 2+ years of experience as an SRE or supporting microservices in containerized environments like Nomad, Docker Swarm or K8s. 
Kubernetes certifications like KCNA, KCSA, CKA, CKAD or CKS preferred Good understanding of software development lifecycle. Familiarity with: Site Reliability Engineering AWS, Azure, or similar cloud platforms Automated build process and tools Service mesh like Istio, Linkerd Monitoring tools like Datadog, Splunk etc. Able to administer and run basic SQL queries in Postgres, MySQL or any relational database. Current skills in following technologies: Kubernetes Terraform AWS or Azure (Azure preferred). GitHub Actions or GitLab workflows. Familiar with Agile processes and tools such as Jira; good to have experience being part of Agile teams, continuous integration, automated testing, and test-driven development BSc/MSc in computer science, software engineering or related field is a plus; alternatively, completion of a DevOps or infrastructure training course or bootcamp is acceptable as well. Self-starter; bias for action and for quick iteration on ideas / concepts; strong interest in proving out ideas and technologies with rapid prototyping Ability to interact well across various engineering teams Team player; excellent listening skills; welcoming of ideas and new ways of looking at things; able to efficiently take part in brainstorming sessions
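The PepsiCo listing above pairs CI/CD ownership with vulnerability management. As a hedged sketch only, here is a Python CI gate that reads a scanner's JSON export and fails the pipeline when unresolved critical or high findings remain. The report structure is invented for the example; real tools such as Wiz or Snyk each have their own export formats and APIs.

```python
"""Sketch of a CI gate over a vulnerability scan export.

The JSON layout below is hypothetical; adapt the parsing to whatever format
your scanner (Wiz, Snyk, etc.) actually emits.
"""
import json
import sys
from pathlib import Path

BLOCKING_SEVERITIES = {"CRITICAL", "HIGH"}


def blocking_findings(report_path: Path) -> list[dict]:
    report = json.loads(report_path.read_text())
    findings = report.get("findings", [])  # assumed key in the hypothetical export
    return [
        f for f in findings
        if f.get("severity", "").upper() in BLOCKING_SEVERITIES
        and not f.get("resolved", False)
    ]


if __name__ == "__main__":
    path = Path(sys.argv[1] if len(sys.argv) > 1 else "scan-report.json")
    blockers = blocking_findings(path)
    for f in blockers:
        print(f"BLOCKING: {f.get('id', '?')} {f.get('title', '')} [{f.get('severity')}]")
    sys.exit(1 if blockers else 0)  # non-zero exit fails the pipeline stage
```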

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India. Job Description Join us as a Software Engineer This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority It’s a chance to hone your existing technical skills and advance your career We're offering this role as associate level What you'll do In your new role, you’ll engineer and maintain innovative, customer-centric, high performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud. You’ll also be: Design, deploy, and manage Kubernetes clusters using Amazon EKS. Develop and maintain Helm charts for deploying containerized applications. Build and manage Docker images and registries for microservices. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation). Monitor and troubleshoot Kubernetes workloads and cluster health. Support CI/CD pipelines for containerized applications. Collaborate with development and DevOps teams to ensure seamless application delivery. Ensure security best practices are followed in container orchestration and cloud environments. Optimize performance and cost of cloud infrastructure. The skills you'll need You’ll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in Java full stack including Microservices, ReactJS, AWS, Spring, SpringBoot, SpringBatch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST API, API Gateway, Kafka and API development. You’ll also need: 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch. Strong expertise in Kubernetes architecture, networking, and resource management. Proficiency in Docker and container lifecycle management. Experience in writing and maintaining Helm charts for complex applications. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions. Solid understanding of Linux systems, shell scripting, and networking concepts. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of security practices in cloud and container environments. Preferred Qualifications: AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools like ArgoCD or Flux. Experience with logging and observability tools (e.g., ELK stack, Fluentd).
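Given the Helm-chart emphasis in the role above, here is a minimal Python wrapper around the Helm CLI that installs or upgrades a release into an EKS cluster with an environment-specific values file. The chart path, release, namespace, and values-file names are illustrative, not taken from the posting.

```python
"""Sketch: install/upgrade a Helm release with environment-specific values (names illustrative)."""
import subprocess


def helm_deploy(release: str, chart: str, namespace: str, values_file: str) -> None:
    cmd = [
        "helm", "upgrade", "--install", release, chart,
        "--namespace", namespace, "--create-namespace",
        "-f", values_file,
        "--atomic",            # roll back automatically if the release fails
        "--timeout", "5m",
    ]
    print("+", " ".join(cmd), flush=True)
    subprocess.run(cmd, check=True)


if __name__ == "__main__":
    # Assumes kubeconfig already points at the target EKS cluster.
    helm_deploy("payments-api", "./charts/payments-api", "payments", "values/prod.yaml")
```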

Posted 1 month ago

Apply

0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Line of Service Advisory Industry/Sector Not Applicable Specialism Operations Management Level Senior Associate Job Description & Summary At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As a business application consulting generalist at PwC, you will provide consulting services for a wide range of business applications. You will leverage a broad understanding of various software solutions to assist clients in optimising operational efficiency through analysis, implementation, training, and support. Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations. Job Description & Summary: A career as a DevOps Developer will provide you with the opportunity to help our clients leverage technology to enhance their customer experience. Responsibilities: 1. CI/CD (Jenkins/Azure DevOps/GoCD/ArgoCD) 2. Containerization (Docker, Kubernetes) 3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry). (A minimal OpenTelemetry sketch follows this listing.) Mandatory skill sets: 1. CI/CD (Jenkins/Azure DevOps/GoCD/ArgoCD) 2. Containerization (Docker, Kubernetes) 3. Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry) Preferred skill sets: 1. CI/CD (Jenkins/Azure DevOps/GoCD/ArgoCD) 2. Containerization (Docker, Kubernetes) 3. 
Cloud & observability (AWS, Terraform, AWS CDK, Elastic Stack, Istio, Linkerd, OpenTelemetry) Years of experience required: 4+ years Education qualification: BE/B.Tech/MBA Education (if blank, degree and/or field of study not specified) Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration, Bachelor of Engineering Degrees/Field of Study preferred: Certifications (if blank, certifications not specified) Required Skills CI/CD Optional Skills Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment, Performance Management Software {+ 16 more} Desired Languages (If blank, desired languages not specified) Travel Requirements Not Specified Available for Work Visa Sponsorship? No Government Clearance Required? No Job Posting End Date
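Since the skill set above names OpenTelemetry alongside the CI/CD and container stack, here is a minimal, hedged Python tracing setup using the standard opentelemetry-api and opentelemetry-sdk packages, exporting spans to the console for illustration; in practice the exporter would point at a collector endpoint, and the service and span names here are invented.

```python
"""Minimal OpenTelemetry tracing setup sketch (console exporter for illustration).

Requires: pip install opentelemetry-api opentelemetry-sdk
"""
from opentelemetry import trace
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor, ConsoleSpanExporter

# Wire a tracer provider with a simple console exporter.
provider = TracerProvider()
provider.add_span_processor(BatchSpanProcessor(ConsoleSpanExporter()))
trace.set_tracer_provider(provider)

tracer = trace.get_tracer("checkout-service")  # illustrative service name


def handle_order(order_id: str) -> None:
    # Each request gets a span; attributes make it searchable in the tracing backend.
    with tracer.start_as_current_span("handle_order") as span:
        span.set_attribute("order.id", order_id)
        # ... business logic would go here ...


if __name__ == "__main__":
    handle_order("demo-123")
```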

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Hyderabad, Telangana, India Category: Information Technology Hire Type: Employee Job ID 9330 Date posted 02/24/2025 We Are: At Synopsys, we drive the innovations that shape the way we live and connect. Our technology is central to the Era of Pervasive Intelligence, from self-driving cars to learning machines. We lead in chip design, verification, and IP integration, empowering the creation of high-performance silicon chips and software content. Join us to transform the future through continuous technological innovation. You Are: You are a forward-thinking Cloud DevOps Engineer with a passion for modernizing infrastructure and enhancing the capabilities of CI/CD pipelines, containerization strategies, and hybrid cloud deployments. You thrive in environments where you can leverage your expertise in cloud infrastructure, distributed processing workloads, and AI-driven automation. Your collaborative spirit drives you to work closely with development, data, and GenAI teams to build resilient, scalable, and intelligent DevOps solutions. You are adept at integrating cutting-edge technologies and best practices to enhance both traditional and AI-driven workloads. Your proactive approach and problem-solving skills make you an invaluable asset to any team. What You’ll Be Doing: Designing, implementing, and optimizing CI/CD pipelines for cloud and hybrid environments. Integrating AI-driven pipeline automation for self-healing deployments and predictive troubleshooting. Leveraging GitOps (ArgoCD, Flux, Tekton) for declarative infrastructure management. Implementing progressive delivery strategies (Canary, Blue-Green, Feature Flags). Containerizing applications using Docker & Kubernetes (EKS, AKS, GKE, OpenShift, or on-prem clusters). Optimizing service orchestration and networking with service meshes (Istio, Linkerd, Consul). Implementing AI-enhanced observability for containerized services using AIOps-based monitoring. Automating provisioning with Terraform, CloudFormation, Pulumi, or CDK. Supporting and optimizing distributed computing workloads, including Apache Spark, Flink, or Ray. Using GenAI-driven copilots for DevOps automation, including scripting, deployment verification, and infra recommendations. The Impact You Will Have: Enhancing the efficiency and reliability of CI/CD pipelines and deployments. Driving the adoption of AI-driven automation to reduce downtime and improve system resilience. Enabling seamless application portability across on-prem and cloud environments. Implementing advanced observability solutions to proactively detect and resolve issues. Optimizing resource allocation and job scheduling for distributed processing workloads. Contributing to the development of intelligent DevOps solutions that support both traditional and AI-driven workloads. What You’ll Need: 5+ years of experience in DevOps, Cloud Engineering, or SRE. Hands-on expertise with CI/CD pipelines (Jenkins, GitHub Actions, GitLab CI, ArgoCD, Tekton, etc.). Strong experience with Kubernetes, container orchestration, and service meshes. Proficiency in Terraform, CloudFormation, Pulumi, or Infrastructure as Code (IaC) tools. Experience working in hybrid cloud environments (AWS, Azure, GCP, on-prem). Strong scripting skills in Python, Bash, or Go. Knowledge of distributed data processing frameworks (Spark, Flink, Ray, or similar). Who You Are: You are a collaborative and innovative professional with a strong technical background and a passion for continuous learning. 
You excel in problem-solving and thrive in dynamic environments where you can apply your expertise to drive significant improvements. Your excellent communication skills enable you to work effectively with diverse teams, and your commitment to excellence ensures that you consistently deliver high-quality results. The Team You’ll Be A Part Of: You will join a dynamic team focused on optimizing cloud infrastructure and enhancing workloads to contribute to overall operational efficiency. This team is dedicated to driving the modernization and optimization of Infrastructure CI/CD pipelines and hybrid cloud deployments, ensuring that Synopsys remains at the forefront of technological innovation. Rewards and Benefits: We offer a comprehensive range of health, wellness, and financial benefits to cater to your needs. Our total rewards include both monetary and non-monetary offerings. Your recruiter will provide more details about the salary range and benefits during the hiring process. At Synopsys, we want talented people of every background to feel valued and supported to do their best work. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, age, military veteran status, or disability.
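The Synopsys listing above mentions progressive delivery (canary, blue-green) and metric-driven observability. One conventional building block for that is a metric-based canary gate; the Python sketch below queries a Prometheus HTTP API for error-rate expressions and compares canary against baseline. The Prometheus URL, the recording-rule names, and the threshold are assumptions for illustration, not part of the posting.

```python
"""Sketch of a metric-based canary gate against the Prometheus HTTP API.

The Prometheus URL, metric names, and threshold are illustrative assumptions.
"""
import json
import urllib.parse
import urllib.request

PROM_URL = "http://prometheus.monitoring.svc:9090"  # placeholder


def instant_query(expr: str) -> float:
    """Run a PromQL instant query and return the first sample value (0.0 if empty)."""
    url = f"{PROM_URL}/api/v1/query?" + urllib.parse.urlencode({"query": expr})
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    result = data["data"]["result"]
    return float(result[0]["value"][1]) if result else 0.0


def canary_is_healthy(max_ratio: float = 1.5) -> bool:
    # Hypothetical recording rules: 5xx rate per deployment track.
    canary = instant_query('job:http_errors:rate5m{track="canary"}')
    baseline = instant_query('job:http_errors:rate5m{track="stable"}')
    if baseline == 0.0:
        return canary == 0.0
    return canary / baseline <= max_ratio


if __name__ == "__main__":
    print("promote" if canary_is_healthy() else "rollback")
```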

Posted 1 month ago

Apply

20.0 - 22.0 years

20 - 22 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Position Summary As the Group Director of Software Engineering for Runtime Platforms at Walmart Global Tech, you will be at the helm of foundational platform transformation. This is a high-impact leadership role overseeing the design, development, and global delivery of Walmart's next-generation Runtime and Traffic Management platforms. You will lead a global team responsible for critical platform functions: container and VM management, configuration and secrets, and all ingress/egress traffic for Walmart applications. Your strategic and technical expertise will directly support business continuity, security, operational excellence, and customer satisfaction at scale. About the Team The Global Technology Platform (GTP) powers Walmart's digital transformation and is the engine behind every customer experience. The Runtime Platform (RTP) is core to this foundation, enabling scalable compute, seamless deployment, secure traffic routing, and service observability across Walmart's multicloud environments. You will focus primarily on the Traffic Management team, which handles: Global DNS and API Gateway Software load balancing CDN, edge computing, and eBPF-based traffic optimization Routing and security policies at scale Handling 10M+ requests per second across Walmart's digital footprint This is a critical, highly performant system supporting mission-critical traffic flows for Walmart globally. What You'll Do Lead and grow an elite engineering team responsible for Walmart's global runtime and traffic platforms. Drive strategy and delivery of Walmart's container/VM orchestration, ingress/egress management, and configuration and secrets storage systems. Define, evangelize, and execute on a unified traffic vision, combining routing, DNS, security, observability, and scalability. Collaborate with product, operations, security, and business leadership to align technology strategy with Walmart's digital goals. Architect and modernize a world-class traffic platform using service mesh, eBPF, software load balancers, and multi-cloud routing policies. Create a platform that supports dynamic traffic shaping (rate limiting, circuit breakers, segmentation, etc.) while improving latency, resilience, and cost efficiency. (A rate-limiting sketch follows this listing.) Establish high standards for availability, observability, automation, and disaster recovery. Lead platform engineering with a strong developer-first approach, ensuring self-service, reliability, and documentation. Convert CxO-level strategic goals into measurable engineering OKRs and deliver through strong governance and ownership. What You'll Bring 20+ years of engineering leadership with a proven track record of building and operating large-scale distributed systems/platforms. Deep experience in platform engineering, especially in traffic management, container orchestration (Kubernetes), VMs, and networking. Practical expertise in: Cloud-native networking (GCP, Azure, AWS) Service mesh technologies (e.g., Envoy, Istio, Linkerd) TCP/IP, HTTP(S), DNS, Load Balancing Rate limiting, DDoS mitigation, routing optimization Edge computing and CDN architectures Prior success leading engineering organizations of 100+, including senior architects and engineering managers. Strategic mindset with the ability to define and implement long-term vision across globally distributed teams. Ability to distill complex technical architectures for executive audiences and align cross-functional stakeholders. 
Strong focus on metrics, observability, automation, and operational excellence (including fast service restoration SLAs). Excellent communication, collaboration, and executive influencing skills. Prior experience contributing to or leading open-source projects is a plus. Bachelor's or Master's in Computer Science, Engineering, or related field. Preferred Qualifications Experience with: eCommerce platforms Open-source contributions in traffic/network domains Agile at scale (SAFe or similar) Security-first design in platform engineering Master's degree in Computer Science or equivalent
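Rate limiting is one of the traffic-shaping functions named in the Walmart listing above. In production this lives in the gateway or mesh layer (for example Envoy-based proxies) rather than in application code, but a token-bucket sketch in Python shows the core idea compactly.

```python
"""Token-bucket rate limiter sketch: the same idea gateways and service meshes apply per route or client."""
import time


class TokenBucket:
    def __init__(self, rate_per_s: float, burst: int) -> None:
        self.rate = rate_per_s        # refill rate, tokens per second
        self.capacity = float(burst)  # maximum burst size
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False


if __name__ == "__main__":
    limiter = TokenBucket(rate_per_s=5, burst=10)  # roughly 5 req/s with bursts of 10
    accepted = sum(limiter.allow() for _ in range(100))
    print(f"accepted {accepted} of 100 back-to-back requests")
```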

Posted 2 months ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India. Job Description Join us as a Software Engineer This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority It’s a chance to hone your existing technical skills and advance your career We're offering this role as associate level What you'll do In your new role, you’ll engineer and maintain innovative, customer-centric, high performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud. You’ll also be: Design, deploy, and manage Kubernetes clusters using Amazon EKS. Develop and maintain Helm charts for deploying containerized applications. Build and manage Docker images and registries for microservices. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation). Monitor and troubleshoot Kubernetes workloads and cluster health. Support CI/CD pipelines for containerized applications. Collaborate with development and DevOps teams to ensure seamless application delivery. Ensure security best practices are followed in container orchestration and cloud environments. Optimize performance and cost of cloud infrastructure. The skills you'll need You’ll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in Java full stack including Microservices, ReactJS, AWS, Spring, SpringBoot, SpringBatch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST API, API Gateway, Kafka and API development. You’ll also need: 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch. Strong expertise in Kubernetes architecture, networking, and resource management. Proficiency in Docker and container lifecycle management. Experience in writing and maintaining Helm charts for complex applications. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions. Solid understanding of Linux systems, shell scripting, and networking concepts. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of security practices in cloud and container environments. Preferred Qualifications: AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools like ArgoCD or Flux. Experience with logging and observability tools (e.g., ELK stack, Fluentd).
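This listing repeats the EKS/Helm profile shown earlier, so rather than repeating the Helm sketch, here is the complementary image-build side: a hedged Python wrapper over the Docker CLI that builds and pushes a tagged microservice image. The registry host and repository names are placeholders, and authentication (docker login) is assumed to have happened beforehand.

```python
"""Sketch: build and push a tagged container image with the Docker CLI (registry is a placeholder)."""
import subprocess


def sh(cmd: list[str]) -> None:
    print("+", " ".join(cmd), flush=True)
    subprocess.run(cmd, check=True)


def build_and_push(registry: str, repo: str, tag: str, context: str = ".") -> str:
    image = f"{registry}/{repo}:{tag}"
    sh(["docker", "build", "-t", image, context])  # build from the given context directory
    sh(["docker", "push", image])                  # push to the (already authenticated) registry
    return image


if __name__ == "__main__":
    # Example with an ECR-style registry host; all names are illustrative.
    build_and_push("123456789012.dkr.ecr.eu-west-1.amazonaws.com", "payments-api", "1.4.2")
```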

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies