
966 GitOps Jobs - Page 23

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

20 - 30 Lacs

Bengaluru

Remote

Required Skills & Qualifications:
- 8+ years of IT experience, with at least 5 years in Salesforce development/administration roles.
- 3+ years of proven experience managing Salesforce DevOps processes.
- Hands-on expertise in Gearset (preferred) and experience with other deployment tools (e.g., Copado, AutoRABIT, Flosum).
- Deep understanding of GitOps principles, including Git-based version control, pull requests, code reviews, and branching strategies.
- Proficiency with CI/CD tools (e.g., Jenkins, GitHub Actions, Azure DevOps).
- Strong knowledge of Salesforce metadata, APIs, and packaging (Unlocked/Managed Packages).
- Experience managing multiple Salesforce environments and coordinating large-scale releases.
- Solid grasp of Agile methodologies and working in a sprint-based development environment.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Sprinklr is a leading enterprise software company for all customer-facing functions. With advanced AI, Sprinklr's unified customer experience management (Unified-CXM) platform helps companies deliver human experiences to every customer, every time, across any modern channel. Headquartered in New York City with employees around the world, Sprinklr works with more than 1,000 of the world’s most valuable enterprises — global brands like Microsoft, P&G, Samsung and more than 50% of the Fortune 100. Learn more about our culture and how we make our employees happier through The Sprinklr Way.

Job Description

What Does Success Look Like?
We are hiring a Lead Backend Engineer to drive the design and development of backend services and APIs that power our CPaaS platform. You’ll work on high-throughput, event-driven systems supporting voice routing, provisioning, billing, analytics, and more. This role demands a strong foundation in distributed systems, API design, and real-time architecture.

Seniority Level: Backend Lead / Tech Owner

What You’ll Do:
- Design and implement scalable RESTful APIs and backend services for CPaaS workflows (number provisioning, SIP trunking, user auth, call logs, etc.).
- Work closely with the VoIP team to expose APIs for call control, diagnostics, and session tracking.
- Build asynchronous workflows using message queues (Kafka, RabbitMQ, or SQS).
- Own database models, caching strategies, retry logic, and service reliability patterns.
- Ensure system observability with structured logging, metrics, tracing, and alerts.
- Partner with QA to build automated tests, mocks, and integration coverage.
- Contribute to internal documentation, runbooks, and deployment playbooks.

What Makes You Qualified?
- 5+ years of hands-on experience in backend development in distributed systems.
- Strong systems programming and debugging skills in Java.
- Solid expertise in REST APIs and microservice architecture.
- Hands-on experience with MongoDB, PostgreSQL, Redis, and API rate limiting strategies.
- Understanding of distributed systems patterns: retries, idempotency, circuit breakers.
- Familiarity with CI/CD, GitOps, containerization, and cloud deployment (AWS preferred).

Why You'll Love Sprinklr:
We're committed to creating a culture where you feel like you belong, are happier today than you were yesterday, and your contributions matter. At Sprinklr, we passionately, genuinely care. For full-time employees, we provide a range of comprehensive health plans, leading well-being programs, and financial protection for you and your family through a range of global and localized plans throughout the world. For more information on Sprinklr benefits around the world, head to https://sprinklrbenefits.com/ to browse our country-specific benefits guides.

We focus on our mission: We founded Sprinklr with one mission: to enable every organization on the planet to make their customers happier. Our vision is to be the world’s most loved enterprise software company, ever.

We believe in our product: Sprinklr was built from the ground up to enable a brand’s digital transformation. Its platform provides every customer-facing team with the ability to reach, engage, and listen to customers around the world. At Sprinklr, we have many of the world's largest brands as our clients, and our employees have the opportunity to work closely alongside them.

We invest in our people: At Sprinklr, we believe every human has the potential to be amazing. We empower each Sprinklrite in the journey toward achieving their personal and professional best. For wellbeing, this includes daily meditation breaks, virtual fitness, and access to Headspace. We have continuous learning opportunities available with LinkedIn Learning and more.

EEO - Our philosophy: Our goal is to ensure every employee feels like they belong and are operating in a judgment-free zone regardless of gender, race, ethnicity, age, and lifestyle preference, among others. We value and celebrate diversity and fervently believe every employee matters and should be respected and heard. We believe we are stronger when we belong because collectively, we’re more innovative, creative, and successful. Sprinklr is proud to be an equal-opportunity workplace and is an affirmative-action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status. See also Sprinklr’s EEO Policy and EEO is the Law.
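The "retries, idempotency, circuit breakers" patterns this listing names can be sketched compactly. Below is a minimal, illustrative Python circuit breaker — not Sprinklr's implementation; the class name, threshold, and timeout values are all placeholders chosen for the example.

```python
import time


class CircuitBreaker:
    """Minimal circuit breaker: open after N consecutive failures,
    then fail fast until a cooldown elapses (half-open trial call)."""

    def __init__(self, failure_threshold=3, reset_timeout=30.0):
        self.failure_threshold = failure_threshold
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, func, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            # Cooldown elapsed: half-open, allow one trial call through.
        try:
            result = func(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.failure_threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        else:
            self.failures = 0
            self.opened_at = None  # success closes the circuit
            return result
```

In a real service, the wrapped call would be a downstream API or database request, and the breaker prevents a struggling dependency from being hammered with retries.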

Posted 1 month ago

Apply

0 years

0 Lacs

India

On-site

Job Summary: We are seeking an experienced Azure DevOps Engineer with strong knowledge of CI/CD pipelines and Kubernetes. The candidate should be proficient in automating infrastructure, managing deployments, and maintaining a scalable cloud-native environment on Microsoft Azure.

Key Responsibilities:
- Design, implement, and manage CI/CD pipelines using Azure DevOps.
- Automate infrastructure provisioning using IaC tools (e.g., ARM, Bicep, Terraform).
- Manage and scale Kubernetes clusters on Azure (AKS).
- Monitor application performance and troubleshoot deployment issues.
- Collaborate with development, QA, and security teams to ensure seamless delivery.
- Implement security and compliance best practices in the DevOps process.
- Maintain version control, build, release management, and artifact repositories.
- Support containerization and orchestration of applications.

Key Skills:
- Strong hands-on experience with Azure DevOps (Pipelines, Repos, Artifacts)
- Expertise in Kubernetes (preferably AKS) and Docker
- Proficiency in scripting (PowerShell, Bash, or Python)
- Experience with IaC tools (Terraform, ARM Templates, Bicep)
- Good understanding of Azure services: App Services, Key Vault, Storage, etc.
- Knowledge of monitoring tools like Azure Monitor, App Insights, or Prometheus/Grafana
- Familiarity with GitOps, Agile methodologies, and DevSecOps practices
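Troubleshooting deployment issues, as this role describes, often comes down to retrying transient failures (flaky API calls, slow cluster rollouts) with exponential backoff and jitter. Here is a generic Python sketch of that technique — the function name and default values are illustrative, not part of any Azure DevOps API.

```python
import random
import time


def retry_with_backoff(func, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a flaky operation with exponential backoff and full jitter.
    Re-raises the last exception if every attempt fails."""
    for attempt in range(max_attempts):
        try:
            return func()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the failure
            # Full jitter: sleep a random fraction of the capped backoff,
            # so many retrying clients don't hammer the service in sync.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, delay))
```

The jitter matters in practice: without it, all clients that failed at the same moment retry at the same moment, recreating the overload.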

Posted 1 month ago

Apply

5.0 years

0 Lacs

Thiruvananthapuram, Kerala, India

Remote

We're Hiring: Senior DevOps Engineer | LogicPlum
📍 Location: Trivandrum
🕒 Experience: 3–5 Years
💼 Type: Full-Time

About Us
LogicPlum is an advanced AI platform that enables businesses to harness the full power of machine learning without needing a team of data scientists. We make AI accessible, scalable, and enterprise-ready — empowering organizations to build, deploy, and manage predictive models with speed and precision. With LogicPlum, companies can make smarter decisions, reduce operational costs, and unlock new value from their data. We are a team of innovators, problem-solvers, and engineers committed to simplifying the complexity of AI for real-world impact. As we scale, we’re looking for a Senior DevOps Engineer to strengthen our engineering team and ensure our infrastructure is robust, scalable, and secure.

Role Overview
We’re seeking a highly skilled and experienced Senior DevOps Engineer who thrives in cloud-native environments. This role demands hands-on expertise in Azure infrastructure, Kubernetes, Docker, and setting up end-to-end CI/CD pipelines. You’ll play a critical role in maintaining, scaling, and automating our infrastructure to support rapid growth and high availability.

Key Responsibilities
🔹 Design, build, and maintain secure and scalable infrastructure on Microsoft Azure
🔹 Deploy and manage containerized applications using Kubernetes (AKS) and Docker
🔹 Build and maintain CI/CD pipelines for seamless code integration and deployment using tools like GitHub Actions, Azure DevOps, Jenkins, or similar
🔹 Automate infrastructure provisioning and configuration using Terraform, Ansible, or similar tools
🔹 Monitor system performance and cost optimization, and implement logging and alerting (Prometheus, Grafana, ELK, etc.)
🔹 Ensure security best practices, compliance, and data protection standards
🔹 Collaborate closely with software engineers, data scientists, and QA teams to streamline development workflows
🔹 Perform regular system audits, updates, and disaster recovery simulations
🔹 Mentor junior team members and contribute to technical decision-making

Must-Have Skills
✅ 3–5 years of hands-on experience in a Senior DevOps or similar role
✅ Strong proficiency in Microsoft Azure cloud services and architecture
✅ Expertise in Kubernetes (AKS preferred) and Docker containerization
✅ Solid experience in setting up and managing CI/CD pipelines
✅ Infrastructure as Code (IaC) knowledge – Terraform or Bicep
✅ Scripting skills in Bash, Python, or PowerShell
✅ Strong understanding of security, networking, and system architecture
✅ Familiarity with monitoring and alerting tools (Datadog, Prometheus, Azure Monitor, etc.)
✅ Basic database knowledge and ability to work with SQL in MySQL or PostgreSQL
✅ Experience managing Microsoft Entra ID (formerly Azure Active Directory), including role-based access control (RBAC), identity and access management (IAM), and enterprise-level policy enforcement

Nice-to-Have Skills
➕ Experience with other cloud providers (AWS, GCP)
➕ Knowledge of microservices, service mesh, and API gateways
➕ Experience with GitOps practices and tools like ArgoCD or Flux
➕ Exposure to machine learning or AI-driven workflows (a bonus in the LogicPlum context)

Why Join LogicPlum?
✨ Work with cutting-edge AI and cloud technologies
✨ Be part of a supportive, ambitious, and fast-moving team
✨ Great work culture with flexibility
✨ Competitive compensation & growth opportunities

Ready to take your DevOps career to the next level? Apply now or tag someone who fits the role! 📧 talent@logicplum.com
#Hiring #DevOpsEngineer #Azure #Kubernetes #Docker #CI_CD #LogicPlum #CloudJobs #SeniorDevOps #TechJobs #InfrastructureEngineering #WeAreHiring #RemoteJobs
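The RBAC requirement above (role-based access control scoped across environments) boils down to a simple check: does any of the user's role assignments grant this action at this scope or a parent scope? The sketch below is a hypothetical pure-Python model of that rule — the role names, scope paths, and data shapes are illustrative, not Microsoft Entra ID's actual API.

```python
# Illustrative RBAC sketch. Roles map to permission sets; assignments are
# (user, role, scope) triples, and scopes inherit down a path hierarchy.
ROLES = {
    "reader": {"read"},
    "contributor": {"read", "write"},
    "owner": {"read", "write", "manage_access"},
}


def is_allowed(assignments, user, action, scope):
    """Allow if the user holds a role granting `action` on `scope`
    itself or on any ancestor scope (e.g. a subscription-level role
    applies to every resource group beneath it)."""
    for assigned_user, role, assigned_scope in assignments:
        if assigned_user != user:
            continue
        # Match the exact scope or any parent prefix on the path.
        if scope == assigned_scope or scope.startswith(assigned_scope + "/"):
            if action in ROLES.get(role, set()):
                return True
    return False  # default deny: least privilege
```

Defaulting to deny and inheriting permissions only downward (never upward) is the core of the least-privilege posture these listings keep asking for.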

Posted 1 month ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

An intermediate-level position responsible for participation in the establishment and implementation of new or revised application systems and programs in coordination with the Technology team. The overall objective of this role is to contribute to applications systems analysis and programming activities.

Responsibilities:
- Utilize knowledge of applications development procedures and concepts, and basic knowledge of other technical areas, to identify and define necessary system enhancements, including using script tools and analyzing/interpreting code
- Consult with users, clients, and other technology groups on issues; recommend programming solutions; and install and support customer exposure systems
- Apply fundamental knowledge of programming languages for design specifications
- Analyze applications to identify vulnerabilities and security issues, and conduct testing and debugging
- Serve as advisor or coach to new or lower-level analysts
- Identify problems, analyze information, and make evaluative judgements to recommend and implement solutions
- Resolve issues by identifying and selecting solutions through the application of acquired technical experience, guided by precedents
- Operate with a limited level of direct supervision, exercising independence of judgement and autonomy; act as SME to senior stakeholders and/or other team members
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency

Qualifications:
- 5 to 9 years of relevant experience
- Excellent working knowledge of OpenShift - subject matter expert
- Operational experience of deploying and running infrastructure and services at scale on top of OpenShift, Kubernetes, and Docker
- Operational experience with orchestration tools for CI/CD and Infrastructure-as-Code tooling (Ansible, Terraform)
- Excellent working knowledge of key computer science concepts (networking, operating systems, storage, virtualization, containerization, etc.)
- Excellent understanding of software engineering concepts such as the Software Development Life Cycle and GitOps
- Excellent debugging and analytical skills: ability to isolate root cause across networking/infrastructure, application, and database stacks

Education: Bachelor’s degree/University degree or equivalent experience

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Technology
Job Family: Applications Development
Time Type: Full time
Most Relevant Skills: Please see the requirements listed above.
Other Relevant Skills: For complementary skills, please see above and/or contact the recruiter.

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 month ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary: We are seeking a highly skilled and experienced Lead Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecting infrastructure that survives and thrives in production. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance versus spend. You will also provide technical leadership and mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS)
- Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability
- Implement robust access control via IAM roles and policy orchestration, ensuring least privilege and auditability across multi-environment deployments
- Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments
- Support deployment of infrastructure Lambda functions
- Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment
- Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments
- Ensure auditability and observability of pipeline states
- Implement security best practices, audit, and compliance requirements within the infrastructure
- Provide technical leadership, mentorship, and training to engineering staff
- Engage with clients to understand their technical and business requirements, and provide tailored solutions
- If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads
- Troubleshoot and resolve complex infrastructure issues
- Potentially participate in pre-sales activities and provide technical expertise to sales teams

Qualifications:
- 10+ years of experience in an Infrastructure Engineer or similar role
- Extensive experience with Amazon Web Services (AWS)
- Proven ability to architect for scale, availability, and high-performance workloads
- Ability to plan and execute zero-disruption migrations
- Experience with enterprise IAM and familiarity with authentication technologies such as OAuth2 and OIDC
- Deep knowledge of Infrastructure as Code (IaC) with Terraform and/or AWS CDK
- Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD)
- Solid understanding of Git, branching models, CI/CD pipelines, and deployment strategies
- Experience with security, audit, and compliance best practices
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders
- Experience in technical leadership, mentoring, team-forming, and fostering self-organization and ownership
- Experience with client relationship management and project planning

Certifications: Relevant certifications (for example, Certified Kubernetes Administrator, AWS Certified Solutions Architect - Professional, AWS Certified DevOps Engineer - Professional, etc.)
- Software development experience (for example, Terraform, Python)
- Experience with machine learning infrastructure

Education: B.Tech/BE in computer science, a related field, or equivalent experience
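The canary deployments this role focuses on depend on one small mechanism: routing a stable fraction of traffic to the new version so each user consistently sees the same variant while the percentage ramps up. A hedged Python sketch of that routing decision (the hashing scheme and names are illustrative; real setups usually delegate this to a service mesh or load balancer):

```python
import hashlib


def route_version(user_id, canary_percent):
    """Deterministically route a user to 'canary' or 'stable'.
    Hashing the user ID (rather than picking randomly per request)
    keeps each user pinned to one version during the rollout."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] * 256 + digest[1]  # uniform bucket in 0..65535
    return "canary" if bucket < 65536 * canary_percent / 100 else "stable"
```

Ramping a rollout is then just raising `canary_percent` in steps (e.g. 1 → 5 → 20 → 100) while watching error rates, and blue/green is the degenerate case of flipping straight from 0 to 100.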

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Summary: We are seeking a highly skilled and experienced Senior Infrastructure Engineer to join our dynamic team. The ideal candidate will be passionate about building and maintaining complex systems, with a holistic approach to architecture. You will play a key role in designing, implementing, and managing cloud infrastructure, ensuring scalability, availability, security, and optimal performance. You will also provide mentorship to other engineers, and engage with clients to understand their needs and deliver effective solutions.

Responsibilities:
- Design, architect, and implement scalable, highly available, and secure infrastructure solutions, primarily on Amazon Web Services (AWS)
- Develop and maintain Infrastructure as Code (IaC) using Terraform or AWS CDK for enterprise-scale maintainability and repeatability
- Implement robust access control via IAM roles and policy orchestration, ensuring least privilege and auditability across multi-environment deployments
- Contribute to secure, scalable identity and access patterns, including OAuth2-based authorization flows and dynamic IAM role mapping across environments
- Support deployment of infrastructure Lambda functions
- Troubleshoot issues and collaborate with cloud vendors on managed service reliability and roadmap alignment
- Utilize Kubernetes deployment tools such as Helm/Kustomize in combination with GitOps tools such as ArgoCD for container orchestration and management
- Design and implement CI/CD pipelines using platforms like GitHub, GitLab, Bitbucket, Cloud Build, Harness, etc., with a focus on rolling deployments, canaries, and blue/green deployments
- Ensure auditability and observability of pipeline states
- Implement security best practices, audit, and compliance requirements within the infrastructure
- Engage with clients to understand their technical and business requirements, and provide tailored solutions
- If needed, lead agile ceremonies and project planning, including developing agile boards and backlogs with support from our Service Delivery Leads
- Troubleshoot and resolve complex infrastructure issues

Qualifications:
- 6+ years of experience in infrastructure engineering or a similar role
- Extensive experience with Amazon Web Services (AWS)
- Proven ability to architect for scale, availability, and high-performance workloads
- Deep knowledge of Infrastructure as Code (IaC) with Terraform
- Strong experience with Kubernetes and related tools (Helm, Kustomize, ArgoCD)
- Solid understanding of Git, branching models, CI/CD pipelines, and deployment strategies
- Experience with security, audit, and compliance best practices
- Excellent problem-solving and analytical skills
- Strong communication and interpersonal skills, with the ability to engage with both technical and non-technical stakeholders
- Experience in technical mentoring, team-forming, and fostering self-organization and ownership
- Experience with client relationship management and project planning

Certifications: Relevant certifications (e.g., Certified Kubernetes Administrator, AWS Certified Machine Learning Engineer - Associate, AWS Certified Data Engineer - Associate, AWS Certified Developer - Associate, etc.)
- Software development experience (e.g., Terraform, Python)
- Experience/exposure with machine learning infrastructure

Education: B.Tech/BE in computer science, a related field, or equivalent experience
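The GitOps tooling named above (ArgoCD, Flux) is built around one loop: diff the desired state declared in Git against the live cluster state, and apply whatever actions close the gap. A simplified Python sketch of that reconcile step, with dicts standing in for Kubernetes resources (the data shapes here are illustrative, not ArgoCD's internals):

```python
def reconcile(desired, live):
    """Compare desired state (from Git) with live cluster state and
    return the actions a GitOps controller would apply.
    Both arguments map resource name -> spec."""
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))      # missing resource
        elif live[name] != spec:
            actions.append(("update", name))      # drifted resource
    for name in live:
        if name not in desired:
            actions.append(("delete", name))      # prune: not in Git
    return sorted(actions)
```

Running this loop continuously is what makes Git the single source of truth: manual cluster edits show up as drift and are reverted on the next sync.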

Posted 1 month ago

Apply

14.0 - 18.0 years

60 - 65 Lacs

Hyderabad

Work from Office

Responsibilities:
- Design solutions within Azure and Kubernetes (AKS), in line with functional and non-functional requirements, as part of a scrum team.
- Ensure that the designs align with the Enterprise IT and reference architecture.
- Expand the reference architecture based on evolving needs and industry trends.
- Oversee the implementation of the solutions you have designed.
- Document architecture decisions for future reference.
- Contribute to the architecture governance board.

Experience and Knowledge:
- Minimum of 15 years of IT experience.
- Strong knowledge of cloud-native architectures and cloud design patterns.
- Experience in application architecture based on Azure PaaS services and Azure Kubernetes Service (AKS).
- Proficiency in working with container-based solutions, in particular AKS.
- Proficiency in Azure PaaS services such as Azure Functions, API Management, and Azure Cache for Redis.
- Ability to anticipate Azure costs.
- Familiarity with different data stores (SQL DB, NoSQL (Cosmos DB, Redis), etc.) and their advantages and disadvantages.
- Understanding of messaging services like Event Hubs, Event Grid, and Service Bus.
- Familiarity with the hub-and-spoke topology and the most common network building blocks.
- Experience with automation and CI/CD pipelines, in both DevOps and GitOps organizations.
- Experience in architecting secure solutions, including IAM integration with Azure AD using OpenID Connect, SCIM, and OAuth protocols.
- Working knowledge of Agile/Scrum methodologies. Experience in a SAFe working environment is a plus.
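Azure Cache for Redis, listed above, is typically used in the cache-aside pattern: check the cache first, fall back to the source of truth on a miss, and populate the cache with a TTL. A minimal Python sketch of the pattern, with a plain dict standing in for Redis (class and parameter names are illustrative):

```python
import time


class CacheAside:
    """Cache-aside sketch: read through a cache in front of a slower
    backing store. A dict stands in for Redis here; in production the
    get/set calls would go to the cache service instead."""

    def __init__(self, fetch, ttl=60.0):
        self.fetch = fetch      # callable that loads from the backing store
        self.ttl = ttl          # seconds before a cached value expires
        self.cache = {}         # key -> (value, expires_at)

    def get(self, key):
        entry = self.cache.get(key)
        if entry is not None and entry[1] > time.monotonic():
            return entry[0]     # cache hit: skip the backing store
        value = self.fetch(key)  # cache miss: load from source of truth
        self.cache[key] = (value, time.monotonic() + self.ttl)
        return value
```

The TTL is also where cost anticipation shows up in practice: a longer TTL cuts backing-store load (and spend) at the price of staler reads.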

Posted 1 month ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing…
You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation, working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies.

- Leverage monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Design and configure alerts to ensure timely responses to critical events.
- Work with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure.
- Implement security best practices for cloud and containerized services and ensure adherence to security protocols. Configure IAM roles, VPC security, encryption, and compliance policies.
- Continuously optimize cloud infrastructure for performance, scalability, and cost-effectiveness. Use tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies.
- Work closely with the engineering and operations teams to design and implement cloud-based solutions.
- Provide mentorship and support to team members while sharing best practices for cloud engineering.
- Maintain detailed documentation of cloud architecture and platform configurations, and regularly provide status reports, performance metrics, and cost analysis to leadership.

What We’re Looking For...
You’ll need to have:
- Bachelor’s degree or four or more years of work experience.
- Four or more years of relevant work experience.
- Four or more years of work experience in Kubernetes administration.
- Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI.
- GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VSC/Pipelines/Runners, Artifactory.
- Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards.
- Backend development experience with languages including Golang (preferred), Spring Boot, and Python.
- Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices.
- Familiarity with cloud cost optimization (e.g., Kubecost).
- Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server.
- Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis.
- Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.).
- Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with Service Mesh lifecycle management, and configuring and troubleshooting applications deployed on Service Mesh and Service Mesh-related issues.
- Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA & Gatekeeper policies.
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, etc.
- Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.).
- Networking of microservices, with a solid understanding of Kubernetes networking and troubleshooting.
- Certified Kubernetes Administrator (CKA).
- Demonstrated very strong troubleshooting and problem-solving skills.
- Excellent verbal and written communication skills.

Even better if you have one or more of the following:
- Certified Kubernetes Application Developer (CKAD)
- Red Hat Certified OpenShift Administrator
- Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Database experience (RDBMS, NoSQL, etc.)

Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability, or any other legally protected characteristics.
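The "design and configure alerts" responsibility above usually means threshold alerts that fire only after a condition has held for a sustained window, the way a Prometheus rule's `for:` clause works, so brief spikes don't page anyone. A simplified Python sketch of that evaluation (function name and the sample format are illustrative, not Prometheus internals):

```python
def should_fire(samples, threshold, for_seconds):
    """samples: list of (timestamp, value) pairs sorted by time.
    Fire only when every sample in the trailing `for_seconds` window
    exceeds the threshold and the series actually covers that much
    history, mimicking a Prometheus-style `for:` clause."""
    if not samples or samples[-1][0] - samples[0][0] < for_seconds:
        return False  # not enough history to judge a sustained breach
    latest = samples[-1][0]
    window = [v for t, v in samples if t >= latest - for_seconds]
    return all(v > threshold for v in window)
```

Requiring the breach to persist for the whole window trades a little detection latency for far fewer false pages, which is the usual alert-design balance.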

Posted 1 month ago

Apply

0 years

0 Lacs

India

Remote

About The Company We’re a fully female-founded company on a mission to change the way people search and shop online for fashion…forever! We’re going to spark a new era of fashion discovery, igniting confidence in everybody and every body, and to create a world where fashion confidence starts with us. We are at the beginning of an exciting journey and we’re looking for top talent to join our team. Unlike many start-ups we’re well funded, have a detailed business and financial plan, and are looking for experienced, passionate professionals to join us in creating and scaling a game changing business. So if you want a role where you will make a major impact and want to be a part of a team of women building an incredible product and experience for other women, come join us and make the most Savi move of your career! Position Overview: We're looking for an experienced Backend Engineer to architect and maintain our platform core services and infrastructure. This role is perfect for someone who thrives on building scalable, high-performance systems and enjoys empowering cross-functional teams through reliable backend services. You'll design and implement backend services that empower our Product vision, and optimise a cloud infrastructure to ensure our systems are secure, resilient, and ready for growth. Key Responsibilities Technology Strategy: Shape the backend architecture and technical direction supporting our Product vision, implementing business rules and domain models, ensuring scalability, security, and reliability. Backend Service Development: Build and maintain core services for Product Catalogue, user Profiles, and Asset Management. API Design & Development: Create robust, well-documented APIs for internal and external use, with comprehensive contract testing and verification. 
Build, Deployment & Release Management: Ensure reproducible builds, provisioning, deployment, and releases by leveraging modern techniques such as Infrastructure as Code, GitOps, and other automated deployment methodologies. Observability, High Availability & Scalability: Ensure high system reliability and performance by implementing strong observability practices, including comprehensive metrics collection, structured logging, actionable dashboards, and proactive alerting. Design fault-tolerant systems ready for growth, implementing stress testing and disaster recovery. Security & Access Management: Implement secure authentication and authorization mechanisms while maintaining security compliance and data protection across all systems. Ways of Working & Cross-functional Collaboration: Lead technical proposals through RFCs and contribute to Architecture Decision Records (ADRs) to align engineering efforts. Partner with frontend developers, data engineers, and ML teams to ensure seamless integration. Desirable Experience Please note: You do not need to meet every criteria listed below. Specific technologies are provided as examples of the tools and practices we value. Scalable Backend Development: Designing and implementing scalable backend services using modern programming languages such as Kotlin, Python, Go, or Java with open-source frameworks like Ktor, Quarkus, FastAPI or Gin. Cloud Computing: Experience with AWS services such as ECS, S3, Lambda, API Gateway, and CloudFront, or equivalent cloud platforms. Database Management: Working with SQL (PostgreSQL, MySQL) and NoSQL (DynamoDB, MongoDB) databases, including query optimization and indexing strategies. Microservices & Distributed Systems: Implementing service communication, discovery, load balancing, and resilience using tools such as gRPC, Kafka, Envoy, Istio, or Consul. API Development: Developing public APIs optimized for various use cases, including RESTful, GraphQL, WebSockets, and gRPC-web. 
Infrastructure & Automation: Managing Infrastructure as Code using Terraform, AWS CDK, or CloudFormation for automated provisioning and deployment. Authentication & Security: Implementing authentication and authorization protocols such as OAuth, JWT, and OpenID Connect. Observability & Logging: Experience with observability tools and logging collection strategies using Prometheus, Grafana, Loki, ELK stack, or Datadog. Performance Optimization & Resilience: Applying performance tuning, caching strategies, circuit breakers, and fault tolerance techniques to ensure reliability and efficiency. Supporting Data Teams: Collaborating with data science, ETL, and web scraping teams by providing scalable and efficient backend solutions. Who you are: Systematic Thinker & Problem Solver: You craft robust architectures, anticipate edge cases, and implement elegant, scalable solutions. Security & Performance Focused: You build systems that are both secure and efficient. Collaborative & Cross-functional: You excel at enabling teams through clear documentation and reliable services. Continuous Learner: You stay current with backend technologies and best practices. Ownership & Autonomy: You take initiative and drive development with minimal oversight. PLEASE NOTE: If you don't meet 100% of the criteria but are passionate about our mission and think you can do the job—especially if our values and work principles resonate with you—we strongly encourage you to apply! Location + Work Style We’ll all be where we need to be based on what’s happening. We’ll have in-person team sessions (usually once a week) as needed for key activities like planning, strategy, and brainstorming sessions (and some fun!), and remote work the rest of the time to allow for flexibility, work-life balance, and quiet time for deep work. 
Work Schedule
As the client is UK-based, you will be required to work UK daytime: Monday to Friday, 14:00 - 22:00 IST (08:30 - 16:30 GMT).
Pay & Benefits - What you’ll get in return:
Annual CTC: 20 to 25 lakhs
Device: Laptop provided
Fully remote role
Be part of a passionate, friendly and transparent culture which encourages your suggestions for improvement as we grow
Full training and ongoing support
An unparalleled opportunity to be a team member of the UK’s largest mobile servicing, maintenance and repair business
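The resilience techniques this listing names (circuit breakers, fault tolerance) can be sketched in a few lines of Python. This is a minimal illustrative sketch, not any particular library's API; the class name and thresholds are assumptions:

```python
import time

class CircuitBreaker:
    """Minimal circuit-breaker sketch: open after `max_failures`
    consecutive failures, then fail fast until `reset_timeout` elapses."""

    def __init__(self, max_failures=3, reset_timeout=30.0):
        self.max_failures = max_failures
        self.reset_timeout = reset_timeout
        self.failures = 0
        self.opened_at = None  # None means the circuit is closed

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0  # any success closes the circuit again
        return result
```

Production systems would typically reach for an existing resilience library rather than hand-rolling this, but the open/half-open/closed state machine above is the core idea interviewers probe for.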

Posted 1 month ago

Apply

3.0 years

0 Lacs

Hyderābād

On-site

Job Description
Job Location: Hyderabad
Job Duration: Full time
Hours: 9:00am to 5:00pm
We’re building something solid — and we’re nearly there. Our team has been steadily laying the foundation for a robust DevOps practice to support our Azure-based data platform. The team is in place, core processes are already running, and now we’re ready to level up. The goal is to make deployments faster, more reliable, and less dependent on manual work – so developers can focus on building. We’re looking for a hands-on DevOps Engineer who can work independently and take ownership of topics end-to-end.
Key Responsibilities
What You’ll Do:
Design and implement GitHub Actions workflows for Azure Databricks, database solutions, Azure Functions, App Services, REST API solutions (APIOps), Power BI solutions, and AI/ML solutions (MLOps)
Define the pull request flow, including pull request, review, merge, build, acceptance, and deployment stages
Understand the deployment needs of developers and define GitHub Actions workflows for each project, which developers will use to deploy their code to production
Propose scalable architecture solutions to support development and operations
Install software and configure GitHub Actions runners
Contribute light infrastructure automation using Terraform when required
Guiding and Co-operation:
Be the “go-to person” for developers, providing clarification grounded in an understanding of the overall architecture
Support the operations and development teams in establishing proper processes and ensuring development adheres to them
Technical Experience
University degree in Computer Science or a similar field of studies
3+ years’ experience setting up GitOps processes and creating GitHub Actions workflows
Basic experience with Terraform and Infrastructure as Code (IaC)
Strong understanding of the following Azure services: Azure Storage accounts (ADLS), Azure Function Apps, App Services, and Databricks hosted in Azure
A background ideally in both data solution development and CI/CD automation
Very high motivation to help and guide teammates to succeed in projects
Fluent in English
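The pull-request flow described above (review, build, acceptance, then merge and deploy) reduces to a small gating decision. A minimal sketch of that gate, with hypothetical field and check names; a real setup would enforce this via branch protection rules rather than code:

```python
from dataclasses import dataclass, field

@dataclass
class PullRequest:
    approvals: int = 0
    # check name -> "success" / "failure" / "pending"
    checks: dict = field(default_factory=dict)
    target_branch: str = "main"

def can_merge(pr: PullRequest, required_approvals: int = 1) -> bool:
    """Allow merge/deploy only with enough review approvals
    and every required status check green."""
    checks_green = bool(pr.checks) and all(
        status == "success" for status in pr.checks.values()
    )
    return pr.approvals >= required_approvals and checks_green
```

The same predicate, expressed declaratively, is what branch protection and required-status-check settings encode in the hosting platform.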

Posted 1 month ago

Apply

3.0 years

5 - 6 Lacs

Gurgaon

On-site

Key Responsibilities 1. Cloud Infrastructure & Scalability Architect and support AWS & Azure–based environments across ERP, PLM, SCM, and monitoring products. Optimize services like EC2, S3, Lambda, VPCs, etc., to reduce cost and improve performance. Design and maintain high‑availability and disaster recovery architectures. Implement cross-region S3 replication and snapshot backup automation using Python Lambda functions. 2. CI/CD Pipeline & Containerization Build and maintain end‑to‑end CI/CD pipelines using Jenkins, GitLab CI/CD, Docker & Terraform. Implement GitOps best practices to reduce deployment failures and increase deployment frequency. Work with microservices and migrate monolithic systems into container-based architectures. 3. Infrastructure as Code & Configuration Management Author and maintain IaC with Terraform and CloudFormation. Use Ansible for Linux/Windows config, including automated SSL certificate renewals on IIS. 4. Monitoring, Observability & Logging Implement monitoring and observability via ELK Stack, Prometheus, Grafana, CloudWatch, CloudWatch Logs Insights & SNS alerts. Automate scaling of monitoring tools with Terraform & Kubernetes. 5. Security & Networking Enforce secure networking (VPC, subnets, IAM, NACLs) and best practices in RBAC and secret management. Configure API Gateway, Nginx, rate‑limiting, caching & CloudFront for secure API delivery. 6. Support, Collaboration & Mentorship Troubleshoot production incidents and mentor junior engineers to foster a DevOps culture. Document processes, create runbooks, and promote team-wide automation. Required Skills & Experience Experience: 3+ years in DevOps, SRE, cloud or automation engineering roles. Cloud Platforms: AWS (EC2, S3, Lambda, CloudWatch, VPC), Azure. CI/CD & Containerization: Jenkins, GitLab CI/CD, Docker, Kubernetes (including kubeadm-based clusters). IaC & Config Management: Terraform, Ansible, CloudFormation. Languages: Python, Bash, PowerShell. 
Monitoring Tools: Prometheus, Grafana, ELK Stack, AWS CloudWatch. Web & API Infrastructure: Nginx, API Gateway, CloudFront. OS & Networking: Linux + Windows admin, VPC, IAM, subnets, security groups. Certifications: AWS Certified Solutions Architect Associate, Azure AZ‑900 (or equivalent). Job Type: Full-time Pay: ₹500,000.00 - ₹600,000.00 per year Benefits: Provident Fund Schedule: Day shift Work Location: In person
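The snapshot backup automation this listing mentions usually pairs creation with a retention/pruning step. The sketch below shows only the retention arithmetic; a real Lambda would list and delete snapshots through the AWS SDK, and the function and argument names here are illustrative assumptions:

```python
from datetime import datetime, timedelta

def snapshots_to_delete(snapshots, keep_days=7, now=None):
    """Given (snapshot_id, created_at) pairs, return the ids that fall
    outside the retention window -- the pruning step of a backup job."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=keep_days)
    return [sid for sid, created in snapshots if created < cutoff]
```

In an actual Lambda handler, the pair list would come from a describe-snapshots API call filtered by a backup tag, and each returned id would be passed to a delete call, with the retention window supplied via an environment variable.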

Posted 1 month ago

Apply

5.0 years

0 Lacs

Andhra Pradesh

On-site

Overview: We are seeking a skilled and proactive Support Engineer with deep expertise in Azure cloud services, Kubernetes, and DevOps practices, backed by 5+ years of industry experience with these technologies. The ideal candidate will have experience working with Azure services, including Kubernetes, API management, monitoring tools, and various cloud infrastructure services. You will be responsible for providing technical support, managing cloud-based systems, troubleshooting complex issues, and ensuring smooth operation and optimization of services within the Azure ecosystem.
Key Responsibilities:
Provide technical support for Azure-based cloud services, including Azure Kubernetes Service (AKS), Azure API Management, Application Gateway, Web Application Firewall, and Azure Monitor with KQL queries.
Manage and troubleshoot various Azure services such as Event Hub, Azure SQL, Application Insights, Virtual Networks and WAF.
Work with Kubernetes environments: troubleshoot deployments, utilize Helm charts, check resource utilization, and manage GitOps processes.
Utilize Terraform to automate cloud infrastructure provisioning, configuration, and management.
Troubleshoot and resolve issues in MongoDB and Microsoft SQL Server databases, ensuring high availability and performance.
Monitor cloud infrastructure health using Grafana and Azure Monitor, providing insights and proactive alerts.
Provide root-cause analysis for technical incidents, and propose and implement corrective actions to prevent recurrence.
Continuously optimize cloud services and infrastructure to improve performance, scalability, and security.
Required Skills & Qualifications:
Azure certification (e.g., Azure Solutions Architect, Azure Administrator) with hands-on experience in Azure services such as AKS, API Management, Application Gateway, WAF, and others.
Any Kubernetes certification (e.g., CKAD or CKA), with strong hands-on expertise in Kubernetes, Helm charts, and GitOps principles for managing and troubleshooting deployments.
Hands-on experience with Terraform for infrastructure automation and configuration management.
Proven experience in MongoDB and Microsoft SQL Server, including deployment, maintenance, performance tuning, and troubleshooting.
Familiarity with Grafana for monitoring, alerting, and visualization of cloud-based services.
Experience using Azure DevOps tools, including Repos and Pipelines for CI/CD automation and source code management.
Strong knowledge of Azure Monitor, KQL queries, Event Hub, and Application Insights for troubleshooting and monitoring cloud infrastructure.
Solid understanding of Virtual Networks, WAF, firewalls, and other related Azure networking tools.
Excellent troubleshooting, analytical, and problem-solving skills.
Strong written and verbal communication skills, with the ability to explain complex technical issues to non-technical stakeholders.
Ability to work in a fast-paced environment and manage multiple priorities effectively.
Preferred Skills:
Experience with cloud security best practices in Azure.
Knowledge of infrastructure as code (IaC) concepts and tools.
Familiarity with containerized applications and Docker.
Education: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent work experience.
About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa.
We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
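The resource-utilization checks this role calls for often come down to comparing observed usage against configured limits. A minimal sketch with hypothetical pod names and millicore values; in practice the inputs would come from the Kubernetes metrics API or a monitoring backend:

```python
def overutilized_pods(usage_m, limits_m, threshold=0.8):
    """Flag pods whose CPU usage (millicores) is at or above `threshold`
    of their configured limit -- the check behind a proactive alert."""
    flagged = []
    for pod, used in usage_m.items():
        limit = limits_m.get(pod)
        if limit and used / limit >= threshold:
            flagged.append(pod)
    return sorted(flagged)
```

The same comparison is what a Grafana alert rule or a KQL query over container metrics would express declaratively.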

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Are you passionate about building resilient, scalable, and secure cloud platforms? Join the Platform Engineering team at Xebia, where we are transforming enterprise data landscapes with cutting-edge cloud-native architectures and DevOps-driven delivery. This role is ideal for engineers who thrive at the intersection of Python, Azure, Big Data, and DevOps, and are ready to lead by design and automation.
What You’ll Do:
Design, build, and automate robust cloud platforms on Azure
Enable data-driven architectures using Azure PaaS and the Cloudera stack
Ensure performance, security, and reliability across scalable systems
Drive infrastructure automation and deployment with modern DevOps tooling
Collaborate with cross-functional teams to deliver platform solutions at scale
Your Tech Superpowers: We’re looking for engineers with hands-on expertise in:
🔹 Programming & Platform Services: Python; Azure PaaS: Event Hub, ADF, Azure Functions, Databricks, Synapse, Cosmos DB, ADLS Gen2
🔹 Cloud & Big Data: Microsoft Azure, Cloudera ecosystem
🔹 Data Tools & Visualizations: Dataiku, Power BI, Tableau
🔹 DevOps & Infrastructure as Code (IaC): CI/CD, GitOps, Terraform, Docker & Kubernetes
🔹 Security & Networking: Inter-service communication & resilient Azure architecture, Identity & Access Management (IAM)
🔹 Ways of Working: Agile mindset; clean code, automation-first, test-driven practices
Why Join Xebia?
Competitive compensation & world-class benefits
Work with global clients on modern engineering challenges
Upskill through structured learning, certifications & mentorship
A culture built on trust, innovation & ownership
Freedom to build, lead, and grow without limits

Posted 1 month ago

Apply

7.0 - 9.0 years

7 - 9 Lacs

Delhi, India

On-site

Apply DevOps principles and Agile practices, including Infrastructure as Code (IaC) and GitOps, to streamline and enhance development workflows.
Infrastructure Management: Oversee the management of Linux-based infrastructure and understand networking concepts, including microservices communication and service mesh implementations.
Containerization & Orchestration: Leverage Docker and Kubernetes for containerization and orchestration, with experience in service discovery, auto-scaling, and network policies.
Automation & Scripting: Automate infrastructure management using advanced scripting and IaC tools such as Terraform, Ansible, Helm charts, and Python.
AWS and Azure Services Expertise: Utilize a broad range of AWS and Azure services, including IAM, EC2, S3, Glacier, VPC, Route53, EBS, EKS, ECS, RDS, Azure Virtual Machines, Azure Blob Storage, Azure Kubernetes Service (AKS), and Azure SQL Database, with a focus on integrating new cloud innovations.
Incident Management: Manage incidents related to GitLab pipelines and deployments, perform root cause analysis, and resolve issues to ensure high availability and reliability.
Development Processes: Define and optimize development, test, release, update, and support processes for GitLab CI/CD operations, incorporating continuous improvement practices.
Architecture & Development Participation: Contribute to architecture design and software development activities, ensuring alignment with industry best practices and GitLab capabilities.
Strategic Initiatives: Collaborate with the leadership team on process improvements, operational efficiency, and strategic technology initiatives related to GitLab and cloud services.
Required Skills & Qualifications:
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: 7-9+ years of hands-on experience with GitLab CI/CD, including implementing, configuring, and maintaining pipelines, along with substantial experience in AWS and Azure cloud services.
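The auto-scaling mentioned above is, in Kubernetes' Horizontal Pod Autoscaler, driven by a documented formula: desired = ceil(currentReplicas × currentMetricValue / targetMetricValue). A minimal sketch of that arithmetic (the min/max clamp and stabilization windows a real HPA applies are omitted):

```python
import math

def desired_replicas(current_replicas: int,
                     current_metric: float,
                     target_metric: float) -> int:
    """Replica count per the HPA scaling formula:
    desired = ceil(current * currentMetric / targetMetric)."""
    if current_replicas == 0:
        return 0
    return math.ceil(current_replicas * current_metric / target_metric)
```

For example, 4 pods averaging 90% CPU against a 60% target scale out to 6, while pods running well under target scale in, subject to the configured minReplicas floor.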

Posted 1 month ago

Apply

0.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bangalore, Karnataka, 560048
Category: Engineering
Job Type: Full time
Job Id: 1190169
Cloud Developer
This role has been designed as ‘Onsite’ with an expectation that you will primarily work from an HPE office.
Who We Are: Hewlett Packard Enterprise is the global edge-to-cloud company advancing the way people live and work. We help companies connect, protect, analyze, and act on their data and applications wherever they live, from edge to cloud, so they can turn insights into outcomes at the speed required to thrive in today’s complex world. Our culture thrives on finding new and better ways to accelerate what’s next. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good. If you are looking to stretch and grow your career, our culture will embrace you. Open up opportunities with HPE.
Job Description: In HPE Hybrid Cloud, we lead the innovation agenda and technology roadmap for all of HPE. This includes managing the design, development, and product portfolio of our next-generation cloud platform, GreenLake. Working with customers, we help them reimagine their information technology needs to deliver a simple, consumable solution that helps them drive their business results. Join us to redefine what’s next for you.
Job Family Definition: The Cloud Developer builds from the ground up to meet the needs of mission-critical applications, and is always looking for innovative approaches to deliver end-to-end technical solutions to solve customer problems. Brings technical thinking to break down complex data and to engineer new ideas and methods for solving, prototyping, designing, and implementing cloud-based solutions. Collaborates with project managers and development partners to ensure effective and efficient delivery, deployment, operation, monitoring, and support of Cloud engagements.
The Cloud Developer provides business value expertise to drive the development of innovative service offerings that enrich HPE's Cloud Services portfolio across multiple systems, platforms, and applications. Management Level Definition: Contributions include applying intermediate level of subject matter expertise to solve common technical problems. Acts as an informed team member providing analysis of information and recommendations for appropriate action. Works independently within an established framework and with moderate supervision. What you will do: Designs simple to moderate cloud application features as per specifications. Develops and maintains cloud application modules adhering to security policies. Designs test plans, develops, executes, and automates test cases for assigned portions of the developed code. Deploys code and troubleshoots issues in application modules and deployment environment. Shares and reviews innovative technical ideas with peers, high-level technical contributors, technical writers, and managers. Analyses science, engineering, business, and other data processing problems to develop and implement solutions to complex application problems, system administration issues, or network concerns. What you will need: Bachelor's degree in computer science, engineering, information systems, or closely related quantitative discipline. Master’s desirable. Typically, 2-4 years’ experience. Knowledge and Skills: Strong programming skills in Python or Golang. 
Expertise in developing microservices and deploying them in Kubernetes environments
Understanding and work experience in GitOps, DevOps, CI/CD tooling, package management software, and software deployment and lifecycle management
Experience in architecting software deployments, with scripting, deployment tools like Chef, Puppet, and Ansible, and orchestration tools like Terraform
Good to have: enterprise data center infrastructure knowledge (servers, storage, networking)
Experience with design methodologies, cloud-native applications, developer tools, managed services, and next-generation databases.
Good written and verbal communication skills.
Ability to quickly learn new skills and technologies and work well with other team members.
Understanding of DevOps practices like continuous integration/deployment and orchestration with Kubernetes.
Additional Skills: Cloud Architectures, Cross Domain Knowledge, Design Thinking, Development Fundamentals, DevOps, Distributed Computing, Microservices Fluency, Full Stack Development, Release Management, Security-First Mindset, User Experience (UX)
What We Can Offer You:
Health & Wellbeing: We strive to provide our team members and their loved ones with a comprehensive suite of benefits that supports their physical, financial and emotional wellbeing.
Personal & Professional Development: We also invest in your career because the better you are, the better we all are. We have specific programs catered to helping you reach any career goals you have — whether you want to become a knowledge expert in your field or apply your skills to another division.
Unconditional Inclusion: We are unconditionally inclusive in the way we work and celebrate individual uniqueness. We know varied backgrounds are valued and succeed here. We have the flexibility to manage our work and personal needs. We make bold moves, together, and are a force for good.
Let's Stay Connected: Follow @HPECareers on Instagram to see the latest on people, culture and tech at HPE. #india #hybridcloud Job: Engineering Job Level: TCP_02 HPE is an Equal Employment Opportunity/ Veterans/Disabled/LGBT employer. We do not discriminate on the basis of race, gender, or any other protected category, and all decisions we make are made on the basis of qualifications, merit, and business need. Our goal is to be one global team that is representative of our customers, in an inclusive environment where we can continue to innovate and grow together. Please click here: Equal Employment Opportunity. Hewlett Packard Enterprise is EEO Protected Veteran/ Individual with Disabilities. HPE will comply with all applicable laws related to employer use of arrest and conviction records, including laws requiring employers to consider for employment qualified applicants with criminal histories.
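The GitOps experience this role asks for centers on reconciliation: continuously diffing desired state (declared in Git) against actual cluster state and converging them. A toy sketch of that loop, with plain dicts standing in for manifests and cluster objects (real controllers like Argo CD or Flux operate on Kubernetes resources):

```python
def reconcile(desired, actual):
    """Diff desired state (from Git) against actual state and return
    the actions a GitOps controller would take to converge them."""
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(("create", name))
        elif actual[name] != spec:
            actions.append(("update", name))
    for name in actual:
        if name not in desired:
            actions.append(("delete", name))  # prune drifted extras
    return sorted(actions)
```

Running this comparison on every sync interval, rather than pushing changes imperatively, is what makes Git the single source of truth.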

Posted 1 month ago

Apply

0.0 - 2.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Location: Bangalore, Karnataka
Factspan Overview: Factspan is a pure-play analytics organization. We partner with Fortune 500 enterprises to build an analytics center of excellence, generating insights and solutions from raw data to solve business challenges, make strategic recommendations and implement new processes that help them succeed. With offices in Seattle, Washington, and Bangalore, India, we use a global delivery model to service our customers. Our customers include industry leaders from the Retail, Financial Services, Hospitality, and Technology sectors.
Job Description
As an LLM (Large Language Model) Engineer, you will be responsible for designing, optimizing, and standardizing the architecture, codebase, and deployment pipelines of LLM-based systems. Your primary mission will focus on modernizing legacy machine learning codebases (including 40+ models) for a major retail client—enabling consistency, modularity, observability, and readiness for GenAI-driven innovation. You’ll work at the intersection of ML, software engineering, and MLOps to enable seamless experimentation, robust infrastructure, and production-grade performance for language-driven systems. This role requires deep expertise in NLP, transformer-based models, and the evolving ecosystem of LLM operations (LLMOps), along with a hands-on approach to debugging, refactoring, and building unified frameworks for scalable GenAI workloads.
Responsibilities:
Lead the standardization and modernization of legacy ML codebases by aligning to current LLM architecture best practices.
Re-architect code for 40+ legacy ML models, ensuring modularity, documentation, and consistent design patterns.
Design and maintain pipelines for fine-tuning, evaluation, and inference of LLMs using Hugging Face, OpenAI, or open-source stacks (e.g., LLaMA, Mistral, Falcon).
Build frameworks to operationalize prompt engineering, retrieval-augmented generation (RAG), and few-shot/in-context learning methods.
Collaborate with Data Scientists, MLOps Engineers, and Platform teams to implement scalable CI/CD pipelines, feature stores, model registries, and unified experiment tracking.
Benchmark model performance, latency, and cost across multiple deployment environments (on-premise, GCP, Azure).
Develop governance, access control, and audit logging mechanisms for LLM outputs to ensure data safety and compliance.
Mentor engineering teams in code best practices, versioning, and LLM lifecycle maintenance.
2nd Floor, South Block, Vaishnavi Tech Park, Ambalipura, Sarjapur Marathahalli Rd, Bengaluru, Karnataka 560102 info@factspan.com
Key Skills:
Deep understanding of transformer architectures, tokenization, attention mechanisms, and training/inference optimization
Proven track record in standardizing ML systems using OOP design, reusable components, and scalable service APIs
Hands-on experience with MLflow, LangChain, Ray, Prefect/Airflow, Docker, K8s, Weights & Biases, and model-serving platforms.
Strong grasp of prompt tuning, evaluation metrics, context window management, and hybrid search strategies using vector databases like FAISS, pgvector, or Milvus
Proficient in Python (must), with working knowledge of shell scripting, YAML, and JSON schema standardization
Experience managing compute, memory, and storage requirements of LLMs across GCP, Azure, or AWS environments
Qualifications & Experience:
5+ years in ML/AI engineering with at least 2 years working on LLMs or NLP-heavy systems.
Able to reverse-engineer undocumented code and reimagine it with strong documentation and testing in mind.
Clear communicator who collaborates well with business, data science, and DevOps teams.
Familiar with agile processes, JIRA, GitOps, and Confluence-based knowledge sharing.
Curious and future-facing—always exploring new techniques and pushing the envelope on GenAI innovation.
Passionate about data ethics, responsible AI, and building inclusive systems that scale
Why Should You Apply?
Grow with Us: Be part of a hyper-growth startup with ample opportunities to learn and innovate.
People: Join hands with a talented, warm, collaborative team and highly accomplished leadership.
Buoyant Culture: Regular activities like Fun-Fridays, sports tournaments and trekking, and you can suggest a few more after joining us.
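The retrieval step of the RAG pipelines this role builds can be illustrated with plain cosine similarity. This is a toy sketch: a production system would use a vector index such as FAISS or pgvector, and the document ids and embeddings below are made up:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def top_k(query_vec, docs, k=2):
    """Rank (doc_id, embedding) pairs by similarity to the query --
    the retrieval step of RAG, minus the real vector index."""
    ranked = sorted(docs, key=lambda d: cosine(query_vec, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]
```

The retrieved ids would then map to passages stitched into the LLM prompt as grounding context, which is where context-window management comes in.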

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Join us as a Software Engineer
This is an opportunity for a driven Software Engineer to take on an exciting new career challenge
Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority
It’s a chance to hone your existing technical skills and advance your career
We're offering this role at associate level
What you'll do
In your new role, you’ll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.
You’ll Also:
Design, deploy, and manage Kubernetes clusters using Amazon EKS.
Develop and maintain Helm charts for deploying containerized applications.
Build and manage Docker images and registries for microservices.
Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
Monitor and troubleshoot Kubernetes workloads and cluster health.
Support CI/CD pipelines for containerized applications.
Collaborate with development and DevOps teams to ensure seamless application delivery.
Ensure security best practices are followed in container orchestration and cloud environments.
Optimize performance and cost of cloud infrastructure.
The skills you'll need
You’ll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in Java full stack including Microservices, ReactJS, AWS, Spring, SpringBoot, SpringBatch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST API, API Gateway, Kafka and API development.
You’ll Also Need
3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
Strong expertise in Kubernetes architecture, networking, and resource management. Proficiency in Docker and container lifecycle management. Experience in writing and maintaining Helm charts for complex applications. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions. Solid understanding of Linux systems, shell scripting, and networking concepts. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of security practices in cloud and container environments. Preferred Qualifications: AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools like ArgoCD or Flux. Experience with logging and observability tools (e.g., ELK stack, Fluentd).
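Deployments rolled out via Helm on EKS are bounded during a rolling update by two documented Kubernetes settings: percentage maxSurge rounds up and percentage maxUnavailable rounds down. A sketch of that arithmetic (function name and defaults are illustrative):

```python
import math

def rolling_update_bounds(replicas, max_surge="25%", max_unavailable="25%"):
    """Pod-count bounds during a Deployment rolling update. Percentage
    maxSurge rounds up, maxUnavailable rounds down (Kubernetes semantics);
    absolute integer values are used as-is."""
    def resolve(value, round_up):
        if isinstance(value, str) and value.endswith("%"):
            frac = int(value[:-1]) / 100.0
            return math.ceil(replicas * frac) if round_up else math.floor(replicas * frac)
        return int(value)

    surge = resolve(max_surge, round_up=True)
    unavailable = resolve(max_unavailable, round_up=False)
    # (minimum pods guaranteed ready, maximum total pods during rollout)
    return replicas - unavailable, replicas + surge
```

With 10 replicas and the 25%/25% defaults, a rollout keeps at least 8 pods ready and runs at most 13 at once; with 3 replicas, the rounding keeps all 3 ready while allowing 1 extra.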

Posted 1 month ago

Apply

17.0 - 20.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: About Us At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us! Global Business Services Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services. Process Overview* Developer Experience is a growing department within the Global Technology division of Bank of America. We drive modernization of technology tools and processes and Operational Excellence work across Global Technology. 
The organization operates in a very dynamic and fast-paced global business environment. As such, we value versatility, creativity, and innovation provided through individual contributors and teams that come from diverse backgrounds and experiences. We believe in an Agile SDLC environment with a strong focus on technical excellence and continuous process improvement.
Job Description*
We are seeking a strategic and hands-on Principal Engineer to drive the design, modernization, and delivery of secure enterprise-grade applications at scale. In this role, you will shape architectural decisions, introduce modern engineering practices, and influence platform and product teams to build secure, scalable, and observable systems. This is a high-impact technical leadership role for a proven engineer passionate about cloud-native architecture, developer experience, and responsible innovation.
Responsibilities*
Lead architecture, design and development of modern, distributed applications using modern tech stacks, frameworks, and cloud-native patterns.
Provide hands-on leadership in designing system components, APIs, and integration patterns, ensuring high performance, security, and maintainability.
Define and enforce architectural standards, reusable patterns, coding practices and technical governance across engineering teams.
Guide the modernization of legacy systems into modern architectures, optimizing for resilience, observability, and scalability.
Integrate secure-by-design principles across the SDLC through threat modeling, DevSecOps practices, and zero-trust design.
Drive engineering effectiveness by enhancing observability and developer metrics and promoting runtime resiliency.
Champion the responsible adoption of Generative AI tools to improve development productivity, code quality and automation.
Collaborate with product owners, platform teams, and stakeholders to align application design with business goals.
Champion DevSecOps, API-first design, and test automation to ensure high-quality and secure software delivery. Evaluate and introduce new tools, frameworks, and design patterns that improve engineering efficiency and platform consistency. Mentor and guide engineers through design reviews, performance tuning and technical deep dives. Requirements* Education* Graduation / Post Graduation: BE/B.Tech/MCA Certifications, If Any: NA Experience Range* 17 to 20 Years Foundational Skills* Proven expertise in architecting large-scale distributed systems with a strong focus on Java-based cloud-native applications using Spring Boot, Spring Cloud and API-first design; experience defining reference architectures, reusable patterns, and modernization blueprints. Deep hands-on experience with container orchestration platforms like Kubernetes/OpenShift, including service mesh, autoscaling, observability and cost-aware architecture. In-depth knowledge of relational and NoSQL data platforms (e.g., Oracle, PostgreSQL, MongoDB, Redis), including data modeling for microservices, transaction patterns, distributed consistency, caching strategies, and query performance optimization. Expertise in CI/CD pipelines, GitOps and DevSecOps practices for secure, automated application delivery; strong understanding of API lifecycle, runtime resiliency, and multi-environment release strategies. Strong grasp of threat modeling, secure architecture principles, and zero-trust application design, with experience in integrating security throughout the software development lifecycle. Demonstrated experience using GenAI tools (e.g., GitHub Copilot) to enhance the software development lifecycle – prompt engineering for code generation, automated test creation, refactoring, and architectural validation – with an emphasis on responsible use, effective prompt design, and engineering efficiency. 
Desired Skills* Experience modernizing legacy applications to modern cloud-native architectures (e.g., microservices, event-driven). Experience with big data platforms or architectures supporting real-time or large-scale transactional systems would be a big plus. Exposure to AI/ML workflows, including integration with ML APIs and orchestration of AI-powered features. Demonstrated ability to explore emerging technologies like platform engineering, internal developer tooling and AI-augmented architecture. Work Timings* 11:30 AM to 8:30 PM IST Job Location* Mumbai, Chennai, Hyderabad
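The caching strategies named in the data-platform requirement above usually mean a cache-aside read path in front of a relational store. As a hedged illustration (the class, names, and in-memory stand-ins are hypothetical, not from the posting), a minimal version looks like:

```python
class CacheAside:
    """Toy cache-aside read path: check the cache first, fall back to the
    database on a miss, and populate the cache for subsequent reads."""

    def __init__(self, db):
        self.db = db          # stand-in for e.g. PostgreSQL
        self.cache = {}       # stand-in for e.g. Redis
        self.db_reads = 0     # counts how often we hit the backing store

    def get(self, key):
        if key in self.cache:
            return self.cache[key]      # cache hit: no database round-trip
        self.db_reads += 1
        value = self.db[key]            # cache miss: read through to the DB
        self.cache[key] = value         # populate for the next reader
        return value
```

Repeated reads of the same key touch the backing store only once; a production version would add TTLs and invalidation on write.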

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Location: Gurgaon, India (On‑site/Hybrid, Full‑time) Why Join Us? We’re a fast‑growing health‑tech company transforming Revenue Cycle Management (RCM) for hospitals, clinics, and physician groups. Our cloud‑native platform simplifies complex billing and claims workflows so providers can focus on patient care—not paperwork. As a Senior DevOps Engineer, you’ll be the architect behind the highly available, secure, and scalable infrastructure that keeps those mission‑critical systems running smoothly. What You’ll Do Own the Cloud Infrastructure Design and automate Azure environments with Terraform/ARM, delivering self‑service, repeatable deployments Build resilient network topologies and security controls that meet HIPAA & HITRUST standards Tune performance and cost—because every saved rupee goes back into innovation Ship Code Faster & Safer Create end‑to‑end CI/CD pipelines in Jenkins or GitLab that cut release time from hours to minutes Embed automated tests, quality gates, and blue‑green / canary strategies to achieve zero‑downtime releases Containerize microservices with Docker and orchestrate them with Kubernetes Keep the Lights On Roll out observability stacks (Azure Monitor, Log Analytics, Application Insights) with actionable dashboards and alerts Author incident‑response playbooks and join a low‑noise on‑call rotation Conduct regular security scans and vulnerability assessments—security is everyone’s job here Automate Everything Script in Bash, PowerShell, or Python to eliminate toil and empower developers with self‑service tools Advocate for Infrastructure‑as‑Code and GitOps best practices across teams What You Bring 5+ years in DevOps/SRE roles with deep Azure expertise Hands‑on mastery of Terraform or ARM Templates, Docker, Kubernetes, and CI/CD tooling Strong scripting chops (Python, Bash, PowerShell) Solid understanding of networking, IAM, and security hardening Bonus points for: healthcare/RCM experience, Azure certifications (AZ‑400, AZ‑104), database 
know‑how (SQL Server, MongoDB), and familiarity with microservices and API gateways Soft Skills We Value Relentless problem solver who thrives in high‑stakes production environments Clear communicator—able to translate “yak‑shaving” tech talk into business value for non‑technical stakeholders Collaborative team player who mentors others and welcomes feedback Self‑starter who can juggle multiple priorities and still hit aggressive deadlines Perks & Benefits Comprehensive medical, dental, and vision coverage for you and your family Annual learning budget for conferences, certifications, and courses—grow on our dime Performance bonuses tied to team and company milestones Flexible working hours and generous leave policy Latest MacBook Pro or high‑end Windows laptop—your choice On‑site wellness programs and monthly team‑building events
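The blue-green/canary strategy mentioned under "Ship Code Faster & Safer" ultimately reduces to an automated promote-or-rollback decision on live metrics. A minimal sketch of such a decision rule (the function name, thresholds, and comparison are illustrative assumptions, not part of the posting):

```python
def canary_verdict(baseline_errors, baseline_total, canary_errors, canary_total,
                   max_ratio=1.5, min_requests=100):
    """Decide whether to promote a canary release by comparing error rates.

    Returns "promote", "rollback", or "wait" (not enough canary traffic yet).
    """
    if canary_total < min_requests:
        return "wait"
    baseline_rate = baseline_errors / max(baseline_total, 1)
    canary_rate = canary_errors / max(canary_total, 1)
    # Roll back if the canary's error rate exceeds the baseline by more than
    # the allowed ratio (with a small floor to avoid a zero baseline).
    if canary_rate > max(baseline_rate, 0.001) * max_ratio:
        return "rollback"
    return "promote"
```

A real pipeline would feed this from Prometheus or Azure Monitor queries and gate the Kubernetes rollout on the verdict.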

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 15 Lacs

Chennai

Work from Office

Infra as Code, CI/CD, system admin, coding, monitoring, security, and cross-team communication. Skills: Docker, K8s, ArgoCD, Ansible, Jenkins, AWS, Linux/macOS, Prometheus, DBs (SQL/NoSQL), Python, Git. Add GitHub/GitLab link in resume.

Posted 1 month ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

We help the world run better Our company culture is focused on helping our employees enable innovation by building breakthroughs together. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. Apply now! Job Title Cloud Automation Engineer Summary And Role Information We’re looking for a Cloud Automation Engineer responsible for requirements analysis, building, and maintaining automated solutions for cloud-based infrastructure. They will work closely with CloudOps teams to ensure that cloud resources are provisioned, managed, and maintained in an efficient and secure manner. 3+ years of experience as a DevOps engineer creating and maintaining cloud environments and their supporting infrastructures. Develop automated solutions for cloud-based infrastructure, provision and configure resources. Proficiency with development languages such as Python, JavaScript, Bash, Java, Go; and a solid understanding of API development and microservice architectures. Hands-on experience with: Configuration Management Tools such as Ansible, Puppet or Chef Build Systems such as Jenkins (preferred), Bamboo, Travis, CircleCI Infrastructure Orchestration Tools such as Crossplane (preferred), Terraform Container Orchestration Technologies such as Kubernetes, OpenShift Logging Systems such as Elastic Stack or Splunk Monitoring Systems such as Dynatrace, New Relic or AppDynamics Experience in deploying applications via ArgoCD, Helm and/or via operators/CRDs. 
Deep familiarity with cloud infrastructure design patterns and anti-patterns Familiarity with software-defined networking, storage systems architecture, virtualization, containers and serverless technologies. Understanding of security fundamentals at the hypervisor and operating system levels Experience in implementing continuous integration/continuous deployment (CI/CD) processes to ensure that changes are tested, approved, and deployed in an automated and secure manner. DevOps automation experience with Kubernetes in AWS, GCP, and/or Azure. Experience with GitOps pipelines for deploying cloud infrastructure with version control. Knowledge of cloud and application security concepts like encryption (at rest and in transit) and the use of certificates and secrets within an application. Strong understanding of enterprise automation, integration, and modern software engineering practices DevOps mentality with an understanding of Continuous Integration, Continuous Delivery, Monitoring and Observability Proficient in reviews, integration testing, and creating documentation to ensure the reliability of infrastructure deployments. Familiarity with service mesh concepts like Istio Role Requirements Bachelor’s degree or equivalent in a technology-related field (e.g., Computer Science, Engineering, etc.) is required. A high degree of intellectual curiosity and a lack of fear of learning something new. You should be open to learning from others and willing to help mentor and teach. Ability to build and maintain effective relationships with technical product managers, architects, and technical leads Hands-on experience with cloud native solutions Experience in Travel and Procurement a plus Excellent communication in English. Value Competencies Displays passion for & responsibility to the internal customer. Displays leadership through innovation in everything you do. Displays a passion for what you do and a drive to improve. Displays a relentless commitment to win. 
Displays personal & corporate integrity. About The Team CDX SM CloudOps is part of the CTO Organization for the Spend Management organization at SAP. Join us as leaders in spend management and become an influencer in the market with your unique skillset. Our intent is to drive a culture that supports everyone to do the best work of their lives. Make an impact in our mission to help developers safely go faster while optimizing cloud costs. #DevT2 #SAPReturnshipIndiaCareers We build breakthroughs together SAP innovations help more than 400,000 customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with 200 million users and more than 100,000 employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, we build breakthroughs, together. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. 
We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 391404 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: .
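The GitOps pipelines this role asks for all rest on one idea: a controller continuously reconciles live cluster state against the desired state held in version control. A toy sketch of that reconcile step (the function, data shapes, and action names are hypothetical illustrations, not any specific tool's API):

```python
def reconcile(desired, live):
    """Compute the actions a GitOps controller would take to make the live
    cluster state match the desired state held in Git.

    Both arguments map resource name -> spec (any comparable value).
    Returns a sorted list of (action, name) tuples.
    """
    actions = []
    for name, spec in desired.items():
        if name not in live:
            actions.append(("create", name))      # in Git, not in cluster
        elif live[name] != spec:
            actions.append(("update", name))      # drifted from Git
    for name in live:
        if name not in desired:
            actions.append(("delete", name))      # pruned from Git
    return sorted(actions)
```

Tools like ArgoCD or Flux run this loop against real Kubernetes manifests, so the Git repository stays the single source of truth.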

Posted 1 month ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join our digital revolution in NatWest Digital X In everything we do, we work to one aim. To make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter. Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India and as such all normal working days must be carried out in India. Job Description Join us as a Software Engineer This is an opportunity for a driven Software Engineer to take on an exciting new career challenge Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority It’s a chance to hone your existing technical skills and advance your career We're offering this role at associate level What you'll do In your new role, you’ll engineer and maintain innovative, customer-centric, high performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud. You’ll also: Design, deploy, and manage Kubernetes clusters using Amazon EKS. Develop and maintain Helm charts for deploying containerized applications. Build and manage Docker images and registries for microservices. Automate infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation). Monitor and troubleshoot Kubernetes workloads and cluster health. Support CI/CD pipelines for containerized applications. Collaborate with development and DevOps teams to ensure seamless application delivery. 
Ensure security best practices are followed in container orchestration and cloud environments. Optimize performance and cost of cloud infrastructure. The skills you'll need You’ll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in Java full stack including Microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, Cloud, REST API, API Gateway, Kafka and API development. You’ll also need: 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch. Strong expertise in Kubernetes architecture, networking, and resource management. Proficiency in Docker and container lifecycle management. Experience in writing and maintaining Helm charts for complex applications. Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions. Solid understanding of Linux systems, shell scripting, and networking concepts. Experience with monitoring tools like Prometheus, Grafana, or Datadog. Knowledge of security practices in cloud and container environments. Preferred Qualifications: AWS Certified Solutions Architect or AWS Certified DevOps Engineer. Experience with service mesh technologies (e.g., Istio, Linkerd). Familiarity with GitOps practices and tools like ArgoCD or Flux. Experience with logging and observability tools (e.g., ELK stack, Fluentd).
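Writing and maintaining Helm charts, as this role requires, often includes a CI gate that sanity-checks chart values before deployment. A small illustrative check (the values layout, helper, and rules are assumptions for the sketch, not NatWest's actual tooling):

```python
def check_resources(values):
    """Sanity-check the resources block of a (hypothetical) Helm values
    dict: every request must be set, and the CPU request must not exceed
    its limit. CPU uses Kubernetes millicore notation (e.g. "250m").
    Returns a list of human-readable problems (empty list = OK).
    """
    def millicores(v):
        # "250m" -> 250; "1" or "0.5" -> 1000 or 500
        return int(v[:-1]) if v.endswith("m") else int(float(v) * 1000)

    problems = []
    res = values.get("resources", {})
    requests, limits = res.get("requests", {}), res.get("limits", {})
    for key in ("cpu", "memory"):
        if key not in requests:
            problems.append(f"missing request for {key}")
    req_cpu, lim_cpu = requests.get("cpu"), limits.get("cpu")
    if req_cpu and lim_cpu and millicores(req_cpu) > millicores(lim_cpu):
        problems.append("cpu request exceeds limit")
    return problems
```

In practice this kind of check runs after `helm template` renders the chart, alongside linters such as `helm lint`.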

Posted 1 month ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: VP-Digital Expert Support Lead Experience: 12+ Years Location: Pune Position Overview The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems. Role-Level Expectations Functionally accountable for all post-deployment support and performance assurance of digital expert systems. Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design. Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units. Acts as the bridge between operations and engineering, ensuring technical fixes feed into product backlog effectively. Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening. Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks). Position Responsibilities Operational Leadership & Stability Engineering Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management. Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows. Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices. 
Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior. Lead major incident response and drive cross-functional war rooms for critical recovery. Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching. Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits. Engineering Collaboration & Enhancements Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops. Implement automation scripts for diagnosis, log capture, reprocessing, and health validation. Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints. Coordinate enhancement requests based on operational analytics and feedback loops. Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems. Governance, Planning & People Leadership Build and mentor a high-caliber AI Support Squad – support engineers, SREs, and automation leads. Define and publish support KPIs, operational dashboards, and quarterly stability scorecards. Present production health reports to business, engineering, and executive leadership. Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases. Manage relationships with AI platform vendors, cloud ops partners, and application owners. Must-Have Skills & Experience 12+ years of software engineering, platform reliability, or AI systems management experience. Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems. Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability. Deep expertise in Python and/or Java for production debugging and script/tooling development. Proficient in monitoring, logging, tracing, and alerts using enterprise tools (Grafana, ELK, Datadog). 
Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies. Experience working with distributed systems, banking APIs, and integration with Core/ERP systems. Strong understanding of incident management frameworks (ITIL) and ability to drive postmortem discipline. Excellent stakeholder management, cross-functional coordination, and communication skills. Demonstrated ability to mentor senior ICs and influence product and platform priorities. Nice-to-Haves Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere. Experience supporting multi-tenant AI applications with business-driven SLAs. Hands-on experience integrating with compliance and risk monitoring platforms. Familiarity with automated root cause inference or anomaly detection tooling. Past participation in enterprise architecture councils or platform reliability forums.
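The "fallback logic, circuit breakers, and context caching" responsibility in the posting above can be sketched minimally as follows (the class, defaults, and fallback value are illustrative assumptions, not the bank's implementation):

```python
class CircuitBreaker:
    """Minimal illustrative circuit breaker: after `threshold` consecutive
    failures the circuit opens, and subsequent calls return the fallback
    value instead of hitting the (possibly degraded) downstream service."""

    def __init__(self, threshold=3, fallback="cached-response"):
        self.threshold = threshold
        self.fallback = fallback
        self.failures = 0

    @property
    def open(self):
        return self.failures >= self.threshold

    def call(self, fn, *args):
        if self.open:
            return self.fallback          # short-circuit: serve the fallback
        try:
            result = fn(*args)
            self.failures = 0             # a success resets the counter
            return result
        except Exception:
            self.failures += 1
            if self.open:
                return self.fallback      # this failure tripped the breaker
            raise
```

A production version (as libraries like resilience4j provide for Java) would add a cool-down timer and a half-open probe state before fully closing again.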

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Job Description Does an opportunity to build solutions for large scale suit you? Do hybrid local/cloud infrastructures interest you? Join our Engineering team This team is a part of the Cloud Security Intelligence group. Together, we own one of the largest Big Data environments in Israel. The team owns various Intelligence Security products that run as part of this environment. We are also responsible for innovatively developing and maintaining the platform itself. Make a difference in your own way You'll be working on innovating and developing a new and ground-breaking Big Data platform. It provides services for the rest of the Platform and Akamai engineering groups. We strive to accelerate development, reduce operational costs, and provide common secured services As a Senior DevOps Engineer, you will be responsible for: Designing and implementing infrastructure solutions on top of Azure and Linode - Kubernetes, Kafka, Vault, storage, etc. Developing and provisioning infrastructure applications and monitoring tools, e.g. OpenSearch/ELK, OpenTelemetry, Prometheus, Grafana, Pushgateway, etc. Building and maintaining CI/CD pipelines using Jenkins, in addition to building GitOps solutions such as ArgoCD Working in all stages of the software release process in all development and production environments Do What You Love To be successful in this role you will: Have 5+ years' experience as a DevOps Engineer and a Bachelor's degree in Computer Science or its equivalent Be proficient in working in Linux/Unix environments, and demonstrate solid experience in Python and shell scripting Have proven experience in designing and implementing solutions for Kubernetes Have experience setting up large-scale container technology (Docker, Kubernetes, etc.) 
and migrating/creating systems on cloud environments (Azure/AWS/GCP) Be responsible, self-motivated, and able to work with little or no supervision Have attention to detail and excellent troubleshooting skills Work in a way that works for you FlexBase, Akamai's Global Flexible Working Program, is based on the principles that are helping us create the best workplace in the world. When our colleagues said that flexible working was important to them, we listened. We also know flexible working is important to many of the incredible people considering joining Akamai. FlexBase gives 95% of employees the choice to work from their home, their office, or both (in the country advertised). This permanent workplace flexibility program is consistent and fair globally, to help us find incredible talent, virtually anywhere. We are happy to discuss working options for this role and encourage you to speak with your recruiter in more detail when you apply. Learn what makes Akamai a great place to work Connect with us on social and see what life at Akamai is like! We power and protect life online, by solving the toughest challenges, together. At Akamai, we're curious, innovative, collaborative and tenacious. We celebrate diversity of thought and we hold an unwavering belief that we can make a meaningful difference. Our teams use their global perspectives to put customers at the forefront of everything they do, so if you are people-centric, you'll thrive here. Working for you Benefits At Akamai, we will provide you with opportunities to grow, flourish, and achieve great things. Our benefit options are designed to meet your individual needs for today and in the future. We provide benefits surrounding all aspects of your life: Your health Your finances Your family Your time at work Your time pursuing other endeavors Our benefit plan options are designed to meet your individual needs and budget, both today and in the future. About Us Akamai powers and protects life online. 
Leading companies worldwide choose Akamai to build, deliver, and secure their digital experiences, helping billions of people live, work, and play every day. With the world's most distributed compute platform, from cloud to edge, we make it easy for customers to develop and run applications, while we keep experiences closer to users and threats farther away. Join us Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will energize and inspire you! Akamai Technologies is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of gender, gender identity, sexual orientation, race/ethnicity, protected veteran status, disability, or other protected group status.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies