Jobs
Interviews

944 GitOps Jobs - Page 6

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Delhi

On-site

As a frontend developer at Nomiso India, you will be responsible for building a workflow automation system to simplify existing manual processes. Your role will involve owning lifecycle management, automating platform operations, leading issue resolution, defining compliance standards, integrating various tools, driving observability and performance-tuning initiatives, and mentoring team members while leading operational best practices.

You can expect a stimulating and fun work environment at Nomiso, where innovation and thought leadership are highly valued. We provide opportunities for career growth, idea generation, and innovation at all levels of the company. As part of our team, you will be encouraged to push your boundaries and fulfill your career aspirations.

The core tools and technology stack you will be working with include OpenShift, Kubernetes, GitOps, Ansible, Terraform, Prometheus, Grafana, the EFK Stack, Vault, SCCs, RBAC, NetworkPolicies, and more. To qualify for this role, you should have a BE/B.Tech or equivalent degree in Computer Science or a related field. The position is based in Delhi-NCR.

Join us at Nomiso India and be a part of a dynamic team that thrives on ideas, innovation, and challenges. Your contributions will be valued, and you will have the opportunity to grow professionally in a fast-paced and exciting environment. Let's work together to simplify complex business problems and empower our customers with effective solutions.
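As a taste of the observability work this stack implies, services on it usually expose Prometheus metrics for Grafana to chart. A minimal sketch, assuming the prometheus_client package; the metric names and port are hypothetical:

```python
# Minimal Prometheus instrumentation sketch for a workflow-automation service
# like the one described above. Assumes the prometheus_client package; metric
# names and the port are illustrative only.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

RUNS = Counter("workflow_runs_total", "Workflow executions", ["status"])
DURATION = Histogram("workflow_duration_seconds", "Workflow run time")

def run_workflow() -> None:
    with DURATION.time():                     # record how long the run takes
        time.sleep(random.uniform(0.1, 0.5))  # stand-in for real work
    RUNS.labels(status="success").inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics for Prometheus to scrape
    while True:
        run_workflow()
```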

Posted 1 week ago

Apply

9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Strategic Responsibilities:
• Architect enterprise-grade solutions using Palantir Foundry and AIP
• Lead AI application development, including agentic AI for business process automation
• Own the end-to-end solution lifecycle: design → development → deployment → production support
• Define DevOps and platform engineering standards for Foundry deployments
• Guide data governance, security, and CI/CD automation across teams
• Collaborate with global teams to build scalable frameworks and reusable templates
• Lead environment governance, versioning strategy, and platform upgrade planning
• Act as a technical advisor to stakeholders, translating complex requirements into actionable solutions
• Drive innovation by integrating emerging AI/ML capabilities into Foundry workflows

Requirements:
• 9+ years in software engineering, data architecture, or AI/ML
• Deep experience with Foundry (Ontology Manager, Pipeline Builder, Code Workbook, Contour)
• Advanced knowledge of Palantir AIP, GenAI, and LLM integrations
• Experience managing production environments and observability tools
• Strong foundation in GitOps, CI/CD automation, and branching strategies
• Proficiency in Python, Java, TypeScript, or C++
• Strong grasp of SQL, Spark, PySpark, and data modeling
• Familiarity with cloud platforms (AWS, Azure, GCP) and DevOps
• Excellent leadership, communication, and stakeholder engagement

Preferred Qualifications:
• Palantir certifications (Foundry Basics, Developer Track)
• Experience mentoring teams and leading agile delivery
• Knowledge of DevOps, data lineage, and automated deployments
• Background in platform engineering, enterprise architecture, or solution consulting

Posted 1 week ago

Apply

3.0 - 5.0 years

6 - 15 Lacs

Pune

Work from Office

We're hiring a Backend Developer (MongoDB, Node.js) for a fast-growing Tech Logistics start-up shaping the future of logistics. You'll build next-gen solutions using modern technologies and support customers across multiple geographies.

Posted 1 week ago

Apply

12.0 years

27 - 35 Lacs

Madurai, Tamil Nadu, India

On-site

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities
• Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
• Define data strategy, standards, and best practices for cloud data engineering and analytics
• Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
• Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
• Architect data lakes, warehouses, and real-time data platforms
• Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
• Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
• Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
• Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications
• 10+ years of experience in data architecture, data engineering, or enterprise data platforms
• Minimum 3–5 years of hands-on experience with GCP data services
• Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python/Java/SQL; data modeling (OLTP, OLAP, star/snowflake schema)
• Experience with real-time data processing, streaming architectures, and batch ETL pipelines
• Good understanding of IAM, networking, security models, and cost optimization on GCP
• Prior experience leading cloud data transformation projects
• Excellent communication and stakeholder management skills

Preferred Qualifications
• GCP Professional Data Engineer / Architect Certification
• Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
• Exposure to AI/ML use cases and MLOps on GCP
• Experience working in agile environments and client-facing roles

What We Offer
• Opportunity to work on large-scale data modernization projects with global clients
• A fast-growing company with a strong tech and people culture
• Competitive salary, benefits, and flexibility
• Collaborative environment that values innovation and leadership

Skills: Google Cloud Platform (GCP), GCP Data, Architect, Data Architecture
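For illustration, the batch side of the pipelines described above often comes down to loading files into BigQuery and validating the result. A minimal sketch, assuming the google-cloud-bigquery client library and application-default credentials; the project, dataset, and bucket names are hypothetical:

```python
# Minimal sketch of a batch load into BigQuery, in the spirit of the role above.
# Assumes google-cloud-bigquery is installed and application-default credentials
# are configured; the dataset, table, and GCS URI below are hypothetical.
from google.cloud import bigquery

client = bigquery.Client()

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    autodetect=True,  # infer the schema from the file
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

# Load a CSV from Cloud Storage into a (hypothetical) staging table.
load_job = client.load_table_from_uri(
    "gs://example-bucket/orders/2025-07-01.csv",  # hypothetical URI
    "example_project.staging.orders",             # hypothetical table
    job_config=job_config,
)
load_job.result()  # block until the load completes

# Simple row-count check after the load.
rows = client.query(
    "SELECT COUNT(*) AS n FROM `example_project.staging.orders`"
).result()
print(f"loaded rows: {next(iter(rows)).n}")
```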

Posted 1 week ago

Apply

8.0 years

0 Lacs

Pune/Pimpri-Chinchwad Area

On-site

Primary Function

The Cloud Operations Engineer – Infrastructure is responsible for leading shifts and supporting implementation of core cloud infrastructure components. Utilizes advanced technical skills to coordinate design, enhancement, and deployment efforts and provide insight and recommendations for operating enterprise cloud infrastructure solutions. Works closely with cloud application and infrastructure support teams, project managers, network and system engineers, and other technology support teams. Documents critical design and configuration details required to support the delivery of enterprise cloud services.

Essential Duties and Responsibilities (this list may not include all of the duties that may be assigned):
• Responsible for reliability and support of the cloud platform, including public cloud (Azure/AWS/Google) services
• Hands-on migration experience from on-prem to cloud
• Hands-on experience with DFS, file servers, and file server migrations
• Sound knowledge and experience of Windows and Linux OS administration
• Monitor and troubleshoot Azure/AWS/Google environment performance, connectivity, and security issues
• Perform deep dives into systemic and latent reliability issues; incident management; problem management
• Identify, analyze, and resolve infrastructure vulnerabilities and application deployment issues
• Perform RCA; partner with engineering and operations teams across the organization to roll out fixes
• Identify and drive opportunities to improve automation for cloud services; scope and create automation for deployment, management, and visibility of our services
• Evaluate and automate scaling and capacity requirements within Azure environments
• Engage with engineering teams throughout the full lifecycle, from design and engineering to deployment and operations
• Partner with risk and compliance teams to bring visibility and implement the right controls and policies in the cloud platform
• Ensure resiliency during implementation and identify/fix resiliency problems by collaborating with engineering teams
• Be a key stakeholder in the design of cloud services, working with architecture, engineering, and product teams
• Participate in 24x7 on-call coverage on a follow-the-sun model
• Identify cloud optimization opportunities, design solutions, and implement them
• Support deployment templates or patterns as requested by the customer; automate the deployment of templates into the environment to continually reduce provisioning and deployment times
• Manage cloud brokerage and orchestration software to monitor and modify infrastructure solutions to address planned and ad hoc demands for cloud services
• Manage virtual networks (VPCs) and subnets
• Patch management (system updates assessment and updates)
• Endpoint management, native load balancers, NSGs, IP address management, and management of virtual networks in the cloud
• Support DR setup and restore the environment after disaster recovery
• Coordinate with third-party vendors for troubleshooting

Qualifications

Education: Bachelor's degree in Computer Science, or higher in a similar field, preferred.

Required Experience:
• Minimum 8+ years of hands-on experience maintaining cloud platforms on a major cloud service provider
• Experience with Azure/Google/AWS/OCI operations and administration
• Hands-on experience with DFS, file servers, and file server migrations
• Azure/Terraform/AWS/Google certifications are a plus
• Strong experience implementing, monitoring, and maintaining Microsoft Azure solutions, including major services related to compute, storage, network, and security
• Experience with monitoring tools, such as cloud-native tools like Azure Monitor and Log Analytics
• Understanding of cost management, inventory management, and the FinOps model
• Strong understanding and background of working with a complex IAM infrastructure, including Active Directory, Azure AD, and other SSO solutions
• Advanced knowledge of DNS, DHCP, Kerberos, and Windows authentication
• Experience with IaC using Terraform, plus Python, Ansible, and shell scripting
• Experience with CI/CD tools such as Git and Jenkins; familiarity with using a GitOps model
• Excellent understanding of Linux/Windows operating systems administration
• Systematic problem-solving approach, sense of ownership, and drive
• Excellent interpersonal, organizational, and communication (written, verbal, and presentation) skills

Preferred Experience:
• 6 or more years with virtualization, including server, storage, desktop, and network virtualization
• 6 or more years with infrastructure-based processes such as monitoring, capacity planning, performance tuning, asset management, and disaster recovery
• 2 or more years with Hyper-V, virtual infrastructure, and platform sizing
• Experience with Terraform and Ansible
• Experience working in a highly available multi-datacenter environment
• Proven ability to work independently with minimal supervision and as part of a team with direct responsibilities
• Ability to juggle competing priorities and adapt to changes in project scope
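As a small illustration of the Azure administration side above, inventory sweeps of this kind can be scripted with the Azure SDK for Python. A minimal read-only sketch, assuming azure-identity and azure-mgmt-compute are installed and the subscription ID is set in an environment variable:

```python
# Minimal inventory sketch for the Azure administration duties above.
# Assumes azure-identity and azure-mgmt-compute are installed and that
# AZURE_SUBSCRIPTION_ID is set; purely illustrative and read-only.
import os

from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()  # uses env vars, CLI login, or managed identity
subscription_id = os.environ["AZURE_SUBSCRIPTION_ID"]

compute = ComputeManagementClient(credential, subscription_id)

# List every VM in the subscription with its location and size --
# the kind of data a patch-management or FinOps review starts from.
for vm in compute.virtual_machines.list_all():
    print(vm.name, vm.location, vm.hardware_profile.vm_size)
```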

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Job Title: DevOps Engineer – Databricks
Location: Bangalore, India (or Remote, based on role)
Employment Type: Full-Time

About Brillio:
Brillio is a global digital technology consulting and solutions company that helps clients transform their businesses through innovation and agility. With a customer-first culture and a commitment to excellence, Brillio empowers its teams to deliver impactful solutions across data, AI, cloud, and digital platforms.

Role Overview:
We are looking for a DevOps Engineer with hands-on experience in cloud-native development and a strong background in Databricks. The ideal candidate will be responsible for building and maintaining scalable infrastructure, automating data workflows, and supporting data engineering teams in delivering high-performance solutions.

Key Responsibilities:
• Design and implement CI/CD pipelines for data and analytics applications.
• Manage Databricks workspaces, clusters, jobs, and notebooks across cloud platforms.
• Automate infrastructure provisioning using Terraform, Ansible, or similar tools.
• Collaborate with data engineers and architects to optimize data workflows.
• Monitor system performance and ensure high availability and reliability.
• Implement security and compliance best practices across environments.
• Document infrastructure and deployment processes for internal teams.

Required Qualifications:
• Databricks Certification (Associate or Professional level).
• 3+ years of experience in DevOps, cloud infrastructure, or data platform engineering.
• Experience with Databricks on AWS, Azure, or GCP.
• Proficiency in scripting (Python, Bash) and automation tools.
• Familiarity with containerization (Docker, Kubernetes).
• Experience with monitoring tools (Prometheus, Grafana, ELK Stack).

Preferred Skills:
• Knowledge of Apache Spark and Delta Lake.
• Exposure to MLOps and model deployment workflows.
• Experience with GitOps and Infrastructure-as-Code practices.
• Strong communication and collaboration skills.

Why Join Brillio?
• Work with cutting-edge technologies and global clients.
• Be part of a culture that values innovation, ownership, and continuous learning.
• Competitive compensation, benefits, and career growth opportunities.
• Inclusive and supportive work environment.
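As a taste of the workspace and jobs management this role involves, a quick health check against the Databricks Jobs REST API is a typical task. A minimal sketch, assuming the requests package and host/token environment variables; purely illustrative:

```python
# Illustrative check of Databricks job runs over the REST API, matching the
# workspace/jobs management duties above. The host and token come from
# environment variables; /api/2.1/jobs/runs/list is the public Jobs API.
import os

import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. the workspace URL
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.get(
    f"{host}/api/2.1/jobs/runs/list",
    headers={"Authorization": f"Bearer {token}"},
    params={"active_only": "true", "limit": 25},
    timeout=30,
)
resp.raise_for_status()

# Print currently active runs so an on-call engineer can spot stuck jobs.
for run in resp.json().get("runs", []):
    print(run["run_id"], run["state"]["life_cycle_state"])
```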

Posted 1 week ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: DevOps Engineer
Experience: 5–7 Years
Location: Pune

Job Overview:
We are looking for a highly skilled DevOps Engineer with deep expertise in Kubernetes, Helm charts, GitOps, GitHub, and cloud platforms like AWS. The ideal candidate will have a strong background in CI/CD automation, infrastructure as code, and container orchestration, and will be responsible for managing and improving our deployment pipelines and cloud infrastructure.

Key Responsibilities:
• Design, implement, and maintain CI/CD pipelines using GitHub Actions or other automation tools.
• Manage and optimize Kubernetes clusters for high availability and scalability.
• Use Helm charts to define, install, and upgrade complex Kubernetes applications.
• Implement and maintain GitOps workflows (preferably using ArgoCD).
• Ensure infrastructure stability, scalability, and security across AWS.
• Collaborate with development, QA, and infrastructure teams to streamline delivery processes.
• Monitor system performance, troubleshoot issues, and ensure reliable deployments.
• Automate infrastructure provisioning using tools like Terraform, Pulumi, or ARM templates (optional but preferred).
• Maintain clear documentation and enforce best practices in DevOps processes.

Key Skills & Qualifications:
• 7–9 years of hands-on experience in DevOps.
• Strong expertise in Kubernetes and managing production-grade clusters.
• Experience with Helm and writing custom Helm charts.
• In-depth knowledge of GitOps-based deployments (preferably using ArgoCD).
• Proficient in using GitHub, including GitHub Actions for CI/CD.
• Solid experience with AWS.
• Familiarity with Infrastructure as Code (IaC) tools (preferably Terraform).
• Strong scripting skills (e.g., Bash, Python, or PowerShell).
• Understanding of containerization technologies like Docker.
• Excellent problem-solving and troubleshooting skills.
• Strong communication and collaboration abilities.

Nice to Have:
• Experience with monitoring tools like Prometheus, Grafana, or the ELK stack.
• Knowledge of security practices in DevOps and cloud environments.
• AWS certification is a plus.
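As an illustration of the Helm-based deployment duties above, a pipeline step commonly installs or upgrades a release and then waits for the rollout. A minimal sketch, assuming helm and kubectl are on PATH and the kubeconfig points at the target cluster; the release name, chart path, and namespace are hypothetical:

```python
# Sketch of a deploy step for the role above: install/upgrade a Helm release,
# then wait for the Kubernetes rollout to finish. Chart path, release, and
# namespace are hypothetical; assumes helm and kubectl are on PATH.
import subprocess

RELEASE = "web-api"          # hypothetical release name
CHART = "./charts/web-api"   # hypothetical chart path
NAMESPACE = "staging"

def run(cmd: list[str]) -> None:
    """Run a command, echo it, and fail loudly on a non-zero exit."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Idempotent install-or-upgrade, as a CI/CD pipeline step would do.
run([
    "helm", "upgrade", "--install", RELEASE, CHART,
    "--namespace", NAMESPACE, "--create-namespace",
    "--values", "values-staging.yaml",
])

# Block until the Deployment created by the chart is fully rolled out.
run([
    "kubectl", "rollout", "status", f"deployment/{RELEASE}",
    "--namespace", NAMESPACE, "--timeout", "300s",
])
```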

Posted 1 week ago

Apply

6.0 - 10.0 years

16 - 22 Lacs

Chennai

Remote

Backend Engineer (.NET) with 6+ yrs in C#, .NET Core, cloud (GCP/Azure), Docker/K8s, and CI/CD. Focus on scalable, secure, multi-tenant systems, performance, and observability. Bonus: React, GraphQL, gRPC. Help us build backend systems at scale.

Posted 1 week ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Hyderabad, Telangana, India

On-site

Position: Tech Lead – API Platform Team (Azure, Terraform, Kubernetes)

Overview:
We are seeking a highly skilled Tech Lead to spearhead our API Platform Team. This role demands profound expertise in Azure services and advanced skills in Terraform and Kubernetes.

Key Responsibilities and Requirements:
• Proven experience as a Tech Lead, or in a senior role, working with Azure, Kubernetes, and Terraform.
• Expert-level knowledge of and experience with Azure services such as AKS, APIM, Application Gateway, Front Door, Load Balancers, Azure SQL, Event Hub, Application Insights, ACR, Key Vault, VNet, Prometheus, Grafana, Storage Accounts, Monitoring, Notification Hub, VMs, DNS, and more.
• Expert-level, hands-on experience designing and implementing complex Terraform modules for Azure and Kubernetes environments, incorporating providers such as azurerm, azapi, kubernetes, and helm.
• Expert-level, hands-on experience deploying and managing Kubernetes clusters (AKS), with a deep understanding of Helm chart writing, Helm deployments, AKS add-ons, application troubleshooting, monitoring with Prometheus and Grafana, GitOps, and more.
• Lead application troubleshooting and performance tuning, and ensure high availability and resilience of APIs deployed in AKS and exposed internally and externally through APIM.
• Drive GitOps and APIOps practices for continuous integration and deployment strategies.
• Strong analytical and problem-solving skills with keen attention to detail.
• Excellent leadership skills and the ability to take ownership and deliver platform requirements at a fast pace.

Qualifications:
• Bachelor's or master's degree in Computer Science, Engineering, or a related field.
• Certifications in Azure, Kubernetes, and Terraform are highly preferred.
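To make the Terraform module work concrete, CI for such a platform typically wraps the standard Terraform workflow: init, validate, then a saved plan for review. A minimal sketch, assuming the terraform binary is on PATH and the working directory holds the (hypothetical) AKS/APIM module:

```python
# Minimal wrapper around the Terraform CLI for module work like that described
# above. Assumes terraform is on PATH; the working directory is presumed to
# contain a (hypothetical) module using the azurerm/kubernetes/helm providers.
import subprocess

def tf(*args: str) -> None:
    """Run a terraform subcommand, echoing it and failing on non-zero exit."""
    cmd = ["terraform", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

tf("init", "-input=false")                 # fetch providers and modules
tf("validate")                             # catch syntax/type errors before planning
tf("plan", "-input=false", "-out=tfplan")  # saved plan for review/apply in CI
```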

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Position: DevOps Engineer
Location: Ahmedabad (on-site at office)
Working Days: 5.5 working days
Experience: 3 to 7 years of relevant experience

Purpose:
We are looking for a highly skilled DevOps professional with 3 to 7 years of experience to work with us. The candidate will bring expertise in the GCP platform, containerization and orchestration, the SDLC, operating systems, version control, languages, scripting, CI/CD, infrastructure as code, and databases. Experience with the Azure platform, in addition to GCP, will be highly valued.

Experience:
• 3–7 years of experience in DevOps.
• Proven experience in implementing DevOps best practices and driving automation.
• Demonstrated ability to manage and optimize cloud-based infrastructure.

Roles and Responsibilities:
The DevOps professional will be responsible for:
• Implementing and managing the GCP platform, including Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
• Leading efforts in containerization and orchestration using Docker and Kubernetes.
• Optimizing and managing the Software Development Lifecycle (SDLC).
• Administering Linux and Windows Server environments proficiently.
• Managing version control using Git (Bitbucket) and GitOps (preferred).
• Automating and configuring tasks using YAML and Python.
• Developing and maintaining Bash and PowerShell scripts.
• Designing and developing CI/CD pipelines using Jenkins and, optionally, Cloud Build.
• Implementing infrastructure as code through Terraform to optimize resource management.
• Managing Cloud SQL and MySQL databases for reliable performance.

Education Qualification:
• Bachelor's degree in Computer Science, Engineering, or a related field.
• Master's degree in a relevant field (preferred).

Certifications Preferred:
• Professional certifications in GCP, Kubernetes, Docker, and DevOps methodologies.
• Additional certifications in CI/CD tools and infrastructure as code (preferred).

Behavioural Skills:
• Strong problem-solving abilities and keen attention to detail.
• Excellent communication and collaboration skills.
• Ability to adapt to a fast-paced and dynamic work environment.
• Strong leadership and team management capabilities.

Technical Skills:
• Proficiency in Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
• Expertise in Docker and Kubernetes for containerization and orchestration.
• Deep understanding of the Software Development Lifecycle (SDLC).
• Proficiency in administering Linux and Windows Server environments.
• Experience with Git (Bitbucket) and GitOps (preferred).
• Proficiency in YAML and Python for automation and configuration.
• Skills in Bash and PowerShell scripting.
• Strong ability to design and manage CI/CD pipelines using Jenkins and, optionally, Cloud Build.
• Experience with Terraform for infrastructure as code.
• Management of Cloud SQL and MySQL databases.

Non-Negotiable Skills:
• GCP platform: familiarity with Google Kubernetes Engine (GKE), Cloud Build, and DevOps practices.
• Experience with Azure.
• Containerization and orchestration: expertise in Docker and Kubernetes.
• SDLC: deep understanding of the Software Development Lifecycle.
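Since the role pairs YAML with Python for automation, a pre-merge sanity check of a Cloud Build config is a representative task. A minimal sketch, assuming PyYAML and the conventional cloudbuild.yaml file name; the checks are deliberately minimal:

```python
# Tiny sanity check for a Cloud Build config, reflecting the YAML/Python
# automation duties above. Assumes PyYAML is installed; the file name is the
# conventional cloudbuild.yaml and the checks are deliberately minimal.
import sys

import yaml

with open("cloudbuild.yaml") as fh:
    config = yaml.safe_load(fh)

steps = config.get("steps") or []
if not steps:
    sys.exit("cloudbuild.yaml: no steps defined")

for i, step in enumerate(steps):
    if "name" not in step:  # every Cloud Build step needs a builder image
        sys.exit(f"step {i}: missing 'name' (builder image)")

print(f"OK: {len(steps)} build step(s)")
```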

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

NVIDIA is continuously reinventing itself, with the invention of the GPU sparking the growth of the PC gaming market, redefining modern computer graphics, and revolutionizing parallel computing. In today's world, research in artificial intelligence is thriving globally, demanding highly scalable and massively parallel computation horsepower where NVIDIA GPUs excel. As a learning machine, NVIDIA constantly evolves by embracing new opportunities that are challenging, unique, and impactful to the world. Our mission is to amplify human creativity and intelligence. Join our diverse team and discover how you can make a lasting impact on the world!

We are seeking a Senior Software Engineer to contribute to enhancing our HPC infrastructure. You will collaborate with a team of passionate engineers dedicated to developing and managing sophisticated infrastructure for business-critical services and AI applications. The ideal candidate will possess expertise in software development, designing reliable distributed systems, and implementing long-term maintenance strategies.

**Responsibilities:**
- Design highly available and scalable systems for our HPC clusters
- Explore new technologies to adapt to the evolving landscape
- Enhance infrastructure provisioning and management through automation
- Support a globally distributed, multi-cloud hybrid environment (AWS, GCP, and on-prem)
- Foster cross-functional relationships and partnerships across business units
- Ensure operational excellence, high uptime, and Quality of Service (QoS) for users
- Participate in the team's on-call rotation and respond to service incidents

**Requirements:**
- 5+ years of experience in designing and delivering large engineering projects
- Proficiency in at least two of the following programming languages: Golang, Java, C/C++, Scala, Python, Elixir
- Understanding of scalability challenges and server-side code performance
- Experience with the full software development lifecycle and cloud platforms (GCP, AWS, or Azure)
- Familiarity with modern CI/CD techniques, GitOps, and Infrastructure as Code (IaC)
- Strong problem-solving skills, work ethic, and attention to detail
- Bachelor's degree in Computer Science or a related technical field (or equivalent experience)
- Excellent communication and collaboration skills

**Preferred Qualifications:**
- Previous experience in developing solutions for HPC clusters using Slurm or Kubernetes
- Strong knowledge of the Linux operating system and TCP/IP fundamentals

Join us in our mission to innovate and elevate the capabilities of our HPC infrastructure. Be part of a dynamic team that is committed to pushing boundaries and achieving operational excellence. Make your mark at NVIDIA and contribute to shaping the future of technology. *(JR1983750)*
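As a flavor of the day-to-day reliability work on such clusters, a node-readiness sweep is a common first triage step. A minimal sketch, assuming the official kubernetes Python client and a kubeconfig with read access:

```python
# Readiness sweep across cluster nodes, in the spirit of the HPC operational
# duties above. Assumes the official kubernetes Python client and a kubeconfig
# with read access; prints any node whose Ready condition is not True.
from kubernetes import client, config

config.load_kube_config()  # inside a pod, use config.load_incluster_config()
v1 = client.CoreV1Api()

for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"),
        "Unknown",
    )
    if ready != "True":
        print(f"NOT READY: {node.metadata.name} (Ready={ready})")
```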

Posted 1 week ago

Apply

1.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Associate Manager - Data IntegrationOps, you will play a crucial role in supporting and managing data integration and operations programs within our data organization. Your responsibilities will involve maintaining and optimizing data integration workflows, ensuring data reliability, and supporting operational excellence. To succeed in this position, you will need a solid understanding of enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support.

Your primary duties will include assisting in the management of Data IntegrationOps programs, aligning them with business objectives, data governance standards, and enterprise data strategies. You will also be involved in monitoring and enhancing data integration platforms through real-time monitoring, automated alerting, and self-healing capabilities to improve uptime and system performance. Additionally, you will help develop and enforce data integration governance models, operational frameworks, and execution roadmaps to ensure smooth data delivery across the organization.

Collaboration with cross-functional teams will be essential to optimize data movement across cloud and on-premises platforms, ensuring data availability, accuracy, and security. You will also contribute to promoting a data-first culture by aligning with PepsiCo's Data & Analytics program and supporting global data engineering efforts across sectors. Continuous improvement initiatives will be part of your responsibilities to enhance the reliability, scalability, and efficiency of data integration processes.

Furthermore, you will be involved in supporting data pipelines using ETL/ELT tools such as Informatica IICS, PowerCenter, DDH, SAP BW, and Azure Data Factory under the guidance of senior team members. Developing API-driven data integration solutions using REST APIs and Kafka, deploying and managing cloud-based data platforms like Azure Data Services, AWS Redshift, and Snowflake, and participating in implementing DevOps practices using tools like Terraform, GitOps, Kubernetes, and Jenkins will also be part of your role.

Your qualifications should include at least 9 years of technology work experience in a large-scale, global organization, preferably in the CPG (Consumer Packaged Goods) industry. You should also have 4+ years of experience in Data Integration, Data Operations, and Analytics, as well as experience working in cross-functional IT organizations. Leadership/management experience supporting technical teams and hands-on experience in monitoring and supporting SAP BW processes are also required qualifications for this role.

In summary, as an Associate Manager - Data IntegrationOps, you will be responsible for supporting and managing data integration and operations programs, collaborating with cross-functional teams, and ensuring the efficiency and reliability of data integration processes. Your expertise in enterprise data integration, ETL/ELT automation, cloud-based platforms, and operational support will be key to your success in this role.
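To ground the API-driven, Kafka-based integration mentioned above, publishing a change event is the basic building block. A minimal sketch, assuming the kafka-python package; the broker address and topic name are hypothetical:

```python
# Sketch of the Kafka-based integration mentioned above: publish a JSON change
# event to a (hypothetical) topic. Assumes the kafka-python package and a
# broker reachable at the address below.
import json

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",                       # hypothetical broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

event = {"entity": "material", "id": "M-1001", "op": "upsert"}  # illustrative payload
producer.send("integration.events", event)  # hypothetical topic name
producer.flush()  # block until the broker acknowledges the message
```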

Posted 1 week ago

Apply

0.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

Location: Chennai, Tamil Nadu, India
Job ID: R-232129
Date posted: 22/07/2025

Job Title: Senior Consultant - DevOps
Career Level: D2

Introduction to role:
Are you ready to make a difference in the world of scientific research? As a Senior Consultant - DevOps Engineer within the ELN Product Team, you'll be at the forefront of developing, validating, integrating, and maintaining Electronic Laboratory Notebooks (ELNs). Your mission will be to design and implement seamless integrations between various applications and the ELN, ensuring efficient data flow and communication. Join us in transforming our ability to develop life-changing medicines!

Accountabilities:
In this pivotal role, you'll ensure the smooth operation and optimization of ELN systems within our scientific research environment.
• Collaborate with scientific teams to enhance ELN configurations, fostering collaboration and data sharing.
• Set up monitoring tools for proactive issue resolution, minimizing disruptions in research workflows.
• Implement ELN-specific security practices to protect sensitive data and ensure regulatory compliance.
• Work closely with researchers to understand ELN requirements and optimize integrations with lab tools.
• Create and maintain comprehensive documentation for ELN configurations and integrations.
• Embrace a pragmatic, hands-on mentality while identifying knowledge gaps and raising them.

Essential Skills/Experience:
• Experience with process tools like Git, JIRA, and Confluence, and with CI/CD tools.
• Experience with Kubernetes, GitOps, infrastructure as code, monitoring/observability concepts, and related tools.
• Experience building applications incorporating cloud (AWS), APIs, microservices, containerisation, and serverless architectures.
• Experience delivering and supporting software in a DevOps environment.
• Exposure to analytics tools like Power BI (Business Intelligence).
• Excellent problem solving and adaptability.
• Willing to work in a cross-cultural environment across multiple time zones.
• Ability to work effectively independently or as part of a team to achieve objectives.
• Eager to learn and develop new tech skills, as required.
• Good written and verbal skills, fluent English.
• Advanced experience with ELN platforms and their technical integration into biopharmaceutical laboratory workflows.
• Expertise in handling scientific data formats, laboratory instrument integrations, and industry-specific technical compliance standards.
• Familiarity with advanced GxP regulations and practices.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace, and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

At AstraZeneca, we leverage technology to impact patients and ultimately save lives. Our global organization is driven by purpose, pushing the boundaries of science to discover and develop life-changing medicines. We take pride in working close to the cause, unlocking potential to make a massive difference in the world. With cutting-edge science combined with leading digital technology platforms, we empower our business to perform at its peak. Ready to join us on this exciting journey? Apply now and be part of a team that dares to innovate!

Date Posted: 23-Jul-2025
Closing Date:

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Junior Engineer (Enterprise Automation and Orchestration) at Rainfall Inc., you will be part of a dynamic team located in Pune with the flexibility to work from home. You will be responsible for automating infrastructure and application deployment using various tools and platforms, with a focus on TypeScript, JavaScript, Bash, and Docker containerization. Your role will require a deep understanding of DevOps practices, AWS, infrastructure as code, cloud services, and Docker container creation pipelines.

Collaboration with the development team, platform support team, and operational teams is essential to ensure smooth deployment and operation of applications. Debugging and troubleshooting issues reported by internal teams or end users will also be part of your responsibilities. Additionally, you must possess excellent troubleshooting and problem-solving skills, proficiency with GitHub for version control, experience with microservices architecture, and strong documentation skills.

A bachelor's degree in Engineering or a similar field is required, along with 2+ years of hands-on experience with automation and Infrastructure as Code tools. Your ability to design, implement, and deploy GitHub/N8N workflows will be crucial in this role. If you are a talented and experienced engineer with a passion for automation and orchestration, we encourage you to apply for this exciting opportunity at Rainfall Inc.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

Candescent is the largest non-core digital banking provider, bringing together transformative technologies that power and connect account opening, digital banking, and branch solutions for banks and credit unions of all sizes on any core. Candescent solutions are trusted by banks and credit unions of all sizes and power the top three U.S. mobile banking apps. The company offers an extensive portfolio of industry-leading products and services with an ecosystem of out-of-the-box and integrated partner solutions. With an API-first architecture and developer tools, financial institutions can optimize and expand their capabilities by seamlessly integrating custom-built or third-party solutions. Candescent's connected in-person, remote, and digital experiences reinvent customer service across all channels. Financial institutions using Candescent's solutions have self-service configuration and marketing tools to control branding, targeted messaging, and user experience. Data-driven analytics and reporting tools provide valuable insights to drive growth and profitability. Clients receive expert, end-to-end support for conversions, implementations, custom development, and customer care.

Candescent is looking for a SW DevOps Engineer II - Cloud Platform with 4-6 years of experience to join their team in Bangalore (Ecospace). As a senior engineer in the organization's cloud, you will play a vital role in shaping the future of customer interactions with money. The primary focus of the Cloud Engineering team in the digital banking domain is on enhancing the reliability and performance of the Digital First banking platform.

As a Site Reliability Engineer (SRE) on the Cloud Platform team, you will implement and enforce robust standards and practices to ensure the security, availability, and reliability of services. You will provide guidance, tooling, and best practices to development teams, collaborate with Product Development and Production Operations, and deploy and support Digital Banking SaaS offerings in the cloud. Responsibilities include providing technical leadership, insight, and guidance; building and supporting the Cloud Platform in GCP; maintaining CI/CD pipelines using GitOps principles; contributing to operational automation and self-service frameworks; driving continuous adoption and improvement of SRE methodology; managing projects; and collaborating with various teams to deliver a world-class cloud platform.

The required skills/experience for this role include 4+ years of GCP cloud experience; expertise in Kubernetes, cloud networking, and GitOps processes; experience working with DevOps/SRE and Agile methodologies; IaC technologies (especially Terraform); experience with cloud migrations; and a degree in Computer Science or a related field. Desired skill sets include Docker, Kubernetes, Google Cloud Platform, cloud migrations, IaC (Terraform), Python scripting, CI/CD (GitHub Actions), version control (Git), cloud networking, GitOps (ArgoCD), Nginx, and experience with Prometheus/Dynatrace or other logging tools.

Candidates must have high initiative, be clear communicators, and must pass screening criteria applicable to the job. Candescent only accepts resumes from agencies on the preferred supplier list and is not responsible for unsolicited resumes forwarded to its applicant tracking system or employees.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Role: Platform Engineer
Experience: 5-7 years
Location: Pune

Key Responsibilities
• Design, implement, and manage scalable CI/CD pipelines using Jenkins
• Develop and manage infrastructure as code with Terraform
• Work extensively with Google Cloud Platform (GCP) services such as GKE, GCE, BigQuery, and Pub/Sub
• Manage containerization using Docker and deployment orchestration with Helm
• Monitor, troubleshoot, and optimize production systems for performance and reliability
• Drive GitOps methodologies and implement cloud security best practices
• Collaborate cross-functionally to ensure infrastructure efficiency and scalability

Key Requirements
• 5+ years of hands-on experience in DevOps, cloud platforms, and CI/CD pipelines
• Strong expertise in GCP, Jenkins, Terraform, Docker, and Kubernetes (GKE)
• Experience with monitoring tools and infrastructure observability
• Knowledge of cloud security practices and GitOps workflows
• Ability to work independently and manage flexible work shifts
• Strong analytical and troubleshooting skills

(ref:hirist.tech)
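As a small illustration of the Pub/Sub work listed above, publishing a message is the core primitive. A minimal sketch, assuming google-cloud-pubsub and application-default credentials; the project and topic IDs are hypothetical:

```python
# Minimal Pub/Sub publish, illustrating one of the GCP services named above.
# Assumes google-cloud-pubsub is installed and application-default credentials
# are configured; project and topic IDs are hypothetical.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("example-project", "deploy-events")

# Message data must be bytes; keyword arguments become string attributes.
future = publisher.publish(
    topic_path, b"release web-api 1.4.2", environment="staging"
)
print("published message id:", future.result())  # blocks until acknowledged
```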

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Role: DevOps Engineer
Experience: 5-10 years
Location: Gurgaon

We are looking for a highly skilled and experienced Platform & DevOps Engineer to join our team. The ideal candidate will be responsible for managing and supporting DevOps tools, ensuring smooth CI/CD pipeline implementation, and maintaining infrastructure on Google Cloud Platform (GCP). This role requires expertise in Jenkins, Terraform, Docker, Kubernetes (GKE), and security best practices. Experience in the banking industry is a plus. The candidate should be able to work independently, troubleshoot production issues efficiently, and be flexible with work shifts.

Key Responsibilities
• Design, implement, and maintain CI/CD pipelines using Jenkins and other DevOps tools.
• Manage and support Terraform-based infrastructure as code (IaC) for scalable deployments.
• Work with GCP products such as GCE, GKE, BigQuery, Pub/Sub, Monitoring, and Alerting.
• Collaborate with development and operations teams to enhance integration and deployment processes.
• Build and manage container images using Packer and Docker, ensuring efficient image rotation strategies.
• Monitor systems, respond to alerts, and troubleshoot production issues promptly.
• Ensure infrastructure security, compliance, and best practices are maintained.
• Provide technical guidance to development teams on DevOps tools and processes.
• Implement and support GitOps best practices, including repository configurations like code owners and webhooks.
• Document processes, configurations, and best practices for operational efficiency.
• Stay updated with the latest DevOps technologies and trends, continuously improving existing practices.

Required Skills & Qualifications
• Proficiency in scripting and automation using Bash, Python, or Groovy.
• Hands-on experience with Jenkins, Terraform, and GCP infrastructure management.
• Strong knowledge of containerization (Docker) and orchestration tools like Kubernetes (GKE) and Helm.
• Familiarity with disaster recovery, backups, and troubleshooting production issues.
• Solid understanding of infrastructure security, compliance, and monitoring best practices.
• Experience with image creation and management using Packer and Docker.
• Prior exposure to banking industry processes and regulations is an advantage.
• Excellent problem-solving, communication, and teamwork skills.
• Ability to work independently and handle multiple priorities in a fast-paced environment.

(ref:hirist.tech)
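For the image-pipeline duties above, a build step usually validates a Packer template before building it. A minimal sketch, assuming packer is on PATH; the template name is hypothetical, and a real HCL2 pipeline may also need a packer init step for plugins:

```python
# Image-pipeline sketch for the Packer/Docker duties above: validate a
# template, then build it. Assumes packer is on PATH; the template file name
# is hypothetical (HCL2 templates conventionally use the .pkr.hcl suffix).
import subprocess

TEMPLATE = "base-image.pkr.hcl"  # hypothetical template

# Note: an HCL2 template with external plugins may need `packer init` first.
for args in (["validate", TEMPLATE], ["build", "-color=false", TEMPLATE]):
    cmd = ["packer", *args]
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)
```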

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

We are searching for a highly skilled Python Automation Engineer with expertise in 5G networks and IMS (IP Multimedia Subsystem) testing to join our dynamic team. The ideal candidate will have practical experience in automating test scripts, telecom protocol testing, and ensuring the quality, performance, and resiliency of 5G/4G network functions. As a Python Automation Engineer, you will play a significant role in the validation of IMS architecture, automation of test cases, and performance verification of network functions.

Your responsibilities will include developing and maintaining automated test scripts utilizing Python, TCL, and BDD to validate IMS functionalities and 5G/4G network features. You will design, execute, and enhance test cases for telecom protocols such as SIP, Diameter, MAP, SBI, RTP, and DNS. Additionally, you will perform integration, performance, and resiliency testing of IMS applications and 3GPP nodes like HLR, HSS, UDM, SMSC, PGW, SMF, and MME. Building and maintaining automation frameworks to streamline test execution and reporting, conducting manual and automated testing for IMS-based services such as VoLTE, VoWiFi, and RCS, and simulating network traffic and testing performance under various conditions using tools like IxLoad, Landslide, and EXFO are crucial aspects of the role.

Furthermore, you will be responsible for debugging and troubleshooting issues at both the software and network levels using logs, stats, Grafana, and protocol analyzers like Wireshark. Your collaboration with software engineers, network engineers, and DevOps teams to resolve issues and enhance system performance is essential. Integrating automated tests into CI/CD pipelines using Kubernetes and Flux/GitOps, testing system resiliency, geo-redundancy, and hardware failure scenarios, and conducting chaos testing will also be part of your duties.

To excel in this role, you must possess strong programming skills in Python, along with experience in TCL and BDD. Also crucial are a solid understanding of IMS and its components (HSS, PCRF, CSCF), practical experience with 5G/4G networks and 3GPP nodes, deep knowledge of telecom protocols (SIP, Diameter, MAP, SBI), experience with automation tools and frameworks like Robot Framework, Selenium, or pytest, proficiency with network simulators and analyzers, hands-on experience with Kubernetes, GitOps (Flux), and CI/CD pipelines, strong networking knowledge, experience in performance/load/stress testing and debugging at scale, familiarity with monitoring and bug-tracking tools, an understanding of telecom standards and IMS-based services, and excellent communication skills with the ability to work in a collaborative environment.

In return, we offer a competitive salary and benefits package, a culture focused on talent development, opportunities to work with cutting-edge technologies, employee engagement initiatives, annual health check-ups, and insurance coverage for self, spouse, children, and parents. Persistent Ltd. is committed to fostering diversity and inclusion in the workplace, providing hybrid work options and flexible working hours, and creating an inclusive environment where all employees can thrive. Join us at Persistent and unleash your full potential. Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.
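In the spirit of the pytest-based automation this role describes, here is a toy example: a hypothetical parser for SIP-style status lines with parametrized checks. Real suites would of course drive actual network functions and traffic tools:

```python
# Toy pytest sketch in the spirit of the protocol-validation work above: a
# (hypothetical) parser for SIP-style status lines plus parametrized checks.
import pytest

def parse_status_line(line: str) -> int:
    """Return the status code from a SIP status line, e.g. 'SIP/2.0 200 OK'."""
    version, code, _reason = line.split(" ", 2)
    if version != "SIP/2.0":
        raise ValueError(f"unexpected version: {version}")
    return int(code)

@pytest.mark.parametrize(
    "line,expected",
    [
        ("SIP/2.0 200 OK", 200),
        ("SIP/2.0 180 Ringing", 180),
        ("SIP/2.0 486 Busy Here", 486),
    ],
)
def test_parse_status_line(line, expected):
    assert parse_status_line(line) == expected

def test_rejects_non_sip():
    with pytest.raises(ValueError):
        parse_status_line("HTTP/1.1 200 OK")
```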

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Zenoti provides an all-in-one, cloud-based software solution for the beauty and wellness industry. Our solution allows users to seamlessly manage every aspect of the business in a comprehensive mobile solution: online appointment bookings, POS, CRM, employee management, inventory management, built-in marketing programs, and more. Zenoti helps clients streamline their systems and reduce costs, while simultaneously improving customer retention and spending. Our platform is engineered for reliability and scale and harnesses the power of enterprise-level technology for businesses of all sizes.

Zenoti powers more than 30,000 salons, spas, medspas, and fitness studios in over 50 countries. This includes a vast portfolio of global brands, such as European Wax Center, Hand & Stone, Massage Heights, Rush Hair & Beauty, Sono Bello, Profile by Sanford, Hair Cuttery, CorePower Yoga, and TONI&GUY. Our recent accomplishments include surpassing a $1 billion unicorn valuation, being named Next Tech Titan by GeekWire, raising an $80 million investment from TPG, and ranking as the 316th fastest-growing company in North America on Deloitte's 2020 Technology Fast 500™. We are also proud to be recognized as Great Place to Work Certified™ for 2021-2022, which reaffirms our commitment to empowering people to feel good and find their greatness. To learn more about Zenoti visit: https://www.zenoti.com

What will I be doing?
• Lead by example as a hands-on DevOps Manager, actively participating in technical implementation while setting strategic direction
• Establish and maintain DevOps best practices, standards, and frameworks across the organization with practical, implementable solutions
• Architect and personally contribute to Infrastructure as Code (IaC) solutions using Terraform for multi-tenant environments
• Perform regular code reviews and pair programming sessions to elevate team capabilities and ensure quality
• Troubleshoot complex production issues alongside the team, providing technical guidance and mentorship in real time
• Create and maintain security-first deployment strategies and disaster recovery procedures, including regular testing and validation
• Drive the adoption of modern DevOps practices by implementing working prototypes and proofs of concept
• Optimize cloud costs through hands-on analysis and implementation of resource utilization strategies
• Serve as both technical mentor and DevOps evangelist, translating industry best practices into actionable implementations

What skills do I need?
• 10+ years of overall experience in software engineering, with 5+ years of hands-on DevOps experience and 2+ years leading technical teams
• Demonstrated technical proficiency in Terraform, with examples of complex infrastructure implementations
• Proven experience building and managing infrastructure in Azure or AWS cloud platforms (preferably Azure, with an understanding of AWS)
• Advanced scripting abilities in Python and PowerShell, with a portfolio of automation solutions
• Practical experience implementing and maintaining CI/CD pipelines in production environments, ideally Azure DevOps and Jenkins
• Hands-on expertise with Kubernetes cluster management, including troubleshooting and optimization
• Experience mentoring junior engineers through direct technical collaboration and knowledge sharing
• Ability to balance strategic thinking with tactical execution in fast-paced environments

Nice to Have
• Active Azure or AWS certifications demonstrating current technical knowledge
• Experience implementing cost optimization strategies that delivered measurable savings
• Practical implementation of GitOps workflows in production environments
• Track record of improving team performance through technical coaching and mentorship
• Experience building and deploying microservices architectures at scale
• Experience with TeamCity

Zenoti provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws. This policy applies to all terms and conditions of employment, including recruiting, hiring, placement, promotion, termination, layoff, recall, transfer, leaves of absence, compensation, and training.

Posted 1 week ago

Apply

35.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Why Choose Bottomline?
Are you ready to transform the way businesses pay and get paid? Bottomline is a global leader in business payments and cash management, with over 35 years of experience and moving more than $16 trillion in payments annually. We're looking for passionate individuals to join our team and help drive impactful results for our customers. If you're dedicated to delighting customers and promoting growth and innovation - we want you on our team!

Who Are We?
Bottomline is on a mission to be the world's leading business payments company, aligning our team to the common purpose of transforming the way businesses pay and get paid. It is a journey that goes around the world, serving financial institutions and companies in more than 90 countries. Our offices across APAC are conveniently positioned to optimize our global reach. Sydney, Singapore, Bangalore, and Mumbai have state-of-the-art flexible workspaces, which truly reflect our energetic, innovative culture and mission to push the boundaries of the business payments space.

Culture and Values
We are one global team, who work with and for each other in a drive to delight customers through excellent execution, which fuels how we create and grow sustained business value for our customers, our team, and all who partner with us. Our culture encourages people to be brave and curious, to drive to closure, and to ensure our principles are lived out daily. We excel at Bottomline because we are positive and passionate about building.

Role
We are looking for an awesome DevOps Engineer! As part of the leading banking enterprise, you will work on major projects that are strategic for our customers, which will allow you to develop strong technical and professional skills. We are also undertaking a complete transformation and migration to new GitLab CI/CD pipelines, with deployment using Helm charts on Kubernetes.

How you will contribute:
• Implement orchestration solutions using tools like K8s, ArgoCD, and Helm charts.
• Create and automate CI/CD in GitLab for new applications, and Jenkins pipelines to automate existing applications.
• Deploy and maintain infrastructure automation and configuration management tools like Terraform.
• Support, manage, improve, and upgrade a continuous deployment environment.
• Implement and improve monitoring and alerting solutions.

What will make you successful:
• Degree in Computer Science
• 2-4 years of previous DevOps experience
• Experience with Docker and Kubernetes (Helm)
• Experience with GitOps methodology and tools (Flux or ArgoCD)
• Experience with configuration management and infrastructure-as-code platforms (Puppet/Ansible/Terraform)
• Experience implementing and maintaining CI/CD pipelines (Jenkins or GitLab)
• Experience with the Elastic Stack or OpenSearch Stack (ELK)
• Experience with Prometheus + Grafana
• Programming skills and knowledge of at least one language (Bash, Python, Java, or similar)
• Experience with Linux
• Proficiency with Git and Git workflows
• Ability to work effectively within a team and with minimal supervision
• Excellent communication (written and oral), documentation, and organizational skills
• Background knowledge in databases (Oracle and PostgreSQL)
• Experience with Kafka, Spring Boot, or AWS is an advantage

Bottomline is at the forefront of digital transformation. We are a growing global market leader uniquely equipped to address the changing needs of how businesses pay and get paid. Our culture of working with and for each other enables us to delight our customers. We empower our teams to think like owners driving customer delight, helping them grow their business and win in their markets. We welcome talent at all career stages and are dedicated to understanding and supporting additional needs. We're proud to be an equal opportunity employer, committed to creating an inclusive and open environment for everyone.
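To make the GitLab-to-ArgoCD migration concrete, GitOps teams often generate Argo CD Application manifests programmatically. A minimal sketch, assuming PyYAML; the application name, repo URL, chart path, and destination values are all hypothetical:

```python
# Sketch of generating an Argo CD Application manifest with Python, matching
# the GitLab CI / Helm / ArgoCD migration described above. Assumes PyYAML is
# installed; repo URL, chart path, and cluster values are hypothetical.
import yaml

app = {
    "apiVersion": "argoproj.io/v1alpha1",
    "kind": "Application",
    "metadata": {"name": "payments-api", "namespace": "argocd"},
    "spec": {
        "project": "default",
        "source": {
            "repoURL": "https://gitlab.example.com/payments/deploy.git",
            "path": "charts/payments-api",  # Helm chart inside the repo
            "targetRevision": "main",
        },
        "destination": {
            "server": "https://kubernetes.default.svc",
            "namespace": "payments",
        },
        # GitOps: Argo CD keeps the cluster in sync with what's in the repo.
        "syncPolicy": {"automated": {"prune": True, "selfHeal": True}},
    },
}

print(yaml.safe_dump(app, sort_keys=False))
```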

Posted 1 week ago

Apply

0 years

0 Lacs

Delhi, India

On-site

About The Role
As a Data Engineer in the Edge of Technology Center, you will play a critical role in designing and implementing scalable data infrastructure to power advanced analytics, AI/ML, and business intelligence. This position demands a hands-on technologist who can architect reliable pipelines, manage real-time event streams, and ensure smooth data operations across cloud-native environments. You will work closely with cross-functional teams to enable data-driven decision-making and innovation across the organization.

Key Responsibilities
• Design, implement, and maintain robust ETL/ELT pipelines using tools like Argo Workflows or Apache Airflow.
• Manage and execute database schema changes with Alembic or Liquibase, ensuring data consistency.
• Configure and optimize distributed query engines like Trino and AWS Athena for analytics.
• Deploy and manage containerized workloads on AWS EKS or GCP GKE using Docker, Helmfile, and Argo CD.
• Build data lakes/warehouses on AWS S3 and implement performant storage using Apache Iceberg.
• Use Terraform and other IaC tools to automate cloud infrastructure provisioning securely.
• Develop CI/CD pipelines with GitHub Actions to support rapid and reliable deployments.
• Architect and maintain Kafka-based real-time event-driven systems using Apicurio and AVRO.
• Collaborate with product, analytics, and engineering teams to define and deliver data solutions.
• Monitor and troubleshoot data systems for performance and reliability issues using observability tools (e.g., Prometheus, Grafana).
• Document data flows and maintain technical documentation to support scalability and knowledge sharing.

Key Deliverables
• Fully operational ETL/ELT pipelines supporting high-volume, low-latency data processing.
• Zero-downtime schema migrations with consistent performance across environments.
• Distributed query engines tuned for large-scale analytics with minimal response time.
• Reliable containerized deployments in Kubernetes using GitOps methodologies.
• Kafka-based real-time data ingestion pipelines with consistent schema validation.
• Infrastructure deployed and maintained as code using Terraform and version control.
• Automated CI/CD processes ensuring fast, high-quality code releases.
• Cross-functional project delivery aligned with business requirements.
• Well-maintained monitoring dashboards and alerting for proactive issue resolution.
• Internal documentation and runbooks for operational continuity and scalability.

Qualifications
Bachelor's or master's degree in Computer Science, Data Science, Engineering, or a related field from a recognized institution.

Technical Skills
• Orchestration tools: Argo Workflows, Apache Airflow
• Database migration: Alembic, Liquibase
• SQL engines: Trino, AWS Athena
• Containers & orchestration: Docker, AWS EKS, GCP GKE
• Data storage: AWS S3, Apache Iceberg
• Relational databases: Postgres, MySQL, Aurora
• Infrastructure automation: Terraform (or equivalent IaC tools)
• CI/CD: GitHub Actions or similar
• GitOps tools: Argo CD, Helmfile
• Event streaming: Kafka, Apicurio, AVRO
• Languages: Python, Bash
• Monitoring: Prometheus, Grafana (preferred)

Soft Skills
• Strong analytical and problem-solving capabilities in complex technical environments.
• Excellent written and verbal communication skills to interact with both technical and non-technical stakeholders.
• Self-motivated, detail-oriented, and proactive in identifying improvement opportunities.
• Team player with a collaborative approach and eagerness to mentor junior team members.
• High adaptability to new technologies and dynamic business needs.
• Effective project management and time prioritization.
• Strong documentation skills for maintaining system clarity.
• Ability to translate business problems into data solutions efficiently.

Benefits
• Competitive salary and benefits package in a globally operating company.
• Opportunities for professional growth and involvement in diverse projects.
• Dynamic and collaborative work environment.

Why You'll Love Working With Us
Encardio offers a thriving environment where innovation and collaboration are essential. You'll be part of a diverse team shaping the future of infrastructure globally. Your work will directly contribute to some of the world's most ambitious and ground-breaking engineering projects. Encardio is an equal-opportunity employer committed to diversity and inclusion.

How To Apply
Please submit your CV and cover letter outlining your suitability for the role at humanresources@encardio.com.
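As an illustration of the Airflow orchestration named above, a skeletal DAG with three dependent tasks might look like the following. A minimal sketch, assuming Apache Airflow 2.4+; the task bodies are placeholders for real extract/transform/load logic:

```python
# Minimal Airflow DAG sketch for the ETL/ELT orchestration described above.
# Assumes Apache Airflow 2.4+ (for the `schedule` parameter); task bodies are
# placeholders for the real extract/transform/load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw events from the source system")

def transform():
    print("validate schema and normalise records")

def load():
    print("write curated records to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load  # linear dependency chain
```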

Posted 1 week ago

Apply

8.0 years

5 - 10 Lacs

Bengaluru

On-site

We help the world run better

At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

What you'll do:

We are looking for a Senior Software Engineer – Java to join and strengthen the App2App Integration team within SAP Business Data Cloud. This role is designed to accelerate the integration of SAP's application ecosystem with its unified data fabric, enabling low-latency, secure, and scalable data exchange. You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive the evolution of SAP's App2App integration capabilities, with hands-on involvement in Java, ETL and distributed data processing, Apache Kafka, DevOps, SAP BTP, and Hyperscaler platforms.

Responsibilities:

- Design and develop App2App integration components and services using Java, RESTful APIs, and messaging frameworks such as Apache Kafka (a sketch of such an event flow follows this posting).
- Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows.
- Integrate data engineering workflows with tools such as Databricks, Spark, or other cloud-based processing platforms (experience with Databricks is a strong advantage).
- Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation, and establishing best practices.
- Collaborate with cross-functional teams to enable secure, reliable, and performant communication across SAP applications.
- Build and maintain distributed data processing pipelines, supporting large-scale data ingestion, transformation, and routing.
- Work closely with DevOps to define and improve CI/CD pipelines, monitoring, and deployment strategies using modern GitOps practices.
- Guide secure, cloud-native deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP).
- Collaborate with SAP's broader Data Platform efforts, including Datasphere, SAP Analytics Cloud, and BDC runtime architecture.

What you bring:

- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns.
- Hands-on experience building ETL pipelines and working with large-scale data processing frameworks.
- Experience or experimentation with tools such as Databricks, Apache Spark, or other cloud-native data platforms is highly advantageous.
- Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud, or HANA is highly desirable.
- Experience designing CI/CD pipelines and working with containerization (Docker), Kubernetes, and DevOps best practices.
- Working knowledge of Hyperscaler environments such as AWS, Azure, or GCP.
- Passion for clean code, automated testing, performance tuning, and continuous improvement.
- Strong communication skills and the ability to collaborate with global teams across time zones.

Meet your Team:

SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP's Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise. Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive, and high-impact environment, enabling innovation and integration across SAP's data platforms. #DevT3

Bring out your best

SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion

SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com

For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.

EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.

Requisition ID: 426958 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
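
To make the first responsibility concrete, here is a generic sketch of publishing an App2App-style integration event to Kafka. This is not SAP's App2App framework or any SAP API; the broker address, topic name, and event shape are hypothetical, and Python (confluent-kafka) is used for consistency with the other examples on this page even though the role itself is Java-focused.

```python
# Generic Kafka event-publishing sketch; broker, topic, and event
# shape are hypothetical, not part of any SAP App2App API.
import json

from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # assumed broker


def on_delivery(err, msg):
    # Surface delivery results so failed publishes are visible to monitoring.
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}] @ {msg.offset()}")


event = {"source": "app-a", "type": "order.created", "payload": {"order_id": "42"}}
producer.produce(
    "app2app.events",                  # hypothetical topic name
    key=event["payload"]["order_id"],  # key by entity for per-entity ordering
    value=json.dumps(event).encode("utf-8"),
    callback=on_delivery,
)
producer.flush()  # block until the broker acknowledges the event
```

Keying messages by entity id is what gives the "real-time, event-driven data flows" described above a per-entity ordering guarantee, since Kafka preserves order only within a partition.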

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru

On-site

We help the world run better

At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from.

What you'll do

We are seeking a Software Engineer – Java for App2App Integration to join the Business Data Cloud Foundation Services team. This role focuses on building robust, scalable integration mechanisms between SAP's business applications and the data fabric, enabling seamless data movement and real-time interoperability across systems. You'll contribute to the end-to-end development of services and pipelines supporting distributed data processing, data transformations, and intelligent automation. This is a unique opportunity to contribute to SAP's evolving data platform initiatives with hands-on experience in Java, Python, Kafka, DevOps, BTP, and Hyperscaler ecosystems.

Responsibilities:

- Develop App2App integration components and services using Java, RESTful APIs, and messaging frameworks such as Apache Kafka (a consumer-side sketch follows this posting).
- Collaborate with cross-functional teams to enable secure, reliable, and performant communication across SAP applications.
- Build and maintain distributed data processing pipelines, supporting large-scale data ingestion, transformation, and routing.
- Work closely with DevOps to define and improve CI/CD pipelines, monitoring, and deployment strategies using modern GitOps practices.
- Contribute to the platform's reliability, scalability, and security, implementing automated testing, logging, and telemetry.
- Support cloud-native deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP).
- Engage in SAP's broader Data Platform efforts, including Datasphere, SAP Analytics Cloud, and BDC runtime architecture.

What you bring

- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- 5+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns.
- Proven experience with Apache Kafka or similar messaging systems in distributed environments.
- Experience with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud, or HANA is highly desirable.
- Familiarity with CI/CD pipelines, containerization (Docker), Kubernetes, and DevOps best practices.
- Working knowledge of Hyperscaler environments such as AWS, Azure, or GCP.
- Passion for clean code, automated testing, performance tuning, and continuous improvement.
- Strong communication skills and the ability to collaborate with global teams across time zones.

Meet your Team

SAP is the market leader in enterprise application software, helping companies of all sizes and industries run at their best. As part of the Business Data Cloud (BDC) organization, the Foundation Services team is pivotal to SAP's Data & AI strategy, delivering next-generation data experiences that power intelligence across the enterprise. Located in Bangalore, India, our team drives cutting-edge engineering efforts in a collaborative, inclusive, and high-impact environment, enabling innovation and integration across SAP's data platforms.

Bring out your best

SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best.

We win with inclusion

SAP's culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com

For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training.

EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor.

Requisition ID: 426963 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
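
As a companion to the producer sketch under the senior posting above, here is the consuming side of such an event flow: a Kafka consumer with manual offset commits, so an event is only marked done after it has been handled (at-least-once delivery). The broker, topic, group id, and handle() body are hypothetical illustrations, not part of any SAP API, and Python is again used for consistency across this page.

```python
# Generic at-least-once Kafka consumer sketch; all names are hypothetical.
import json

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # assumed broker
    "group.id": "app2app-consumer",         # hypothetical consumer group
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,            # commit only after successful handling
})
consumer.subscribe(["app2app.events"])      # hypothetical topic


def handle(event: dict) -> None:
    # Placeholder for the real routing/transformation logic.
    print(f"processing {event['type']} from {event['source']}")


try:
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            print(f"consumer error: {msg.error()}")
            continue
        handle(json.loads(msg.value()))
        consumer.commit(message=msg)  # at-least-once: commit after processing
finally:
    consumer.close()
```

Disabling auto-commit is the key design choice: if the process crashes mid-handle, the uncommitted event is redelivered on restart, so handlers should be idempotent.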

Posted 1 week ago

Apply

10.0 years

3 Lacs

Bengaluru

Remote

Company Description

Our Mission

At Palo Alto Networks® everything starts and ends with our mission: Being the cybersecurity partner of choice, protecting our digital way of life. Our vision is a world where each day is safer and more secure than the one before. We are a company built on the foundation of challenging and disrupting the way things are done, and we're looking for innovators who are as committed to shaping the future of cybersecurity as we are.

Who We Are

We take our mission of protecting the digital way of life seriously. We are relentless in protecting our customers, and we believe that the unique ideas of every member of our team contribute to our collective success. Our values were crowdsourced by employees and are brought to life through each of us every day – from disruptive innovation and collaboration, to execution. From showing up for each other with integrity to creating an environment where we all feel included.

As a member of our team, you will be shaping the future of cybersecurity. We work fast, value ongoing learning, and we respect each employee as a unique individual. Knowing we all have different needs, our development and personal wellbeing programs are designed to give you choice in how you are supported. This includes our FLEXBenefits wellbeing spending account with over 1,000 eligible items selected by employees, our mental and financial health resources, and our personalized learning opportunities – just to name a few!

At Palo Alto Networks, we believe in the power of collaboration and value in-person interactions. This is why our employees generally work full time from our office, with flexibility offered where needed. This setup fosters casual conversations, problem-solving, and trusted relationships. Our goal is to create an environment where we all win with precision.

Job Description

Your Career

We're looking for a Principal Engineer to join the Cortex SRE Production team at our India Development Center (IDC). This is a high-impact role where you'll collaborate with global SRE and DevOps teams to enhance visibility, reliability, and automation across our production infrastructure via tooling and platform development.

Your Impact

As a Principal Engineer in the Global SRE Automation group, you will shape the future of infrastructure reliability, scale, and developer productivity. You will lead the design and development of cloud-native automation tools, streamline operational workflows, and embed resilience into every layer of the platform.

You will:

- Architect and build automation systems that support self-healing, observability, and service-level assurance (a minimal self-healing sketch follows this posting)
- Contribute to the developer experience and internal tooling ecosystem, driving reliability through code
- Influence the SRE strategy by introducing innovations in cloud-native backend services, Kubernetes automation, and platform engineering
- Partner with global teams to deliver reliable infrastructure, integrating AI models, event-driven systems, and data pipelines to unlock operational insights
- Set standards for code quality, system design, and operational excellence across the organization

Qualifications

Your Experience

- 10+ years of experience in Cloud Engineering, DevOps, or Infrastructure Software Development, with a strong focus on automation, reliability, and platform scalability
- Deep expertise in AWS and Google Cloud Platform (GCP), with a strong understanding of networking, compute, serverless, and cost-optimization services
- Proficiency in Python or Go, with a solid grasp of modern backend development frameworks (e.g., Flask, FastAPI, Gin) and cloud-native application design
- Hands-on experience building RESTful APIs, microservices, and cloud-native platforms supporting high availability and self-service
- Experience designing and integrating Generative AI and LLM-based pipelines, including Retrieval-Augmented Generation (RAG), into internal tooling and operational systems to enhance developer productivity and incident response
- Experience applying predictive analytics, anomaly detection, and MLOps to use cases such as cost forecasting, capacity planning, and proactive incident management
- Experience building and optimizing Cloud FinOps tooling to monitor usage patterns, reduce waste, and provide actionable insights into cloud spend
- Experience developing AI-driven automation agents (bots) for cloud operations, alert triage, knowledge retrieval, and ticket deflection
- Strong experience with:
  - Infrastructure-as-Code: Terraform, CDK
  - Kubernetes: cluster lifecycle management, Helm/Kustomize, GitOps (ArgoCD)
  - CI/CD pipelines, observability frameworks (Prometheus, Grafana, ELK), and SRE tooling for incident automation
- Proficiency in SQL and NoSQL databases, such as PostgreSQL and Elasticsearch
- Exposure to Kafka and event-driven architectures for real-time data streaming and integration
- Excellent problem-solving, debugging, and systems design skills
- Demonstrated leadership in cross-functional engineering teams, including mentoring, architectural guidance, and influencing long-term platform direction

Additional Information

The Team

To stay ahead of the curve, it's critical to know where the curve is, and how to anticipate the changes we're facing. For the fastest-growing cybersecurity company, the curve is the evolution of cyberattacks and access technology, and the products and services that directly address them. Our engineering team is at the core of our products – connected directly to the mission of preventing cyberattacks and enabling secure access to all on-prem and cloud applications. They are constantly innovating – challenging the way we, and the industry, think about Access and security. These engineers aren't shy about building products to solve problems no one has pursued before. They define the industry, instead of waiting for directions. We need individuals who feel comfortable in ambiguity, excited by the prospect of challenge, and empowered by the unknown risks facing our everyday lives that are only enabled by a secure digital environment.

Our engineering team is provided with an unrivaled chance to create the products and practices that will support our company growth over the next decade, defining the cybersecurity industry as we know it. If you see the potential of how incredible people and products can transform a business, this is the team for you. If the prospect of affecting tens of millions of people, enabling them to work remotely, securely, and easily in ways never done before, thrills you – you belong with us.

Our Commitment

We're problem solvers that take risks and challenge cybersecurity's status quo. It's simple: we can't accomplish our mission without diverse teams innovating, together. We are committed to providing reasonable accommodations for all qualified individuals with a disability. If you require assistance or accommodation due to a disability or special need, please contact us at accommodations@paloaltonetworks.com.

Palo Alto Networks is an equal opportunity employer. We celebrate diversity in our workplace, and all qualified applicants will receive consideration for employment without regard to age, ancestry, color, family or medical care leave, gender identity or expression, genetic information, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran status, race, religion, sex (including pregnancy), sexual orientation, or other legally protected characteristics. All your information will be kept confidential according to EEO guidelines.

Is this role eligible for Immigration Sponsorship? No. Please note that we will not sponsor applicants for work visas for this position.
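
As a flavor of the self-healing automation this role describes, here is a minimal sketch that scans a namespace for pods stuck in CrashLoopBackOff and deletes them so their controller reschedules fresh replicas. The namespace, restart threshold, and the policy itself are hypothetical illustrations, not Palo Alto Networks tooling; it assumes the official `kubernetes` Python client.

```python
# Minimal self-healing sketch, assuming the official kubernetes Python client.
# Namespace, threshold, and the reaping policy are hypothetical.
from kubernetes import client, config


def reap_crashlooping_pods(namespace: str = "production", restart_threshold: int = 5) -> None:
    config.load_kube_config()  # use config.load_incluster_config() when run in-cluster
    v1 = client.CoreV1Api()

    for pod in v1.list_namespaced_pod(namespace).items:
        for status in pod.status.container_statuses or []:
            waiting = status.state.waiting
            if (
                waiting
                and waiting.reason == "CrashLoopBackOff"
                and status.restart_count >= restart_threshold
            ):
                # Deleting lets the owning Deployment/StatefulSet reschedule a
                # fresh replica; a real system would also emit an audit event.
                print(f"reaping {pod.metadata.name} ({status.restart_count} restarts)")
                v1.delete_namespaced_pod(pod.metadata.name, namespace)
                break  # one delete per pod is enough


if __name__ == "__main__":
    reap_crashlooping_pods()
```

In production such a loop would typically run as an operator or CronJob, gate deletions behind rate limits, and feed its actions into the observability stack rather than stdout.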

Posted 1 week ago

Apply

15.0 years

6 - 8 Lacs

Bengaluru

On-site

Company Description

Bosch Global Software Technologies Private Limited is a 100% owned subsidiary of Robert Bosch GmbH, one of the world's leading global suppliers of technology and services, offering end-to-end Engineering, IT and Business Solutions. With over 28,200+ associates, it is the largest software development center of Bosch outside Germany, making it the Technology Powerhouse of Bosch in India with a global footprint and presence in the US, Europe and the Asia Pacific region.

Job Description

Job Summary

Bosch Research is seeking a highly accomplished and technically authoritative Software Expert in AI/ML Architecture to define, evolve, and lead the technical foundations of enterprise-grade, AI-driven systems. This is a technical leadership role without people management responsibilities, intended for professionals with deep expertise in software architecture, AI/ML systems, and large-scale engineering applications and their end-to-end deliveries. You will own the architecture and technical delivery of complex software solutions, ensuring they are robust, scalable, and capable of serving diverse business domains and datasets. The ideal candidate demonstrates mastery in cloud-native engineering, MLOps, Azure ML, and the integration of AI algorithms (computer vision, text, time series, ML, etc.), LLMs, Agentic AI, and other advanced AI capabilities into secure and high-performing software environments.

Roles & Responsibilities:

Technical Architecture and Solution Ownership
- Define, evolve, and drive software architecture for AI-centric platforms across industrial and enterprise use cases.
- Architect for scalability, security, availability, and multi-domain adaptability, accommodating diverse data modalities and system constraints.
- Embed non-functional requirements (NFRs) – latency, throughput, fault tolerance, observability, security, and maintainability – into all architectural designs.
- Incorporate LLM, Agentic AI, and foundation model design patterns where appropriate, ensuring performance and operational compliance in real-world deployments.

Enterprise Delivery and Vision
- Lead the translation of research and experimentation into production-grade solutions with measurable impact on business KPIs (both top-line growth and bottom-line efficiency).
- Perform deep-dive gap analysis in existing software and data pipelines and develop long-term architectural solutions and migration strategies.
- Build architectures that thrive under enterprise constraints, such as regulatory compliance, resource limits, multi-tenancy, and lifecycle governance.

AI/ML Engineering and MLOps
- Design and implement scalable MLOps workflows, integrating CI/CD pipelines, experiment tracking, automated validation, and model retraining loops (a small experiment-tracking sketch follows this posting).
- Operationalize AI pipelines using Azure Machine Learning (Azure ML) services and ensure seamless collaboration with data science and platform teams.
- Ensure architectures accommodate responsible AI, model explainability, and observability layers.

Software Quality and Engineering Discipline
- Champion software engineering best practices with rigorous attention to:
  - Code quality through static/dynamic analysis and automated quality metrics
  - Code reviews, pair programming, and technical design documentation
  - Unit, integration, and system testing, backed by frameworks like pytest, unittest, or Robot Framework
  - Code quality tools such as SonarQube, CodeQL, or similar
- Drive a culture of traceability, testability, and reliability, embedding quality gates into the development lifecycle.
- Own the technical validation lifecycle, ensuring reproducibility and continuous monitoring post-deployment.

Cloud-Native AI Infrastructure
- Architect AI services with cloud-native principles, including microservices, containers, and service mesh.
- Leverage Azure ML, Kubernetes, Terraform, and cloud-specific SDKs for full lifecycle management.
- Ensure compatibility with hybrid-cloud/on-premise environments and support constraints typical of engineering and industrial domains.

Qualifications

Educational qualification: Master's or Ph.D. in Computer Science, AI/ML, Software Engineering, or a related technical discipline.

Experience: 15+ years in software development, including:
- Deep experience in AI/ML-based software systems
- Strong architectural leadership in enterprise software design
- Delivery experience in engineering-heavy and data-rich environments

Mandatory/Required Skills:
- Programming: Python (required); Java, JavaScript, frontend/backend technologies, databases; C++ (a bonus)
- AI/ML: TensorFlow, PyTorch, ONNX, scikit-learn, MLflow (or equivalents)
- LLM/GenAI: Knowledge of transformers, attention mechanisms, fine-tuning, prompt engineering
- Agentic AI: Familiarity with planning frameworks, autonomous agents, and orchestration layers
- Cloud Platforms: Azure (preferred), AWS or GCP; experience with Azure ML Studio and SDKs
- Data & Pipelines: Airflow, Kafka, Spark, Delta Lake, Parquet, SQL/NoSQL
- Architecture: Microservices, event-driven design, API gateways, gRPC/REST, secure multi-tenancy
- DevOps/MLOps: GitOps, Jenkins, Azure DevOps, Terraform, containerization (Docker, Helm, K8s)

What You Bring
- Proven ability to bridge research and engineering in the AI/ML space with strong architectural clarity.
- Ability to translate ambiguous requirements into scalable design patterns.
- Deep understanding of the enterprise SDLC, including review cycles, compliance, testing, and cross-functional alignment.
- A mindset focused on continuous improvement, metrics-driven development, and transparent technical decision-making.

Additional Information

Why Bosch Research?

At Bosch Research, you will be empowered to lead the architectural blueprint of AI/ML software products that make a tangible difference in industrial innovation. You will have the autonomy to architect with vision, scale with quality, and deliver with rigor, while collaborating with a global community of experts in AI, engineering, and embedded systems.
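
To illustrate the experiment-tracking discipline named in the MLOps responsibilities, here is a small sketch using MLflow (listed in the required skills). The experiment name, parameters, and model are illustrative placeholders, not Bosch's actual setup; it assumes MLflow 2.x and scikit-learn are installed.

```python
# Experiment-tracking sketch, assuming MLflow 2.x and scikit-learn.
# Experiment name, params, and model are hypothetical placeholders.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in data so the sketch is self-contained.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("arch-baseline")  # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=0).fit(X_train, y_train)

    mlflow.log_params(params)  # recorded for reproducibility
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```

Logging parameters, metrics, and the model artifact on every run is what makes the "reproducibility and continuous monitoring post-deployment" requirement above tractable: any deployed model can be traced back to the exact run that produced it.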

Posted 1 week ago

Apply