0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description: DevOps Engineer / CI-CD

Job Summary
- Hands-on experience with CI/CD processes and tools such as Git, Jenkins, Bamboo, and Bitbucket.
- Hands-on scripting in Shell/PowerShell; Python scripting skills are a plus.
- Hands-on experience integrating third-party APIs with CI/CD pipelines.
- Hands-on experience with basic AWS services (EC2, load balancers, security groups, RDS, VPCs); AWS Patch Manager is a plus.
- Hands-on experience troubleshooting application infrastructure issues; should be able to walk through 2-3 issues worked on in a previous role.
- Knowledge of ITIL and Agile processes, with willingness to support weekend deployment and patching activities.
- Should be a quick learner, adapt to emerging technologies and processes, and have a sense of ownership and urgency to deliver.

DevOps
- Hands-on in AWS EKS & EFS
- Strong knowledge of Kubernetes
- Experience with Enterprise Terraform
- Strong knowledge of Helm charts & Helmfile
- Knowledge of Linux OS
- Knowledge of Git & GitOps
- Experience with Bitbucket & Bamboo
- Knowledge of systems and application security principles
- Experience automating and improving development and release processes and CI/CD pipelines
- Work with developers to ensure the development process is followed across the board
- Experience setting up build & deployment pipelines for Web, iOS & Android
- Knowledge of and experience with the mobile application release process
- Knowledge of Intune, MobileIron & Perfecto
- Understanding of application test automation & reporting
- Understanding of infrastructure & application health monitoring and reporting

Skills: DevOps Tools, DevOps, Cloud Computing
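The integration of third-party APIs into CI/CD pipelines that this role asks for typically means a pipeline step that calls an external service and tolerates transient failures. A minimal sketch of such a step with retries and exponential backoff; the payload, the injected `send` callable, and the simulated failure are illustrative assumptions, not any specific vendor's API.

```python
"""Sketch of a CI/CD pipeline step calling a third-party API with
retries and exponential backoff. The endpoint behavior is simulated
via an injected callable so the example needs no real network."""
import time

def call_with_retries(send, payload, max_attempts=4, base_delay=1.0, sleep=time.sleep):
    """Invoke send(payload); on failure retry with exponential backoff.
    Returns the successful response, or re-raises the last error."""
    last_err = None
    for attempt in range(max_attempts):
        try:
            return send(payload)
        except Exception as err:  # a real step would narrow this to HTTP/timeout errors
            last_err = err
            if attempt < max_attempts - 1:
                sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
    raise last_err

# Usage: a sender that fails twice with a simulated 503, then succeeds.
calls = {"n": 0}
def flaky_send(payload):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("simulated 503")
    return {"status": "accepted", "build": payload["build"]}

result = call_with_retries(flaky_send, {"build": "1.4.2"}, sleep=lambda s: None)
print(result)  # {'status': 'accepted', 'build': '1.4.2'}
```

Injecting the `sleep` function keeps the sketch testable without real delays; in a Jenkins or Bamboo step the same logic would wrap an HTTP call.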
Posted 1 month ago
4.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
AI Fullstack Engineer
Experience: 4-9 years
Salary: Competitive
Preferred Notice Period: Within 30 days
Opportunity Type: Remote
Placement Type: Permanent (Note: This is a requirement for one of Uplers' clients)
Must-have skills: Python, Angular, AWS

HurixDigital (one of Uplers' clients) is looking for an AI Fullstack Engineer who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, we want to hear from you.

Job Description

Key Responsibilities
- Develop and deploy end-to-end AI-powered applications leveraging full-stack development best practices.
- Architect and integrate AI/ML models and pipelines using tools like LangChain, Hugging Face, OpenAI, and Anthropic Claude APIs.
- Design and implement microservices, RESTful APIs, and backend systems with scalability and maintainability in mind.
- Leverage cloud platforms (AWS, Azure, GCP) for hosting, automation, and scaling services.
- Integrate CI/CD pipelines to ensure smooth and frequent deployment cycles.
- Collaborate with cross-functional teams to translate business requirements into technical solutions.
- Apply techniques such as content chunking, vector search, embedding models, and retrievers to build advanced AI retrieval systems.
- Ensure robust architecture that can withstand variable load conditions, focusing on fault tolerance and high availability.
- Maintain documentation and adhere to best practices in software development and AI model integration.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience in AI engineering and full-stack development.
- Strong proficiency with AI/ML tools including LangChain, Hugging Face, OpenAI, and Claude APIs.
- Deep understanding of vector databases, retrievers, and modern NLP workflows.
- Proficient in one or more full-stack frameworks (e.g., Node.js, Django, React, Angular).
- Experience with cloud platforms (AWS, Azure, GCP) and infrastructure automation tools (e.g., Terraform, CloudFormation).
- Solid experience with CI/CD pipelines, GitOps, and container orchestration (e.g., Docker, Kubernetes).
- Proven ability to architect resilient systems and optimize performance under fluctuating workloads.

Preferred Skills
- Knowledge of prompt engineering and LLM fine-tuning techniques.
- Familiarity with DevSecOps practices and AI compliance requirements.
- Exposure to multimodal models and real-time inference systems.

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help talent find and apply for relevant product and engineering job opportunities and progress in their careers. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
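The chunking, embedding, and retriever techniques listed in the responsibilities can be sketched end to end in a few lines. Here the "embedding" is a toy bag-of-words vector so the example stays self-contained; a real system would call an embedding model and a vector database, and the sample documents are invented.

```python
"""Minimal sketch of the chunking + embedding + vector-search pattern.
The embedding is a toy word-count vector, a stand-in for a model API."""
import math
from collections import Counter

def chunk(text, size=40, overlap=10):
    """Split text into overlapping character windows."""
    step = size - overlap
    return [text[i:i + size] for i in range(0, max(len(text) - overlap, 1), step)]

def embed(text):
    """Toy embedding: lowercase word counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, chunks, top_k=1):
    """Rank chunks by cosine similarity to the query embedding."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:top_k]

docs = ["Terraform provisions cloud infrastructure declaratively.",
        "LangChain wires LLM calls into retrieval pipelines.",
        "PostgreSQL is a relational database."]
print(retrieve("retrieval pipelines for LLM apps", docs))
```

In production the same shape holds: `chunk` feeds an embedding model, vectors land in a vector store, and `retrieve` becomes an approximate-nearest-neighbor query.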
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Groww
We are a passionate group of people focused on making financial services accessible to every Indian through a multi-product platform. Each day, we help millions of customers take charge of their financial journey. Customer obsession is in our DNA. Every product, every design, every algorithm down to the tiniest detail is executed keeping the customers' needs and convenience in mind. Our people are our greatest strength. Everyone at Groww is driven by ownership, customer-centricity, integrity and the passion to constantly challenge the status quo. Are you as passionate about defying conventions and creating something extraordinary as we are? Let's chat.

Our Vision
Every individual deserves the knowledge, tools, and confidence to make informed financial decisions. At Groww, we are making sure every Indian feels empowered to do so through a cutting-edge multi-product platform offering a variety of financial services. Our long-term vision is to become the trusted financial partner for millions of Indians.

Our Values
Our culture enables us to be what we are: India's fastest-growing financial services company. It fosters an environment where collaboration, transparency, and open communication take center stage and hierarchies fade away. There is space for every individual to be themselves and feel motivated to bring their best to the table, as well as craft a promising career for themselves. The values that form our foundation are:
- Radical customer centricity
- Ownership-driven culture
- Keeping everything simple
- Long-term thinking
- Complete transparency

EXPERTISE AND QUALIFICATIONS

What you'll do:
- Provide 24x7 infra and platform support for the Data Platform infrastructure hosting the data engineering teams' workloads, while building processes and documenting "tribal" knowledge around them.
- Manage application deployment and GKE platforms: automate and improve development and release processes.
- Create, manage and maintain datastores and data platform infra using IaC.
- Own the end-to-end availability, performance and capacity of applications and their infrastructure, and create/maintain the corresponding observability with Prometheus, New Relic, ELK, and Loki.
- Own the onboarding of new applications through the production readiness review process.
- Manage SLOs, error budgets, and alerts, and perform root cause analysis for production errors.
- Work with Core Infra, Dev and Product teams to define SLOs, error budgets, and alerts.
- Work with the Dev team to build an in-depth understanding of the application architecture and its bottlenecks.
- Identify observability gaps in applications and infrastructure and work with stakeholders to fix them.
- Manage outages, run detailed RCAs with developers, and identify ways to avoid recurrence.
- Automate toil and repetitive work.

What We're Looking For:
- 5-8 years of experience managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills.
- Has handled and worked on distributed processing engines, distributed databases and messaging queues (Kafka, Pub/Sub, RabbitMQ, etc.).
- Experienced in setting up and working on data platforms, data lakes, and data ingestion systems that work at scale.
- Can write core libraries (in Python and Go) to interact with various internal data stores.
- Can define and support internal SLAs for common data infrastructure.
- Good to have familiarity with BigQuery or Trino, Pinot, Airflow, and Superset or similar tools (familiarity with Mongo and Redis is also a plus).
- Experience troubleshooting, managing and deploying containerized environments using Docker and Kubernetes is a must.
- Extensive experience with DNS, TCP/IP, UDP, gRPC, routing and load balancing.
- Expertise in GitOps, Infrastructure-as-Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, or Ansible.
- Expertise in Google Cloud (GCP) and/or other relevant cloud infrastructure solutions such as AWS or Azure.
- Experience building CI/CD pipelines with any one of the tools such as Jenkins, GitLab, Spinnaker, or Argo.
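The SLO and error-budget ownership described above rests on simple arithmetic: an availability SLO implies a fixed amount of allowed downtime per window. A hedged sketch of that calculation; the 99.9% target and 30-day window are illustrative, not Groww's actual policy.

```python
"""Sketch of SLO / error-budget arithmetic. Numbers are illustrative."""

def error_budget_minutes(slo, window_days=30):
    """Total allowed downtime (minutes) for an availability SLO over a window."""
    return (1 - slo) * window_days * 24 * 60

def budget_remaining(slo, observed_downtime_min, window_days=30):
    """Fraction of the error budget still unspent (negative = budget blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - observed_downtime_min) / budget

# A 99.9% SLO allows about 43.2 minutes of downtime per 30 days.
print(round(error_budget_minutes(0.999), 1))        # 43.2
# After a 10-minute outage, roughly 77% of the budget remains.
print(round(budget_remaining(0.999, 10), 3))        # 0.769
```

Alerting and release decisions then hang off `budget_remaining`: a team might freeze risky deploys once the remaining fraction drops below an agreed threshold.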
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About Groww
We are a passionate group of people focused on making financial services accessible to every Indian through a multi-product platform. Each day, we help millions of customers take charge of their financial journey. Customer obsession is in our DNA. Every product, every design, every algorithm down to the tiniest detail is executed keeping the customers' needs and convenience in mind. Our people are our greatest strength. Everyone at Groww is driven by ownership, customer-centricity, integrity and the passion to constantly challenge the status quo. Are you as passionate about defying conventions and creating something extraordinary as we are? Let's chat.

Our Vision
Every individual deserves the knowledge, tools, and confidence to make informed financial decisions. At Groww, we are making sure every Indian feels empowered to do so through a cutting-edge multi-product platform offering a variety of financial services. Our long-term vision is to become the trusted financial partner for millions of Indians.

Our Values
Our culture enables us to be what we are: India's fastest-growing financial services company. It fosters an environment where collaboration, transparency, and open communication take center stage and hierarchies fade away. There is space for every individual to be themselves and feel motivated to bring their best to the table, as well as craft a promising career for themselves. The values that form our foundation are:
- Radical customer centricity
- Ownership-driven culture
- Keeping everything simple
- Long-term thinking
- Complete transparency

EXPERTISE AND QUALIFICATIONS

What you'll do:
- Provide 24x7 infra and platform support for the Data Platform infrastructure hosting the data engineering teams' workloads, while building processes and documenting "tribal" knowledge around them.
- Manage application deployment and GKE platforms: automate and improve development and release processes.
- Create, manage and maintain datastores and data platform infra using IaC.
- Own the end-to-end availability, performance and capacity of applications and their infrastructure, and create/maintain the corresponding observability with Prometheus, New Relic, ELK, and Loki.
- Own the onboarding of new applications through the production readiness review process.
- Manage SLOs, error budgets, and alerts, and perform root cause analysis for production errors.
- Work with Core Infra, Dev and Product teams to define SLOs, error budgets, and alerts.
- Work with the Dev team to build an in-depth understanding of the application architecture and its bottlenecks.
- Identify observability gaps in applications and infrastructure and work with stakeholders to fix them.
- Manage outages, run detailed RCAs with developers, and identify ways to avoid recurrence.
- Automate toil and repetitive work.

What We're Looking For:
- 3+ years of experience managing high-traffic, large-scale microservices and infrastructure, with excellent troubleshooting skills.
- Has handled and worked on distributed processing engines, distributed databases and messaging queues (Kafka, Pub/Sub, RabbitMQ, etc.).
- Experienced in setting up and working on data platforms, data lakes, and data ingestion systems that work at scale.
- Can write core libraries (in Python and Go) to interact with various internal data stores.
- Can define and support internal SLAs for common data infrastructure.
- Good to have familiarity with BigQuery or Trino, Pinot, Airflow, and Superset or similar tools (familiarity with Mongo and Redis is also a plus).
- Experience troubleshooting, managing and deploying containerized environments using Docker and Kubernetes is a must.
- Extensive experience with DNS, TCP/IP, UDP, gRPC, routing and load balancing.
- Expertise in GitOps, Infrastructure-as-Code tools such as Terraform, and configuration management tools such as Chef, Puppet, SaltStack, or Ansible.
- Expertise in Google Cloud (GCP) and/or other relevant cloud infrastructure solutions such as AWS or Azure.
- Experience building CI/CD pipelines with any one of the tools such as Jenkins, GitLab, Spinnaker, or Argo.
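A common way to turn the SLOs and error budgets mentioned above into alerts is burn-rate alerting: page only when both a short and a long window burn budget faster than a threshold. This sketch follows the widely used multi-window approach; the 14.4x threshold and error ratios are assumptions for illustration.

```python
"""Illustrative multi-window burn-rate alerting math for SLO policies."""

def burn_rate(error_ratio, slo):
    """How fast the error budget burns: 1.0 means exactly on budget."""
    allowed = 1 - slo
    return error_ratio / allowed

def should_page(short_window_ratio, long_window_ratio, slo, threshold=14.4):
    """Page only when both windows exceed the threshold: the short window
    catches the burn quickly, the long window filters out brief blips."""
    return (burn_rate(short_window_ratio, slo) >= threshold and
            burn_rate(long_window_ratio, slo) >= threshold)

# With a 99.9% SLO, a 2% error ratio burns budget ~20x too fast.
print(round(burn_rate(0.02, 0.999)))      # 20
print(should_page(0.02, 0.018, 0.999))    # True: both windows above 14.4x
print(should_page(0.02, 0.0005, 0.999))   # False: long window is healthy
```

The same functions map directly onto Prometheus recording rules, with the error ratios computed per window from request counters.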
Posted 1 month ago
3.0 - 5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description

About CyberArk: CyberArk (NASDAQ: CYBR) is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity, human or machine, across business applications, distributed workforces, hybrid cloud workloads and throughout the DevOps lifecycle. The world's leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit our CyberArk blogs or follow us on X, LinkedIn or Facebook.

Job Description
CyberArk DevOps Engineers are coders who enjoy a challenge and will be responsible for automating and streamlining our operations and processes, building and maintaining tools for deployment, monitoring, and operations, and troubleshooting and resolving issues in our dev, test, and production environments. As a DevOps Engineer, you will partner closely with software engineers, QA, and product teams to design and implement robust CI/CD pipelines, define infrastructure through code, and create tools that empower developers to ship high-quality features faster. You'll actively contribute to cloud-native development practices, introduce automation wherever possible, and champion a culture of continuous improvement, observability, and developer experience (DX). Your day-to-day work will involve a mix of platform/DevOps engineering, build/release automation, Kubernetes orchestration, infrastructure provisioning, and monitoring/alerting strategy development. You will also help enforce secure coding and deployment standards, contribute to runbooks and incident response procedures, and help scale systems to support rapid product growth. This is a hands-on technical role that requires strong coding ability, cloud architecture experience, and a mindset that thrives on collaboration, ownership, and resilience engineering.
Qualifications
- Collaborate with developers to ensure seamless CI/CD workflows using tools like GitHub Actions, Jenkins, and GitOps
- Write automation and deployment scripts in Groovy, Python, Go, Bash, PowerShell or similar
- Implement and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation
- Build and manage containerized applications using Docker and orchestrate them using Kubernetes (EKS, AKS, GKE)
- Manage and optimize cloud infrastructure on AWS
- Implement automated security and compliance checks using security scanning tools like Snyk, Checkmarx, and Codacy
- Develop and maintain monitoring, alerting, and logging systems using Datadog, Prometheus, Grafana, ELK, or Loki
- Drive observability and SLO/SLA adoption across services
- Support development teams in debugging, environment management, and rollout strategies (blue/green, canary deployments)
- Contribute to code reviews and build automation libraries for internal tooling and shared platforms

Additional Information

Requirements:
- 3-5 years of experience focused on DevOps engineering, cloud administration, platform engineering, or application development
- Strong hands-on experience with Linux/Unix and Windows OS, and with network architecture and security configurations
- Hands-on experience with scripting technologies: automation/configuration management using Ansible, Puppet, Chef, or an equivalent; Python, Ruby, Bash, PowerShell
- Hands-on experience with Infrastructure as Code (IaC) tools like Terraform and CloudFormation
- Hands-on experience with cloud infrastructure such as AWS, Azure, GCP
- Excellent communication skills and strong attention to detail
- Strong hands-on technical abilities
- Strong computer literacy and/or the comfort, ability, and desire to advance technically
- Strong understanding of information security in various environments
- Demonstrated ability to assume sole and independent responsibility
- Ability to keep track of numerous detail-intensive, interdependent tasks and ensure their accurate completion

Preferred Tools & Technologies:
- Languages: Python, Go, Bash, YAML, PowerShell
- Version Control & CI/CD: Git, GitHub Actions, GitLab CI, Jenkins, GitOps
- IaC: Terraform, CloudFormation
- Containers: Docker, Kubernetes, Helm
- Monitoring & Logging: Datadog, Prometheus, Grafana, ELK/EFK Stack
- Cloud Platforms: AWS (EC2, ECS, EKS, Lambda, S3, Networking/VPC, cost optimization)
- Security: HashiCorp Vault, Trivy, Aqua, OPA/Gatekeeper
- Databases & Caches: PostgreSQL, MySQL, Redis, MongoDB
- Others: NGINX, Istio, Consul, Kafka, Redis
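The canary rollout strategy listed among the responsibilities shifts traffic to a new version in stages and rolls back when the canary misbehaves. A small sketch of the decision logic; the stage percentages, error-rate tolerance, and function names are assumptions for illustration, not CyberArk's deployment policy.

```python
"""Sketch of canary rollout decision logic: promote through traffic
stages, or roll back if the canary's error rate exceeds the baseline
by more than a tolerance. All thresholds are illustrative."""

STAGES = (5, 25, 50, 100)  # percent of traffic on the canary at each step

def next_action(stage_idx, canary_error_rate, baseline_error_rate,
                tolerance=0.01, stages=STAGES):
    """Decide whether to roll back, promote to the next stage, or finish."""
    if canary_error_rate > baseline_error_rate + tolerance:
        return "rollback"
    if stage_idx + 1 < len(stages):
        return f"promote to {stages[stage_idx + 1]}%"
    return "complete"

# Healthy canary at 5% traffic -> move to 25%.
print(next_action(0, 0.002, 0.001))  # promote to 25%
# Healthy at the final stage -> rollout complete.
print(next_action(3, 0.002, 0.001))  # complete
# Error rate well above baseline -> abort.
print(next_action(1, 0.05, 0.001))   # rollback
```

In practice the error rates would come from the monitoring stack (Datadog or Prometheus, per the posting) and the traffic split would be enforced by the ingress or service mesh.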
Posted 1 month ago
0 years
0 Lacs
India
Remote
Job Title: DevOps Engineer (Kubernetes Specialist)
Experience: 6+ years
Location: Remote
Looking for immediate joiners / someone who can join within a month only.

Mandatory skills:
- DevOps: 6+ years
- Kubernetes: 3+ years
- CI/CD pipelines
- Golang (important)
- Elasticsearch, Logstash, Kibana (ELK), Prometheus

Job Description:
- Develop and maintain custom CRDs (Custom Resource Definitions) and controllers to extend Kubernetes functionality for our platform.
- Utilise Golang to build services, controllers, and Crossplane functions.
- Ensure the availability and performance of the platform itself.
- Automate deployment, monitoring, and management processes using CI/CD pipelines and GitOps principles.
- Work with Elasticsearch, Logstash, Kibana (ELK), and Prometheus.
- Should be a Kubernetes expert.

Interested candidates kindly share your updated CV at sarah@r2rconsults.com
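The CRD-plus-controller work described here follows Kubernetes' reconcile pattern: compare desired state (the spec) with observed state and emit actions that converge the two. This Python sketch only illustrates that control-loop idea in miniature; the role itself calls for Go (typically with controller-runtime), and the resource names here are invented.

```python
"""Toy reconcile loop: diff desired vs observed state and emit the
actions needed to converge, as a Kubernetes controller would."""

def reconcile(desired, observed):
    """Both arguments map resource name -> replica count.
    Returns (action, name, delta) tuples to drive observed toward desired."""
    actions = []
    for name, want in desired.items():
        have = observed.get(name, 0)
        if have < want:
            actions.append(("scale-up", name, want - have))
        elif have > want:
            actions.append(("scale-down", name, have - want))
    for name, have in observed.items():
        if name not in desired:
            actions.append(("delete", name, have))  # no longer in the spec
    return actions

spec = {"api": 3, "worker": 2}      # desired state, e.g. from a CRD spec
status = {"api": 1, "legacy": 4}    # observed cluster state
print(reconcile(spec, status))
# [('scale-up', 'api', 2), ('scale-up', 'worker', 2), ('delete', 'legacy', 4)]
```

A real controller runs this loop on every watch event and requeues on failure, which is what makes the pattern level-triggered rather than edge-triggered.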
Posted 1 month ago
0 years
0 Lacs
India
On-site
We are seeking a hands-on DevOps Engineer with a strong command of make and deployment automation. This role is perfect for someone who lives and breathes Makefiles, understands the power of declarative builds, and can create maintainable, reproducible pipelines from scratch. If you're passionate about automation and enjoy working on systems like MCP servers, this is your chance to own and optimize the infrastructure from the ground up.

Key Responsibilities
- Write, manage, and optimize Makefiles for build and deployment automation across multiple services.
- Deploy and manage MCP servers and other custom backend infrastructure.
- Build and maintain robust CI/CD pipelines for dev-to-prod workflows.
- Improve developer productivity by simplifying build, test, and release cycles through make.
- Collaborate with developers to integrate infrastructure into their local and cloud environments.
- Set up observability tools and monitor infrastructure health and performance.

What We're Looking For
- Deep expertise in make: not just using it, but structuring complex Makefiles that are clean, efficient, and extensible.
- Experience deploying and managing production-grade services (e.g., MCP servers).
- Proficiency in CI/CD tools (GitHub Actions, GitLab CI, Jenkins, etc.).
- Familiarity with Docker, Kubernetes, or other container/orchestration systems.
- Strong scripting knowledge (e.g., Bash, Python).
- Cloud experience (AWS, GCP, or Azure).

Bonus Points For
- Infrastructure as Code experience (Terraform, Ansible).
- GitOps and release automation tooling.
- System tuning and performance profiling skills.
- Experience in low-latency or real-time systems.

Why You'll Love Working With Us
- Direct ownership of core infrastructure systems.
- Work on unique deployment challenges with make at the heart of automation.
- Be part of a team that values clean systems, fast delivery, and minimal ops overhead.

Hourly rate: 400-500/hour
Weekly commitment: 5-10 hours
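The "declarative builds" this posting centers on come down to make's core rule: a target is rebuilt when it is missing or older than any of its prerequisites. A minimal sketch of that rule, using a name-to-mtime map instead of the real filesystem so the example stays self-contained; the file names are illustrative.

```python
"""Sketch of make's timestamp-based out-of-date check: rebuild a target
if it is absent or older than any prerequisite."""

def needs_rebuild(target, prerequisites, mtimes):
    """mtimes maps file name -> modification time (higher = newer)."""
    if target not in mtimes:
        return True  # target missing: always build
    return any(mtimes[p] > mtimes[target] for p in prerequisites)

# app.o exists, but main.c was edited after it was built -> rebuild.
times = {"main.c": 200, "util.c": 50, "app.o": 100}
print(needs_rebuild("app.o", ["main.c", "util.c"], times))  # True
print(needs_rebuild("app.o", ["util.c"], times))            # False
print(needs_rebuild("app", ["app.o"], times))               # True (missing)
```

Everything else in a Makefile, including pattern rules, phony targets, and recipe variables, layers on top of this one comparison, which is what makes well-structured Makefiles reproducible and incremental.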
Posted 1 month ago
5.0 years
0 Lacs
Bengaluru, Karnataka
Remote
Category: Infrastructure/Cloud
Main location: India, Karnataka, Bangalore
Position ID: J0425-1242
Employment Type: Full Time

Position Description:

Company Profile: At CGI, we're a team of builders. We call our employees members because all who join CGI are building their own company - one that has grown to 72,000 professionals located in 40 countries. Founded in 1976, CGI is a leading IT and business process services firm committed to helping clients succeed. We have the global resources, expertise, stability and dedicated professionals needed to achieve results for our clients - and for our members. Come grow with us. Learn more at www.cgi.com.

This is a great opportunity to join a winning team. CGI offers a competitive compensation package with opportunities for growth and professional development. Benefits for full-time, permanent members start on the first day of employment and include a paid time-off program and profit participation and stock purchase plans. We wish to thank all applicants for their interest and effort in applying for this position; however, only candidates selected for interviews will be contacted. No unsolicited agency referrals, please.
Job Title: Google Cloud Engineer (DevOps + GKE) - SSE
Position: Senior Systems Engineer
Experience: 5+ years
Category: GCP + GKE
Main location: Bangalore/Chennai/Hyderabad/Pune/Mumbai
Position ID: J0425-1242
Employment Type: Full Time

Job Description:
We are seeking a skilled and proactive Google Cloud Engineer with strong DevOps experience and hands-on expertise in Google Kubernetes Engine (GKE) to design, implement, and manage cloud-native infrastructure. You will play a key role in automating deployments, maintaining scalable systems, and ensuring the availability and performance of our cloud services on Google Cloud Platform (GCP).

Key Responsibilities and Required Skills
- 5+ years of experience in DevOps / cloud engineering roles.
- Design and manage cloud infrastructure using Google Cloud services such as Compute Engine, Cloud Storage, VPC, IAM, Cloud SQL, GKE, and more.
- Proficient in writing Infrastructure as Code using Terraform, Deployment Manager, or similar tools.
- Automate CI/CD pipelines using tools like Cloud Build, Jenkins, GitHub Actions, etc.
- Manage and optimize Kubernetes clusters for high availability, performance, and security.
- Collaborate with developers to containerize applications and streamline their deployment.
- Monitor cloud environments and troubleshoot performance, availability, or security issues.
- Implement best practices for cloud governance, security, cost management, and compliance.
- Participate in cloud migration and modernization projects.
- Ensure system reliability and high availability through redundancy, backup strategies, and proactive monitoring.
- Contribute to cost optimization and cloud governance practices.
- Strong hands-on experience with core GCP services including Compute, Networking, IAM, Storage, and optionally Kubernetes (GKE).
- Proven expertise in Kubernetes (GKE): managing clusters, deployments, services, autoscaling, etc.
- Experience configuring Kubernetes resources (Deployments, Services, Ingress, Helm charts, etc.) to support application lifecycles.
- Solid scripting knowledge (e.g., Python, Bash, Go).
- Familiarity with GitOps and deployment tools like Argo CD and Helm.
- Experience with CI/CD tools and setting up automated deployment pipelines.
- Should have Google Cloud certifications (e.g., Professional Cloud DevOps Engineer, Cloud Architect, or Cloud Engineer).

Behavioural Competencies:
- Proven experience of delivering process efficiencies and improvements
- Clear and fluent English (both verbal and written)
- Ability to build and maintain efficient working relationships with remote teams
- Demonstrated ability to take ownership of and accountability for relevant products and services
- Ability to plan, prioritise and complete your own work, whilst remaining a team player
- Willingness to engage with and work in other technologies

Note: This job description is a general outline of the responsibilities and qualifications typically associated with this role. Actual duties and qualifications may vary based on the specific needs of the organization.

CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodations for people with disabilities in accordance with provincial legislation. Please let us know if you require a reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.

Your future duties and responsibilities / Required qualifications to be successful in this role
Skills: DevOps, Google Cloud Platform, Kubernetes, Terraform, Helm

What you can expect from us: Together, as owners, let's turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you'll reach your full potential because you are invited to be an owner from day 1 as we work together to bring our Dream to life. That's why we call ourselves CGI Partners rather than employees.
We benefit from our collective success and actively shape our company’s strategy and direction. Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise. You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons. Come join our team—one of the largest IT and business consulting services firms in the world.
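The cluster autoscaling this GKE role manages is governed by the Horizontal Pod Autoscaler's core formula: desiredReplicas = ceil(currentReplicas * currentMetricValue / targetMetricValue). A small sketch of that calculation; the min/max bounds are an assumed cluster policy, not part of the formula itself.

```python
"""Sketch of the Kubernetes HPA scaling formula:
desired = ceil(current_replicas * current_metric / target_metric),
clamped to assumed min/max replica bounds."""
import math

def desired_replicas(current, metric, target, min_r=1, max_r=10):
    """Scale replicas proportionally to metric pressure, within bounds."""
    desired = math.ceil(current * metric / target)
    return max(min_r, min(max_r, desired))

# 4 pods at 90% CPU against a 60% target -> scale up to 6.
print(desired_replicas(4, 90, 60))   # 6
# Load drops to 20% -> scale down to 2.
print(desired_replicas(6, 20, 60))   # 2
# Very light load never scales below the minimum.
print(desired_replicas(2, 10, 60))   # 1
```

The real HPA adds a tolerance band and stabilization windows around this formula to avoid flapping, but the proportional core is exactly this.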
Posted 1 month ago
2.0 - 5.0 years
0 Lacs
Surat, Gujarat, India
On-site
Job Profile Details
Position: Senior Full Stack Developer
Commitment: Full-time (8 hours, 5 days a week)
Experience level: 2-5 years
Location: Surat

We specialize in cloud consultation and security solutions. We build premium software for our partners aiming to change lives. Our mission is to build technology people would use, enjoy, and cherish. We help businesses of all kinds grow and reach their goals by adopting bold, cutting-edge technologies and ideas.

Position Overview
We are seeking a highly skilled and motivated Senior Full Stack Developer to join our dynamic team. The ideal candidate is passionate about creating high-quality, scalable, and secure web applications. As a Senior Full Stack Developer, you will play a pivotal role in the development and maintenance of our software solutions, from the front end to the back end, and be involved in various aspects of the development lifecycle.

Key Responsibilities

Frontend Development
- Develop and maintain responsive web applications using React.js and Vue.js.
- Ensure user-friendly interfaces, optimal performance, and cross-browser compatibility.
- Implement modern front-end technologies and best practices.

Backend Development
- Build robust and efficient server-side applications using Node.js, TypeScript, and JavaScript.
- Design and maintain databases, including Redis, PostgreSQL, and Cassandra.
- Create and optimize RESTful and GraphQL APIs for seamless data exchange.

DevOps and Cloud
- Collaborate with DevOps and infrastructure teams to implement GitOps workflows.
- Participate in CI/CD processes to ensure a smooth and automated deployment pipeline.
- Work with AWS services to manage and deploy cloud-based solutions.
- Utilize Docker, Kubernetes, and Terraform for containerization and orchestration.

Security
- Maintain a strong focus on security best practices and ensure web and network security.
- Contribute to DevSecOps practices to identify and mitigate security vulnerabilities.
Scalability and Performance
- Design and develop high-availability, scalable, fault-tolerant, and performant systems.
- Optimize application performance, monitoring, and error handling.

System Design and Architecture
- Collaborate with the architecture team to design and implement scalable and maintainable software systems.
- Participate in code reviews and architectural discussions.
- Develop and maintain event-driven systems, ensuring asynchronous communication and scalability.

Design Principles and Best Practices
- Implement and promote software engineering principles, including SOLID, DRY, KISS, and YAGNI.
- Apply design patterns and object-oriented programming (OOP) concepts for clean and maintainable code.

Debugging and Monitoring
- Use debugging techniques and tools proficiently to diagnose and resolve issues.
- Implement monitoring and logging solutions to proactively identify and address potential problems.

Qualifications
- Proven experience in full-stack development with expertise in React.js, Vue.js, Node.js, TypeScript, and JavaScript.
- Strong knowledge of Git, GitOps, CI/CD pipelines, and DevSecOps practices.
- Proficiency in web and network security, with an understanding of best practices.
- Experience with both RESTful and GraphQL APIs.
- Familiarity with AWS services and cloud infrastructure.
- Database management skills with Redis, PostgreSQL, and Cassandra.
- Experience with containerization tools like Docker, orchestration tools like Kubernetes, and infrastructure as code with Terraform.
- A strong emphasis on designing high-availability, scalable, fault-tolerant, and performant systems.
- Understanding of system design and architecture principles.
- Solid understanding of design principles (SOLID, DRY, KISS, YAGNI), design patterns, and object-oriented programming (OOP).
- Ability to design and implement event-driven architectures.
- Proficiency in debugging and monitoring techniques and tools.
- Strong teamwork and collaboration skills, including pair programming and mentoring.
A passion for learning and staying updated with the latest industry trends and technologies.
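The event-driven, asynchronous-communication style this posting asks candidates to design can be sketched with an in-memory publish/subscribe bus: producers emit events on a topic and decoupled subscribers react independently. The topic name and handlers below are illustrative; a production system would put a broker (e.g., a message queue) between the two sides.

```python
"""Minimal in-memory pub/sub bus illustrating event-driven decoupling:
the publisher knows nothing about its subscribers."""
from collections import defaultdict

class EventBus:
    def __init__(self):
        self._subs = defaultdict(list)

    def subscribe(self, topic, handler):
        self._subs[topic].append(handler)

    def publish(self, topic, event):
        # Each subscriber handles the event independently of the producer.
        for handler in self._subs[topic]:
            handler(event)

bus = EventBus()
log = []
bus.subscribe("order.created", lambda e: log.append(("audit", e["id"])))
bus.subscribe("order.created", lambda e: log.append(("email", e["id"])))
bus.publish("order.created", {"id": 42})
print(log)  # [('audit', 42), ('email', 42)]
```

Adding a new reaction to an event is then a new `subscribe` call rather than a change to the producer, which is the scalability property the posting's "event-driven systems" bullet is after.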
Posted 1 month ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion: it's a place where you can grow, belong and thrive.

Your day at NTT DATA
DevOps Engineer - Azure
NTT DATA, Inc. is seeking a talented DevOps Engineer to join our dynamic team. As a leading solution provider, we are committed to delivering exceptional solutions to our clients. Our success is driven by the dedication and expertise of our employees, who play a vital role in shaping our growth and staying ahead of the competition. By joining our team, you will have the opportunity to work with cutting-edge technologies and make a significant impact on our clients' success.

What You'll Be Doing
Primary responsibilities of this role: As a DevOps Engineer you will be responsible for the smooth operation of our customers' infrastructure. You will collaborate closely with internal teams and client organizations, focusing on automation, continuous integration/delivery (CI/CD), infrastructure management, and collaboration to improve software delivery speed and quality for our clients.
What you will do:
Engage in Azure DevOps administration
Respond to platform performance and availability issues
Open and follow tickets with vendor product owners
Manage license assignment and allocations
Install approved Azure Marketplace plugins
Provide general support to app teams for supported DevOps tools
Troubleshoot Azure DevOps issues related to DevOps toolsets and deployment capabilities
Work the general backlog of support tickets
Manage and support artifact management (JFrog)
Manage and support code quality tooling (SonarQube)
Operate as a member of global, distributed teams that deliver quality services
Collaborate and communicate appropriately with project stakeholders: status updates, concerns, risks, and issues
Rapidly gain knowledge of emerging cloud technologies and their potential application in the customer environment / impact analysis
What you will bring:
5+ years of experience in IT
2-3 years of experience with Azure DevOps as well as general DevOps toolsets
Experience with Agile and Scrum concepts
Solid working knowledge of GitOps concepts and CI/CD pipeline design and tools, including Azure DevOps, Git, Jenkins, JFrog, and SonarQube
Ability to work independently and as a productive team member, actively participating in agile ceremonies
Ability to identify potential issues and take ownership of resolving existing issues
Strong analytical skills, curious nature, strong communication skills (written and verbal)
Team-oriented, motivated, and enthusiastic, with the willingness to take initiative and maintain a positive approach
Ability to work with a virtual team
Excellent communication, presentation, and relationship-building skills
Solid understanding of enterprise operational processes, planning, and operations
Workplace type: Hybrid Working
About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services.
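As an illustration of the SonarQube pipeline integration mentioned above, here is a minimal sketch of a gate that fails a build when a quality gate reports ERROR conditions. It assumes the payload was already fetched from SonarQube's documented /api/qualitygates/project_status endpoint; the sample payload below is illustrative, not from any real project.

```python
import json

def quality_gate_passed(payload: dict) -> tuple[bool, list[str]]:
    """Evaluate a SonarQube project_status payload.

    Returns (passed, [metric keys of the failing conditions]).
    """
    status = payload.get("projectStatus", {})
    failing = [
        c["metricKey"]
        for c in status.get("conditions", [])
        if c.get("status") == "ERROR"
    ]
    return status.get("status") == "OK", failing

# Illustrative payload in the shape returned by
# GET /api/qualitygates/project_status?projectKey=<key>
sample = json.loads("""
{"projectStatus": {"status": "ERROR",
 "conditions": [
   {"status": "OK", "metricKey": "new_reliability_rating"},
   {"status": "ERROR", "metricKey": "new_coverage"}]}}
""")
passed, failing = quality_gate_passed(sample)
print(passed, failing)  # False ['new_coverage']
```

A pipeline step would call this after the analysis completes and exit non-zero when `passed` is False, surfacing the failing metric keys in the build log.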
We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo. Equal Opportunity Employer NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
Posted 1 month ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking a highly skilled Kubernetes Subject Matter Expert (SME) to join our team. The ideal candidate will have 7+ years of industry experience, with a minimum of 3 years of expertise in Kubernetes and DevSecOps. The role requires hands-on experience with multi-cloud environments, preferably Azure and AWS. The candidate must hold Certified Kubernetes Administrator (CKA) OR Certified Kubernetes Security Specialist (CKS) OR Certified Kubernetes Application Developer (CKAD) certifications and have a strong track record of implementing Kubernetes at scale for large production environments.
Responsibilities
Design, deploy, and optimize Kubernetes-based platforms for large-scale production workloads.
Implement DevSecOps best practices to enhance the security and reliability of Kubernetes clusters.
Manage Kubernetes environments across multi-cloud platforms (Azure, AWS) with a focus on resilience and high availability.
Provide technical leadership in architecting, scaling, and troubleshooting Kubernetes ecosystems.
Develop automation strategies using Infrastructure-as-Code (IaC) tools such as Terraform, Helm, and Ansible.
Work with security teams to ensure compliance with industry security standards and best practices.
Define and implement observability and monitoring using tools like Prometheus, Grafana, and the ELK Stack.
Lead incident response and root cause analysis for Kubernetes-related production issues.
Guide and mentor engineering teams on Kubernetes, service mesh, and container security.
Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field. Master's degree is a plus.
7+ years of industry experience in cloud infrastructure, container orchestration, and DevSecOps.
3+ years of hands-on experience with Kubernetes in production environments.
Strong knowledge of Kubernetes security, RBAC, Network Policies, and admission controllers.
Experience in multi-cloud environments (Azure, AWS preferred).
Hands-on experience with Istio or other service meshes. Expertise in containerization technologies like Docker. Proficiency with CI/CD pipelines (GitOps, ArgoCD, Jenkins, Azure DevOps, or similar). Experience with Kubernetes storage and networking in enterprise ecosystems. Deep understanding of Kubernetes upgrade strategies, scaling, and optimization. Must have CKA or CKS or CKAD certification.
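A quick sanity check for the Kubernetes scaling expertise this role asks for: the HorizontalPodAutoscaler's documented rule is desired = ceil(currentReplicas × currentMetric / targetMetric), with no change when the ratio falls inside a tolerance band (0.1 by default). A minimal sketch of that rule:

```python
import math

def hpa_desired_replicas(current_replicas: int, current_metric: float,
                         target_metric: float, tolerance: float = 0.1) -> int:
    """Kubernetes HPA scaling rule:
    desired = ceil(current_replicas * current_metric / target_metric),
    skipping the change when the ratio is within the tolerance band."""
    ratio = current_metric / target_metric
    if abs(1.0 - ratio) <= tolerance:
        return current_replicas  # close enough to target: no scaling
    return math.ceil(current_replicas * ratio)

print(hpa_desired_replicas(4, 90, 60))  # 6 (CPU at 90% vs 60% target)
print(hpa_desired_replicas(4, 62, 60))  # 4 (within the 10% tolerance)
```

This is only the core formula; the real controller also applies stabilization windows, min/max replica bounds, and per-behavior scaling policies.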
Posted 1 month ago
5.0 - 8.0 years
4 - 7 Lacs
Chennai
On-site
Hello Visionary!
We empower our people to stay resilient and relevant in a constantly changing world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future.
We are looking for DevOps professionals with 5 to 8 years of experience in Cloud Infrastructure maintenance and operations.
Strong hands-on experience with Azure services (Compute, Networking, Storage, and Security).
Expertise in Infrastructure as Code (IaC) using Bicep / ARM / Terraform (Bicep / ARM templates experience is a plus).
Proficiency in managing and optimizing CI/CD pipelines in Azure DevOps.
In-depth knowledge of networking concepts (VNETs, Subnets, DNS, Load Balancers, VPNs).
Proficiency in scripting with PowerShell, Azure CLI, or Python for automation.
Strong knowledge of Git and version control best practices.
Infrastructure Design & Management: Architect and manage Azure cloud infrastructure for scalability, high availability, and cost efficiency. Deploy and maintain Azure services such as Virtual Machines, App Services, Kubernetes (AKS), Storage, and Databases. Implement networking solutions like Virtual Networks, VPN Gateways, NSGs, and Private Endpoints.
CI/CD Pipeline Management: Design, build, and maintain Azure DevOps pipelines for automated deployments. Implement GitOps and branching strategies to streamline development workflows. Ensure efficient release management and deployment automation using Azure DevOps, GitHub Actions, or Jenkins.
Infrastructure as Code (IaC): Write, maintain, and optimize Bicep / ARM / Terraform templates for infrastructure provisioning. Automate resource deployment and configuration management using Azure CLI, PowerShell, etc.
Security & Compliance: Implement Azure security best practices, including RBAC, Managed Identities, Key Vault, and Azure Policy. Monitor and enforce network security with NSGs, Azure Firewall, and DDoS protection.
Ensure compliance with security frameworks such as CIS, NIST, ASB, etc. Conduct security audits, vulnerability assessments, and enforce least-privilege access controls.
Monitoring & Optimization: Set up Azure Monitor, Log Analytics, and Application Insights for performance tracking and alerting. Optimize infrastructure for cost efficiency and performance using Azure Advisor and Cost Management. Troubleshoot and resolve infrastructure-related incidents in production and staging environments.
Make your mark in our exciting world at Siemens. This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow.
We’ll support you with:
Hybrid working opportunities.
Diverse and inclusive culture.
Variety of learning & development opportunities.
Attractive compensation package.
Find out more about Siemens careers at: www.siemens.com/careers
Posted 1 month ago
5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Role: DevOps / Site Reliability Engineer
Duration: 6 Months (Possible Extension)
Location: Pune (Onsite)
Timings: Full Time (As per company timings) IST
Notice Period: Immediate Joiner Only
Experience: 5+ Years
About the Role
We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.
Key Responsibilities
Design and implement resilient deployment strategies (Blue-Green, Canary, GitOps).
Configure and maintain observability tools (logs, metrics, traces, alerts).
Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java).
Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
Build and manage Terraform modules for infrastructure (VPC, CloudSQL, Pub/Sub, Secrets).
Lead or participate in incident response and root cause analysis using logs, traces, and dashboards.
Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments.
Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.
Required Skills & Experience
5–10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
Strong coding/scripting skills and ability to review production-grade backend code.
Hands-on experience with Kubernetes in production, preferably on GKE.
Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
Deep knowledge of Cloud architecture (IAM, VPCs, Workload Identity, CloudSQL, Secret Management).
Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies.
Strong communication and documentation skills; capable of driving improvements through PRs and design reviews.
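As a sketch of the canary deployment strategy this role mentions, here is a simplified promotion gate that compares canary and baseline error rates. The thresholds (max_ratio, min_requests) are illustrative assumptions; real rollout tools such as Argo Rollouts express this kind of analysis declaratively against metrics providers.

```python
def error_rate(errors: int, requests: int) -> float:
    """Fraction of requests that failed; 0.0 when there was no traffic."""
    return errors / requests if requests else 0.0

def promote_canary(baseline_errors: int, baseline_reqs: int,
                   canary_errors: int, canary_reqs: int,
                   max_ratio: float = 2.0, min_requests: int = 100) -> bool:
    """Promote only when the canary has seen enough traffic and its error
    rate is no worse than max_ratio times the baseline's."""
    if canary_reqs < min_requests:
        return False  # not enough data to judge yet
    base = error_rate(baseline_errors, baseline_reqs)
    canary = error_rate(canary_errors, canary_reqs)
    if base == 0.0:
        return canary == 0.0  # a clean baseline tolerates no canary errors
    return canary <= max_ratio * base

print(promote_canary(50, 10_000, 1, 500))   # True: 0.2% canary vs 0.5% baseline
print(promote_canary(50, 10_000, 10, 500))  # False: 2.0% canary vs 0.5% baseline
```

In practice the comparison would run over a rolling window per analysis step, with the traffic split widened only after each gate passes.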
Tech Stack & Tools
Cloud & Orchestration: GKE, Kubernetes
IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
Monitoring & Alerting: Datadog, PagerDuty
Databases & Networking: CloudSQL, Cloudflare
Security & Access Control: Secret Management, IAM
Posted 1 month ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Do you have a passion for Cloud Native Platforms? That is, envisioning and building the core services that underpin all Thomson Reuters’ products? Then we want you on our India-based team! This role is in the Platform Engineering organization where we build the foundational services that power Thomson Reuters’ products. We focus on the subset of capabilities that help Thomson Reuters deliver digital products to our customers. Our mission is to build a durable competitive advantage for TR by providing “building blocks” that get value to market faster.
About The Role
This role is within Platform Engineering’s Service Mesh team, a dedicated group which engineers and operates our Service Mesh capability, a microservice platform based on Kubernetes and Istio.
Establish software engineering best practices; provide tooling that makes compliance frictionless.
Drive a strong emphasis on test and deployment automation.
Participate in all aspects of the development lifecycle: Ideation, Design, Build, Test and Operate.
We embrace a DevOps culture (“you build it, you run it”); while we have dedicated 24x7 level-1 support engineers, you may be called on to assist with level-2 support.
Collaboration with all product teams; transparency is a must!
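One invariant worth automating in tooling for the Istio-based Service Mesh capability above: when an Istio VirtualService splits HTTP traffic across multiple destinations, route weights are conventionally expressed as percentages totalling 100. A minimal, hedged validation sketch (the route dicts mirror the VirtualService YAML shape; the host and subset names are illustrative):

```python
def validate_route_weights(routes: list[dict]) -> bool:
    """Check that a multi-destination HTTP route's weights total 100."""
    total = sum(r.get("weight", 0) for r in routes)
    return total == 100

# Illustrative 90/10 canary split, as it would appear under
# spec.http[].route in a VirtualService manifest:
canary_split = [
    {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
    {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
]
print(validate_route_weights(canary_split))  # True
```

A check like this fits naturally into a pre-merge lint step for mesh configuration repositories, catching splits that silently drop or double-count traffic.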
Collaborate with development managers, architects, scrum masters, software engineers, DevOps engineers, product managers and project managers to deliver phenomenal software Ongoing participation in a Scrum team, and an embrace of the agile work model Keep up to date with emerging cloud technology trends especially in CNCF landscape About You You're a fit for the role of Software Engineer if you have: 2+ years software development experience 1+ years of experience building cloud native infrastructure, applications and services on AWS, Azure or GCP Hands-on experience with Kubernetes, ideally AWS EKS and/or Azure AKS Experience with Istio or other Service Mesh technologies Experience with container security and supply chain security Experience with declarative infrastructure-as-code, CI/CD automation and GitOps Experience with Kubernetes operators written in Golang A Bachelor's Degree in Computer Science, Computer Engineering or similar. What’s in it For You? Hybrid Work Model: We’ve adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected. Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance. Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow’s challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future. 
Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing. Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together. Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives. Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world. About Us Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. 
At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About Autozone AutoZone is the USA’s leading retailer and a leading distributor of automotive replacement parts and accessories with more than 6,000 stores in US, Puerto Rico, Mexico, and Brazil. Each store carries an extensive line for cars, sport utility vehicles, vans and light trucks, including new and remanufactured hard parts, maintenance items and accessories. We also sell automotive diagnostic and repair software through ALLDATA, diagnostic and repair information through ALLDATAdiy.com, automotive accessories through AutoAnything.com and auto and light truck parts and accessories through AutoZone.com. Since opening its first store in Forrest City, Ark. on July 4, 1979, the company has joined the New York Stock Exchange (NYSE: AZO) and earned a spot in the Fortune 500. AutoZone has been committed to providing the best parts, prices, and customer service in the automotive aftermarket industry. We have a rich culture and history of going the Extra Mile for our customers and our community. At AutoZone you’re not just doing a job; you’re playing a crucial role in creating a better experience for our customers, while creating opportunities to DRIVE YOUR CAREER almost anywhere! We are looking for talented people who are customer focused, enjoy helping others and have the DRIVE to excel in a fast-paced environment! Position Summary As a Cloud Platform Engineer, you will be responsible for deploying, managing, and optimizing our cloud-based infrastructure, focusing on technologies such as Terraform, Kubernetes, GitOps/ArgoCD, CI/CD, GitLab, Ansible, and more. You will collaborate with cross-functional teams to ensure seamless deployment and delivery of applications while maintaining the highest standards of reliability, security, and scalability. Key Responsibilities Create cloud infrastructure using Terraform to automate provisioning and scaling of resources. 
Create Terraform modules for the new GCP resources.
Work with any GCP compute engines to deploy, configure, and maintain containerized applications and microservices.
Create runbooks for standard cloud operations and implementations.
Develop and maintain CI/CD pipelines to automate the build, test, and deployment processes.
Implement and maintain GitOps practices using ArgoCD for declarative, automated application deployment and management.
Collaborate with development teams to ensure the integration of GitLab and other version control tools into the CI/CD workflow.
Monitor and optimize system performance, troubleshoot issues, and ensure high availability and scalability of cloud services.
Participate in the on-call rotation, troubleshooting and fixing any outages, to provide 24/7 support for critical infrastructure incidents.
Configure and set up monitoring for infrastructure resources.
Requirements
Hands-on experience in cloud infrastructure management, preferably with GCP.
Hands-on expertise in Terraform for infrastructure as code (IaC) and automation.
Hands-on experience with Kubernetes for container orchestration.
Proficiency in CI/CD tools, with a focus on GitLab CI/CD. Familiarity with GitOps practices and ArgoCD.
Understanding of Linux systems and networking.
Excellent problem-solving and communication skills.
Ability to work effectively in a collaborative, cross-functional team environment.
Education And Experience
Bachelor's degree in information technology, MIS, Computer Science or related field required.
Typically requires six to ten years’ experience within the skills outlined above.
Written and deployed mission-critical workloads to the public cloud (preferably Google Cloud).
Experience with modern container orchestration systems: Kubernetes.
Experience in setting/selecting and documenting technology standards for a development organization.
Our Values
An AutoZoner Always...
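For the runbook and on-call work described above, a common building block is a capped exponential backoff schedule for re-probing an unhealthy service. A minimal sketch; the base/factor/cap defaults are illustrative, not from any particular runbook:

```python
def backoff_delays(base: float = 1.0, factor: float = 2.0,
                   max_delay: float = 30.0, attempts: int = 6) -> list[float]:
    """Capped exponential backoff: base * factor**n, never above max_delay."""
    return [min(base * factor ** n, max_delay) for n in range(attempts)]

def probe_until_healthy(probe, delays, sleep=lambda s: None) -> bool:
    """Retry a health probe across the delay schedule; True on first success."""
    for delay in delays:
        if probe():
            return True
        sleep(delay)  # real code would use time.sleep, usually with jitter
    return probe()   # one final attempt after the last delay

print(backoff_delays())  # [1.0, 2.0, 4.0, 8.0, 16.0, 30.0]
```

Injecting `sleep` as a parameter keeps the retry logic unit-testable; production code would also add jitter so many probers do not retry in lockstep.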
PUTS CUSTOMERS FIRST
CARES ABOUT PEOPLE
STRIVES FOR EXCEPTIONAL PERFORMANCE
ENERGIZES OTHERS
EMBRACES DIVERSITY
HELPS TEAMS SUCCEED
Posted 1 month ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
About AutoZone AutoZone is the USA’s leading retailer and a leading distributor of automotive replacement parts and accessories with more than 6,000 stores in US, Puerto Rico, Mexico, and Brazil. Each store carries an extensive line for cars, sport utility vehicles, vans and light trucks, including new and remanufactured hard parts, maintenance items and accessories. We also sell automotive diagnostic and repair software through ALLDATA, diagnostic and repair information through ALLDATAdiy.com, automotive accessories through AutoAnything.com and auto and light truck parts and accessories through AutoZone.com. Since opening its first store in Forrest City, Ark. on July 4, 1979, the company has joined the New York Stock Exchange (NYSE: AZO) and earned a spot in the Fortune 500. AutoZone has been committed to providing the best parts, prices, and customer service in the automotive aftermarket industry. We have a rich culture and history of going the Extra Mile for our customers and our community. At AutoZone you’re not just doing a job; you’re playing a crucial role in creating a better experience for our customers, while creating opportunities to DRIVE YOUR CAREER almost anywhere! We are looking for talented people who are customer focused, enjoy helping others and have the DRIVE to excel in a fast-paced environment! Position Summary As a Cloud Platform Engineer, you will be responsible for deploying, managing, and optimizing our cloud-based infrastructure, focusing on technologies such as Terraform, Kubernetes, GitOps/ArgoCD, CI/CD, GitLab, Ansible, and more. You will collaborate with cross-functional teams to ensure seamless deployment and delivery of applications while maintaining the highest standards of reliability, security, and scalability. Key Responsibilities Design, implement, and manage cloud infrastructure using Terraform to automate provisioning and scaling of resources. 
Work with Kubernetes/CloudRun or any GCP compute engines to deploy, configure, and maintain containerized applications and microservices.
Implement and maintain GitOps practices using ArgoCD for declarative, automated application deployment and management.
Develop and maintain CI/CD pipelines to automate the build, test, and deployment processes.
Collaborate with development teams to ensure the integration of GitLab and other version control tools into the CI/CD workflow.
Monitor and optimize system performance, troubleshoot issues, and ensure high availability and scalability of cloud services.
Collaborate with the security team to implement security best practices and ensure compliance with industry standards.
Participate in the on-call rotation, troubleshooting and fixing any outages, to provide 24/7 support for critical infrastructure incidents.
Stay up to date with industry best practices and emerging technologies to continuously improve cloud operations.
Requirements
Proven experience in cloud infrastructure management, preferably with GCP.
Strong expertise in Terraform for infrastructure as code (IaC) and automation.
Hands-on experience with Kubernetes for container orchestration.
Proficiency in CI/CD tools, with a focus on GitLab CI/CD. Familiarity with GitOps practices and ArgoCD.
Understanding of Linux systems and networking.
Knowledge of security best practices and compliance standards.
Excellent problem-solving and communication skills.
Ability to work effectively in a collaborative, cross-functional team environment.
Education And Experience
Bachelor's degree in information technology, MIS, Computer Science or related field required.
Typically requires eight to fifteen years’ experience within the skills outlined above.
Written and deployed mission-critical workloads to the public cloud (preferably Google Cloud).
Experience with modern container orchestration systems: Kubernetes.
Experience in setting/selecting and documenting technology standards for a development organization.
Has led engineering teams or directed the work effort of technical resources.
Our Values
An AutoZoner Always...
PUTS CUSTOMERS FIRST
CARES ABOUT PEOPLE
STRIVES FOR EXCEPTIONAL PERFORMANCE
ENERGIZES OTHERS
EMBRACES DIVERSITY
HELPS TEAMS SUCCEED
Posted 1 month ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
About Aeris:
For more than three decades, Aeris has been a trusted cellular IoT leader enabling the biggest IoT programs and opportunities across Automotive, Utilities and Energy, Fleet Management and Logistics, Medical Devices, and Manufacturing. Our IoT technology expertise serves a global ecosystem of 7,000 enterprise customers and 30 mobile network operator partners, and 80 million IoT devices across the world. Aeris powers today’s connected smart world with innovative technologies and borderless connectivity that simplify management, enhance security, optimize performance, and drive growth. Built from the ground up for IoT and road-tested at scale, Aeris IoT Services are based on the broadest technology stack in the industry, spanning connectivity up to vertical solutions. As veterans of the industry, we know that implementing an IoT solution can be complex, and we pride ourselves on making it simpler. Our company is in an enviable spot. We’re profitable, and both our bottom line and our global reach are growing rapidly. We’re playing in an exploding market where technology evolves daily and new IoT solutions and platforms are being created at a fast pace.
A few things to know about us:
We put our customers first. When making decisions, we always seek to do what is right for our customer first, our company second, our teams third, and individual selves last.
We do things differently. As a pioneer in a highly competitive industry that is poised to reshape every sector of the global economy, we cannot fall back on old models. Rather, we must chart our own path and strive to out-innovate, out-learn, out-maneuver and out-pace the competition on the way.
We walk the walk on diversity. We’re a brilliant and eclectic mix of ethnicities, religions, industry experiences, sexual orientations, generations and more – and that’s by design. We see diverse perspectives as a core competitive advantage.
Integrity is essential. We believe in doing things well – and doing them right.
Integrity is a core value here: you’ll see it embodied in our staff, our management approach and growing social impact work (we have a VP devoted to it). You’ll also see it embodied in the way we manage people and our HR issues: we expect employees and managers to deal with issues directly, immediately and with the utmost respect for each other and for the Company.
We are owners. Strong managers enable and empower their teams to figure out how to solve problems. You will be no exception, and will have the ownership, accountability and autonomy needed to be truly creative.
Aeris is looking for DevOps engineers who are eager to learn new skills and help develop services using the latest technology. You should be passionate about building scalable and highly available infrastructure and services. You will work closely with Aeris development teams to deliver a leading connected vehicle platform to automotive OEMs.
Develop automation tools and frameworks for CI/CD.
Develop critical infrastructure components or systems and follow them through to production.
Build and support public cloud based SaaS and PaaS services.
Identify, build and improve tooling, processes, security and infrastructure that support Aeris cloud environments.
Identify, design, and develop automation solutions to create, manage and improve cloud infrastructure, builds and deployments.
Lead from Proof of Concept to implementation for critical infrastructure components and new DevSecOps tools and solutions.
Represent DevOps in design reviews and work cross-functionally with Engineering and Operations teams for operational readiness.
Dive deep to resolve problems at their root, looking for failure patterns and driving resolution.
Qualifications And Experience
A Bachelor's degree in Engineering and around 10+ years of professional technology experience.
Experience deploying and running enterprise-grade public cloud infrastructure, preferably with GCP.
Hands-on automation with Terraform and Groovy, and experience with CI/CD.
Hands-on experience in Linux/Unix environments and scripting languages (e.g., Shell, Perl, Python, JavaScript, Go).
Hands-on experience in two or more of the following areas:
Databases (NoSQL/SQL): Hadoop, Cassandra, MySQL
Messaging system configuration and maintenance (Kafka+ZooKeeper, MQTT, RabbitMQ)
WAF, Cloud Armor, NGINX
Apache/Tomcat/JBoss based web applications and services (REST)
Observability stacks (e.g., ELK, Grafana Labs)
Hands-on experience with Kubernetes (GKE, AKS)
Hands-on experience with Jenkins
GitOps experience is a plus
Experience working with large enterprise-grade SaaS products
Proven capability for critical thinking, problem solving and the patience to see hard problems through to the end
Qualities
Passionate about building highly available and reliable public cloud infrastructure
Take ownership, make commitments, and deliver on your commitments
Good communicator
Team player who collaborates across different teams including DevSecOps, software development, and security
Continuous improvement mindset
What is in it for you?
You get to build the next leading-edge connected vehicle platform.
The ability to collaborate with our highly skilled groups who work with cutting-edge technologies.
High visibility as you support the systems that drive our public-facing services.
Career growth opportunities.
Aeris may conduct background checks to verify the information provided in your application and assess your suitability for the role. The scope and type of checks will comply with the applicable laws and regulations of the country where the position is based. Additional detail will be provided via the formal application process.
Aeris walks the walk on diversity. We’re a brilliant mix of varying ethnicities, religions, cultures, sexual orientations, gender identities, ages and professional/personal/military experiences – and that’s by design. Diverse perspectives are essential to our culture, innovative process and competitive edge.
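One concrete example of the Kafka maintenance work listed above: monitoring consumer lag, which per partition is simply the log-end offset minus the consumer group's last committed offset. A minimal sketch; the offsets below are made up, and in practice they would come from the Kafka admin API:

```python
def consumer_lag(end_offsets: dict[int, int],
                 committed_offsets: dict[int, int]) -> dict[int, int]:
    """Per-partition lag = log-end offset minus committed offset.

    A partition with no committed offset is treated as fully unread.
    """
    return {p: end_offsets[p] - committed_offsets.get(p, 0)
            for p in end_offsets}

end = {0: 1500, 1: 980}        # broker's latest offsets per partition
committed = {0: 1450, 1: 980}  # consumer group's committed offsets
print(consumer_lag(end, committed))  # {0: 50, 1: 0}
```

Alerting on sustained lag growth (rather than any single snapshot) is what usually distinguishes a consumer that is merely busy from one that is falling behind.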
Aeris is proud to be an equal opportunity employer.
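The failure-pattern analysis mentioned in the responsibilities above can be illustrated with a small, hypothetical log-triage helper; the signature names and regex patterns are invented for illustration and are not part of any Aeris tooling:

```python
import re
from collections import Counter

# Hypothetical failure signatures a CI triage tool might look for.
FAILURE_PATTERNS = {
    "oom": re.compile(r"OutOfMemoryError|Killed process"),
    "timeout": re.compile(r"timed? ?out", re.IGNORECASE),
    "flaky_network": re.compile(r"Connection (reset|refused)"),
}

def classify_failures(log_lines):
    """Count occurrences of known failure signatures in a build log."""
    counts = Counter()
    for line in log_lines:
        for name, pattern in FAILURE_PATTERNS.items():
            if pattern.search(line):
                counts[name] += 1
    return counts

log = [
    "step 3: java.lang.OutOfMemoryError: GC overhead limit exceeded",
    "step 4: request timed out after 30s",
    "step 4: Connection reset by peer",
]
print(classify_failures(log))
```

Aggregating counts like this across many builds is one simple way to spot recurring failure patterns rather than chasing each red build in isolation.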
Posted 1 month ago
15.0 years
0 Lacs
Chandigarh, India
On-site
Job Description Job Summary We are seeking a seasoned Observability Architect to define and lead our end-to-end observability strategy across highly distributed, cloud-native, and hybrid environments. This role requires a visionary leader with deep hands-on experience in New Relic and a strong working knowledge of other modern observability platforms like Datadog, Prometheus/Grafana, Splunk, OpenTelemetry, and more. You will design scalable, resilient, and intelligent observability solutions that empower engineering, SRE, and DevOps teams to proactively detect issues, optimize performance, and ensure system reliability. This is a senior leadership role with significant influence over platform architecture, monitoring practices, and cultural transformation across global teams. Key Responsibilities Architect and implement full-stack observability platforms, covering metrics, logs, traces, synthetics, real user monitoring (RUM), and business-level telemetry using New Relic and other tools like Datadog, Prometheus, ELK, or AppDynamics. Design and enforce observability standards and instrumentation guidelines for microservices, APIs, front-end applications, and legacy systems across hybrid cloud environments. Experience in OpenTelemetry adoption, ensuring vendor-neutral, portable observability implementations where appropriate. Build multi-tool dashboards, health scorecards, SLOs/SLIs, and integrated alerting systems tailored for engineering, operations, and executive consumption. Collaborate with engineering and DevOps teams to integrate observability into CI/CD pipelines, GitOps, and progressive delivery workflows. Partner with platform, cloud, and security teams to provide end-to-end visibility across AWS, Azure, GCP, and on-prem systems. Lead root cause analysis, system-wide incident reviews, and reliability engineering initiatives to reduce MTTR and improve MTBF. 
Evaluate, pilot, and implement new observability tools/technologies aligned with enterprise architecture and scalability requirements. Deliver technical mentorship and enablement, evangelizing observability best practices and nurturing a culture of ownership and data-driven decision-making. Drive observability governance and maturity models, ensuring compliance, consistency, and alignment with business SLAs and customer experience goals.

Required Qualifications
15+ years of overall IT experience, hands-on with application development, system architecture, operations in complex distributed environments, and troubleshooting and integrating applications and other cloud technology with observability tools. 5+ years of hands-on experience with observability tools such as New Relic, Datadog, Prometheus, etc., including APM, infrastructure monitoring, logs, synthetics, alerting, and dashboard creation. Proven experience and willingness to work with multiple observability stacks, such as: Datadog, Dynatrace, AppDynamics; Prometheus, Grafana; Elasticsearch, Fluentd, Kibana (EFK/ELK); Splunk; OpenTelemetry. Solid knowledge of Kubernetes, service mesh (e.g., Istio), containerization (Docker) and orchestration strategies. Strong experience with DevOps and SRE disciplines, including CI/CD, IaC (Terraform, Ansible), and incident response workflows. Fluency in one or more programming/scripting languages: Java, Python, Go, Node.js, Bash. Hands-on expertise in cloud-native observability services (e.g., CloudWatch, Azure Monitor, GCP Operations Suite). Excellent communication and stakeholder management skills, with the ability to align technical strategies with business goals.

Preferred Qualifications
Architect-level certifications in New Relic, Datadog, Kubernetes, AWS/Azure/GCP, or SRE/DevOps practices. Experience with enterprise observability rollouts, including organizational change management.
Understanding of ITIL, TOGAF, or COBIT frameworks as they relate to monitoring and service management. Familiarity with AI/ML-driven observability, anomaly detection, and predictive alerting. Why Join Us? Lead enterprise-scale observability transformations impacting customer experience, reliability, and operational excellence. Work in a tool-diverse environment, solving complex monitoring challenges across multiple platforms. Collaborate with high-performing teams across development, SRE, platform engineering, and security. Influence strategy, tooling, and architecture decisions at the intersection of engineering, operations, and business.
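The SLO and error-budget work called out in this role starts with simple arithmetic; a minimal sketch (the 99.9% target and 30-day window are illustrative assumptions, not figures from the posting):

```python
def error_budget_minutes(slo: float, window_days: int) -> float:
    """Allowed downtime for a given availability SLO over a rolling window."""
    return (1.0 - slo) * window_days * 24 * 60

def budget_remaining(slo: float, window_days: int, downtime_minutes: float) -> float:
    """Fraction of the error budget still unspent (negative means the budget is blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

# A 99.9% SLO over 30 days allows roughly 43.2 minutes of downtime.
print(error_budget_minutes(0.999, 30))    # ~43.2
print(budget_remaining(0.999, 30, 10.8))  # ~0.75
```

Burn-rate alerts and executive scorecards are typically built on top of exactly this kind of budget-remaining number.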
Posted 1 month ago
5.0 years
0 Lacs
Pune/Pimpri-Chinchwad Area
On-site
Job Title: Azure DevOps Engineer Location: Pune Experience: 5-7 Years Job Description 5+ years of Platform Engineering, DevOps, or Cloud Infrastructure experience Platform Thinking: Strong understanding of platform engineering principles, developer experience, and self-service capabilities Azure Expertise: Advanced knowledge of Azure services including compute, networking, storage, and managed services Infrastructure as Code: Proficient in Terraform, ARM templates, or Azure Bicep with hands-on experience in large-scale deployments DevOps and Automation CI/CD Pipelines: Expert-level experience with Azure DevOps, GitHub Actions, or Jenkins Automation Scripting: Strong programming skills in Python, PowerShell, or Bash for automation and tooling Git Workflows: Advanced understanding of Git branching strategies, pull requests, and code review processes Cloud Architecture and Security Cloud Architecture: Deep understanding of cloud design patterns, microservices, and distributed systems Security Best Practices: Implementation of security scanning, compliance automation, and zero-trust principles Networking: Advanced Azure networking concepts including VNets, NSGs, Application Gateways, and hybrid connectivity Identity Management: Experience with Azure Active Directory, RBAC, and identity governance Monitoring and Observability Azure Monitor: Advanced experience with Azure Monitor, Log Analytics, and Application Insights Metrics and Alerting: Implementation of comprehensive monitoring strategies and incident response Logging Solutions: Experience with centralized logging and log analysis platforms Performance Optimization: Proactive performance monitoring and optimization techniques Roles And Responsibilities Platform Development and Management Design and build self-service platform capabilities that enable development teams to deploy and manage applications independently Create and maintain platform abstractions that simplify complex infrastructure for development teams 
Develop internal developer platforms (IDP) with standardized templates, workflows, and guardrails Implement platform-as-a-service (PaaS) solutions using Azure native services Establish platform standards, best practices, and governance frameworks Infrastructure as Code (IaC) Design and implement Infrastructure as Code solutions using Terraform, ARM templates, and Azure Bicep Create reusable infrastructure modules and templates for consistent environment provisioning Implement GitOps workflows for infrastructure deployment and management Maintain infrastructure state management and drift detection mechanisms Establish infrastructure testing and validation frameworks DevOps and CI/CD Build and maintain enterprise-grade CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools Implement automated testing strategies including infrastructure testing, security scanning, and compliance checks Create deployment strategies including blue-green, canary, and rolling deployments Establish branching strategies and release management processes Implement secrets management and secure deployment practices Platform Operations and Reliability Implement monitoring, logging, and observability solutions for platform services Establish SLAs and SLOs for platform services and developer experience metrics Create self-healing and auto-scaling capabilities for platform components Implement disaster recovery and business continuity strategies Maintain platform security posture and compliance requirements Preferred Qualifications Bachelor’s degree in computer science or a related field (or equivalent work experience) Show more Show less
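Drift detection, one of the IaC responsibilities listed above, amounts to diffing the desired state from templates against the actual state reported by the cloud API. A toy sketch in plain Python (the resource fields are invented for illustration; real tooling would pull both sides from Terraform state and Azure Resource Manager):

```python
def detect_drift(desired: dict, actual: dict) -> dict:
    """Return a per-key report of differences between desired and actual state."""
    report = {}
    for key in desired.keys() | actual.keys():
        want, have = desired.get(key), actual.get(key)
        if want != have:
            report[key] = {"desired": want, "actual": have}
    return report

# Hypothetical VM resource: someone flipped public_ip on outside of IaC.
desired = {"sku": "Standard_B2s", "tags": {"env": "prod"}, "public_ip": False}
actual = {"sku": "Standard_B2s", "tags": {"env": "prod"}, "public_ip": True}
print(detect_drift(desired, actual))
```

A real pipeline would run a comparison like this on a schedule and either alert or auto-remediate by re-applying the templates.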
Posted 1 month ago
130.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Northern Trust Northern Trust, a Fortune 500 company, is a globally recognized, award-winning financial institution that has been in continuous operation since 1889. Northern Trust is proud to provide innovative financial services and guidance to the world’s most successful individuals, families, and institutions by remaining true to our enduring principles of service, expertise, and integrity. With more than 130 years of financial experience and over 22,000 partners, we serve the world’s most sophisticated clients using leading technology and exceptional service.

About The Role The Northern Trust Company seeks a senior Lead, DevSecOps and AI Engineer who will assist in the execution of the strategic vision of Northern Trust Asset Management’s Distribution Technology capabilities. The successful candidate will be a strong technical leader and will be responsible for the development of a comprehensive technology platform. We are looking for a DevSecOps & AI Engineer to join our high-performing team at the intersection of cloud infrastructure, applied AI and application security. This is a hands-on engineering role designed for someone who thrives on solving complex technical problems and shaping secure, scalable, cloud-native platforms. You’ll work across a broad spectrum of engineering pipelines—from web applications to microservices, infrastructure-as-code, and AI/ML workflows, primarily on Microsoft Azure. This role offers a unique opportunity to embed security and automation into every layer of our digital delivery, within the demanding and highly regulated context of financial services.

Job Description And Key Responsibilities Build and secure multi-domain CI/CD pipelines, including application development (e.g., microservices, UI), infrastructure provisioning using Terraform, and machine learning and AI deployment workflows on Azure AI. Design and implement DevSecOps automation, integrating security tools and compliance gates (SAST, DAST, IaC scanning, etc.)
into GitHub Actions pipelines. Provision and manage Azure cloud environments, including AKS, AI services, and other resources through infrastructure-as-code. Collaborate closely with developers, data scientists, platform engineers, and security stakeholders to ensure secure-by-design delivery across all technology layers. Monitor and observe systems in production using tools like Azure Application Insights, Dynatrace and proactively resolve issues. Contribute to architecture decisions that balance agility, performance, security, and regulatory requirements. Must-Have Skills What We’re Looking For 10+ years of total experience in software development. 5+ years of experience in DevOps, Site Reliability Engineering or Platform Engineering roles Proven ability to engineer and scale complex systems in production Able to guide and mentor junior developers Solid understanding of DevSecOps principles and integration into CI/CD pipelines Strong expertise in: Azure cloud services and ecosystem (e.g. AKS, Azure AI, Azure Resource Manager etc.) Infrastructure-as-Code with Terraform GitHub / GitHub Actions Security tooling: Checkmarx, SonarQube, Wiz, Microsoft Defender for Cloud Monitoring and observability (Azure Application Insights, Dynatrace or similar) Nice-to-Have Skills Familiarity with MLOps / securing AI/ML pipelines. Awareness of data privacy, risk, and model governance frameworks in AI. Experience working in or with highly regulated industries. Knowledge of and hands-on experience with GitOps Soft Skills Excellent communicator and team collaborator across domains. Strong problem-solving mindset with attention to both system design and implementation details. Comfortable in ambiguous, fast-evolving environments typical of finance and banking. Working With Us As a Northern Trust partner, greater achievements await. 
You will be part of a flexible and collaborative work culture in an organization where financial strength and stability is an asset that emboldens us to explore new ideas. Movement within the organization is encouraged, senior leaders are accessible, and you can take pride in working for a company committed to assisting the communities we serve! Join a workplace with a greater purpose. We’d love to learn more about how your interests and experience could be a fit with one of the world’s most admired and sustainable companies! Build your career with us and apply today. #MadeForGreater Reasonable accommodation Northern Trust is committed to working with and providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation for any part of the employment process, please email our HR Service Center at MyHRHelp@ntrs.com. We hope you’re excited about the role and the opportunity to work with us. We value an inclusive workplace and understand flexibility means different things to different people. Apply today and talk to us about your flexible working requirements and together we can achieve greater. Show more Show less
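The compliance gates this role describes typically fail a pipeline when a scanner reports findings above a severity threshold. A minimal, tool-agnostic sketch (the finding format and severity levels are assumptions, not the output format of Checkmarx, SonarQube, or any specific scanner):

```python
SEVERITY_ORDER = ["low", "medium", "high", "critical"]

def gate_passes(findings, max_allowed="medium"):
    """Fail the gate if any finding is more severe than max_allowed."""
    limit = SEVERITY_ORDER.index(max_allowed)
    return all(SEVERITY_ORDER.index(f["severity"]) <= limit for f in findings)

# Invented findings standing in for parsed SAST/DAST scanner output.
findings = [
    {"id": "SQLI-01", "severity": "high"},
    {"id": "XSS-02", "severity": "low"},
]
print(gate_passes(findings))                      # high exceeds the default threshold
print(gate_passes(findings, max_allowed="high"))  # permitted under a looser policy
```

In a GitHub Actions pipeline, a check like this would run after the scan step and exit non-zero to block the merge or deployment.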
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hello Visionary! We empower our people to stay resilient and relevant in a constantly changing world. We’re looking for people who are always searching for creative ways to grow and learn. People who want to make a real impact, now and in the future.

We are looking for DevOps professionals with 5 to 8 years of experience in Cloud Infrastructure maintenance and operations. Strong hands-on experience with Azure services (Compute, Networking, Storage, and Security). Expertise in Infrastructure as Code (IaC) using Bicep/ARM/Terraform (Bicep/ARM templates experience is a plus). Proficiency in managing and optimizing CI/CD pipelines in Azure DevOps. In-depth knowledge of networking concepts (VNETs, Subnets, DNS, Load Balancers, VPNs). Proficiency in scripting with PowerShell, Azure CLI, or Python for automation. Strong knowledge of Git and version control best practices.

Infrastructure Design & Management: Architect and manage Azure cloud infrastructure for scalability, high availability, and cost efficiency. Deploy and maintain Azure services such as Virtual Machines, App Services, Kubernetes (AKS), Storage, and Databases. Implement networking solutions like Virtual Networks, VPN Gateways, NSGs, and Private Endpoints.

CI/CD Pipeline Management: Design, build, and maintain Azure DevOps pipelines for automated deployments. Implement GitOps and branching strategies to streamline development workflows. Ensure efficient release management and deployment automation using Azure DevOps, GitHub Actions, or Jenkins.

Infrastructure as Code (IaC): Write, maintain, and optimize Bicep/ARM/Terraform templates for infrastructure provisioning. Automate resource deployment and configuration management using Azure CLI, PowerShell, etc.

Security & Compliance: Implement Azure security best practices, including RBAC, Managed Identities, Key Vault, and Azure Policy. Monitor and enforce network security with NSGs, Azure Firewall, and DDoS protection.
Ensure compliance with security frameworks such as CIS, NIST, ASB, etc. Conduct security audits, vulnerability assessments, and enforce least privilege access controls.

Monitoring & Optimization: Set up Azure Monitor, Log Analytics, and Application Insights for performance tracking and alerting. Optimize infrastructure for cost efficiency and performance using Azure Advisor and Cost Management. Troubleshoot and resolve infrastructure-related incidents in production and staging environments.

Make your mark in our exciting world at Siemens. This role, based in Chennai, is an individual contributor position. You may be required to visit other locations within India and internationally. In return, you'll have the opportunity to work with teams shaping the future. At Siemens, we are a collection of over 312,000 minds building the future, one day at a time, worldwide. We are dedicated to equality and welcome applications that reflect the diversity of the communities we serve. All employment decisions at Siemens are based on qualifications, merit, and business need. Bring your curiosity and imagination, and help us shape tomorrow. We’ll support you with: Hybrid working opportunities. Diverse and inclusive culture. Variety of learning & development opportunities. Attractive compensation package. Find out more about Siemens careers at: www.siemens.com/careers
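Alert rules like the Azure Monitor setups described above usually fire only when a metric stays beyond a threshold for several consecutive samples, to avoid paging on a single spike. A tool-agnostic sketch (the CPU values, threshold, and window are illustrative assumptions):

```python
def should_alert(samples, threshold, sustained=3):
    """Fire when `sustained` consecutive samples exceed the threshold."""
    run = 0
    for value in samples:
        run = run + 1 if value > threshold else 0
        if run >= sustained:
            return True
    return False

# Hypothetical per-minute CPU percentages from a monitored VM.
cpu = [40, 85, 90, 70, 92, 95, 97, 60]
print(should_alert(cpu, threshold=80))               # three consecutive breaches: 92, 95, 97
print(should_alert(cpu, threshold=80, sustained=4))  # never four in a row
```

Managed alerting products express the same idea as an aggregation window plus an evaluation frequency rather than a sample count, but the logic is equivalent.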
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description About CyberArk : CyberArk (NASDAQ: CYBR), is the global leader in Identity Security. Centered on privileged access management, CyberArk provides the most comprehensive security offering for any identity – human or machine – across business applications, distributed workforces, hybrid cloud workloads and throughout the DevOps lifecycle. The world’s leading organizations trust CyberArk to help secure their most critical assets. To learn more about CyberArk, visit our CyberArk blogs or follow us on X, LinkedIn or Facebook. Job Description CyberArk DevOps Engineers are coders who enjoy a challenge and will be responsible for automating and streamlining our operations and processes, building and maintaining tools for deployment, monitoring, and operations, and troubleshooting and resolving issues in our dev, test, and production environments. As a DevOps Engineer, you will partner closely with software engineers, QA, and product teams to design and implement robust CI/CD pipelines , define infrastructure through code, and create tools that empower developers to ship high-quality features faster. You’ll actively contribute to cloud-native development practices , introduce automation wherever possible, and champion a culture of continuous improvement, observability, and developer experience (DX) . Your day-to-day work will involve a mix of platform/DevOps engineering , build/release automation , Kubernetes orchestration , infrastructure provisioning , and monitoring/alerting strategy development . You will also help enforce secure coding and deployment standards, contribute to runbooks and incident response procedures, and help scale systems to support rapid product growth. This is a hands-on technical role that requires strong coding ability, cloud architecture experience, and a mindset that thrives on collaboration, ownership, and resilience engineering . 
Qualifications Collaborate with developers to ensure seamless CI/CD workflows using tools like GitHub Actions, Jenkins CI/CD, and GitOps Write automation and deployment scripts in Groovy, Python, Go, Bash, PowerShell or similar Implement and maintain Infrastructure as Code (IaC) using Terraform or AWS CloudFormation Build and manage containerized applications using Docker and orchestrate using Kubernetes (EKS, AKS, GKE) Manage and optimize cloud infrastructure on AWS Implement automated security and compliance checks using the latest security scanning tools like Snyk, Checkmarx, and Codacy. Develop and maintain monitoring, alerting, and logging systems using Datadog, Prometheus, Grafana, ELK, or Loki Drive observability and SLO/SLA adoption across services Support development teams in debugging, environment management, and rollout strategies (blue/green, canary deployments) Contribute to code reviews and build automation libraries for internal tooling and shared platforms Additional Information Requirements: 5-8 years of experience focused on DevOps Engineering, Cloud administration, or platform engineering, and application development Strong hands-on experience in: Linux/Unix and Windows OS Network architecture and security configurations Hands-on experience with the following scripting technologies: Automation/Configuration management using either Ansible, Puppet, Chef, or an equivalent Python, Ruby, Bash, PowerShell Hands-on experience with IaC (Infrastructure as Code) like Terraform, CloudFormation Hands-on experience with Cloud infrastructure such as AWS, Azure, GCP Excellent communication skills, and strong attention to detail Strong hands-on technical abilities Strong computer literacy and/or the comfort, ability, and desire to advance technically Strong understanding of Information Security in various environments Demonstrated ability to assume sole and independent responsibilities Ability to keep track of numerous detail-intensive, interdependent
tasks and ensure their accurate completion.
Preferred Tools & Technologies:
Languages: Python, Go, Bash, YAML, PowerShell
Version Control & CI/CD: Git, GitHub Actions, GitLab CI, Jenkins, GitOps
IaC: Terraform, CloudFormation
Containers: Docker, Kubernetes, Helm
Monitoring & Logging: Datadog, Prometheus, Grafana, ELK/EFK Stack
Cloud Platforms: AWS (EC2, ECS, EKS, Lambda, S3, Networking/VPC, cost optimization)
Security: HashiCorp Vault, Trivy, Aqua, OPA/Gatekeeper
Databases & Caches: PostgreSQL, MySQL, Redis, MongoDB
Others: NGINX, Istio, Consul, Kafka, Redis
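The blue/green and canary rollout strategies mentioned above can be simulated in a few lines; the stage percentages and the 1% error-rate cutoff below are invented for illustration, not CyberArk practice:

```python
def run_canary(stages, error_rate_at, max_error_rate=0.01):
    """Advance through traffic stages, aborting when the observed error rate is too high.

    stages: traffic percentages to try in order, e.g. [5, 25, 50, 100]
    error_rate_at: callable returning the observed error rate at a given percentage
    Returns the final traffic percentage reached (0 means rolled back immediately).
    """
    reached = 0
    for pct in stages:
        if error_rate_at(pct) > max_error_rate:
            return reached  # roll back to the last good stage
        reached = pct
    return reached

# Two hypothetical releases: one healthy, one that breaks under real load.
healthy = lambda pct: 0.002
spiky = lambda pct: 0.002 if pct < 50 else 0.05

print(run_canary([5, 25, 50, 100], healthy))  # promoted all the way
print(run_canary([5, 25, 50, 100], spiky))    # halted before the bad stage
```

In production this decision loop is usually delegated to a progressive-delivery controller that reads live metrics, but the control flow is the same.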
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Senior Software Engineer Overview At Mastercard, we are dedicated to delivering unparalleled customer experiences by pushing the boundaries of innovation. Our Network team is seeking a Senior Software Engineer to propel our customer experience strategy forward through unwavering innovation and adept problem-solving. The quintessential candidate is focused on the customer experience journey, exuding high motivation, an insatiable intellectual curiosity, exceptional analytical acumen, and a robust entrepreneurial mindset. About The Role Design and Develop software around internet traffic engineering technologies including but not limited to public and private CDNs, load balancing, DNS, DHCP and IPAM solutions. Enthusiastic for building new platform technologies from ground up contributing to our high impact environment. Develop and Maintain Public and Private REST APIs, keeping high code and quality standards. Provide timely and competent support for the technologies the team owns and builds. Bridge automation gaps by writing and maintaining scripts that enhance automation and improve the overall quality of our services. Demonstrate drive and curiosity by continuously learning and teaching yourself new skills. Collaborate effectively with cross-functional teams, demonstrating your technical prowess and contributing to the overall success of the projects. 
All About You To excel in this role, you should have: Bachelor’s degree in Computer Science or a related technical field, or equivalent practical experience. Strong fundamentals in internet and intranet traffic engineering, OSI Layers & Protocols, DNS, DHCP, IP address management and TCP/HTTP processing. Practical understanding of data structures, algorithms, and database fundamentals. Proficient in Java, Python, SQL, NoSQL, Kubernetes, PCF, Jenkins, Chef and related platforms. Knowledgeable in cloud-native and multi-tiered applications development. Experience with programming around Network services, Domain Nameservers, DHCP and IPAM solutions. Understands the fundamental principles behind CI/CD pipelines, DevSecOps, GitOps, and related best practices. Ability to write and maintain scripts for automation, showcasing your commitment to efficiency and innovation. Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks comes with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-250617 Show more Show less
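The IPAM work described above is largely subnet arithmetic, which Python's standard `ipaddress` module handles directly; the address plan below is invented for illustration:

```python
import ipaddress

# Hypothetical site supernet carved into /26 blocks for allocation.
supernet = ipaddress.ip_network("10.20.0.0/24")
blocks = list(supernet.subnets(new_prefix=26))
print([str(b) for b in blocks])
# ['10.20.0.0/26', '10.20.0.64/26', '10.20.0.128/26', '10.20.0.192/26']

# Membership checks are the bread and butter of IP address management.
addr = ipaddress.ip_address("10.20.0.70")
assignment = next(b for b in blocks if addr in b)
print(assignment)  # the /26 block containing 10.20.0.70
```

An IPAM service built around an API would layer persistence and conflict detection on top of exactly these primitives.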
Posted 1 month ago
0 years
0 - 0 Lacs
Gurgaon
On-site
Job description Job Summary: We are looking for a skilled MLOps Engineer who specializes in deploying and managing machine learning models using cloud-native CI/CD pipelines, FastAPI, and Kubernetes, without Docker. The ideal candidate should be well-versed in scalable model serving, API development, and infrastructure automation on the cloud using native container alternatives or pre-built images. Key Responsibilities: Design, develop, and maintain CI/CD pipelines for ML model training, testing, and deployment on cloud platforms (Azure/AWS/GCP). Develop REST APIs using FastAPI for model inference and data services. Deploy and orchestrate microservices and ML workloads on Kubernetes clusters (EKS, AKS, GKE, or on-prem K8s). Implement model monitoring, logging, and version control without Docker-based containers. Utilize alternatives such as Singularity, Buildah, or cloud-native container orchestration. Automate deployment pipelines using tools like GitHub Actions, GitLab CI, Jenkins, Azure DevOps, etc. Manage secrets, configurations, and infrastructure using Kubernetes secrets, ConfigMaps, Helm, or Kustomize. Work closely with Data Scientists and Backend Engineers to integrate ML models with APIs and UIs. Optimize performance, scalability, and reliability of ML services in production. Required Skills: Strong experience with Kubernetes (deployment, scaling, Helm/Kustomize). Deep understanding of CI/CD tools like Jenkins, GitHub Actions, GitLab CI/CD, or Azure DevOps. Experience with FastAPI for high-performance ML/REST APIs. Proficient in cloud platforms (AWS, GCP, or Azure) for ML pipeline orchestration. Experience with non-Docker containerization or deployment tools (e.g., Singularity, Podman, or OCI-compliant methods). Strong Python skills and familiarity with ML libraries and model serialization (e.g., Pickle, ONNX, TorchServe). Good understanding of DevOps principles, GitOps, and IaC (Terraform or similar).
Preferred Qualifications: Experience with Kubeflow, MLflow, or similar tools. Familiarity with model monitoring tools like Prometheus, Grafana, or Seldon Core. Understanding of security and compliance in production ML systems. Bachelor's or Master’s degree in Computer Science, Engineering, or related field.
Industry: Technology, Information and Internet
Employment Type: Full-time
Job Types: Full-time, Permanent
Pay: ₹35,000.00 - ₹50,000.00 per month
Work Location: In person
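Model serialization, listed among the required skills above, is easiest to show with `pickle` from the standard library (the "model" here is a stand-in dict rather than a real trained artifact, and pickled data should only ever be loaded from trusted sources):

```python
import pickle

# Stand-in for a trained model: coefficients for a simple linear predictor.
model = {"weights": [0.4, -1.2], "bias": 0.1}

def predict(m, features):
    """Apply the linear model to a feature vector."""
    return sum(w * x for w, x in zip(m["weights"], features)) + m["bias"]

# Serialize to bytes; in practice these bytes go to a model registry or object store,
# and the serving API (e.g., a FastAPI endpoint) loads them at startup.
blob = pickle.dumps(model)
restored = pickle.loads(blob)

assert restored == model
print(predict(restored, [1.0, 0.5]))  # ≈ -0.1
```

Formats like ONNX serve the same round-trip purpose but are framework-neutral and safe to load, which is why they are preferred at serving boundaries.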
Posted 1 month ago
3.0 years
0 Lacs
Bengaluru
On-site
We help the world run better At SAP, we enable you to bring out your best. Our company culture is focused on collaboration and a shared passion to help the world run better. How? We focus every day on building the foundation for tomorrow and creating a workplace that embraces differences, values flexibility, and is aligned to our purpose-driven and future-focused work. We offer a highly collaborative, caring team environment with a strong focus on learning and development, recognition for your individual contributions, and a variety of benefit options for you to choose from. About the team: SAP's Business Technology Platform is shaping the future of enterprise software, creating the ability to extend and personalize SAP applications, integrate, and connect entire landscapes, and empower business users to integrate processes and experiences. Our BTP AI team aims to be on top of the latest advancements in AI and how we apply these to increase the platform value for our customers and partners. It is a multi-disciplinary team of data engineers, engagement leads, development architects and developers that aims to support and deliver AI cases in the context of BTP. The Role: We are looking for a Full-Stack Engineer with a passion for strategic cloud platform topics and the field of generative AI. Generative artificial intelligence (GenAI) has emerged as a transformative force in society, and has the ability of creating, mimicking, and innovating a wide range of domains. This has implications for enterprise software and ultimately SAP. Become part of a multi-disciplinary team that focuses on execution and shaping the future of GenAI capabilities across our Business Technology Platform. This role requires a self-directed team player with deep coding knowledge and business acumen. In this role, you will be working closely with Architects, Data Engineers, UX designers and many others. 
Your responsibility as Full-Stack Engineer will be to: Iterate rapidly, collaborating with product and design to build and launch PoCs and first versions of new products & features. Work with engineers across the company to ship modular and integrated products and features. You feel at home in both the TypeScript/Node.js backend as well as the UI5/JavaScript/React frontend and other software technologies including REST/JSON. Design AI-based solutions to complex problems and requirements in collaboration with others in a cross-functional team. Assess new technologies in the field of AI, tools, and infrastructure with which to evolve existing highly used functionalities and services in the cloud. Design, maintain, and optimize data infrastructure for data collection, management, transformation, and access. Role Requirement: Experience in data-centric programming languages (e.g. Python), SQL databases (e.g. SAP HANA Cloud), data modeling, integration, and schema design is a plus. Excellent communication skills with fluency in written and spoken English Tech you bring: SAP BTP, Cloud Foundry, Kyma, Docker, Kubernetes, SAP CAP, Jenkins, Git, GitOps, Python Critical thinking, innovative mindset, problem-solving mindset Engineering/master’s degree in computer science or related field with 3+ years professional experience in Software Development Extensive experience in the full life cycle of software development, from design and implementation to testing and deployment. Ability to thrive in a collaborative environment involving many different teams and stakeholders. You enjoy working with a diverse group of people with different expertise backgrounds and perspectives. Aware of the fast-changing AI landscape and confident to suggest new, innovative ways to achieve product features. Proficient in writing clean and scalable code using the programming languages in the AI and BTP technology stack.
Ability to adapt to evolving technologies and industry trends in GenAI while working in a cross-functional team, showing creative problem-solving skills and customer-centricity. #SAPInternalT2 Bring out your best SAP innovations help more than four hundred thousand customers worldwide work together more efficiently and use business insight more effectively. Originally known for leadership in enterprise resource planning (ERP) software, SAP has evolved to become a market leader in end-to-end business application software and related services for database, analytics, intelligent technologies, and experience management. As a cloud company with two hundred million users and more than one hundred thousand employees worldwide, we are purpose-driven and future-focused, with a highly collaborative team ethic and commitment to personal development. Whether connecting global industries, people, or platforms, we help ensure every challenge gets the solution it deserves. At SAP, you can bring out your best. We win with inclusion SAP’s culture of inclusion, focus on health and well-being, and flexible working models help ensure that everyone – regardless of background – feels included and can run at their best. At SAP, we believe we are made stronger by the unique capabilities and qualities that each person brings to our company, and we invest in our employees to inspire confidence and help everyone realize their full potential. We ultimately believe in unleashing all talent and creating a better and more equitable world. SAP is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to the values of Equal Employment Opportunity and provide accessibility accommodations to applicants with physical and/or mental disabilities. 
If you are interested in applying for employment with SAP and are in need of accommodation or special assistance to navigate our website or to complete your application, please send an e-mail with your request to the Recruiting Operations Team: Careers@sap.com For SAP employees: Only permanent roles are eligible for the SAP Employee Referral Program, according to the eligibility rules set in the SAP Referral Policy. Specific conditions may apply for roles in Vocational Training. EOE AA M/F/Vet/Disability: Qualified applicants will receive consideration for employment without regard to their age, race, religion, national origin, ethnicity, gender (including pregnancy, childbirth, et al), sexual orientation, gender identity or expression, protected veteran status, or disability. Successful candidates might be required to undergo a background verification with an external vendor. Requisition ID: 421073 | Work Area: Software-Design and Development | Expected Travel: 0 - 10% | Career Status: Professional | Employment Type: Regular Full Time | Additional Locations: #LI-Hybrid.
Posted 1 month ago