
994 GitOps Jobs - Page 25

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join our digital revolution in NatWest Digital X. In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Job Description

Join us as a Software Engineer:
- This is an opportunity for a driven Software Engineer to take on an exciting new career challenge.
- Day-to-day, you'll build a wide network of stakeholders of varying levels of seniority.
- It's a chance to hone your existing technical skills and advance your career.
- We're offering this role at associate level.

What you'll do

In your new role, you'll engineer and maintain innovative, customer-centric, high-performance, secure and robust solutions. We are seeking a highly skilled and motivated AWS Cloud Engineer with deep expertise in Amazon EKS, Kubernetes, Docker, and Helm chart development. The ideal candidate will be responsible for designing, implementing, and maintaining scalable, secure, and resilient containerized applications in the cloud.

You'll also be responsible for:
- Designing, deploying, and managing Kubernetes clusters using Amazon EKS.
- Developing and maintaining Helm charts for deploying containerized applications (a minimal sketch follows this posting).
- Building and managing Docker images and registries for microservices.
- Automating infrastructure provisioning using Infrastructure as Code (IaC) tools (e.g., Terraform, CloudFormation).
- Monitoring and troubleshooting Kubernetes workloads and cluster health.
- Supporting CI/CD pipelines for containerized applications.
- Collaborating with development and DevOps teams to ensure seamless application delivery.
- Ensuring security best practices are followed in container orchestration and cloud environments.
- Optimizing performance and cost of cloud infrastructure.

The skills you'll need

You'll need a background in software engineering, software design, architecture, and an understanding of how your area of expertise supports our customers. You'll need experience in the Java full stack, including microservices, ReactJS, AWS, Spring, Spring Boot, Spring Batch, PL/SQL, Oracle, PostgreSQL, JUnit, Mockito, cloud, REST APIs, API Gateway, Kafka, and API development.

You'll also need:
- 3+ years of hands-on experience with AWS services, especially EKS, EC2, IAM, VPC, and CloudWatch.
- Strong expertise in Kubernetes architecture, networking, and resource management.
- Proficiency in Docker and container lifecycle management.
- Experience in writing and maintaining Helm charts for complex applications.
- Familiarity with CI/CD tools such as Jenkins, GitLab CI, or GitHub Actions.
- Solid understanding of Linux systems, shell scripting, and networking concepts.
- Experience with monitoring tools like Prometheus, Grafana, or Datadog.
- Knowledge of security practices in cloud and container environments.

Preferred qualifications:
- AWS Certified Solutions Architect or AWS Certified DevOps Engineer.
- Experience with service mesh technologies (e.g., Istio, Linkerd).
- Familiarity with GitOps practices and tools like ArgoCD or Flux.
- Experience with logging and observability tools (e.g., ELK stack, Fluentd).
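As an illustration of the Helm chart work this posting describes, here is a minimal sketch of a templated Deployment plus its default values; the chart layout, image, and values are hypothetical examples, not NatWest internals.

```yaml
# values.yaml -- hypothetical defaults for the chart
replicaCount: 2
image:
  repository: registry.example.com/orders-service
  tag: "1.0.0"
containerPort: 8080
```

```yaml
# templates/deployment.yaml -- renders one Deployment from the values above
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-orders
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ .Release.Name }}-orders
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-orders
    spec:
      containers:
        - name: orders
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.containerPort }}
```

Installing or upgrading the release is then one idempotent command, e.g. `helm upgrade --install orders ./chart -f values.yaml`.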

Posted 1 month ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: VP - Digital Expert Support Lead
Experience: 12+ Years
Location: Pune

Position Overview

The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
- Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
- Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
- Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
- Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
- Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
- Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
- Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
- Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
- Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
- Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
- Lead major incident response and drive cross-functional war rooms for critical recovery.
- Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching.
- Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits (an example alert rule follows this posting).

Engineering Collaboration & Enhancements
- Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
- Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
- Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
- Coordinate enhancement requests based on operational analytics and feedback loops.
- Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
- Build and mentor a high-caliber AI Support Squad of support engineers, SREs, and automation leads.
- Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
- Present production health reports to business, engineering, and executive leadership.
- Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
- Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
- 12+ years of software engineering, platform reliability, or AI systems management experience.
- Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
- Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
- Deep expertise in Python and/or Java for production debugging and script/tooling development.
- Proficient in monitoring, logging, tracing, and alerting using enterprise tools (Grafana, ELK, Datadog).
- Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
- Experience working with distributed systems, banking APIs, and integration with Core/ERP systems.
- Strong understanding of incident management frameworks (ITIL) and the ability to drive postmortem discipline.
- Excellent stakeholder management, cross-functional coordination, and communication skills.
- Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
- Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere.
- Experience supporting multi-tenant AI applications with business-driven SLAs.
- Hands-on experience integrating with compliance and risk monitoring platforms.
- Familiarity with automated root cause inference or anomaly detection tooling.
- Past participation in enterprise architecture councils or platform reliability forums.
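As an illustration of the latency-threshold and SLA monitoring this role describes, here is a minimal Prometheus alerting-rule sketch; the metric names, labels, and thresholds are hypothetical examples.

```yaml
# inference-alerts.yaml: a minimal Prometheus alerting-rule sketch.
# Metric names, labels, and thresholds are hypothetical.
groups:
  - name: genai-endpoint-slas
    rules:
      - alert: InferenceLatencyHigh
        # Fires when p95 latency over 5 minutes exceeds a 2-second SLA.
        expr: |
          histogram_quantile(0.95,
            sum(rate(inference_request_duration_seconds_bucket[5m])) by (le, service)
          ) > 2
        for: 10m
        labels:
          severity: page
        annotations:
          summary: "p95 inference latency above SLA for {{ $labels.service }}"
      - alert: InferenceErrorRatioHigh
        # Fires when more than 5% of requests fail over 15 minutes.
        expr: |
          sum(rate(inference_requests_total{status="error"}[15m]))
            / sum(rate(inference_requests_total[15m])) > 0.05
        for: 5m
        labels:
          severity: warn
        annotations:
          summary: "Inference error ratio above 5%"
```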

Posted 1 month ago

Apply

5.0 years

0 Lacs

India

On-site

Job Description

Does an opportunity to build large-scale solutions suit you? Do hybrid local/cloud infrastructures interest you?

Join our Engineering team

This team is part of the Cloud Security Intelligence group. Together, we own one of the largest Big Data environments in Israel. The team owns various Intelligence Security products that run as part of this environment. We are also responsible for innovatively developing and maintaining the platform itself.

Make a difference in your own way

You'll be working on innovating and developing a new and ground-breaking Big Data platform. It provides services for the rest of the Platform and Akamai engineering groups. We strive to accelerate development, reduce operational costs, and provide common, secured services.

As a Senior DevOps Engineer, you will be responsible for:
- Designing and implementing infrastructure solutions on top of Azure and Linode: Kubernetes, Kafka, Vault, storage, etc.
- Developing and provisioning infrastructure applications and monitoring tools, e.g. OpenSearch/ELK, OpenTelemetry, Prometheus, Grafana, Pushgateway, etc.
- Building and maintaining CI/CD pipelines using Jenkins, in addition to building GitOps solutions such as ArgoCD (a minimal sketch follows this posting).
- Working in all stages of the software release process in all development and production environments.

Do What You Love

To be successful in this role you will:
- Have 5+ years' experience as a DevOps Engineer and a Bachelor's degree in Computer Science or its equivalent.
- Be proficient in working in Linux/Unix environments, and demonstrate solid experience in Python and shell scripting.
- Have proven experience in designing and implementing solutions for Kubernetes.
- Have experience setting up large-scale container technology (Docker, Kubernetes, etc.) and migrating/creating systems on cloud environments (Azure/AWS/GCP).
- Be responsible, self-motivated, and able to work with little or no supervision.
- Have attention to detail and excellent troubleshooting skills.

Work in a way that works for you

FlexBase, Akamai's Global Flexible Working Program, is based on the principles that are helping us create the best workplace in the world. When our colleagues said that flexible working was important to them, we listened. We also know flexible working is important to many of the incredible people considering joining Akamai. FlexBase gives 95% of employees the choice to work from their home, their office, or both (in the country advertised). This permanent workplace flexibility program is consistent and fair globally, to help us find incredible talent, virtually anywhere. We are happy to discuss working options for this role and encourage you to speak with your recruiter in more detail when you apply.

Learn what makes Akamai a great place to work

Connect with us on social and see what life at Akamai is like! We power and protect life online, by solving the toughest challenges, together. At Akamai, we're curious, innovative, collaborative and tenacious. We celebrate diversity of thought and we hold an unwavering belief that we can make a meaningful difference. Our teams use their global perspectives to put customers at the forefront of everything they do, so if you are people-centric, you'll thrive here.

Working for you

Benefits

At Akamai, we will provide you with opportunities to grow, flourish, and achieve great things. Our benefit options are designed to meet your individual needs for today and in the future. We provide benefits surrounding all aspects of your life:
- Your health
- Your finances
- Your family
- Your time at work
- Your time pursuing other endeavors

Our benefit plan options are designed to meet your individual needs and budget, both today and in the future.

About Us

Akamai powers and protects life online. Leading companies worldwide choose Akamai to build, deliver, and secure their digital experiences, helping billions of people live, work, and play every day. With the world's most distributed compute platform, from cloud to edge, we make it easy for customers to develop and run applications while we keep experiences closer to users and threats farther away.

Join us

Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will energize and inspire you!

Akamai Technologies is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of gender, gender identity, sexual orientation, race/ethnicity, protected veteran status, disability, or other protected group status.
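As a concrete illustration of the GitOps tooling named in this posting, here is a minimal Argo CD Application sketch that keeps a cluster in sync with a Git repository; the repository URL, path, and namespaces are hypothetical examples.

```yaml
# argocd-application.yaml: a minimal GitOps sketch for Argo CD.
# The repository URL, path, and namespaces are hypothetical.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: intelligence-platform
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example-org/platform-manifests.git
    targetRevision: main
    path: environments/production
  destination:
    server: https://kubernetes.default.svc
    namespace: platform
  syncPolicy:
    automated:
      prune: true      # delete resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
    syncOptions:
      - CreateNamespace=true
```

With `automated` sync enabled, the Git repository becomes the single source of truth: merging a manifest change is the deployment.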

Posted 1 month ago

Apply

2.0 years

0 Lacs

India

On-site

Job Description

Does an opportunity to build large-scale solutions suit you? Do hybrid local/cloud infrastructures interest you?

Join our Engineering team

This team is part of the Cloud Security Intelligence group. Together, we own one of the largest Big Data environments in Israel. The team owns various Intelligence Security products that run as part of this environment. We are also responsible for innovatively developing and maintaining the platform itself.

Make a difference in your own way

You'll be working on innovating and developing a new and ground-breaking Big Data platform. It provides services for the rest of the Platform and Akamai engineering groups. We strive to accelerate development, reduce operational costs, and provide common, secured services.

As a Software Engineer II - DevOps, you will be responsible for:
- Designing and implementing infrastructure solutions on top of Azure and Linode: Kubernetes, Kafka, Vault, storage, etc.
- Developing and provisioning infrastructure applications and monitoring tools, e.g. OpenSearch/ELK, OpenTelemetry, Prometheus, Grafana, Pushgateway, etc.
- Building and maintaining CI/CD pipelines using Jenkins, in addition to building GitOps solutions such as ArgoCD.
- Working in all stages of the software release process in all development and production environments.

Do What You Love

To be successful in this role you will:
- Have 2+ years' experience as a DevOps Engineer and a Bachelor's degree in Computer Science or its equivalent.
- Be proficient in working in Linux/Unix environments, and demonstrate solid experience in Python and shell scripting.
- Have experience in Infrastructure as Code (IaC) using Terraform, and in managing/deploying applications using Helm charts in Kubernetes environments.
- Have proven experience in designing and implementing solutions for Kubernetes.
- Have experience setting up large-scale container technology (Docker, Kubernetes, etc.) and migrating/creating systems on cloud environments (Azure/AWS/GCP).
- Be responsible, self-motivated, and able to work with little or no supervision, with strong attention to detail.

Work in a way that works for you

FlexBase, Akamai's Global Flexible Working Program, is based on the principles that are helping us create the best workplace in the world. When our colleagues said that flexible working was important to them, we listened. We also know flexible working is important to many of the incredible people considering joining Akamai. FlexBase gives 95% of employees the choice to work from their home, their office, or both (in the country advertised). This permanent workplace flexibility program is consistent and fair globally, to help us find incredible talent, virtually anywhere. We are happy to discuss working options for this role and encourage you to speak with your recruiter in more detail when you apply.

Learn what makes Akamai a great place to work

Connect with us on social and see what life at Akamai is like! We power and protect life online, by solving the toughest challenges, together. At Akamai, we're curious, innovative, collaborative and tenacious. We celebrate diversity of thought and we hold an unwavering belief that we can make a meaningful difference. Our teams use their global perspectives to put customers at the forefront of everything they do, so if you are people-centric, you'll thrive here.

Working for you

Benefits

At Akamai, we will provide you with opportunities to grow, flourish, and achieve great things. Our benefit options are designed to meet your individual needs for today and in the future. We provide benefits surrounding all aspects of your life:
- Your health
- Your finances
- Your family
- Your time at work
- Your time pursuing other endeavors

Our benefit plan options are designed to meet your individual needs and budget, both today and in the future.

About Us

Akamai powers and protects life online. Leading companies worldwide choose Akamai to build, deliver, and secure their digital experiences, helping billions of people live, work, and play every day. With the world's most distributed compute platform, from cloud to edge, we make it easy for customers to develop and run applications while we keep experiences closer to users and threats farther away.

Join us

Are you seeking an opportunity to make a real difference in a company with a global reach and exciting services and clients? Come join us and grow with a team of people who will energize and inspire you!

Akamai Technologies is an Affirmative Action, Equal Opportunity Employer that values the strength that diversity brings to the workplace. All qualified applicants will receive consideration for employment and will not be discriminated against on the basis of gender, gender identity, sexual orientation, race/ethnicity, protected veteran status, disability, or other protected group status.

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderābād

On-site

Summary:

Are you passionate about cutting-edge cloud-native platforms and driven to build the foundational services that power enterprise-grade products? We're seeking a highly skilled and strategic Senior Product Manager (Technical) to own the Plexus Application Infrastructure Platform, a critical component of our cloud-native ecosystem. This pivotal role within our Platform Engineering organization is central to our mission: to build a durable competitive advantage by providing robust "building blocks" that accelerate value-to-market for all Thomson Reuters products. Thomson Reuters leads at the intersection of content and technology with trusted data, workflow automation, and AI. You'll be instrumental in shaping the future of our digital product delivery, working closely with the dedicated Plexus Service Mesh team, which engineers and operates our sophisticated microservice platform based on Kubernetes and Istio. If you're ready to Compete to Win by driving innovation and helping us Obsess over our Customers by delivering exceptional infrastructure, we want to hear from you.

About the Role

As the Senior Product Manager (Technical) for the Plexus Application Infrastructure Platform, you will be the driving force behind our Service Mesh capability, a critical microservice platform built on Kubernetes and Istio. Your responsibilities will be diverse and impactful, requiring a strategic mindset and a collaborative spirit:
- Define and Champion Product Strategy: Develop and own the product vision, strategy, and roadmap for the Plexus Application Infrastructure Platform, aligning it with overall organizational goals and anticipating future technology trends, especially within the CNCF landscape.
- Obsess Over Our Customers: Serve as the authoritative voice of the customer for engineering teams, deeply understanding their needs and translating complex infrastructure challenges into clear, actionable requirements. You will prioritize the product backlog to maximize business and customer value, driving platform capabilities that foster adoption.
- Compete to Win: Proactively identify and assess new technologies, market trends, and competitive advantages in the cloud-native infrastructure space to ensure our platform remains at the forefront of innovation.
- Challenge Your Thinking: Advocate for innovative approaches to microservice architecture and platform design. You'll lead efforts to enhance transparency and collaboration across all product and engineering teams, always seeking better ways to build and deliver value.
- Act Fast. Learn Fast.: Exhibit extreme ownership of the platform's performance and reliability. You'll participate across the full development lifecycle, from Ideation and Design through Build, Test, and Operate, embracing our DevOps culture where 'you build it, you run it.' You'll continuously iterate, analyze metrics, and rapidly adapt to deliver an exceptional user experience.
- Stronger Together: Lead cross-functional product discovery and delivery, collaborating seamlessly with development managers, architects, scrum masters, software engineers, DevOps engineers, and other product managers. You will foster an environment where collective expertise achieves shared success.
- Drive Engineering Excellence: Establish and champion software engineering best practices, advocating for tooling that makes compliance frictionless and embedding a strong emphasis on test and deployment automation within our platform.

About You

We are looking for a visionary and hands-on leader who embodies a unique blend of deep technical understanding and astute product management expertise. You are someone who thrives in a dynamic, fast-paced environment and is driven to make a significant impact.
- Product Management: 5+ years of progressive experience in Product Management, with a significant portion dedicated to technical products, platforms, or infrastructure services. (Candidates with a strong software development background, e.g., 6+ years, looking to transition into a technical product management role for cloud-native platforms will also be highly considered.)
- Cloud-Native Expertise: Deep technical acumen in cloud-native infrastructure, with hands-on experience building or managing platforms on major cloud providers (AWS, Azure, GCP).
- Containerization & Service Mesh: Expert-level understanding and practical experience with Kubernetes (ideally AWS EKS and/or Azure AKS) and Istio or other service mesh technologies (a generic configuration sketch follows this posting).
- DevOps & Infrastructure-as-Code: Familiarity with container security, supply chain security, declarative infrastructure-as-code (e.g., Terraform), CI/CD automation, and GitOps workflows.
- Architectural Understanding: Strong understanding of microservice architectures, API design principles, and distributed systems.
- Programming (Beneficial): An understanding of modern programming paradigms and languages (e.g., Golang, Python, Java) is highly beneficial, enabling effective collaboration with engineering teams.
- Problem-Solving: Exceptional problem-solving abilities, capable of dissecting complex technical challenges and translating them into clear product opportunities and solutions.
- Communication & Influence: Outstanding communication skills, with the ability to articulate complex technical concepts and product strategies clearly and concisely to diverse audiences, from engineers to executive leadership.
- Collaboration: A collaborative spirit and a history of successfully leading cross-functional teams, fostering an environment where every voice contributes to building the best possible platform.
- Strategic & Agile: Strategic thinking with a talent for balancing long-term vision with short-term execution. A strong sense of urgency, an agile mindset, and an insatiable curiosity that drives continuous learning and innovation. You're unafraid to challenge assumptions and push boundaries, constantly seeking better ways to build and deliver value.
- Customer Empathy: A customer-centric approach with a passion for understanding and addressing internal and external customer needs.
- Education: A bachelor's degree in business administration, computer science, computer engineering, a related technical field, or equivalent work experience. Relevant certifications (e.g., Certified Kubernetes Administrator (CKA), Product Management certifications, or cloud platform certifications) are a plus.

#LI-SS

What's in it For You?
- Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.
- Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.
- Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.
- Industry Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.
- Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.
- Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.
- Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here.
Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.
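The Plexus platform itself is internal to Thomson Reuters, so as a purely generic illustration of the Istio service-mesh configuration this role works with, here is a minimal canary traffic-splitting sketch; the service, host, and subset names are hypothetical.

```yaml
# istio-canary.yaml: a generic Istio traffic-splitting sketch.
# Service and host names are hypothetical, not Plexus internals.
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: orders
spec:
  hosts:
    - orders.platform.svc.cluster.local
  http:
    - route:
        - destination:
            host: orders.platform.svc.cluster.local
            subset: v1
          weight: 90
        - destination:
            host: orders.platform.svc.cluster.local
            subset: v2
          weight: 10   # canary: send 10% of traffic to v2
---
apiVersion: networking.istio.io/v1beta1
kind: DestinationRule
metadata:
  name: orders
spec:
  host: orders.platform.svc.cluster.local
  subsets:
    - name: v1
      labels: {version: v1}
    - name: v2
      labels: {version: v2}
```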

Posted 1 month ago

Apply

3.0 years

8 - 9 Lacs

Gurgaon

On-site

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description

United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose, "Connecting people. Uniting the world.", drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space-available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job overview and responsibilities

United Airlines is seeking talented people to join the Data Engineering team. The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial and operational projects with a digital focus. You will work as a Senior Engineer - Machine Learning and collaborate with data scientists and data engineers to:
- Build high-performance, cloud-native machine learning infrastructure and services to enable rapid innovation across United
- Build complex data ingestion and transformation pipelines for batch and real-time data
- Support large-scale model training and serving pipelines in distributed and scalable environments

This position is offered on local terms and conditions within United's wholly owned subsidiary United Airlines Business Services Pvt. Ltd. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded.

United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Qualifications

Required
- BS/BA in Computer Science, Data Science, Engineering, Mathematics, or a related discipline required
- Strong software engineering experience with Python and at least one additional language such as Go, Java, or C/C++
- Familiarity with ML methodologies and frameworks (e.g., PyTorch, TensorFlow), preferably including building and deploying production ML pipelines
- Experience developing cloud-native solutions with Docker and Kubernetes
- Cloud-native DevOps and CI/CD experience using tools such as Jenkins or AWS CodePipeline; preferably experience with GitOps using tools such as ArgoCD, Flux, or Jenkins X (a minimal sketch follows this posting)
- Experience building real-time and event-driven stream processing pipelines with technologies such as Kafka, Flink, and Spark
- Experience setting up and optimizing data stores (RDBMS/NoSQL) for production use in the ML app context
- Strong desire to stay aligned with the latest developments in cloud-native and ML ops/engineering and to experiment with and learn new technologies

Experience
- 3+ years of software engineering experience with languages such as Python, Go, Java, Scala, Kotlin, or C/C++
- 2+ years of experience working in cloud environments (AWS preferred)
- 2+ years of experience with Big Data technologies such as Spark, Flink
- 2+ years of experience with cloud-native DevOps, CI/CD
- At least one year of experience with Docker and Kubernetes in a production environment
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English and Hindi (written and spoken)
- Successful completion of interview required to meet job qualification
- Reliable, punctual attendance is an essential function of the position

Preferred
- Master's in computer science or a related STEM field
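As an illustration of the CI/CD-to-GitOps flow this posting references, here is a minimal GitHub Actions sketch that builds an image and commits the new tag to a GitOps repository, which Argo CD or Flux then reconciles into the cluster; the registry, repositories, paths, and secret names are all hypothetical.

```yaml
# .github/workflows/ci.yaml: a minimal CI-to-GitOps sketch.
# Registry, repositories, paths, and secret names are hypothetical.
name: build-and-promote
on:
  push:
    branches: [main]
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build and push the image
        run: |
          IMAGE="registry.example.com/ml-serving:${GITHUB_SHA::8}"
          echo "${{ secrets.REGISTRY_TOKEN }}" | docker login registry.example.com -u ci --password-stdin
          docker build -t "$IMAGE" .
          docker push "$IMAGE"
      - name: Update the image tag in the GitOps repo
        run: |
          git clone "https://x-access-token:${{ secrets.GITOPS_TOKEN }}@github.com/example-org/gitops.git"
          cd gitops
          # Point the manifest at the new image; the GitOps controller
          # watching this repo rolls the change out to the cluster.
          sed -i "s|image: registry.example.com/ml-serving:.*|image: registry.example.com/ml-serving:${GITHUB_SHA::8}|" apps/ml-serving/deployment.yaml
          git config user.name ci-bot
          git config user.email ci-bot@example.com
          git commit -am "promote ml-serving ${GITHUB_SHA::8}"
          git push
```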

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: DevOps Engineer
Location: Chennai (full-time, at office)
Years of Experience: 4-8 years

Job Summary: We are seeking a skilled DevOps engineer with knowledge of automation, continuous integration, and deployment and delivery processes. The ideal candidate is a self-starter with hands-on production experience and excellent communication skills.

Key Responsibilities:
● Infrastructure as Code: apply first principles to cloud infrastructure, system design, and application deployments (an automation sketch follows this posting).
● CI/CD pipelines: design, implement, troubleshoot, and maintain CI/CD pipelines.
● System administration: skills with systems, networking, and security fundamentals.
● Proficiency in coding: hands-on experience in programming languages, with the ability to write, review, and troubleshoot infrastructure code.
● Monitoring and observability: track the performance and health of services and configure alerts with interactive dashboards for reporting.
● Security: best practices and familiarity with audits, compliance, and regulation.
● Communication skills: clearly and effectively discuss and collaborate across cross-functional teams.
● Documentation: using Agile methodologies, Jira, and Git.

Qualification:
● Education: Bachelor's degree in CS, IT, or a related field (or equivalent work experience).
● Skills*:
Infrastructure: Docker, Kubernetes, ArgoCD, Helm, Chronos, GitOps.
Automation: Ansible, Puppet, Chef, Salt, Terraform, OpenTofu.
CI/CD: Jenkins, CircleCI, ArgoCD, GitLab, GitHub Actions.
Cloud platforms: Amazon Web Services (AWS), Azure, Google Cloud.
Operating Systems: Windows, *nix distributions (Fedora, Red Hat, Ubuntu, Debian), *BSD, macOS.
Monitoring and observability: Prometheus, Grafana, Elasticsearch, Nagios.
Databases: MySQL, PostgreSQL, MongoDB, Qdrant, Redis.
Programming Languages: Python, Bash, JavaScript, TypeScript, Golang.
Documentation: Atlassian Jira, Confluence, Git.
(* Proficient in one or more tools in each category.)

Additional Requirements:
• Include a GitHub or GitLab profile link in the resume.
• Only candidates with a Computer Science or Information Technology engineering background will be considered.
• Primary operating system should be Linux (Ubuntu or any distribution) or macOS.
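As a small illustration of the automation tooling listed in this posting, here is a minimal Ansible playbook sketch; the host group, package, and template name are hypothetical examples.

```yaml
# site.yml: a minimal Ansible playbook sketch.
# The host group, package, and template are hypothetical.
- name: Baseline configuration for web nodes
  hosts: webservers
  become: true
  tasks:
    - name: Install nginx
      ansible.builtin.apt:
        name: nginx
        state: present
        update_cache: true

    - name: Deploy nginx config from template
      ansible.builtin.template:
        src: nginx.conf.j2
        dest: /etc/nginx/nginx.conf
      notify: Reload nginx

  handlers:
    - name: Reload nginx
      ansible.builtin.service:
        name: nginx
        state: reloaded
```

Because each task declares desired state rather than imperative steps, re-running the playbook is idempotent: nothing changes on hosts that already match.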

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 17 Lacs

Chennai

Work from Office

Responsibilities
- Implement and manage cloud infrastructure using Infrastructure as Code (IaC) for compute, storage, network services, and container/Kubernetes management to support high-volume, low-latency CAMS applications.
- Maintain deep understanding and oversight of all IaC solutions to ensure consistent, repeatable, and secure infrastructure capabilities that can scale on demand.
- Monitor and manage infrastructure performance to meet service level agreements (SLAs), control costs, and prioritize automation in all deployment processes.
- Ensure that infrastructure designs and architectures align with technical specifications and business requirements.
- Provide key support and contribute to the full lifecycle ownership of platform services.
- Adhere to DevOps principles and participate in end-to-end platform ownership, including occasional incident resolution outside normal hours as part of an on-call rota.
- Engage in project scoping, requirements analysis, and technical discovery to shape effective infrastructure solutions.
- Perform performance tuning, monitoring, and maintenance of fault-tolerant, highly available infrastructure to deliver scalable services.
- Maintain detailed oversight of automation processes and infrastructure security, implementing improvements as necessary.
- Support continuous improvement by researching alternative approaches and technologies and presenting recommendations for architectural review.
- Collaborate with teams to contribute to architectural design decisions.
- Utilize experience with CI/CD pipelines, GitOps, and Kubernetes management to streamline deployment and operations.

Work Experience
- Over 7 years of proven hands-on technical experience.
- More than 5 years of experience leading and managing cloud infrastructure, including VPC, compute, storage, container services, Kubernetes, and related technologies.
- Strong Linux system administration skills across CentOS, Ubuntu, and GKE environments, including patching, configuration, and maintenance.
- Practical expertise with continuous integration tools such as Jenkins and GitLab, along with build automation and dependency management.
- Proven track record of delivering software releases on schedule.
- Committed to a collaborative working style and effective team communication, thriving in small, agile teams.
- Experience designing and implementing zero-downtime deployment solutions in cloud environments (a minimal sketch follows this posting).
- Solid understanding of database and big data technologies, including both SQL and NoSQL systems.

#GoogleCloudPlatform #Terraform #Git #GitOps #Kubernetes #IaC

Please send your profile to divyaa.m@camsonline.com
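As an illustration of the zero-downtime deployment experience this posting asks for, here is a minimal Kubernetes rolling-update sketch; the names, image, and probe path are hypothetical examples.

```yaml
# rolling-update.yaml: a minimal zero-downtime deployment sketch.
# Names, image, and probe path are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cams-api
spec:
  replicas: 4
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # add at most one extra pod during a rollout
      maxUnavailable: 0    # never drop below the desired replica count
  selector:
    matchLabels:
      app: cams-api
  template:
    metadata:
      labels:
        app: cams-api
    spec:
      containers:
        - name: api
          image: gcr.io/example-project/cams-api:1.4.2
          ports:
            - containerPort: 8080
          readinessProbe:
            httpGet:
              path: /healthz
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10   # traffic shifts only to pods passing this probe
```

The combination of `maxUnavailable: 0` and a readiness probe means old pods keep serving until each new pod proves healthy, so capacity never dips during the rollout.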

Posted 1 month ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

We are seeking an experienced DevOps Engineer to join our team. In this role, you will be responsible for designing, implementing, and maintaining secure cloud infrastructure using cloud-based technologies, including Oracle and Microsoft platforms. You will build and support scalable and reliable application systems and automate deployments. Additionally, you will integrate various systems and technologies using REST APIs and automate the software development and deployment lifecycle. Leveraging automation and monitoring tools, along with AI-powered solutions, you will ensure the smooth operation of our cloud-based systems.

Key Areas of Responsibility
- Implement automation to control and orchestrate cloud workloads, managing the build and deployment cycles for each deployed solution via CI/CD.
- Utilize a wide variety of cloud-based services, including containers, App Services, APIs, and SaaS-oriented integration.
- Work with GitHub and CI/CD tools (e.g., Jenkins, GitHub Actions, Maven/ANT).
- Create and maintain build and deployment configurations using Helm and YAML.
- Manage the software change control process, including Quality Control and SCM audits, enforcing adherence to all change control and code management processes.
- Continuously manage and maintain releases, with a clear understanding of the release management process.
- Collaborate with cross-functional teams to ensure seamless integration and deployment of cloud-based solutions.
- Apply problem-solving, teamwork, and communication skills, reflecting the collaborative nature of the role.
- Perform builds and environment configurations.

Required Skills and Experience
- 5+ years of overall experience, with expertise in automating the software development and deployment lifecycle using Jenkins, GitHub Actions, SAST, DAST, compliance tooling, and Oracle ERP DevOps tools.
- Proficient with Unix shell scripting, SQL*Plus, PL/SQL, and Oracle database objects.
- Understanding of branching models is important.
- Experience in creating cloud resources using automation tools.
- Strong hands-on experience with Terraform and Azure Infrastructure as Code (IaC).
- Hands-on experience in GitOps, Flux CD/Argo CD, Jenkins, Groovy (a Flux sketch follows this posting).
- Experience building and deploying Java and .NET applications, and Liquibase database deployments.
- Proficient with Azure cloud concepts, including creating Azure Container Apps, Kubernetes, load balancers, Az CLI, kubectl, observability, APM, and app performance reviews.
- Azure AZ-104 or AZ-400 certification is a plus.
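As an illustration of the GitOps-plus-Helm workflow this posting names, here is a minimal Flux CD sketch that deploys a Helm chart declaratively; the chart repository, names, and values are hypothetical examples.

```yaml
# helmrelease.yaml: a minimal Flux CD sketch deploying a Helm chart via GitOps.
# Chart repository, names, and values are hypothetical.
apiVersion: source.toolkit.fluxcd.io/v1beta2
kind: HelmRepository
metadata:
  name: example-charts
  namespace: flux-system
spec:
  interval: 30m
  url: https://charts.example.com
---
apiVersion: helm.toolkit.fluxcd.io/v2beta1
kind: HelmRelease
metadata:
  name: payments-api
  namespace: flux-system
spec:
  interval: 10m            # reconcile the release against Git state every 10m
  targetNamespace: payments
  install:
    createNamespace: true
  chart:
    spec:
      chart: payments-api
      version: "1.2.x"     # track patch releases within 1.2
      sourceRef:
        kind: HelmRepository
        name: example-charts
  values:
    replicaCount: 3
```

Once these manifests are committed, Flux installs and upgrades the chart itself; changing `values` or the version constraint in Git is the release process.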

Posted 1 month ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

With Confluent, organisations can harness the full power of continuously flowing data to innovate and win in the modern digital world. We have a purpose that drives us to do better every day: we're creating an entirely new category within data infrastructure, data streaming. This technology will allow every organisation to create experiences and use the power of data in ways that profoundly impact the way we all live. This impact is our purpose and drives us to do better every day.

One Confluent. One team. One Data Streaming Platform. Data Connects Us.

About The Role

Solutions Engineers at Confluent drive not only the early-stage evaluation within the sales process, but also play a crucial role in enabling ongoing value-realization for customers, all while helping them move up the adoption maturity curve. In this role you'll partner with Account Executives to be the key technical advisor in service of the customer. You'll be instrumental in surfacing the customers' stated or implicit business needs, and coming up with technical designs to best meet those needs. You may find yourself at times facilitating art-of-the-possible discussions and storytelling to inspire customers in adopting new patterns with confidence, and at other times driving creative solutioning to help get past difficult technical roadblocks. Overall, we look upon Solutions Engineers to be a key cog within the Customer Success Team that helps foster an environment of sustained success for the customer and incremental adoption of Confluent's technology.

What You Will Do
- Help advance new and innovative data streaming use-cases from conception to go-live
- Execute on and lead technical proofs of concept
- Conduct discovery and whiteboard sessions to develop new use-cases
- Provide thought leadership by delivering technical talks and workshops
- Guide customers with hands-on help and best practices to drive operational maturity of their Confluent deployment
- Analyze customer consumption trends and identify optimization opportunities
- Work closely with product and engineering teams, and serve as a key product advocate across the customer, partner and industry ecosystem
- Forge strong relationships with key customer stakeholders and serve as a dependable partner for them

What You Will Bring
- 5+ years of Sales/Pre-Sales/Solutions Engineering or similar customer-facing experience in the software sales or implementation space
- Experience with event-driven architecture, data integration and processing techniques, database and data warehouse technologies, or related fields
- First-hand exposure to cloud architecture, migrations, deployment and application development
- Experience with DevOps/Automation, GitOps or Kubernetes
- Ability to read and write Java, Python or SQL
- Clear, consistent demonstration of self-starter behavior, a desire to learn new things and tackle hard technical problems
- Exceptional presentation and communications capabilities, with confidence presenting to a highly skilled and experienced audience, ranging from developers to enterprise architects and up to C-level executives

What Gives You An Edge
- Technical certifications: cloud developer/architect, data engineering and integration
- Familiarity with solution or value selling
- A challenger mindset and an ability to positively influence peoples' opinions

Come As You Are

At Confluent, equality is a core tenet of our culture. We are committed to building an inclusive global team that represents a variety of backgrounds, perspectives, beliefs, and experiences. The more diverse we are, the richer our community and the broader our impact. Employment decisions are made on the basis of job-related criteria without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, veteran status, or any other classification protected by applicable law.

Click HERE to review our Candidate Privacy Notice, which describes how and when Confluent, Inc., and its group companies, collects, uses, and shares certain personal information of California job applicants and prospective employees.

Posted 1 month ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

NUVEM Labs is expanding! We're looking for two seasoned professionals to lead a telco cloud deployment project for 5G CNFs on Red Hat OpenShift (Bare Metal). If you're ready to work on the frontlines of telecom transformation, this opportunity is for you.

What You'll Be Doing
- Design and deploy Red Hat OpenShift infrastructure
- Onboard and validate the OEM's CNFs (VCU-AUPF, VCU-ACPF)
- Prepare and deliver HLD/LLD, as-built docs, test reports, and KT sessions
- Ensure optimized configuration (NUMA, SR-IOV, DPDK, Multus); a minimal sketch follows this posting
- Lead integration, functional, and HA testing
- Interface with customers and drive handover

Skills Required
- Deep expertise in Kubernetes/OpenShift (must)
- Hands-on CNF deployment experience (Samsung, Nokia, Ericsson, etc.)
- Good understanding of 5G Core functions (UPF, PCF, etc.)
- Familiarity with YAML, Helm, GitOps
- Excellent communication and documentation skills

🎓 Preferred Certifications
Red Hat OpenShift, CKA/CKAD, or telecom CNF certifications

Location: Gurgaon (with project-based travel)
Start Date: Immediate joiners preferred

Interested? Send your profile to [samalik@nuvemlabs.in] or DM me directly. Join NUVEM Labs and shape the future of cloud-native telecom infrastructure.
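As an illustration of the Multus/SR-IOV configuration called out above, here is a minimal sketch of a secondary network attachment and a CNF pod that requests it; the resource name, subnet, and image are hypothetical examples, not vendor configuration.

```yaml
# sriov-net.yaml: a minimal Multus NetworkAttachmentDefinition sketch for SR-IOV.
# The resource name, subnet, and image are hypothetical.
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: sriov-upf-net
  annotations:
    k8s.v1.cni.cncf.io/resourceName: openshift.io/sriov_netdevice
spec:
  config: '{
    "cniVersion": "0.3.1",
    "type": "sriov",
    "name": "sriov-upf-net",
    "ipam": {
      "type": "host-local",
      "subnet": "192.168.10.0/24"
    }
  }'
---
# A CNF pod requests the secondary SR-IOV interface via annotation,
# alongside its default cluster network interface.
apiVersion: v1
kind: Pod
metadata:
  name: upf-example
  annotations:
    k8s.v1.cni.cncf.io/networks: sriov-upf-net
spec:
  containers:
    - name: upf
      image: registry.example.com/upf:latest
      resources:
        requests:
          openshift.io/sriov_netdevice: "1"
        limits:
          openshift.io/sriov_netdevice: "1"
```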

Posted 1 month ago

Apply

5.0 years

0 Lacs

Goa, India

On-site

OPTEL. Responsible. Agile. Innovative.

OPTEL is a global company that develops transformative software, middleware and hardware solutions to secure and ensure supply chain compliance in major industry sectors such as pharmaceuticals and food, with the goal of reducing the effects of climate change and enabling sustainable living. If you are driven by the desire to contribute to a better world while working in a dynamic and collaborative environment, then you've come to the right place!

Full Stack Developer (JavaScript + Mobile Dev + .NET)

Summary

We are seeking a passionate and highly skilled Full Stack Developer to drive the design, development, and optimization of modern, cloud-hosted SaaS applications. You will be responsible for full solution delivery, from architecture to deployment, leveraging technologies such as C#/.NET Core, Node.js, React.js, and cloud platforms like Google Cloud Platform (GCP) and AWS. The ideal candidate embraces a DevSecOps mindset, contributes to AI/ML integrations, and thrives on building secure, scalable, and innovative solutions alongside cross-functional teams.

Architecture & System Design
- Architect and design scalable, secure, and cloud-native applications.
- Establish technical best practices across frontend, backend, mobile, and cloud components.
- Contribute to system modernization efforts, advocating for microservices, serverless patterns, and event-driven design.
- Integrate AI/ML models and services into application architectures.

Application Development
- Design, develop, and maintain robust applications using C#, ASP.NET Core, Node.js, and React.js.
- Build cross-platform mobile applications with React Native or .NET MAUI.
- Develop and manage secure RESTful and GraphQL APIs.
- Utilize Infrastructure as Code (IaC) practices to automate cloud deployments.

Cloud Development & DevSecOps
- Build, deploy, and monitor applications on Google Cloud and AWS platforms.
- Implement and optimize CI/CD pipelines (GitHub Actions, GitLab, Azure DevOps).
- Ensure solutions align with security best practices and operational excellence (DevSecOps principles).

AI Development and Integration
- Collaborate with AI/ML teams to design, integrate, and optimize intelligent features.
- Work with AI APIs and/or custom AI models.
- Optimize AI workloads for scalability, performance, and cloud-native deployment.

Testing, Automation, and Monitoring
- Create unit, integration, and E2E tests to maintain high code quality.
- Implement proactive measures to reduce technical debt.
- Deploy monitoring and observability solutions.

Agile Collaboration
- Work in Agile/Scrum teams, participating in daily standups, sprint planning, and retrospectives.
- Collaborate closely with product managers, UX/UI designers, and QA engineers.
- Share knowledge and actively contribute to a strong, collaborative engineering culture.

Skills and Qualifications Required
- 5+ years' experience in Full Stack Development (C#, .NET Core, Node.js, JavaScript/TypeScript).
- Solid frontend development skills with React.js (Vue.js exposure is a plus).
- Experience with multi-platform mobile app development (React Native or .NET MAUI).
- Expertise with Google Cloud Platform (GCP) and/or AWS cloud services.
- Hands-on experience developing and consuming RESTful and GraphQL APIs.
- Strong DevOps experience (CI/CD, Infrastructure as Code, GitOps practices).
- Practical experience integrating AI/ML APIs or custom models into applications.
- Solid relational and cloud-native database skills (Postgres, BigQuery, DynamoDB).
- Serverless development (Cloud Functions, AWS Lambda).
- Kubernetes orchestration (GKE, EKS) and containerization (Docker).
- Event streaming systems (Kafka, Pub/Sub, RabbitMQ).
- AI/ML workflow deployment (Vertex AI Pipelines, SageMaker Pipelines).
- Edge Computing (Cloudflare Workers, Lambda@Edge).
- Experience with ISO/SOC2/GDPR/HIPAA compliance environments.
- Familiarity with App Store and Google Play Store deployment processes.

EQUAL OPPORTUNITY EMPLOYER

OPTEL is an equal opportunity employer. We believe that diversity is essential for fostering innovation and creativity. We welcome and encourage applications from individuals of all backgrounds, cultures, gender identities, sexual orientations, abilities, ages, and beliefs. We are committed to providing a fair and inclusive recruitment process, where each candidate is evaluated solely on their qualifications, skills, and potential. At OPTEL, every employee's unique perspective contributes to our collective success, and we celebrate the richness that diversity brings to our team.

See the offer on JazzHR.

Posted 1 month ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: VP - Digital Expert Support Lead
Experience: 15+ Years
Location: Pune

Position Overview

The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.

Role-Level Expectations
- Functionally accountable for all post-deployment support and performance assurance of digital expert systems.
- Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design.
- Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units.
- Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively.
- Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening.
- Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).

Position Responsibilities

Operational Leadership & Stability Engineering
- Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management.
- Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows.
- Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices.
- Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior.
- Lead major incident response and drive cross-functional war rooms for critical recovery.
- Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching.
- Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.

Engineering Collaboration & Enhancements
- Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops.
- Implement automation scripts for diagnosis, log capture, reprocessing, and health validation.
- Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints.
- Coordinate enhancement requests based on operational analytics and feedback loops.
- Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.

Governance, Planning & People Leadership
- Build and mentor a high-caliber AI Support Squad of support engineers, SREs, and automation leads.
- Define and publish support KPIs, operational dashboards, and quarterly stability scorecards.
- Present production health reports to business, engineering, and executive leadership.
- Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases.
- Manage relationships with AI platform vendors, cloud ops partners, and application owners.

Must-Have Skills & Experience
- 15+ years of software engineering, platform reliability, or AI systems management experience.
- Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems.
- Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability.
- Deep expertise in Python and/or Java for production debugging and script/tooling development.
- Proficient in monitoring, logging, tracing, and alerting using enterprise tools (Grafana, ELK, Datadog).
- Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies.
- Experience working with distributed systems, banking APIs, and integration with Core/ERP systems.
- Strong understanding of incident management frameworks (ITIL) and the ability to drive postmortem discipline.
- Excellent stakeholder management, cross-functional coordination, and communication skills.
- Demonstrated ability to mentor senior ICs and influence product and platform priorities.

Nice-to-Haves
- Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere.
- Experience supporting multi-tenant AI applications with business-driven SLAs.
- Hands-on experience integrating with compliance and risk monitoring platforms.
- Familiarity with automated root cause inference or anomaly detection tooling.
- Past participation in enterprise architecture councils or platform reliability forums.

Posted 1 month ago

Apply

1.0 - 4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Software Development Engineer in Test About Trellix: Trellix, the trusted CISO ally, is redefining the future of cybersecurity and soulful work. Our comprehensive, GenAI-powered platform helps organizations confronted by today’s most advanced threats gain confidence in the protection and resilience of their operations. Along with an extensive partner ecosystem, we accelerate technology innovation through artificial intelligence, automation, and analytics to empower over 53,000 customers with responsibly architected security solutions. We also recognize the importance of closing the 4-million-person cybersecurity talent gap. We aim to create a home for anyone seeking a meaningful future in cybersecurity and look for candidates across industries to join us in soulful work. More at https://www.trellix.com/. Role Overview: Trellix is looking for quality engineers who are self-driven and passionate to work on on-prem/cloud products that cover SIEM, EDR, and XDR technologies. This job involves manual, automated testing (including automation development), non-functional (performance, stress, soak), security testing and much more. Work smartly by using cutting edge technologies and AI driven solutions. About the role: Champion a quality-first mindset throughout the entire software development lifecycle. Develop and implement comprehensive test strategies and plans for a complex hybrid application, considering the unique challenges of both on-premise and cloud deployments. Collaborate with architects and development teams to understand system architecture, design, and new features to define optimal test approaches. Peruse the requirements documents thoroughly and thus design relevant test cases that cover new product functionality and the impacted areas. Design, develop, and maintain robust, scalable, and high-performance automated test frameworks and tools from scratch, utilizing industry-standard programming languages (e.g., Python, Java, Go). Manage and maintain test environments, including setting up and configuring both on-premise and cloud instances for testing. Execute new feature and regression cases manually, as needed for a product release. Familiarity with bug tracking platforms such as JIRA, Bugzilla, etc. is essential. Filing defects effectively, i.e., noting all the relevant details that reduce the back-and-forth, and aids quick turnaround with bug fixing, is an essential trait for this job Identify cases that are automatable, and within this scope, segregate cases with high ROI from low-impact areas to improve testing efficiency Analyze test results, identify defects, and work closely with development teams to ensure timely resolution. Willing to explore and increase understanding on Cloud/ On-prem infrastructure About you: 1-4 years of experience in an SDET role with a relevant degree in Computer Science or Information Technology is required Show ability to quickly learn a product or concept, viz., its feature set, capabilities, and functionality. Solid fundamentals in any programming language (preferably, Python or go) and OOPS concepts. Also, hands-on with any of the popular CI/CD tools such as Teamcity, Jenkins or similar is a must RESTful API testing using tools such as Postman or similar is a must Familiarity and exposure to AWS and its offerings, such as, S3, EC2, EBS, EKS, IAM, etc., is required. Exposure to Docker, Helm, GitOps is an added advantage. 
Extensive experience designing, developing, and maintaining automated test frameworks (e.g., Playwright, Selenium, Cypress, TestNG, JUnit, Pytest). Experience with API testing tools and frameworks (e.g., Postman, REST Assured, OpenAPI/Swagger). Good foundational knowledge of working on Linux-based systems, including setting up Git repos, user management, network configuration, and use of package managers. Hands-on experience with functional and non-functional testing, such as performance and load testing, is desirable. Any level of proficiency with Prometheus, Grafana, and service metrics would be nice to have. Understanding of cybersecurity concepts would be helpful. Company Benefits and Perks: We believe that the best solutions are developed by teams who embrace each other's unique experiences, skills, and abilities. We work hard to create a dynamic workforce where we encourage everyone to bring their authentic selves to work every day. We offer a variety of social programs, flexible work hours and family-friendly benefits to all of our employees. Retirement Plans. Medical, Dental and Vision Coverage. Paid Time Off. Paid Parental Leave. Support for Community Involvement. We're serious about our commitment to a workplace where everyone can thrive and contribute to our industry-leading products and customer support, which is why we prohibit discrimination and harassment based on race, color, religion, gender, national origin, age, disability, veteran status, marital status, pregnancy, gender expression or identity, sexual orientation or any other legally protected status.
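As an illustration of the RESTful API testing this role requires, here is a minimal pytest-plus-requests sketch. The base URL and endpoints are hypothetical stand-ins for whatever service is under test.

```python
import pytest
import requests

BASE_URL = "https://api.example.test"  # hypothetical service under test

@pytest.fixture(scope="session")
def session():
    s = requests.Session()
    s.headers.update({"Accept": "application/json"})
    return s

def test_health_endpoint_returns_ok(session):
    resp = session.get(f"{BASE_URL}/health", timeout=5)
    assert resp.status_code == 200
    assert resp.json().get("status") == "ok"

@pytest.mark.parametrize("bad_id", ["not-a-uuid", "0", "-1"])
def test_lookup_rejects_malformed_ids(session, bad_id):
    resp = session.get(f"{BASE_URL}/alerts/{bad_id}", timeout=5)
    # Negative cases: the API should fail loudly, not return 200.
    assert resp.status_code in (400, 404)
```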

Posted 1 month ago

Apply

10.0 years

0 Lacs

Greater Kolkata Area

Remote

Role: Engineering Manager Location: Remote Experience: 10 to 14 Years Company Type: Fast-growing AI/Data-driven company. About the Role: We are hiring a hands-on Engineering Manager to lead engineering delivery, architecture, and release management for our AI-powered SaaS platform. You'll drive agile execution, technical design, and customer support escalations, while collaborating with product, QA, and DevOps teams. If you are passionate about scaling cloud-native products, reducing technical debt, and building a high-performance team, this is your opportunity. Responsibilities: Agile Delivery & Backlog Management: Lead sprint planning, backlog grooming, and feature prioritization. Drive delivery of Business-as-Usual (BAU) features with strong agile execution. Monitor velocity, burndown, and sprint health. Release Planning & CI/CD: Manage code progression from development to production. Drive release pipelines with CI/CD, security, and performance compliance. Reduce downtime risks through robust DevOps practices. Escalations & Support: Own engineering escalations, ensuring SLA compliance and root cause resolution. Reduce customer issues through preventive measures and automation. Architecture & Technical Leadership: Own application architecture decisions and conduct design/code reviews. Guide the team on microservices, REST APIs, and scalable system designs. Lead modernization efforts and reduce technical debt. KPIs: Sprint Delivery: 90%+ of planned BAU items released on time. Product Stability: Reduce critical escalations by 20% QoQ. Quality: <5% defect escape rate in production. Availability: 99.9% uptime of the SaaS platform. Qualifications: 10+ years in software development, with 3+ years in engineering management. Strong track record delivering SaaS/data platforms in agile environments. Experience with CI/CD, cloud (AWS/Azure), and secure release pipelines. Deep understanding of application architecture, microservices, and APIs. Excellent communication, mentoring, and cross-functional leadership. Stack/Tools: Languages: Python, Java, JavaScript. Tools: Jira, Jenkins, Confluence. DevOps: CI/CD, Docker, Kubernetes (preferred). Security: Familiarity with OWASP, SOC2, and performance compliance. Preferred: Startup experience with AI/Data-centric product delivery. Familiarity with modern infrastructure tools: Terraform, Helm, GitOps. Prior success managing customer-facing platforms with SLA/uptime requirements. (ref:hirist.tech)
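To make the availability and quality KPIs above concrete, here is a small illustrative Python sketch of the underlying arithmetic; the sample figures are invented.

```python
# Minimal sketch of the SLO math behind KPIs like "99.9% uptime" and
# "<5% defect escape rate" (illustrative numbers only).

SLO = 0.999                        # 99.9% availability target
minutes_per_month = 30 * 24 * 60   # 43,200 minutes in a 30-day month

error_budget_min = (1 - SLO) * minutes_per_month
print(f"Monthly downtime budget: {error_budget_min:.1f} minutes")  # ~43.2

def defect_escape_rate(escaped: int, total_found: int) -> float:
    """Share of defects found in production vs. all defects found."""
    return escaped / total_found if total_found else 0.0

# e.g., 3 production escapes out of 80 total defects -> 3.75% (under the 5% target)
print(f"Escape rate: {defect_escape_rate(3, 80):.2%}")
```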

Posted 1 month ago

Apply

6.0 - 8.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Role: Lead Cloud DevOps Engineer Location: Jaipur Company: Kogta Financial (India) Limited We are seeking an experienced and forward-thinking Lead Cloud DevOps Engineer to join our growing technology team in Jaipur. This is a leadership opportunity for someone passionate about cloud-native technologies, automation, and scalable infrastructure to make a tangible impact in a fast-growing financial institution. As a DevOps leader, you will play a crucial role in shaping our DevOps roadmap, advancing our infrastructure architecture, and enhancing our software delivery lifecycle. You'll collaborate with cross-functional teams, bring in best practices, and drive engineering excellence across the organization. Key Responsibilities Lead the strategy, design, and implementation of scalable and secure cloud-native infrastructure on AWS. Architect and manage Kubernetes clusters, container orchestration (Helm, Docker), and deployment strategies. Build and enhance CI/CD pipelines using tools like Jenkins, GitHub Actions, and GitOps workflows. Champion Infrastructure as Code (IaC) practices using Terraform, CloudFormation, or equivalent tools. Drive observability and reliability improvements using monitoring and tracing tools such as Prometheus, Grafana, Loki, ELK, or OpenTelemetry. Identify and automate manual processes to streamline application delivery and infrastructure provisioning. Collaborate with engineering and product teams to support high availability, performance, and security goals. Coach, mentor, and guide internal teams on DevOps best practices, culture, and emerging technologies. Ideal Candidate Profile 6 to 8 years of relevant DevOps and cloud infrastructure experience. Deep hands-on expertise in AWS, including networking, IAM, VPC, ECS/EKS, and serverless components. Proven experience with Kubernetes, Helm, Docker, and container security practices. Strong proficiency in IaC tools (Terraform preferred) and CI/CD systems (e.g., Jenkins, GitHub Actions, ArgoCD). Proficient in scripting and automation using Bash, Python, or Groovy. Experience setting up robust monitoring, logging, and alerting frameworks. Strong understanding of DevSecOps principles and secure coding/deployment practices. Excellent communication, problem-solving, and stakeholder management skills. Prior experience in financial services or regulated environments is a plus. (ref:hirist.tech)
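As one concrete flavor of the observability work described above, here is a minimal Python sketch that pulls a p99 latency from Prometheus' standard HTTP query API (/api/v1/query). The server URL, job label, and metric name are assumptions; substitute whatever your services actually expose.

```python
import requests

PROM_URL = "http://prometheus.internal:9090"  # assumed in-cluster endpoint

def p99_latency_ms(job: str) -> float:
    """Query Prometheus for a service's p99 request latency over 5 minutes.
    The histogram metric name is illustrative."""
    query = (
        'histogram_quantile(0.99, '
        f'sum(rate(http_request_duration_seconds_bucket{{job="{job}"}}[5m])) '
        'by (le))'
    )
    resp = requests.get(f"{PROM_URL}/api/v1/query",
                        params={"query": query}, timeout=10)
    resp.raise_for_status()
    result = resp.json()["data"]["result"]
    return float(result[0]["value"][1]) * 1000 if result else float("nan")

if __name__ == "__main__":
    print(f"p99 latency: {p99_latency_ms('payments-api'):.1f} ms")
```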

Posted 1 month ago

Apply

3.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together. Description United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions. Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world. Job Overview And Responsibilities United Airlines is seeking talented people to join the Data Engineering team. The Data Engineering organization is responsible for driving data-driven insights and innovation to support the data needs of commercial and operational projects with a digital focus. You will work as a Senior Engineer - Machine Learning and collaborate with data scientists and data engineers to: Build high-performance, cloud-native machine learning infrastructure and services to enable rapid innovation across United. Build complex data ingestion and transformation pipelines for batch and real-time data. Support large-scale model training and serving pipelines in distributed and scalable environments. This position is offered on local terms and conditions within United’s wholly owned subsidiary United Airlines Business Services Pvt. Ltd. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.
Qualifications Required: BS/BA in Computer Science, Data Science, Engineering, Mathematics, or a related discipline required. Strong software engineering experience with Python and at least one additional language such as Go, Java, or C/C++. Familiarity with ML methodologies and frameworks (e.g., PyTorch, TensorFlow) and preferably building and deploying production ML pipelines. Experience developing cloud-native solutions with Docker and Kubernetes. Cloud-native DevOps and CI/CD experience using tools such as Jenkins or AWS CodePipeline; preferably experience with GitOps using tools such as ArgoCD, Flux, or Jenkins X. Experience building real-time and event-driven stream processing pipelines with technologies such as Kafka, Flink, and Spark. Experience setting up and optimizing data stores (RDBMS/NoSQL) for production use in the ML app context. Strong desire to stay aligned with the latest developments in cloud-native and ML ops/engineering and to experiment with and learn new technologies. Experience: 3+ years of software engineering experience with languages such as Python, Go, Java, Scala, Kotlin, or C/C++. 2+ years of experience working in cloud environments (AWS preferred). 2+ years of experience with Big Data technologies such as Spark, Flink. 2+ years of experience with cloud-native DevOps, CI/CD. At least one year of experience with Docker and Kubernetes in a production environment. Must be legally authorized to work in India for any employer without sponsorship. Must be fluent in English and Hindi (written and spoken). Successful completion of interview required to meet job qualification. Reliable, punctual attendance is an essential function of the position. Preferred: Master's in Computer Science or related STEM field. GGN00001744
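For a sense of the stream-processing work this posting describes, here is a minimal consume-loop sketch using the kafka-python client. The topic, broker, and consumer-group names are hypothetical.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and brokers; in production these come from config.
consumer = KafkaConsumer(
    "flight-events",
    bootstrap_servers=["broker-1:9092"],
    group_id="ml-feature-builder",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="earliest",
)

for message in consumer:
    event = message.value
    # A real pipeline would update a feature store or emit a transformed
    # record here; this only shows the shape of the consume loop.
    print(message.topic, message.partition, message.offset, event.get("type"))
```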

Posted 1 month ago

Apply


8.0 years

0 Lacs

Telangana

On-site

About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com. About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Position Details Job Description Enterprise Infrastructure Services (EIS) at Chubb is focused on delivering services across multiple disciplines at Chubb. Cloud Engineering is one of the key services, responsible for delivering cloud-based services both on-prem and off-prem. As part of the continued transformation, Chubb is increasing the pace of application transformation into containers and cloud adoption. As such, we are seeking an experienced Cloud Engineer who can be part of this exciting journey at Chubb. As an experienced, hands-on cloud engineer, you will be responsible for both infrastructure automation and container platform adoption at Chubb. A successful candidate would have hands-on experience with container platforms (Kubernetes) and cloud platforms (Azure), plus experience with software development and DevOps enablement through automation and Infrastructure as Code. The successful candidate will also have the opportunity to build and innovate solutions for various infrastructure problems, from developer experience to operational excellence, across the services provided by the cloud engineering team.
Responsibilities Work on cloud transformation projects across cloud engineering to provide automation and self-service Implement automation and self-service capabilities using CI/CD pipelines for infrastructure Write and maintain Terraform-based Infrastructure as Code Build operational capabilities around the cloud platform for handover to Operations after release Document and design controls and governance policies around the Azure platform and automate deployment of the policies Manage end-user collaboration and conduct regular sessions to educate end users on services and automation capabilities Find opportunities for automating away manual tasks Attend to escalations from support teams and provide assistance during major production issues from an engineering perspective Key Requirements Experience with large cloud transformation projects, preferably in Azure Extensive experience with cloud platforms, mainly Azure. Strong understanding of Azure services with demonstrated experience in AKS, App Services, Logic Apps, IAM, Load Balancers, Application Gateway, NSG, Storage and Azure Key Vault. Knowledge of networking concepts and protocols, including VNet, DNS, and load balancing. Writing Infrastructure as Code and pipelines, preferably using Terraform, Ansible, Bash, Python and Jenkins Have written and executed Terraform-based Infrastructure as Code Ability to work in both Windows and Linux environments with container platforms such as Kubernetes, AKS, GKE DevOps experience with the ability to use GitHub, Jenkins and Nexus for pipeline automation and artifact management Implementation experience of secure transports using TLS and encryption along with authentication/authorization flows Experience in certificate management for containerized applications Experience with Jenkins and similar CI/CD tools. Experience in GitOps would be an added advantage Good to have Python coding experience in automation or any area. Education and Qualification Bachelor's degree in Computer Science, Computer Engineering, Information Technology or a relevant field Minimum of 8 years of experience in IT automation, with 2 years supporting Azure-based cloud automation and 2 years of Kubernetes and Docker Relevant Azure certifications Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results. Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence A Great Place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026 Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results Start-Up Culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment Employee Benefits Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth.
Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision-related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and Investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), Retiral Benefits and Car Lease that help employees optimally plan their finances. Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like Education Reimbursement Programs, certification programs and access to global learning programs. Health and Welfare Benefits: We care about our employees’ well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns and comprehensive insurance benefits. Application Process Our recruitment process is designed to be transparent and inclusive. Step 1: Submit your application via the Chubb Careers Portal. Step 2: Engage with our recruitment team for an initial discussion. Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4: Final interaction with Chubb leadership. Join Us With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey. Apply Now: Chubb External Careers
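As a small illustration of the policy-automation responsibilities above, here is a sketch of a pipeline gate that scans a `terraform show -json` plan for resources missing mandatory tags. The required tag names are an example policy, and the plan layout is assumed to follow Terraform's standard JSON output.

```python
import json
import sys

REQUIRED_TAGS = {"owner", "cost-center", "environment"}  # example policy

def untagged_resources(plan_path: str):
    """Yield (address, missing_tags) for planned resources that lack
    the required tags. Assumes standard 'terraform show -json' layout."""
    with open(plan_path) as f:
        plan = json.load(f)
    for rc in plan.get("resource_changes", []):
        after = (rc.get("change") or {}).get("after") or {}
        tags = after.get("tags") or {}
        missing = REQUIRED_TAGS - set(tags)
        if missing:
            yield rc["address"], sorted(missing)

if __name__ == "__main__":
    failures = list(untagged_resources(sys.argv[1]))
    for address, missing in failures:
        print(f"{address}: missing tags {missing}")
    sys.exit(1 if failures else 0)  # non-zero fails the pipeline stage
```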

Posted 1 month ago

Apply

3.0 years

3 - 15 Lacs

Nāgpur

On-site

Key Responsibilities: Design, implement, and maintain CI/CD pipelines using tools like Jenkins, GitLab CI, or Azure DevOps. Automate infrastructure deployment using tools such as Terraform, Ansible, or CloudFormation. Work with cloud platforms (AWS, Azure, GCP) to manage services, resources, and configurations. Develop and maintain Docker containers and manage Kubernetes clusters (EKS, AKS, GKE). Monitor application and infrastructure performance using tools like Prometheus, Grafana, ELK, or CloudWatch. Collaborate with developers, QA, and other teams to ensure smooth software delivery and operations. Troubleshoot and resolve infrastructure and deployment issues in development, staging, and production. Maintain security, backup, and redundancy strategies for critical infrastructure. Required Skills & Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. 3 to 5 years of experience in a DevOps role. Experience with one or more cloud platforms: AWS, Azure, or GCP. Proficiency in scripting languages: Bash, Python, or PowerShell. Hands-on experience with containerization (Docker) and orchestration (Kubernetes). Experience with configuration management and Infrastructure as Code tools. Solid understanding of networking, firewalls, load balancing, and monitoring. Strong analytical and troubleshooting skills. Good communication and collaboration abilities. Key Skills: Azure, Docker, Kubernetes, Terraform, Jenkins, CI/CD Pipelines, Linux, Git. Preferred Qualifications: Certifications in AWS, Azure, Kubernetes, or related DevOps tools. Familiarity with GitOps practices. Exposure to security best practices in DevOps. Job Type: Full-time Pay: ₹390,210.46 - ₹1,566,036.44 per year Benefits: Health insurance, Provident Fund Schedule: Rotational shift Work Location: In person Speak with the employer: +91 8369431086
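To illustrate the CloudWatch monitoring duties above, here is a minimal boto3 sketch that creates a CPU alarm on an Auto Scaling group. The alarm name, ASG name, and SNS topic ARN are placeholders.

```python
import boto3  # pip install boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

# Names and the SNS topic ARN below are illustrative placeholders.
cloudwatch.put_metric_alarm(
    AlarmName="web-asg-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": "web-asg"}],
    Statistic="Average",
    Period=300,                 # 5-minute datapoints
    EvaluationPeriods=3,        # sustained for 15 minutes
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```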

Posted 1 month ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Company Description AJA Consulting Services LLP, founded by Phaniraj Jaligama, is committed to empowering youth and creating employment opportunities in both IT and non-IT sectors. With a focus on skill development, AJA provides exceptional resource augmentation, staffing solutions, intern pool management, and corporate campus engagements for a diverse range of clients. Through its flagship CODING TUTOR platform, AJA trains fresh graduates and IT job seekers in full-stack development, enabling them to transition seamlessly into industry roles. Based in Hyderabad, AJA operates from a state-of-the-art facility in Q City. Role Description We're hiring a Senior DevOps/Site Reliability Engineer with 5–6 years of hands-on experience in managing cloud infrastructure, CI/CD pipelines, and Kubernetes environments. You’ll also mentor junior engineers and lead real-time DevOps initiatives. 🔧 What You’ll Do *Build and manage scalable, fault-tolerant infrastructure (AWS/GCP/Azure) *Automate CI/CD with Jenkins, GitHub Actions, or CircleCI *Work with IaC tools: Terraform, Ansible, CloudFormation *Set up observability with Prometheus, Grafana, Datadog *Mentor engineers on best practices, tooling, and automation ✅ What You Bring *5–6 years in DevOps/SRE roles *Strong scripting (Bash/Python/Go) and automation skills *Kubernetes & Docker expertise *Experience in production monitoring, alerting, and RCA *Excellent communication and team mentorship skills 💡 Bonus: GitOps, Service Mesh, ELK/EFK, Vault 📩 Apply now by emailing your resume to a.malla@ajacs.in
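As a taste of the observability setup this role involves, here is a minimal sketch that instruments a Python service with the prometheus_client library so Prometheus (and Grafana on top of it) can scrape it. The metric names and port are arbitrary examples.

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server  # pip install prometheus-client

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

@LATENCY.time()
def handle_request():
    time.sleep(random.uniform(0.01, 0.1))       # stand-in for real work
    status = "ok" if random.random() > 0.05 else "error"
    REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # exposes /metrics for Prometheus to scrape
    while True:
        handle_request()
```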

Posted 1 month ago

Apply

7.0 years

40 Lacs

India

Remote

Experience: 7.00+ years Salary: INR 4000000.00 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' clients - MatchMove) What do you need for this opportunity? Must-have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python MatchMove is looking for: Technical Lead - Data Platform. In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
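For a concrete sense of the pipeline work described in this posting, here is a minimal PySpark sketch of a batch ingest into an Iceberg table. It assumes a cluster already configured with an Iceberg catalog named glue_catalog; the bucket, database, and table names are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Assumes the Iceberg runtime and a catalog named "glue_catalog" are
# configured on the cluster; all names below are illustrative.
spark = SparkSession.builder.appName("txn-batch-ingest").getOrCreate()

raw = spark.read.json("s3://example-raw-zone/transactions/2024-06-01/")

curated = (
    raw.dropDuplicates(["txn_id"])                       # idempotent re-runs
       .withColumn("txn_date", F.to_date("created_at"))  # partition-friendly column
       .filter(F.col("amount").isNotNull())
)

# Append into an Iceberg table; snapshots enable time-travel queries later.
curated.writeTo("glue_catalog.payments.transactions").append()
```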

Posted 1 month ago

Apply

7.0 years

40 Lacs

Kochi, Kerala, India

Remote

Experience: 7.00+ years Salary: INR 4000000.00 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' clients - MatchMove) What do you need for this opportunity? Must-have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python MatchMove is looking for: Technical Lead - Data Platform. In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply

7.0 years

40 Lacs

Greater Bhopal Area

Remote

Experience: 7.00+ years Salary: INR 4000000.00 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' clients - MatchMove) What do you need for this opportunity? Must-have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python MatchMove is looking for: Technical Lead - Data Platform. In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply

7.0 years

40 Lacs

Indore, Madhya Pradesh, India

Remote

Experience: 7.00+ years Salary: INR 4000000.00 / year (based on experience) Expected Notice Period: 15 Days Shift: (GMT+05:30) Asia/Kolkata (IST) Opportunity Type: Remote Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: MatchMove) (*Note: This is a requirement for one of Uplers' clients - MatchMove) What do you need for this opportunity? Must-have skills required: Gen AI, AWS data stack, Kinesis, open table format, PySpark, stream processing, Kafka, MySQL, Python MatchMove is looking for: Technical Lead - Data Platform. In this role, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight. You will contribute to: Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services. Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark. Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services. Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases. Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment. Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM). Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights. Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines. Responsibilities: Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR. Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication. Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation. Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g. fraud engines) and human-readable interfaces (e.g. dashboards). Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations. Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership. Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform. Requirements: At least 7 years of experience in data engineering. Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum. Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation. Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale. Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions. Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments. Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene. Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders. Brownie Points: Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements. Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection. Familiarity with data contracts, data mesh patterns, and data-as-a-product principles. Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases. Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3. Experience building data platforms for ML/AI teams or integrating with model feature stores. Engagement Model: Direct placement with client. This is a remote role. Shift timings: 10 AM to 7 PM. How to apply for this opportunity? Step 1: Click on Apply and register or log in on our portal. Step 2: Complete the screening form and upload your updated resume. Step 3: Increase your chances of getting shortlisted and meet the client for the interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.
