6.0 years
0 Lacs
India
Remote
We’re Hiring | Senior Python Engineer – Python (FastAPI, Kafka, GCP)

Experience: 6+ Years
Location: 100% Remote
Availability: Immediate to 15 Days
Job Type: Full-time

We’re looking for an experienced Senior Python Engineer with deep expertise in Python, FastAPI, Kafka, GCP, and cloud-native microservices. If you're passionate about building high-performance APIs and event-driven systems with best-in-class observability and DevOps practices, we’d love to connect with you!

Key Responsibilities

Microservices & API Development
- Design and implement scalable microservices using FastAPI, Pydantic, and Async I/O
- Build high-throughput, event-driven services leveraging Confluent Kafka

DevOps & CI/CD
- Implement and manage CI/CD pipelines using GitHub Actions (or similar)
- Deploy secure, containerized applications using Docker, Terraform, and GCP services like Cloud Run, Pub/Sub, and Eventarc

Monitoring & Observability
- Integrate monitoring/logging tools such as New Relic, Cloud Logging, and Cloud Monitoring
- Ensure visibility, performance tuning, and reliability in production environments

Team Leadership & Best Practices
- Define coding standards, enforce testing strategies (Pytest/unittest), and conduct code reviews
- Mentor junior developers and ensure high code quality across the board

Data Processing & Transformation
- Work with caching (Redis), data transformation (XSLT, XML/XSD), and various database systems
- Handle structured/unstructured data with Kafka in JSON, Avro, and Protobuf formats

Required Skills
- Expert in Python with strong FastAPI experience
- Proficient in Async I/O, Pydantic, and Pytest/unittest
- Hands-on experience with Kafka (Confluent), Docker, and Terraform
- Cloud experience (preferably GCP) with services like Cloud Run, Pub/Sub, and Eventarc
- Monitoring with New Relic or equivalent observability tools
- Knowledge of XML/XSD/XSLT transformations
- Familiarity with caching and database/storage systems (PostgreSQL, Redis, etc.)
- Git/GitHub and CI/CD with GitHub Actions

Good to Have
- Experience with PyPI package creation, Tox, Ruff
- Exposure to GKE Autopilot, Artifact Registry, IAM, Secret Manager
- Understanding of Workload Identity Federation and VPC networking
- Bash scripting and familiarity with legacy systems like IBM AS/400 or MS SQL Server

Soft Skills
- Strong leadership and mentoring ability
- Excellent collaboration and communication skills
- Analytical thinking and a drive for continuous improvement

Benefits
- Competitive compensation
- Remote-first work culture and flexible hours
- Access to continuous learning resources
- Opportunity to work on cutting-edge cloud-native architecture
- A collaborative, high-performing tech team

Interested?
📧 Send your resume to shailee.behure@bluetickconsultants.com or message us directly on LinkedIn.
🤝 Referrals are highly appreciated!

#PythonJobs #FastAPI #Kafka #CloudNative #GCP #RemoteJobs #ImmediateJoiners #DevOps #Microservices #BackendEngineer #CloudRun #EventDrivenArchitecture #NewRelic #GitHubActions #WFH
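For illustration, here is a minimal sketch of the kind of service this role describes: an async FastAPI endpoint with a Pydantic model validating the request body. The route, model, and field names are hypothetical, not taken from the employer's codebase.

```python
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    order_id: str
    amount: float

@app.post("/orders", status_code=202)
async def create_order(order: Order) -> dict:
    # A production service might publish this event to a Kafka topic here
    return {"status": "accepted", "order_id": order.order_id}
```

Run locally with `uvicorn app:app`; Pydantic rejects malformed payloads with a 422 before the handler runs.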
Posted 18 hours ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Join our digital revolution in NatWest Digital X

In everything we do, we work to one aim: to make digital experiences which are effortless and secure. So we organise ourselves around three principles: engineer, protect, and operate. We engineer simple solutions, we protect our customers, and we operate smarter.

Our people work differently depending on their jobs and needs. From hybrid working to flexible hours, we have plenty of options that help our people to thrive. This role is based in India, and as such all normal working days must be carried out in India.

Job Description

Join us as an Environment Analyst
- In this key role, you’ll support a set of environments used by a platform’s applications and act as a point of contact for environment-related activities
- We’ll look to you to assist environment managers in preparing a rolling environment strategy which considers risks in relation to stability and resilience
- You’ll be joining a collaborative and supportive team, and have the opportunity to work with a range of stakeholders across the bank
- We're offering this role at associate vice president level

What you'll do
We’re looking for an Environment Analyst to enable the successful implementation of platform change and deliver customer value by assisting environment managers to deliver complex and critical environment-related activities. You’ll help create stories and features for the domain backlog to enable a continual progression of changes, and look for ways to improve efficiency, resilience, reliability, and quality, and to reduce manual inconsistency, by increasing the use of automation and virtualisation.

Your responsibilities will also include:
- Supporting environment managers in managing a set of non-production environments and maintaining non-production environment dashboards
- Understanding and maintaining a focus on customer value and providing a positive customer experience
- Continually looking for ways to increase speed, efficiency, quality, resilience and reliability by introducing automated and virtualised environments
- Working with environment managers to understand the upcoming flow of work and the customer vision in order to contribute to a fast response to environment-related needs
- Working with a range of stakeholders across the bank and third-party suppliers to make sure that platform environments are optimised

The skills you'll need
We’re looking for a capable communicator with knowledge of scaled Agile and PRINCE2 tools and methodologies spanning value stream, portfolio, platform and feature team levels. You’ll also need technical knowledge, including platform, technology, products and domains, and experience in multiple languages or technical domains.

We’ll also expect:
- Eight to twelve years of experience architecting and provisioning secure, scalable infrastructure in AWS using IaC tools such as Terraform and CloudFormation
- Experience implementing and managing deployment strategies such as blue/green, canary, and rolling to support high availability
- Experience automating environment setup, configuration management, and application deployments across development, staging, and production
- Proven experience with GitLab CI/CD, AWS services such as EC2, ECS/EKS, S3, IAM, CloudWatch, and PCF
- A solid understanding of DevOps best practices, including observability, security, and scalability
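As a hedged illustration of the blue/green and canary strategies mentioned above, the sketch below shifts weighted traffic between two target groups behind an AWS Application Load Balancer using boto3. The region and all ARNs are placeholders; a real rollout would also watch error rates before shifting further.

```python
import boto3

elbv2 = boto3.client("elbv2", region_name="eu-west-1")  # placeholder region

# Canary step: send 10% of traffic to the "green" fleet, keep 90% on "blue".
elbv2.modify_listener(
    ListenerArn="arn:aws:elasticloadbalancing:eu-west-1:111122223333:listener/app/demo/abc/def",  # placeholder
    DefaultActions=[{
        "Type": "forward",
        "ForwardConfig": {
            "TargetGroups": [
                {"TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/blue/123", "Weight": 90},
                {"TargetGroupArn": "arn:aws:elasticloadbalancing:eu-west-1:111122223333:targetgroup/green/456", "Weight": 10},
            ]
        },
    }],
)
```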
Posted 18 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
- Design, deploy and manage Azure infrastructure, including virtual machines, storage accounts, virtual networks, and other resources.
- Assist teams by deploying applications to AKS clusters using containerization technologies such as Docker and Kubernetes, Container Registry, etc.
- Familiarity with the Azure CLI and the ability to use PowerShell to scan Azure resources, make modifications, and produce reports or dumps.
- Setting up two- or three-tier applications on Azure: VMs, web apps, load balancers, proxies, etc.
- Well versed in security: AD, managed identities (MI), service principals (SPN), and firewalls.
- Networking: NSGs, VNETs, private endpoints, ExpressRoute, Bastion, etc.
- Familiarity with a scripting language like Python for automation.
- Leveraging Terraform (or Bicep) to automate infrastructure deployment.
- Cost tracking, analysis, reporting, and management at the resource-group level.
- Experience with Azure DevOps.
- Experience with Azure Monitor.
- Strong hands-on experience with ADF, Linked Services/IR (self-hosted/managed), Logic Apps, Service Bus, Databricks, and SQL Server.
- Strong understanding of Python, Spark, and SQL (nice to have).
- Ability to work in fast-paced environments, as we have tight SLAs for tickets.
- Self-driven, with an exploratory mindset, as the work requires a good amount of research (within and outside the application).
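As a small sketch of the resource-scanning automation this posting describes, the snippet below uses the Azure SDK for Python to enumerate resource groups and the resources in each. The subscription ID is a placeholder; credentials are resolved by DefaultAzureCredential (az CLI login, managed identity, or environment variables).

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")  # placeholder

# Walk every resource group and dump its resources, a starting point for a report
for rg in client.resource_groups.list():
    print(f"{rg.name} ({rg.location})")
    for res in client.resources.list_by_resource_group(rg.name):
        print(f"  {res.type}: {res.name}")
```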
Posted 18 hours ago
8.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Requirements:
• A bachelor’s degree in computer science, computer engineering, or similar.
• A minimum of 8+ years' experience in a similar role.
• A minimum of 5 years' experience in Python development.
• Experience in automation scripting and Python test frameworks.
• Implementing various development, testing, and automation tools, and IT infrastructure.
• Setting up tools and the required infrastructure.
• Defining and setting development, test, release, update, and support processes for DevOps operations.
• Encouraging and building automated processes wherever possible.
• Striving for continuous improvement and building continuous integration, continuous delivery, and continuous deployment pipelines (CI/CD pipelines).
• Experience in GitLab, Jira, AWS/Azure, and Grafana.
• Experience in Terraform and Helm chart development and implementation.
• Experience working with embedded product development environments is an added advantage.
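Since the posting asks for Python test frameworks, here is a minimal pytest sketch. The function under test is hypothetical; it only exists to show the assertion and exception-testing style.

```python
# test_normalize.py -- run with: pytest -q
import pytest

def normalize(reading: float, scale: float = 10.0) -> float:
    """Hypothetical helper: scale a raw sensor reading."""
    if scale == 0:
        raise ValueError("scale must be non-zero")
    return reading / scale

def test_normalize_scales_reading():
    assert normalize(42.0) == 4.2

def test_normalize_rejects_zero_scale():
    with pytest.raises(ValueError):
        normalize(1.0, scale=0)
```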
Posted 18 hours ago
10.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from Tata Consultancy Services!!

We are hiring a GCP Data Engineer!

Position: GCP Data Engineer
Job Location: Chennai/Bangalore/Hyderabad/Gurugram/Pune
Experience: 10-12 years

Interested professionals, kindly apply through the link.

Must Have:
- Proficiency in programming languages: Python, Java
- Expertise in data processing frameworks: Apache Beam (Dataflow)
- Active experience with GCP tools and technologies like BigQuery, Dataflow, Cloud Composer, Cloud Spanner, GCS, dbt, etc.
- Data engineering skillset using Python and SQL
- Experience in ETL (Extract, Transform, Load) processes
- Knowledge of DevOps tools like Jenkins, GitHub, and Terraform is desirable
- Good knowledge of Kafka (batch/streaming)
- Understanding of data models and experience performing ETL design and build, and database replication using message-based CDC
- Familiarity with cloud storage solutions
- Strong problem-solving abilities in data engineering challenges
- Understanding of data security and scalability
- Proficiency in relevant tools like Apache Airflow

Referrals are always welcome!
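For context on the Apache Beam (Dataflow) requirement, here is a minimal Beam pipeline in Python: read lines from Cloud Storage, clean them, and write them back. The bucket paths are placeholders, and the sketch uses the local DirectRunner; switching the runner (plus project/region options) executes the same graph on Cloud Dataflow.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(runner="DirectRunner")  # DataflowRunner in production

with beam.Pipeline(options=options) as pipeline:
    (
        pipeline
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/input.txt")   # placeholder path
        | "Strip" >> beam.Map(str.strip)
        | "DropEmpty" >> beam.Filter(bool)
        | "Write" >> beam.io.WriteToText("gs://example-bucket/output")      # placeholder path
    )
```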
Posted 18 hours ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Join us as an Infrastructure Analyst
- Hone your analytical skills as you provide support to ensure the operational health of the platform, covering all aspects of service, risk, cost and people
- You’ll be supporting the platform’s operational stability and technology performance, including maintaining any system utilities and tools provided by the platform
- This is an opportunity to learn a variety of new skills in a constantly evolving environment, working closely with feature teams to continuously enhance your development
- We're offering this role at associate level

What you'll do
As an Infrastructure Analyst, you’ll be providing input to and supporting the team’s activities to make sure that platform integrity is maintained in line with technical roadmaps, while supporting change demands from domains or centres of excellence. You’ll be supporting the delivery of a robust production management service for relevant infrastructure platforms. In addition, you’ll be contributing to the delivery of customer outcomes, innovation and early learning by contributing to testing products and services to identify early on whether they are viable and deliver the desired outcomes.

Your role will involve:
- Contributing to the platform risk culture, making sure that risks are discussed and understood at every step, and effectively collaborating to mitigate risk
- Contributing to the planning and execution of work within the platform and the timely servicing of feature development requests from cross-platform initiatives, and supporting the delivery of regulatory reporting
- Participating in and seeking out opportunities to simplify the platform infrastructure, architecture, services and customer solutions, guarding against introducing new complexity
- Building relationships with platform, domain and relevant cross-domain stakeholders
- Making sure that controls are applied and constantly reviewed, primarily against SOX, to ensure full compliance with all our policies and regulatory obligations

The skills you'll need
To succeed in this role, you’ll need to be a very capable communicator with the ability to communicate complex technical concepts clearly to colleagues, including at management level. You’ll need a solid background working in an Agile or DevOps environment with continuous delivery and continuous integration.

We’ll also look to you to demonstrate:
- At least four years of experience in database administration, primarily MSSQL Server
- Experience administering and managing MSSQL database instances
- Knowledge of Unix and Windows operating systems
- Knowledge of PowerShell, T-SQL, and technologies such as GitLab, Python, Terraform and AWS
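As a small sketch of the Python-plus-T-SQL administration work this role mentions, the snippet below queries database states on a SQL Server instance via pyodbc. The server name and driver version are assumptions; adjust them to the installed ODBC driver and your authentication scheme.

```python
import pyodbc  # assumes the Microsoft ODBC driver for SQL Server is installed

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=db-host;DATABASE=master;"        # placeholder host
    "Trusted_Connection=yes;Encrypt=yes;"
)
cursor = conn.cursor()
# sys.databases lists every database and its availability state
cursor.execute("SELECT name, state_desc FROM sys.databases")
for name, state in cursor.fetchall():
    print(f"{name}: {state}")
conn.close()
```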
Posted 18 hours ago
10.0 - 15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Dear Candidates,

Greetings from TCS!!!!

TCS is looking for an AWS Cloud Engineer.

Experience: 10-15 years
Location: Chennai / Hyderabad

Requirements: AWS Cloud Engineer

Cloud Infrastructure:
- AWS services: EC2, S3, VPC, IAM, Lambda, RDS, Route 53, ELB, CloudFront, Auto Scaling
- Serverless architecture design using Lambda, API Gateway, and DynamoDB

Containerization:
- Docker and orchestration with ECS or EKS (Kubernetes)

Infrastructure as Code (IaC):
- Terraform (preferred), AWS CloudFormation
- Hands-on experience creating reusable modules and managing cloud resources via code

Automation & CI/CD:
- Jenkins, GitHub Actions, GitLab CI/CD, AWS CodePipeline
- Automating deployments and configuration management

Scripting & Programming:
- Proficiency in Python, Bash, or PowerShell for automation and tooling

Monitoring & Logging:
- CloudWatch, CloudTrail, Prometheus, Grafana, ELK stack

Networking:
- VPC design, subnets, NAT Gateway, VPN, Direct Connect, load balancing
- Security groups, NACLs, and route tables

Security & Compliance:
- IAM policies and roles, KMS, Secrets Manager, Config, GuardDuty
- Implementing encryption, access controls, and least-privilege policies
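As an example of the Python automation and tooling this role calls for, here is a short boto3 script that inventories EC2 instances and their states across a region. The region is a placeholder; credentials come from the standard AWS credential chain.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-south-1")  # placeholder region

# Paginate so the inventory works for accounts with many instances
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["State"]["Name"])
```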
Posted 18 hours ago
8.0 years
0 Lacs
Karnataka, India
On-site
Description:
To deliver and maintain IT applications and services in order to realise the bank's strategy in the field of information technology. Engineers in this job category work in an agile way, in squads, to deliver short-cycle, full-fledged IT products.

The DevOps Engineer is responsible, as a front-liner, for keeping the system working. Their role is to define strategy and lead the implementation of DevSecOps pipelines within the bank's digital and non-digital journeys; orchestrate build and release pipelines and ensure seamless application promotion for all the digital squads; and provide the necessary support to the Agile coach, scrum master and development squads, automating complete rollouts (non-production and production) for all applications, including APIs and database promotions. They also ensure the container platform is configured and available in the Azure cloud environment.

• Uses his/her technical expertise and experience to contribute to all sprint events (planning, refinements, retrospectives, demos)
• Consults with the team about what is needed to fulfil the functional and non-functional requirements of the IT product to be built and released
• Defines and orchestrates DevOps for the IT product, enables automated unit tests in line with the customer's wishes and the IT area's internal ambitions, and reviews colleagues' IT products
• Defines, designs and enables automated builds and automated tests for IT products (functional, performance, resilience and security tests)
• Performs lifecycle management (including decommissioning) for IT products under management
• Defines and improves the continuous delivery process
• Sets up the IT environment, deploys the IT product on the IT infrastructure and implements the required changes
• Sets up monitoring of IT product usage by the customer

Operating Environment, Framework and Boundaries, Working Relationships
• Works within a multidisciplinary team or in an environment in which multidisciplinary teamwork is carried out.
• Is primarily responsible for the automated non-production and production rollouts (or technical configuration) of software applications.
• The range of tasks includes the following:
  o The analysis and design of the DevOps solution for any application (or the technical configuration)
  o Coding and reviewing pipelines and/or package integrations in programming and scripting languages and frameworks:
    - Azure/AWS/Cloud Pak DevOps services
    - Pipeline creation using templates, and enhancing the existing templates based on need
    - Integration with various DevOps tools such as SonarQube, Veracode, Twistlock, Ansible, Terraform and HashiCorp Vault
    - Azure Test Plans setup and configuration with pipelines
    - Cloud-based deployments for Spring Boot Java, React.js, Node.js and .NET Core using native Kubernetes and AKS/EKS/OpenShift
    - Setting up Kubernetes clusters with ingress controllers (NGINX and NGINX Plus)
    - Python and shell scripting
    - Logging and monitoring using Splunk, EFK and ELK
    - Middleware on-premises automated deployments for WebSphere, JBoss, BPM, IIS and IIB
    - Operating systems: RHEL, CentOS, Ubuntu
    - Liquibase/Flyway for database automation
  o Basic application development knowledge for cloud-native and traditional apps
  o API gateway and API deployments; database systems, with knowledge of SQL and NoSQL stores (e.g. MySQL, Oracle, MongoDB, Couchbase, etc.)
  o Continuous delivery (compile, build, package, deploy)
  o Test-driven development (TDD) and test automation (e.g. regression, functional and integration tests); debugging and profiling
  o Software configuration management and version control
  o Working in an agile/scrum environment, meeting sprint commitments and contributing to the agile process
  o Maintaining traceability of testing activities

REQUIREMENTS:
• 8 to 10 years of overall experience, including 6 to 7 years' experience as a DevOps Engineer defining solutions and implementing them on common on-premises and cloud platforms, with scripting-language and framework expertise
• Expert in Azure DevOps services
• Hands-on in pipeline creation using templates, and in enhancing the existing templates based on need
• Able to perform integration by coding templates with various DevOps tools such as SonarQube, Veracode, Twistlock, Ansible, Terraform, HashiCorp Vault and UCD
• Implements Azure Test Plans setup and configuration with pipelines
• Automates cloud-based deployments for Spring Boot Java, React.js, Node.js and .NET Core using native Kubernetes and AKS/EKS/OpenShift
• Experience setting up Kubernetes clusters with ingress controllers (NGINX and NGINX Plus)
• Expert in Python/shell scripting
• Expert in DB automation tools (Flyway/Liquibase)
• Experience implementing Azure Files and file sync solutions
• Experience in logging and monitoring using Splunk, EFK and ELK
• Experience in middleware on-premises automated deployments for WebSphere, JBoss, BPM, IIS and IIB
• Expert in OS: RHEL, CentOS, Ubuntu
• Knowledge of IBM Cloud Pak using Red Hat OpenShift
• Basic application development knowledge for cloud-native and traditional apps
• Experience in API gateway and API deployments
• Nice to have: knowledge of immutable infrastructure and of infrastructure automation and provisioning tools
• Strong understanding of Agile methodologies
• Strong communication skills, with the ability to communicate complex technical concepts and align the organisation on decisions
• Sound problem-solving skills, with the ability to quickly process complex information and present it clearly and simply
• Utilises team collaboration to create innovative solutions efficiently
• Passionate about technology and excited about the impact of emerging/disruptive technologies
• Believes in a culture of brutal transparency and trust
• Open to learning new ideas outside their scope or knowledge
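As a small illustration of the Python scripting used around Kubernetes rollouts, the sketch below checks a Deployment's readiness with the official Kubernetes Python client. The deployment and namespace names are hypothetical.

```python
from kubernetes import client, config

config.load_kube_config()  # use load_incluster_config() when running inside a pod
apps = client.AppsV1Api()

# Hypothetical deployment/namespace; a pipeline step could gate promotion on this
dep = apps.read_namespaced_deployment("payments-api", "staging")
desired = dep.spec.replicas or 0
ready = dep.status.ready_replicas or 0
print(f"{dep.metadata.name}: {ready}/{desired} replicas ready")
```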
Posted 18 hours ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Management Level: E

Equiniti is a leading international provider of shareholder, pension, remediation, and credit technology. With over 6,000 employees, it supports 37 million people in 120 countries.

EQ India began its operations in 2014 as a Global India Captive Centre for Equiniti, a leading fintech company specialising in shareholder management. Within a decade, EQ India strengthened its operations and transformed from a capability centre into a Global Competency Centre supporting EQ's growth story worldwide. Capitalising on India’s strong reputation as a global talent hub for IT/ITES, EQ India has structured the organisation to be a part of this growth story. Today, EQ India has evolved into an indispensable part of EQ Group, providing critical fintech services to the US and UK.

Role summary:
The Senior Technical Architect is part of a team responsible for technical leadership, governance, and infrastructure designs for EQ projects. The role ensures that technical systems and infrastructure are designed to support business requirements, technical and security standards, and technology strategy. Applicants should have detailed knowledge of IT infrastructure, covering public cloud platforms (AWS preferred) and on-premises data centre solutions. Prior experience as a Technical Architect is essential, along with strong skills in engaging stakeholders, collaborating across a range of technical and business disciplines to agree solutions, and presenting technical proposals and designs to review boards.

Core Duties/Responsibilities:
- Maintain engagement with the wider Equiniti environment by creating and communicating standards, governance processes, approved architecture models, the systems and technologies deployed, and corporate and IT strategies.
- Work across a range of EQ projects, including data centre to AWS migration, platform upgrades, and new product implementations.
- Act as a key resource in the project lifecycle, driving initiation, reviewing requirements, completing the infrastructure design, and providing technical oversight for implementation teams.
- Support project initiation by providing cost and complexity assessments, engaging with stakeholders, and helping to define the scope of activities.
- Review requirements and undertake discovery activities to propose technical solutions that meet business needs while meeting technical standards for quality, supportability, and cost.
- Produce high-quality technical designs and support the creation of build documentation, providing effective technical solutions to EQ business requirements.
- Participate in architecture design reviews and other technical governance forums across the organisation, representing the infrastructure architecture team across multiple projects.
- Contribute to knowledge management by adding to and supporting the maintenance of infrastructure architecture artifact repositories.
- Contribute to the definition and maintenance of architectural, security and technical standards, reflecting evolving technology and emerging best practice.
- Promote improvements to processes and standards within the architecture teams and the wider technology function.

Skills, Knowledge & Experience:
- Skilled communicator, comfortable engaging a range of stakeholders, and capable of understanding business requirements and translating them into technical solutions.
- Experience creating high-quality multi-tiered infrastructure designs for new and existing application services in accordance with defined standards.
- Experienced at providing cost estimates for on-premises and public cloud solutions.
- Experience across a range of data centre technologies such as server, storage, network, and virtualisation solutions.
- Experience designing infrastructure solutions for public cloud platforms (AWS/Azure).
- Experience working with complex network topologies and familiarity with a range of network technologies across on-premises and cloud environments.
- A track record of successfully achieving project deadlines and budgets, and meeting quality standards.
- Technical certifications and knowledge of architecture and delivery frameworks are a distinct advantage (AWS/Azure Solution Architect, CCNA, M365, TOGAF, PRINCE2, Agile).

Technical Ability:
In-depth experience of proposing and designing technical solutions across a wide range of technologies in an enterprise environment:
- Core Microsoft technologies such as Active Directory, Exchange, Hyper-V, M365, SharePoint, SQL, and Windows Server.
- Public cloud platforms such as Amazon Web Services and Microsoft Azure.
- Deployment, configuration management and monitoring systems such as Terraform, Puppet, and New Relic.
- High availability and load balancing, including Microsoft clustering and hardware load balancers.
- Physical infrastructure such as data centres, server hardware, hypervisors, SAN storage solutions, and network infrastructure.
- Infrastructure security platforms, tooling, and vulnerability assessment.
- Secure file transfer platforms such as Progress MOVEit.
- Familiarity with designing solutions to support a range of commercially available and bespoke applications.

We are committed to equality of opportunity for all staff, and applications from individuals are encouraged regardless of age, disability, sex, gender reassignment, sexual orientation, pregnancy and maternity, race, religion or belief, and marriage and civil partnership. Please note any offer of employment is subject to satisfactory pre-employment screening checks.
Posted 18 hours ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Join us as an Infrastructure Analyst
- Hone your analytical skills as you provide support to ensure the operational health of the platform, covering all aspects of service, risk, cost and people
- You’ll be supporting the platform’s operational stability and technology performance, including maintaining any system utilities and tools provided by the platform
- This is an opportunity to learn a variety of new skills in a constantly evolving environment, working closely with feature teams to continuously enhance your development
- We're offering this role at associate level

What you'll do
As an Infrastructure Analyst, you’ll be providing input to and supporting the team’s activities to make sure that platform integrity is maintained in line with technical roadmaps, while supporting change demands from domains or centres of excellence. You’ll be supporting the delivery of a robust production management service for relevant infrastructure platforms. In addition, you’ll be contributing to the delivery of customer outcomes, innovation and early learning by contributing to testing products and services to identify early on whether they are viable and deliver the desired outcomes.

Your role will involve:
- Contributing to the platform risk culture, making sure that risks are discussed and understood at every step, and effectively collaborating to mitigate risk
- Contributing to the planning and execution of work within the platform and the timely servicing of feature development requests from cross-platform initiatives, and supporting the delivery of regulatory reporting
- Participating in and seeking out opportunities to simplify the platform infrastructure, architecture, services and customer solutions, guarding against introducing new complexity
- Building relationships with platform, domain and relevant cross-domain stakeholders
- Making sure that controls are applied and constantly reviewed, primarily against SOX, to ensure full compliance with all our policies and regulatory obligations

The skills you'll need
To succeed in this role, you’ll need to be a very capable communicator with the ability to communicate complex technical concepts clearly to colleagues, including at management level. You’ll need a solid background working in an Agile or DevOps environment with continuous delivery and continuous integration.

We’ll also look to you to demonstrate:
- At least four years of experience in database administration, primarily MSSQL Server
- Experience administering and managing MSSQL database instances
- Knowledge of Unix and Windows operating systems
- Knowledge of PowerShell, T-SQL, and technologies such as GitLab, Python, Terraform and AWS
Posted 18 hours ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Position Overview
You will lead automation, CI/CD, and cloud infrastructure initiatives, partnering with Development, QA, Security, and IT Operations. You’ll balance hands-on implementation with strategic architecture, mentoring, and on-call support. Your expertise with containers, CI tools, and version control will help ensure reliability, scalability, and continuous improvement.

✅ Key Responsibilities:
- Design, build & maintain CI/CD pipelines using Jenkins (or equivalent), seamlessly integrating with Git for code version control
- Containerization & orchestration: create Docker images and manage container lifecycles; deploy and scale services in Kubernetes clusters (typically self-managed or cloud-managed)
- Cloud infrastructure provisioning & automation: use IaC tools like Terraform or Ansible to provision compute, networking, and storage in AWS/Azure/GCP cloud environments
- Monitoring, logging & observability: implement solutions like Prometheus, ELK, or Grafana to monitor performance, set alerts, and troubleshoot production issues
- System reliability & incident management: participate in the on-call rotation, perform root-cause analysis, and own post-incident remediation
- Security & compliance: embed DevSecOps practices, including container image scanning, IAM policies, secrets management, and vulnerability remediation
- Mentorship & leadership: guide junior team members, propose process improvements, and help transition manual workflows to automated pipelines

🔧 Required Technical Skills (with proficiency):
- Containers: Docker (4/5); image builds, Docker Compose, multi-stage CI integrations
- Orchestration: Kubernetes (3.5/5); daily operations in clusters (deployments, services, Helm usage)
- Version Control: Git (4/5); branching strategy, pull requests, merge conflict resolution
- CI/CD Automation: Jenkins (4/5); pipeline scripting (Groovy/Pipeline), plugin ecosystem, pipeline as code
- Cloud Platforms: AWS / Azure / GCP (4/5); infrastructure provisioning, cost optimization, IAM setup
- Scripting & Automation: Python, Bash, or equivalent; writing automation tools, CI hooks, server scripts
- Infrastructure as Code: Terraform, Ansible, or similar; declarative templates, module reuse, environment isolation
- Monitoring & Logging: Prometheus, ELK, Grafana, etc.; alert definitions, dashboards, log aggregation
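As a brief sketch of the Docker automation listed above, the snippet below builds an image from a local Dockerfile and pushes it to a registry using the Docker SDK for Python. The registry URL and tag are placeholders; a CI hook like this assumes a reachable Docker daemon and registry credentials already configured.

```python
import docker  # Docker SDK for Python ("docker" on PyPI)

client = docker.from_env()

# Build from the Dockerfile in the current directory; registry/tag are placeholders
image, build_logs = client.images.build(path=".", tag="registry.example.com/app:1.0")

# Stream push output so CI logs show progress
for line in client.images.push("registry.example.com/app", tag="1.0",
                               stream=True, decode=True):
    print(line.get("status", ""))
```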
Posted 19 hours ago
0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Position Overview
You will lead automation, CI/CD, and cloud infrastructure initiatives, partnering with Development, QA, Security, and IT Operations. You’ll balance hands-on implementation with strategic architecture, mentoring, and on-call support. Your expertise with containers, CI tools, and version control will help ensure reliability, scalability, and continuous improvement.

✅ Key Responsibilities:
- Design, build & maintain CI/CD pipelines using Jenkins (or equivalent), seamlessly integrating with Git for code version control
- Containerization & orchestration: create Docker images and manage container lifecycles; deploy and scale services in Kubernetes clusters (typically self-managed or cloud-managed)
- Cloud infrastructure provisioning & automation: use IaC tools like Terraform or Ansible to provision compute, networking, and storage in AWS/Azure/GCP cloud environments
- Monitoring, logging & observability: implement solutions like Prometheus, ELK, or Grafana to monitor performance, set alerts, and troubleshoot production issues
- System reliability & incident management: participate in the on-call rotation, perform root-cause analysis, and own post-incident remediation
- Security & compliance: embed DevSecOps practices, including container image scanning, IAM policies, secrets management, and vulnerability remediation
- Mentorship & leadership: guide junior team members, propose process improvements, and help transition manual workflows to automated pipelines

🔧 Required Technical Skills (with proficiency):
- Containers: Docker (4/5); image builds, Docker Compose, multi-stage CI integrations
- Orchestration: Kubernetes (3.5/5); daily operations in clusters (deployments, services, Helm usage)
- Version Control: Git (4/5); branching strategy, pull requests, merge conflict resolution
- CI/CD Automation: Jenkins (4/5); pipeline scripting (Groovy/Pipeline), plugin ecosystem, pipeline as code
- Cloud Platforms: AWS / Azure / GCP (4/5); infrastructure provisioning, cost optimization, IAM setup
- Scripting & Automation: Python, Bash, or equivalent; writing automation tools, CI hooks, server scripts
- Infrastructure as Code: Terraform, Ansible, or similar; declarative templates, module reuse, environment isolation
- Monitoring & Logging: Prometheus, ELK, Grafana, etc.; alert definitions, dashboards, log aggregation
Posted 19 hours ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist.

In this role, you will:
- Provide technological guidance and leadership to the Scrum team members, encouraging them to continue learning to ensure organic growth of the team.
- Ensure technical leadership is present within the team and that all deployment activities follow agreed standards and processes.
- Define, implement, and oversee team processes in line with the wider DevOps Engineering team and HSBC procedures.
- Ensure changes delivered by the team are compliant with HSBC controls.
- Ensure the team is compliant with HSBC policies, both for personnel and for the services/applications managed by the team.
- Prepare the teams in DevOps Engineering for upcoming changes in ways of working, available tooling and the internal controls in use at the bank.
- Ensure task and sprint planning is accurate and achievable.
- Act as a technical SME for the platforms hosting the Cybersecurity applications.

Requirements

To be successful in this role, you should meet the following requirements:
- 7+ years of experience in a DevOps role within an agile delivery environment.
- Experience designing and building highly scalable and resilient platforms and applications, including multisite resilience, load balancing, automatic failover, and active-active implementations of application servers and database and storage backends.
- Experience working directly with every layer of the application service stack, from infrastructure (network, storage, servers), through operating system and middleware software (RHEL, Windows Server, nginx, Apache httpd, Gunicorn, PostgreSQL and MSSQL database backends), to application software (developed in-house).
- Experience working with third-party cloud computing platforms such as Alibaba Cloud, Amazon Web Services, Azure and Google Cloud, with a focus on virtual computing, virtual networking and network services, and managed container solutions.
- Experience working with big data environments such as CDP (Cloudera Data Platform), HDP (Hortonworks Data Platform) and GCP (Google Cloud Platform).
- Experience working with virtualization, infrastructure as code, configuration as code, automation, centralized logging and monitoring, software versioning and secrets management technologies, such as KVM, VMware, Terraform, Ansible, Jenkins, AppDynamics, Splunk, Git and HashiCorp Vault.
- Experience working with container technologies like Docker and orchestration technologies like Kubernetes.
- Experience working in a DevOps team.
- Excellent communication and mentoring skills.

You’ll achieve more when you join HSBC.
www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSDI
Posted 19 hours ago
6.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description
We are seeking a highly skilled and experienced Platform Engineer to manage and enhance our entire application delivery platform, from CloudFront down to the underlying EKS clusters and their associated components. The ideal candidate will possess deep expertise across cloud infrastructure, networking, Kubernetes, and service mesh technologies, coupled with strong programming skills. This role involves maintaining the stability, scalability, and performance of our production environment, including day-to-day operations, upgrades, troubleshooting, and developing in-house tools.

Main Responsibilities
- Perform regular upgrades and patching of EKS clusters and associated components, and oversee the health, performance, and scalability of the EKS clusters.
- Manage and optimize related components such as Karpenter (cluster autoscaling) and ArgoCD (GitOps continuous delivery).
- Implement and manage service mesh solutions (e.g., Istio, Linkerd) for enhanced traffic management, security, and observability.
- Participate in an on-call rotation to provide 24/7 support for critical platform issues; monitor the platform for potential issues and implement preventative measures.
- Develop, maintain, and automate in-house tools and scripts using programming languages like Python or Go to improve platform operations and efficiency.
- Configure and manage CloudFront distributions and WAF policies for efficient and secure content delivery and routing.
- Develop and maintain documentation for platform architecture, processes, and troubleshooting guides.

Tech Stack
- AWS: VPC, EC2, ECS, EKS, Lambda, CloudFront, WAF, MWAA, RDS, ElastiCache, DynamoDB, OpenSearch, S3, CloudWatch, Cognito, SQS, KMS, Secrets Manager, MSK
- Terraform, GitHub Actions, Prometheus, Grafana, Atlantis, ArgoCD, OpenTelemetry

Required Skills and Experiences
- Proven 6+ years of experience as a Platform Engineer, Site Reliability Engineer (SRE), or similar role with a focus on end-to-end platform ownership.
- In-depth knowledge and at least 4 years of hands-on experience with Amazon EKS and Kubernetes.
- Strong understanding and practical experience with Karpenter, ArgoCD, and Terraform.
- Solid grasp of core networking concepts and at least 5 years of extensive experience with AWS networking services (VPC, security groups, network ACLs, CloudFront, WAF, ALB, DNS).
- Demonstrable experience with SSL/TLS certificate management.
- Proficiency in programming languages such as Python or Go for developing and maintaining automation scripts and internal tools.
- Experience with monitoring and logging tools (e.g., Prometheus, Grafana, ELK stack).
- Excellent problem-solving and debugging skills across complex distributed systems.
- Strong communication and collaboration abilities.
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent practical experience).

Preferred Qualifications
- Prior experience working with service mesh technologies (preferably Istio) in a production environment.
- Experience building or contributing to Kubernetes controllers.
- Experience with multi-cluster Kubernetes architectures.
- Experience building AZ-isolated, DR architectures.

Remarks
*Please note that you cannot apply for PayPay (Japan-based) positions and other positions in parallel or in duplicate.

PayPay 5 senses
Please refer to the PayPay 5 senses to learn what we value at work.

Working Conditions
Employment Status: Full Time
Office Location: Gurugram (WeWork)
※The development center requires you to work in the Gurugram office to establish a strong core team.
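As a sketch of the CloudFront tooling this role might involve, the snippet below invalidates cached paths on a distribution via boto3. The distribution ID and path are placeholders.

```python
import time
import boto3

cloudfront = boto3.client("cloudfront")

response = cloudfront.create_invalidation(
    DistributionId="E1234567890ABC",  # placeholder distribution ID
    InvalidationBatch={
        "Paths": {"Quantity": 1, "Items": ["/static/*"]},  # placeholder path
        "CallerReference": str(time.time()),  # must be unique per request
    },
)
print(response["Invalidation"]["Status"])  # "InProgress" until CloudFront completes it
```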
Posted 19 hours ago
6.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
About the Role
We are seeking a skilled Backend Engineer (.NET Core) with a strong focus on multitenant architecture, scalability, and performance. You will be a key contributor to our platform engineering team, responsible for building robust, scalable, and secure backend systems that support multiple tenants in a distributed cloud-native environment. This role is ideal for someone who combines technical depth in backend engineering with a passion for engineering excellence, non-functional requirements (NFRs), and platform scalability.

Key Responsibilities
- Design and develop high-performance, scalable backend services using C# and .NET Core.
- Build and maintain RESTful APIs and microservices for a multitenant SaaS platform.
- Drive engineering best practices, including code quality, design patterns, and SOLID principles.
- Work with cloud platforms (Google Cloud Platform and Azure) to implement cloud-native, multitenant solutions.
- Implement and maintain containerized applications using Docker, Kubernetes, and Helm.
- Ensure robust handling of non-functional requirements, including tenant isolation, secure multi-tenancy, performance optimization, scalability across tenants, and observability with tenant-specific monitoring.
- Develop and maintain automated testing frameworks (unit, integration, E2E).
- Utilize CI/CD and GitOps workflows, leveraging tools such as Terraform and Helm for Infrastructure as Code (IaC).
- Collaborate in Agile environments using Scrum or Kanban methodologies.
- Identify and proactively mitigate risks, dependencies, and bottlenecks to ensure optimal performance.

Must-Have Skills
- 6-10 years of backend development experience with strong hands-on skills in C#, .NET Core, and RESTful API development.
- Experience in asynchronous programming, event-driven architecture, and Pub/Sub systems.
- Strong foundation in OOP, data structures, and algorithms.
- Proficient with Docker, Kubernetes, and Helm.
- Experience with CI/CD, GitOps, and Infrastructure as Code (Terraform, Helm).
- Strong understanding of multi-tenant architecture and NFRs: tenant isolation, shared vs. isolated models, security and resource partitioning, and per-tenant performance tuning.
- Proficient with relational databases (PostgreSQL preferred), with exposure to NoSQL.
- Experience working in Agile/DevOps environments.

Nice-to-Have Skills
- Experience with frontend technologies (React.js) for occasional full-stack collaboration.
- Knowledge of modern API frameworks: gRPC, GraphQL, etc.
- Familiarity with feature flagging, blue-green deployments, and canary releases.
- Exposure to monitoring, logging, and alerting systems for multitenant environments.

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Certifications in Azure or Google Cloud Platform are a plus.
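This role's stack is C#/.NET, but to keep this page's examples in one language, here is a Python sketch of the Pub/Sub pattern the posting references: publishing an event carrying tenant context as a message attribute via the google-cloud-pubsub client. The project, topic, and attribute names are hypothetical.

```python
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-project", "tenant-events")  # placeholders

# Message attributes (keyword args) can carry the tenant ID so consumers
# can enforce tenant isolation without parsing the payload
future = publisher.publish(
    topic_path,
    data=b'{"event": "invoice.created"}',
    tenant_id="acme",  # hypothetical tenant identifier
)
print(future.result())  # message ID once the broker acknowledges
```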
Posted 19 hours ago
0.0 - 1.0 years
0 - 0 Lacs
Indore, Madhya Pradesh
On-site
Responsibilities:
- Develop and maintain infrastructure as code (IaC) to support scalable and secure infrastructure.
- Collaborate with the development team to streamline and optimize the continuous integration and deployment pipeline.
- Manage and administer Linux systems, ensuring reliability and security.
- Configure and provision cloud resources on AWS, Google Cloud, or Azure as required.
- Implement and maintain containerized environments using Docker and orchestration with Kubernetes.
- Monitor system performance and troubleshoot issues to ensure optimal application uptime.
- Stay updated with industry best practices, tools, and DevOps methodologies.
- Enhance software development processes through automation and continuous improvement initiatives.

Requirements:
- Degree(s): B.Tech/BE (CS, IT, EC, EI) or MCA.
- Eligibility: open to 2021, 2022, and 2023 graduates and postgraduates only.
- Expertise in Infrastructure as Code (IaC) with tools like Terraform and CloudFormation.
- Proficiency in software development using languages such as Python, Bash, and Go.
- Experience in continuous integration with tools such as Jenkins, Travis CI, and CircleCI.
- Strong Linux system administration skills.
- Experience in provisioning, configuring, and managing cloud resources (AWS, Google Cloud Platform, or Azure).
- Excellent verbal and written communication skills.
- Experience with containerization and orchestration tools such as Docker and Kubernetes.

Job Type: Full-time
Pay: ₹25,000.00 - ₹70,000.00 per month
Benefits: Health insurance
Schedule: Day shift
Ability to commute/relocate: Indore, Madhya Pradesh: reliably commute or plan to relocate before starting work (Preferred)
Education: Bachelor's (Preferred)
Experience: Python: 1 year (Preferred); AI/ML: 1 year (Preferred)
Location: Indore, Madhya Pradesh (Preferred)
Work Location: In person
Posted 19 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Work Location: Trivandrum/Pune
Role: Automation and DevSecOps Engineer
Experience: 5-8 yrs
Skills: Python programming with DevOps, SIEM + SOAR

Job Description:
At Company, we take pride in delivering comprehensive information security services to our internal customers. The security services are based on security platforms hosted on SaaS and IaaS platforms on private and public cloud service providers. The DevOps engineer will work on developing and implementing infrastructure in support of web and backend application deployment in the Information Security Services division. This position will work closely with the TechOps team, cloud providers and OEMs to ensure integration and automation towards an efficient, clean and reusable code base to empower DevOps in our security services area.

Key Skills:
- Collaborate with development and operations teams to design, build, and deploy applications/scripts that automate routine manual processes, with a strong focus on security orchestration and automated response (SOAR) capabilities.
- Proficient in developing Python scripts to automate routine tasks using cron jobs, scheduler services, or workflow management tools, particularly in the context of security automation.
- Ability to work closely with the operations team to understand, support, and resolve all technical challenges in routine operations, with a focus on enhancing security measures through automation.
- Identify areas for improvement in existing programs and develop modifications that enhance security and automate response actions.
- Strong analytical and debugging skills, especially in the context of security incident response and automation.
- Demonstrated experience integrating REST API frameworks with third-party applications, particularly for security orchestration purposes.
- Knowledge of DevOps tools like Jenkins, Terraform, AWS CloudFormation, and Kubernetes, with an emphasis on their use in security automation.
- Hands-on experience with DBMSs like MySQL, PostgreSQL, and NoSQL stores, with a focus on secure data management.
- Comfortable working with Linux, especially in environments requiring secure configurations and automated security responses.
- Keen interest and a proven track record in automation, both on-premises and in the cloud, with a focus on security orchestration and automated response.
- Expertise in Git, Jenkins, and JIRA is a plus, particularly in the context of managing security-related projects.

Primary skills required:
- Deep understanding of security concepts and the ability to work with security analysts to implement automation requirements for security orchestration and automated response.
- Scripting: Python (mandatory), JavaScript/shell scripting, with a focus on security automation.
- Jenkins, GitHub Actions: CI/CD, with a focus on automating security processes.
- Containerized infrastructure management: Docker, Podman, Kubernetes, with an emphasis on secure deployments.
- AWS, Azure: ability to provision and manage infrastructure securely.
- Version control systems: Git, with a focus on managing security-related code and configurations.

Good to have:
- Entry-level security certification (CompTIA Security+ or similar)
- Ansible knowledge
- Understanding of reporting tools, e.g. Grafana
- Initial exposure to the Google Security Operations (SIEM + SOAR) suite

Educational & Professional Qualifications:
- Bachelor's / Master's full-time degree in a technical stream
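As a sketch of the REST API integration work this role describes, here is a minimal Python function that opens an incident in a SOAR platform over HTTP. The endpoint URL, payload fields, and response shape are hypothetical; real SOAR products each have their own API.

```python
import requests

SOAR_URL = "https://soar.example.com/api/v1/incidents"  # hypothetical endpoint

def open_incident(title: str, severity: str, api_token: str) -> str:
    """Create an incident in the (hypothetical) SOAR and return its ID."""
    response = requests.post(
        SOAR_URL,
        headers={"Authorization": f"Bearer {api_token}"},
        json={"title": title, "severity": severity},
        timeout=10,
    )
    response.raise_for_status()  # surface HTTP errors to the caller
    return response.json()["id"]
```

A scheduler (cron, Airflow, or a SIEM webhook) could call `open_incident` whenever a detection rule fires, which is the automated-response loop the posting describes.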
Posted 19 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Position Overview
You will lead automation, CI/CD, and cloud infrastructure initiatives, partnering with Development, QA, Security, and IT Operations. You’ll balance hands-on implementation with strategic architecture, mentoring, and on-call support. Your expertise with containers, CI tools, and version control will help ensure reliability, scalability, and continuous improvement.

✅ Key Responsibilities:
- Design, build & maintain CI/CD pipelines using Jenkins (or equivalent), seamlessly integrating with Git for code version control
- Containerization & orchestration: create Docker images and manage container lifecycles; deploy and scale services in Kubernetes clusters (typically self-managed or cloud-managed)
- Cloud infrastructure provisioning & automation: use IaC tools like Terraform or Ansible to provision compute, networking, and storage in AWS/Azure/GCP cloud environments
- Monitoring, logging & observability: implement solutions like Prometheus, ELK, or Grafana to monitor performance, set alerts, and troubleshoot production issues
- System reliability & incident management: participate in the on-call rotation, perform root-cause analysis, and own post-incident remediation
- Security & compliance: embed DevSecOps practices, including container image scanning, IAM policies, secrets management, and vulnerability remediation
- Mentorship & leadership: guide junior team members, propose process improvements, and help transition manual workflows to automated pipelines

🔧 Required Technical Skills (with proficiency):
- Containers: Docker (4/5); image builds, Docker Compose, multi-stage CI integrations
- Orchestration: Kubernetes (3.5/5); daily operations in clusters (deployments, services, Helm usage)
- Version Control: Git (4/5); branching strategy, pull requests, merge conflict resolution
- CI/CD Automation: Jenkins (4/5); pipeline scripting (Groovy/Pipeline), plugin ecosystem, pipeline as code
- Cloud Platforms: AWS / Azure / GCP (4/5); infrastructure provisioning, cost optimization, IAM setup
- Scripting & Automation: Python, Bash, or equivalent; writing automation tools, CI hooks, server scripts
- Infrastructure as Code: Terraform, Ansible, or similar; declarative templates, module reuse, environment isolation
- Monitoring & Logging: Prometheus, ELK, Grafana, etc.; alert definitions, dashboards, log aggregation
Posted 19 hours ago
7.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Cloud Systems Engineer (Big Data)
Technology: Google/AWS/Azure public cloud platforms; BigQuery, Airflow, Dataflow, Dataproc, PySpark; Terraform, Ansible, Jenkins, Linux and Git
Location: Chennai/Hyderabad, India
Experience: 7 to 9 years overall

Job Summary:
We are seeking a Senior Big Data Systems Engineer to lead a high-performing team of cloud and data engineers for a large Big Data upstream environment hosted on-premises and in the cloud. The candidate should possess in-depth knowledge of Red Hat Linux on Google Cloud.

Job Description: Skills, Roles and Responsibilities (Google/AWS/Azure public cloud, PySpark, BigQuery and Google Airflow)
- Participate in 24x7x365 SAP environment rotational shift support and operations.
- As a team lead, you will be responsible for maintaining, day in and day out, the upstream Big Data environment through which millions of financial records flow; it consists of PySpark, BigQuery, Dataproc and Google Airflow.
- You will be responsible for streamlining and tuning existing Big Data systems and pipelines and building new ones. Making sure the systems run efficiently and with minimal cost is a top priority.
- Manage the operations team in your respective shift; you will be making changes to the underlying systems.
- Provide day-to-day support, enhance platform functionality through DevOps practices, and collaborate with application development teams to optimize database operations.
- Architect and optimize data warehouse solutions using BigQuery to ensure efficient data storage and retrieval.
- Install/build/patch/upgrade/configure big data applications.
- Manage and configure BigQuery environments, datasets, and tables; ensure data integrity, accessibility, and security on the BigQuery platform.
- Implement and manage partitioning and clustering for efficient data querying.
- Define and enforce access policies for BigQuery datasets; implement query usage caps and alerts to avoid unexpected expenses.
- Be very comfortable troubleshooting issues and failures on Linux-based systems, with a good grasp of the Linux command line.
- Create and maintain dashboards and reports to track key metrics like cost and performance.
- Integrate BigQuery with other Google Cloud Platform (GCP) services like Dataflow, Pub/Sub, and Cloud Storage.
- Enable BigQuery access through tools like Jupyter Notebook, Visual Studio Code, and other CLIs.
- Implement data quality checks and data validation processes to ensure data integrity.
- Manage and monitor data pipelines using Airflow and CI/CD tools (e.g., Jenkins, Screwdriver) for automation.
- Collaborate with data analysts and data scientists to understand data requirements and translate them into technical solutions.
- Provide consultation and support to application development teams for database design, implementation, and monitoring.
- Proficiency in Unix/Linux OS fundamentals, shell/Perl/Python scripting, and Ansible for automation.
- Disaster recovery and high availability: expertise in planning and coordinating disaster recovery principles, including backup/restore operations; experience with geo-redundant databases and Red Hat clusters.
- Accountable for ensuring that delivery is within the defined SLAs and agreed milestones (projects) by following best practices and processes for continuous service improvement.
- Work closely with other support organizations (DB, Google, PySpark data engineering and infrastructure teams).
- Incident management, change management, release management and problem management processes based on the ITIL framework.
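To illustrate the partitioning and clustering work mentioned above, here is a small google-cloud-bigquery sketch that creates a time-partitioned, clustered table. The project, dataset, table, and column names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client()
table_id = "my-project.finance.transactions"  # placeholder project.dataset.table

schema = [
    bigquery.SchemaField("txn_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
    bigquery.SchemaField("event_ts", "TIMESTAMP"),
    bigquery.SchemaField("region", "STRING"),
]

table = bigquery.Table(table_id, schema=schema)
table.time_partitioning = bigquery.TimePartitioning(field="event_ts")  # daily partitions
table.clustering_fields = ["region"]  # co-locate rows by region within each partition
client.create_table(table)
```

Queries that filter on `event_ts` then scan only the matching partitions, which is the main lever for the cost control this posting emphasises.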
Posted 19 hours ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About us slice the way you bank slice’s purpose is to make the world better at using money and time, with a major focus on building the best consumer experience for your money. We’ve all felt how slow, confusing, and complicated banking can be. So, we’re reimagining it. We’re building every product from scratch to be fast, transparent, and feel good, because we believe that the best products transcend demographics, like how great music touches most of us. Our cornerstone products and services: slice savings account, slice UPI credit card, slice UPI, slice UPI ATMs, slice fixed deposits, slice borrow, and UPI-powered bank branch are designed to be simple, rewarding, and completely in your control. At slice, you’ll get to build things you’d use yourself and shape the future of banking in India. We tailor our working experience with the belief that the present moment is the only real thing in life. And we have harmony in the present the most when we feel happy and successful together. We’re backed by some of the world’s leading investors, including Tiger Global, Insight Partners, Advent International, Blume Ventures, and Gunosy Capital. About the role We are looking for a Site Reliability Engineer with experience in building and implement functional systems that improve customer experience. Site Reliability Engineer responsibilities include deploying product updates, identifying production issues and implementing integrations that meet customer needs. Ultimately, you will execute and automate operational processes fast, accurately and securely. What you'll do Designing and implementation of IT Infra including Networking, Storage, Compute, Backup and Security. Design and implement power distribution systems, optimize power usage efficiency and ensure redundancy to minimize downtime risks. Architect network infrastructure for data center and cloud environments, including switches, routers, firewalls, VPC security groups, transient gateway etc Implement high-speed interconnects and design network topologies to support scalable and resilient connectivity. Architect storage solutions (NAS/SAN, blockstore, filestore) tailored to meet performance, capacity, and data protection requirements. Optimize compute resources through virtualization/containerization technologies like VMWare ESX, Red Hat Openshift, Microsoft HyperV and Nutanix acropolis. Design fault-tolerant architectures to ensure high availability and minimize service disruptions. Develop rack layouts and configurations to maximize space utilization. Deep diving into Linux server issues and automation of configuration & deployment Documentation of systems processes and runbook. Manage data center vendor team and cable new servers, decommission old servers and manage system inventory Ensure successful execution of IT strategies, architecture guidelines, and standards and guide project teams through the technology selection and architecture/security governance processes Manage and maintain the Cloud DevOps pipeline and work with dev teams. Look for opportunities to optimize and enable consistent automated deployments. Monitor standards/policy compliance by developing and executing governance processes and tools. Provide mentoring and knowledge transfer to others, and promote open culture and DevOps. Participate in incident response and post-mortem activities to identify root causes and prevent recurrence. Proactively identify and address performance bottlenecks, reliability issues, and security vulnerabilities. 
Basic Qualifications
5+ years of experience in the field
Experience with NAS, SAN, block storage, and file storage
Experience with virtualization platforms such as VMware ESXi, Red Hat OpenShift, Nutanix Acropolis, and Microsoft Hyper-V
Working knowledge of networking, switching, routing, and firewalls
Good understanding of Linux
Expertise in Go, TypeScript, Git, and Terraform
Solid understanding of monitoring and logging solutions (e.g., Prometheus, Grafana, ELK stack)
Experience with CI/CD pipelines and DevOps practices
Hands-on experience with public cloud (AWS/GCP)
Working knowledge of Kubernetes
Life at slice
Life so good, you'd think we're kidding:
Competitive salaries. Period.
Extensive medical insurance that looks out for our employees and their dependants. We'll love you and take care of you, our promise.
Flexible working hours. Just don't call us at 3AM, we like our sleep schedule.
Tailored vacation and leave policies so that you enjoy every important moment in your life.
A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
Learning and upskilling opportunities. Seriously, not kidding.
Good food, games, and a cool office to make you feel like home.
An environment so good, you'll forget the term "colleagues can't be your friends".
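As a small illustration of the Linux automation this posting asks for, here is a minimal Python sketch that audits disk usage across mount points and flags anything worth a runbook entry. The mount list and the 80% threshold are illustrative assumptions, not details from the role.

```python
# Hypothetical sketch: flag mount points above a usage threshold,
# the kind of check an SRE might fold into a cron job or runbook.
import shutil

MOUNT_POINTS = ["/", "/var", "/home"]  # assumed mounts; adjust per host
THRESHOLD = 0.80                       # flag anything above 80% used

def audit_disks(mounts=MOUNT_POINTS, threshold=THRESHOLD):
    alerts = []
    for mount in mounts:
        try:
            usage = shutil.disk_usage(mount)
        except FileNotFoundError:
            continue  # mount not present on this host
        used_frac = usage.used / usage.total
        if used_frac > threshold:
            alerts.append((mount, round(used_frac * 100, 1)))
    return alerts

if __name__ == "__main__":
    for mount, pct in audit_disks():
        print(f"WARN {mount} at {pct}% used")
```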
Posted 19 hours ago
8.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role: VP of Engineering
Experience: 8-10 years
Location: Bangalore
Notice Period: up to 30 days
What You'll Own:
● Hands-On Technical Leadership & Core Tech Stack Development
○ Architect and code the first scalable version of our booking portal, routing engine, mobile-first CRM, and operational dashboard.
○ Contribute directly to the codebase, setting the standard for engineering excellence and coding culture.
○ Build systems that handle India-scale logistics, real-time demo scheduling, and payment flows, with high reliability and low latency.
○ Lead backend architecture and microservices strategy using tools like Go, Node.js, Kafka, Postgres, Redis, Kubernetes, and Terraform.
○ Coordinate API strategy across frontend (React), mobile (React Native), and edge interfaces, using GraphQL and gRPC contracts.
● Full-Stack & Platform Ownership
○ Collaborate with frontend engineers on React-based interfaces; enforce design system and performance best practices.
○ Work closely with mobile engineers on React Native, helping optimize cold-start time, offline sync, and background processing.
○ Oversee API versioning, mobile/web contract integrity, and cross-platform interface stability.
○ Enable observability, tracing, and proactive alerting across the platform (Grafana, Prometheus, Sentry, etc.).
● Systems Thinking, Automation & DevOps
○ Design and implement scalable, resilient, and modular backend architectures (evolving from monolith to microservices).
○ Integrate and automate CRM, logistics, inventory, payments, and customer apps into a cohesive real-time ERP-lite system.
○ Champion CI/CD pipelines, zero-downtime deploys, infrastructure as code (Terraform), and rollback safety protocols.
○ Set and uphold engineering SLAs and SLOs (e.g., 99.9% uptime, sub-1s booking latency; see the sketch after this posting).
● AI-Enabled Systems & Innovation
○ Drive the integration of AI/ML into operational workflows: predictive routing, lead scoring, demand forecasting, personalized journeys.
○ Collaborate with data and product teams to deploy models using frameworks like TensorFlow, PyTorch, or OpenAI APIs.
○ Ensure infrastructure supports scalable ML workflows and retraining cycles.
● Security, Compliance & Performance
○ Implement secure coding practices and enforce API security (OAuth2, RBAC, audit logging).
○ Lead efforts around payment data protection, customer data privacy, and infra-level security (SOC 2 readiness).
○ Champion system performance tuning, cost optimization, and scalability testing (load testing, caching, indexing).
● Leadership & Cross-Functional Collaboration
○ Hire, mentor, and grow engineers across specializations: backend, frontend, mobile, data, and DevOps.
○ Foster a culture of autonomy, excellence, ownership, and rapid iteration.
○ Collaborate with Product, Design, Ops, and CX to shape the roadmap, triage bugs, and ship high-impact features.
Qualifications:
● Technical Depth: Proven track record of designing, building, and scaling complex software systems from scratch. Strong proficiency in at least one modern backend language (e.g., Go, Python, Node.js, Java) and experience with relevant frameworks and databases.
● Architectural Acumen: Demonstrated ability to architect scalable, fault-tolerant, and secure systems. Experience with distributed systems, microservices, message queues (Kafka, RabbitMQ), and cloud-native architectures (Kubernetes, Docker).
● Hands-on Experience: A genuine passion for coding and a willingness to be hands-on with technical challenges, debugging, and code reviews.
● AI/ML Exposure: Experience with integrating AI/ML models into production systems, understanding of data pipelines for AI, and familiarity with relevant tools/frameworks (e.g., TensorFlow, PyTorch, scikit-learn) is highly desirable.
● Leadership & Mentorship: Experience leading and mentoring engineering teams, fostering a collaborative and high-performance environment. Ability to attract, hire, and retain top engineering talent.
● Problem-Solving: Exceptional analytical and problem-solving skills, with a pragmatic approach to delivering solutions in a fast-paced, ambiguous environment.
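The posting above sets concrete objectives such as sub-1s booking latency. As a hedged illustration of how a lead might check such a target, this minimal Python sketch computes p95 latency over a window of request samples and compares it to the objective; the sample data and the choice of p95 are assumptions, not requirements from the role.

```python
# Minimal sketch: check observed p95 latency against a latency SLO.
import statistics

SLO_SECONDS = 1.0  # sub-1s booking latency target

def p95(samples):
    # statistics.quantiles with n=100 returns the 1st..99th percentiles;
    # index 94 is the 95th percentile.
    return statistics.quantiles(samples, n=100)[94]

def slo_met(samples, objective=SLO_SECONDS):
    return p95(samples) < objective

latencies = [0.31, 0.42, 0.38, 0.95, 0.52, 0.47, 1.20, 0.33, 0.41, 0.60]
print(f"p95={p95(latencies):.2f}s, SLO met: {slo_met(latencies)}")
```

In practice the samples would come from tracing or monitoring data (the posting names Grafana, Prometheus, and Sentry) rather than a hard-coded list.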
Posted 19 hours ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: SRE Engineer with GCP Cloud
Location: Hyderabad & Ahmedabad
Work Model: Hybrid, 3 days from office
Experience: 6+ years
Job Overview
We are looking for dynamic, motivated individuals to deliver exceptional solutions for the production resiliency of our systems. The role combines aspects of software engineering and operations with DevOps skills to come up with efficient ways of managing and operating applications. It requires a high level of responsibility and accountability to deliver technical solutions.
Summary:
As a Senior SRE, you will ensure platform reliability, incident management, and performance optimization. You'll define SLIs/SLOs, contribute to robust observability practices, and drive proactive reliability engineering across services.
Roles and Responsibilities:
· Define and measure Service Level Indicators (SLIs) and Service Level Objectives (SLOs), and manage error budgets across services (the arithmetic is sketched after this posting).
· Lead incident management for critical production issues – drive root cause analysis (RCA) and post-mortems.
· Create and maintain runbooks and standard operating procedures for high-availability services.
· Design and implement observability frameworks using ELK, Prometheus, and Grafana; drive telemetry adoption.
· Coordinate cross-functional war-room sessions during major incidents and maintain response logs.
· Develop and improve automated system recovery, alert suppression, and escalation logic.
· Use GCP tools like GKE, Cloud Monitoring, and Cloud Armor to improve performance and security posture.
· Collaborate with DevOps and Infrastructure teams to build highly available and scalable systems.
· Analyse performance metrics and conduct regular reliability reviews with engineering leads.
· Participate in capacity planning, failover testing, and resilience architecture reviews.
Mandatory:
· Cloud: GCP (GKE, Load Balancing, VPN, IAM)
· Observability: Prometheus, Grafana, ELK, Datadog
· Containers & Orchestration: Kubernetes, Docker
· Incident Management: On-call, RCA, SLIs/SLOs
· IaC: Terraform, Helm
· Incident Tools: PagerDuty, OpsGenie
Nice to Have:
· GCP Monitoring, SkyWalking
· Service Mesh, API Gateway
· GCP Spanner, MongoDB (basic)
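As referenced above, here is a hedged Python sketch of the error-budget arithmetic behind a 99.9% availability SLO of the kind this role would define and manage. The 30-day reporting window is an assumed choice for illustration.

```python
# Error-budget arithmetic for a 99.9% availability SLO.
SLO = 0.999
WINDOW_DAYS = 30  # assumed reporting period

window_minutes = WINDOW_DAYS * 24 * 60       # 43,200 minutes
budget_minutes = window_minutes * (1 - SLO)  # 43.2 minutes of allowed downtime

def budget_remaining(downtime_minutes, budget=budget_minutes):
    """Fraction of the error budget still unspent (negative = SLO breached)."""
    return 1 - downtime_minutes / budget

print(f"Monthly budget: {budget_minutes:.1f} min")
print(f"After a 20-minute incident: {budget_remaining(20):.0%} remaining")
```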
Posted 20 hours ago
8.0 years
0 Lacs
India
On-site
Location: Chennai
Experience: 8+ Years
Open Positions: 6+
Employment Type: Full-time
Industry: Payments
Job Summary:
We are looking for a highly experienced and technically proficient Senior Java Lead with a strong foundation in Core Java, Nest.js/GoLang, and payment domain expertise. The ideal candidate will lead development teams, architect scalable systems, and deliver high-performance payment solutions.
Key Responsibilities:
Lead end-to-end design, development, and deployment of enterprise-grade applications.
Architect and implement scalable, secure, and high-availability payment systems.
Collaborate with cross-functional teams to define technical requirements and deliverables.
Conduct code reviews, mentor team members, and enforce best practices.
Troubleshoot complex issues and optimize system performance.
Stay current with emerging technologies and trends in the payments and fintech space.
Specific projects mentioned include AML (Anti-Money Laundering).
Technical Skills Required:
Core Technologies: Core Java (8+), Nest.js (1+) / GoLang (1+), Spring Framework, Spring Boot, Spring Security, RESTful APIs, GraphQL, REST API/gRPC connectivity
Databases & Messaging:
Relational DBs: PostgreSQL, MySQL, Oracle
NoSQL: MongoDB, Cassandra, DynamoDB
Caching: Redis, Memcached
Messaging Systems: Apache Kafka, RabbitMQ, ActiveMQ
Security & Compliance:
OAuth2, JWT, SSL/TLS
PCI-DSS, ISO 8583, Tokenization, Data Encryption (see the masking sketch after this posting)
Compliance with PCI DSS standards, including security headers and authentication
Cloud & DevOps (Desirable):
AWS, GCP, or Azure (EC2, S3, Lambda, IAM, etc.)
Docker, Kubernetes, Helm
CI/CD tools: Jenkins, GitLab CI, GitHub Actions
Infrastructure as Code: Terraform, CloudFormation
Preferred Qualifications:
Familiarity with Open Banking APIs and real-time payment systems.
Exposure to Agile/Scrum methodologies and tools like Jira, Confluence.
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
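For illustration of the PCI-DSS masking rule this posting touches on (at most the first six and last four digits of a card number may be displayed), here is a toy Python helper. It is a sketch, not the employer's implementation; real payment systems pair masking with vaulted tokenization rather than handling raw PANs directly.

```python
# Hypothetical PAN-masking helper per the PCI-DSS display rule:
# keep first six and last four digits, mask the rest.
def mask_pan(pan: str) -> str:
    digits = "".join(ch for ch in pan if ch.isdigit())
    if len(digits) < 13:
        raise ValueError("not a plausible card number")
    return digits[:6] + "*" * (len(digits) - 10) + digits[-4:]

print(mask_pan("4111 1111 1111 1111"))  # 411111******1111
```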
Posted 20 hours ago
8.0 years
0 Lacs
Visakhapatnam, Andhra Pradesh, India
On-site
Job Summary:
We are looking for a dynamic and experienced Technical Lead to guide the end-to-end development of scalable and high-performing applications. The ideal candidate should have deep expertise in REST APIs, backend and frontend development, and modern CI/CD practices. This role demands strong technical leadership to provide architectural direction, mentor team members, and drive technical decisions that align with business goals.
Key Responsibilities:
Lead full-stack development efforts with strong backend ownership and oversight of frontend integration.
Architect, design, and develop RESTful APIs and microservices-based solutions (a minimal endpoint sketch follows this posting).
Oversee and contribute to frontend implementation using modern JS frameworks (e.g., React, Angular).
Drive and maintain CI/CD pipelines and automated deployment processes.
Collaborate with stakeholders to understand requirements and translate them into technical solutions.
Ensure code quality through regular code reviews and best-practice enforcement.
Lead sprint planning, technical discussions, and system design sessions.
Mentor and guide junior developers, fostering a high-performing team culture.
Identify technical risks and provide mitigation strategies.
Communicate effectively across product, QA, DevOps, and management teams.
Required Skills:
8+ years of experience in full-stack development, including at least 2 years in a lead role.
Strong backend development experience in .NET / Node.js / Java / Python.
Solid frontend experience with React / Angular / Vue.js.
Hands-on experience with REST API design and integration.
Expertise in CI/CD pipelines and DevOps tooling (e.g., GitHub Actions, Jenkins, Azure DevOps).
Familiarity with containerization (Docker) and cloud platforms (AWS, Azure, or GCP).
Experience in technical design, architecture, and delivering production-grade systems.
Strong interpersonal and communication skills – able to interact with both technical and non-technical stakeholders.
Proven ability to lead a development team, resolve conflicts, and ensure project delivery.
Nice to Have:
Experience with microservices architecture and event-driven systems.
Exposure to infrastructure as code (e.g., Terraform, ARM templates).
Cloud certification (e.g., AWS Certified Solutions Architect, Azure Architect Expert).
Experience with monitoring/logging tools (e.g., ELK, Prometheus).
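As referenced above, a dependency-free Python sketch of a REST-style health endpoint of the kind a lead would wire into CI/CD smoke tests. The stdlib http.server choice and the /health path are assumptions; a production service would more likely use a framework from the posting's stack.

```python
# Minimal JSON health endpoint using only the standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # serve on port 8080 until interrupted
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```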
Posted 20 hours ago
10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job description:
Purpose of the Job
The Manager – DevOps / Release Management is responsible for automating the manual tasks involved in developing and deploying code and data, to implement continuous deployment and continuous integration frameworks. They are also responsible for maintaining high availability of production and non-production environments. This is a hands-on leadership role, requiring the ability to directly contribute to the implementation and optimization of CI/CD pipelines, infrastructure automation, and release management practices.
Job Description
Design, implement, and manage robust CI/CD pipelines and infrastructure automation frameworks for applications and data services.
Oversee and execute release management processes across environments, ensuring governance, repeatability, and quality.
Lead the development and maintenance of infrastructure as code and config management tools (e.g., Terraform, Ansible).
Improve efficiency of the Release Management process, with a focus on quality and minimizing post-production issues.
Proactively monitor production and non-production systems to ensure high availability, scalability, and performance.
Identify and resolve deployment issues in real time and implement monitoring, alerting, and rollback mechanisms (a rollback-gate sketch follows this posting).
Collaborate with development, QA, and product teams to support seamless integration and deployment of features.
Guide and mentor junior DevOps engineers, instilling DevOps best practices and reusable frameworks.
Drive DevSecOps adoption, integrating security checks and compliance into the release lifecycle.
Stay current on industry trends and continuously assess tools, frameworks, and approaches that improve operational efficiency.
Manage hybrid and cloud-native deployments, with a strong focus on AWS, while supporting Azure and on-prem transitions.
Own technical documentation and process design for release pipelines, deployment architecture, and system automation.
Help transform the Release Management function from the ground up, including strategy, team development, tooling, and governance.
Job Requirements - Experience and Education
Bachelor's degree in Computer Science, Engineering, or a related technical field.
10+ years of experience in DevOps, Site Reliability Engineering, or Release Management roles.
Strong hands-on experience with CI/CD tools such as Jenkins, TeamCity, GitHub Actions, Octopus Deploy, or similar.
Proven experience with cloud platforms: AWS (preferred), Azure, or OCI.
Skilled in scripting and automation (Python, Shell, Bash) and infrastructure as code (Terraform, CloudFormation).
Experience managing release workflows, branching strategies, versioning, and rollbacks.
Working knowledge of containerization and orchestration (e.g., Docker, Kubernetes, ECS).
Familiarity with monitoring/logging tools (e.g., Datadog, Prometheus, ELK Stack).
Strong communication and stakeholder management skills, capable of cross-functional collaboration.
Experience working with distributed systems and data platforms (e.g., Elasticsearch, Cassandra, Hadoop) is a plus.
Leadership Behaviors
Building Outstanding Teams
Setting a clear direction
Simplification
Collaborate & break silos
Execution & Accountability
Growth mindset
Innovation
Inclusion
External focus
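As referenced above, a hedged Python sketch of the rollback-safety idea this role calls for: promote a release only if a post-deploy health check passes, otherwise revert to the previous version. The deploy, health-check, and version names are stand-ins, not a real deployment API.

```python
# Hypothetical release gate: deploy, verify health, roll back on failure.
import time

def deploy(version: str) -> None:
    print(f"deploying {version}...")  # stand-in for a real deploy step

def service_is_healthy() -> bool:
    return True  # placeholder; would query monitoring in practice

def health_check(retries: int = 3, delay_s: float = 2.0) -> bool:
    for _ in range(retries):
        time.sleep(delay_s)  # give the new version time to come up
        if service_is_healthy():
            return True
    return False

def release(new: str, previous: str) -> str:
    deploy(new)
    if health_check():
        return new
    print(f"health check failed, rolling back to {previous}")
    deploy(previous)
    return previous

print("live version:", release("v2.4.0", "v2.3.9"))
```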
Posted 20 hours ago