
946 GitOps Jobs - Page 12

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the employer's job portal.

8.0 years

3 - 4 Lacs

Bengaluru

On-site

Minimum Required Experience: 8 years (Contract)

Skills: Twistlock, SonarQube, Shell Scripting, Groovy, GitLab, VMware, Vault, Kubernetes, Chef, Agile, AWS Cloud, Linux & Windows, AWS VPC, GitLab CI/CD, Docker, AWS EC2/S3/RDS, Jenkins, Python, Ansible, Terraform

Description

Key Responsibilities:
- Collaborate with and contribute to the DevOps architecture and platform of the central DevOps team.
- Own the DevOps backlog, planning, and delivery for an assigned product team.
- Apply GitOps: Infrastructure as Code / DevOps as a Platform Service.
- Implement and enable CI/CD and DevOps as a platform for product development teams.
- Contribute to design documentation and the product DHF (requirements, design, verification, TDRs).
- Work in an Agile development model and participate in program and product increment activities.
- Drive FMEA.
- Deliver runbooks, user manuals, and standard operating procedures.
- Develop modules as reusable components.
- Monitor and alert on infrastructure and products.
- Respond to incidents in engineering and production.
- Own Site Reliability Engineering (SRE) activities: certificate renewals, infrastructure uptime.
- Prioritise and resolve incoming/open issues.
- Interface with test automation teams to create and execute continuous test pipelines.

Primary Skill Set (Recommended Experience: 8-12 years)

Must have hands-on expertise in:
- Cloud: AWS (VPC, EC2, EKS, RDS, networking, ...)
- OS: Linux, Windows
- Build orchestrators: GitLab CI/CD, Jenkins
- Programming: Python, Groovy, Shell
- SCM: GitLab
- Artifact management: Artifactory
- Quality: SonarQube, Twistlock
- Automation tools: Terraform, Ansible, Chef, Vault
- Containers/virtualization: Docker, Kubernetes, VMware
- Process: GitOps, GitFlow, branching, versioning, tagging, releases
- Tools/process: Agile, Confluence, Rally

Good to have:
- Medical domain knowledge
- GE Healthcare QMS process & quality management
- GE documentation tools (MyWorkshop)
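Among the SRE duties above is certificate renewal. As an illustrative sketch only (the 30-day threshold and the timestamp format are assumptions for illustration, not from the posting), a renewal check reduces to a days-until-expiry calculation:

```python
from datetime import datetime, timezone

# Hypothetical policy: flag certificates expiring within 30 days.
RENEWAL_WINDOW_DAYS = 30

def days_until_expiry(not_after: str, now: datetime) -> int:
    """Parse an OpenSSL-style notAfter timestamp and return whole days remaining."""
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z").replace(tzinfo=timezone.utc)
    return (expiry - now).days

def needs_renewal(not_after: str, now: datetime) -> bool:
    """True when the certificate falls inside the renewal window."""
    return days_until_expiry(not_after, now) <= RENEWAL_WINDOW_DAYS

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
print(needs_renewal("Jun 15 12:00:00 2024 GMT", now))  # expiring in 14 days -> True
print(needs_renewal("Dec 31 12:00:00 2024 GMT", now))  # months away -> False
```

In a real setup the `notAfter` string would come from the certificate itself (e.g. via `ssl.getpeercert()`); here it is passed in directly so the sketch stays self-contained.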

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Role: Technical Analyst - DevOps - Senior Level
Job Location: Noida, Uttar Pradesh, India
Required Experience: 8 - 12 Years
Skills: CI/CD Pipeline, AWS, GitLab, GCP, Azure, DeepSource, SonarQube

JOB DESCRIPTION
We are looking for a highly skilled and motivated DevOps Engineer to join our dynamic team. As a DevOps Engineer, you will be responsible for managing our infrastructure, CI/CD pipelines, and automating processes to ensure smooth deployment cycles. The ideal candidate will have a strong understanding of cloud platforms (AWS, Azure, GCP), version control tools (GitHub, GitLab), CI/CD tools (GitHub Actions, Jenkins, Azure DevOps, and Argo CD with GitOps methodologies), and the ability to work in a fast-paced environment.

RESPONSIBILITIES
- Design, implement, and manage CI/CD pipelines using GitHub Actions, Jenkins, Azure DevOps, and Argo CD (GitOps methodologies).
- Manage and automate the deployment of applications on cloud platforms such as AWS, GCP, and Azure.
- Maintain and optimize cloud-based infrastructure, ensuring high availability, scalability, and performance.
- Utilize GitHub and GitLab for version control, branching strategies, and managing code repositories.
- Collaborate with development, QA, and operations teams to streamline the software delivery process.
- Monitor system performance and resolve issues related to automation, deployments, and infrastructure.
- Implement security best practices across CI/CD pipelines, cloud resources, and other environments.
- Troubleshoot and resolve infrastructure issues, including scaling, outages, and performance degradation.
- Automate routine tasks and infrastructure management to improve system reliability and developer productivity.
- Stay up to date with the latest DevOps practices, tools, and technologies.

REQUIRED SKILLS
- At least 8 years' experience as a DevOps Engineer.
- Proven experience as a DevOps Engineer, Cloud Engineer, or similar role.
- Expertise in CI/CD tools, including GitHub Actions, Jenkins, Azure DevOps, and Argo CD (GitOps methodologies).
- Strong proficiency with GitHub and GitLab for version control, repository management, and collaborative development.
- Extensive experience working with cloud platforms such as AWS, Azure, and Google Cloud Platform (GCP).
- Solid understanding of infrastructure-as-code (IaC) tools like Terraform or CloudFormation.
- Experience with containerization technologies like Docker and orchestration tools like Kubernetes.
- Knowledge of monitoring, logging, and alerting systems (e.g., Prometheus, Grafana, ELK stack).
- Experience in scripting languages such as Python, Bash, or PowerShell.
- Strong knowledge of networking, security, and performance optimization in cloud environments.
- Familiarity with Agile development methodologies and collaboration tools.

Education: B.Tech / M.Tech / MBA / BE / MCA degree

Sierra Development is a leading North America-based software development company. SD Global Services (www.sierradev.in) is a wholly owned subsidiary of Sierra Development LLC (www.sierradev.com), backed by The Riverside Company (www.riversidecompany.com), a global private equity industry leader in the US. Riverside was founded in 1988 and has multiple office locations. We provide The Riverside Company's portfolio companies access to highly trained software development resources.
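The GitOps methodology this role centers on (as implemented by Argo CD) reduces to a reconciliation loop: desired state lives in Git, and a controller continuously diffs it against the live cluster. A toy sketch of that diff step, with invented state dictionaries standing in for the Kubernetes manifests a real controller would compare:

```python
def diff_states(desired: dict, live: dict) -> dict:
    """Classify resources by the action a GitOps controller would take."""
    actions = {"create": [], "update": [], "delete": []}
    for name, spec in desired.items():
        if name not in live:
            actions["create"].append(name)   # in Git, not yet in the cluster
        elif live[name] != spec:
            actions["update"].append(name)   # cluster has drifted from Git
    for name in live:
        if name not in desired:
            actions["delete"].append(name)   # pruned: removed from Git
    return actions

# Invented example: Git declares two deployments; the cluster has drift.
desired = {"web": {"replicas": 3}, "api": {"replicas": 2}}
live = {"web": {"replicas": 1}, "worker": {"replicas": 1}}
print(diff_states(desired, live))
# {'create': ['api'], 'update': ['web'], 'delete': ['worker']}
```

The "sync" a tool like Argo CD performs is then just applying these actions until the diff is empty.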

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Madurai, Tamil Nadu, India

On-site

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3-5 years of hands-on experience with GCP data services
- Proficient in BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, and Cloud SQL/Spanner
- Python / Java / SQL
- Data modeling (OLTP, OLAP, star/snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- GCP Professional Data Engineer / Architect certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- Collaborative environment that values innovation and leadership
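The streaming-ingestion work this role describes (Pub/Sub events flowing through Beam into BigQuery) rests on one core idea: grouping events into fixed event-time windows. That idea can be sketched with no GCP dependency at all; the event names and the 60-second window below are invented for illustration:

```python
from collections import defaultdict

def tumbling_window_counts(events, window_secs=60):
    """Group (timestamp, key) events into fixed event-time windows,
    as a Beam-style windowed count would."""
    counts = defaultdict(int)
    for ts, key in events:
        # Each event falls into exactly one non-overlapping window.
        window_start = (ts // window_secs) * window_secs
        counts[(window_start, key)] += 1
    return dict(counts)

# Invented click events: (unix_seconds, page)
events = [(5, "home"), (42, "home"), (61, "home"), (70, "pricing")]
print(tumbling_window_counts(events))
# {(0, 'home'): 2, (60, 'home'): 1, (60, 'pricing'): 1}
```

A real Dataflow job adds watermarks and late-data handling on top, but the window assignment itself is exactly this arithmetic.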

Posted 2 weeks ago

Apply

0.0 - 3.0 years

25 - 35 Lacs

Madurai, Tamil Nadu

On-site

Dear Candidate,

Greetings of the day! I am Kantha, and I'm reaching out to you regarding an exciting opportunity with TechMango. You can connect with me on LinkedIn (https://www.linkedin.com/in/kantha-m-ashwin-186ba3244/) or by email: kanthasanmugam.m@techmango.net

TechMango Technology Services is a full-scale software development services company founded in 2014 with a strong focus on emerging technologies. Its primary objective is delivering strategic technology solutions that serve the goals of its business partners. We are a leading full-scale software and mobile app development company. TechMango is driven by the mantra "Clients' Vision is our Mission", and we stay true to it. Our aim is to be the technologically advanced and most loved organization, providing high-quality, cost-efficient services with a long-term client relationship strategy. We operate in the USA (Chicago, Atlanta), Dubai (UAE), and India (Bangalore, Chennai, Madurai, Trichy). Website: https://www.techmango.net/

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3-5 years of hands-on experience with GCP data services
- Proficient in BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, and Cloud SQL/Spanner
- Python / Java / SQL
- Data modeling (OLTP, OLAP, star/snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications:
- GCP Professional Data Engineer / Architect certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer:
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- Collaborative environment that values innovation and leadership

Job Type: Full-time
Pay: ₹2,500,000.00 - ₹3,500,000.00 per year

Application Questions:
- Current CTC?
- Expected CTC?
- Notice period? (If you are serving your notice period, please mention your last working day.)

Required Experience:
- GCP Data Architecture: 3 years (Required)
- BigQuery: 3 years (Required)
- Cloud Composer (Airflow): 3 years (Required)

Location: Madurai, Tamil Nadu (Required)
Work Location: In person
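The data-modeling requirement above (star/snowflake schema) amounts to a central fact table whose rows reference dimension tables by key. A minimal in-memory sketch; the tables, keys, and column names are invented for illustration, not from the posting:

```python
# Dimension tables: surrogate key -> descriptive attributes.
dim_product = {1: {"name": "widget", "category": "hardware"},
               2: {"name": "ebook",  "category": "digital"}}

# Fact table: each row holds measures plus foreign keys into dimensions
# (the "arms" of the star).
fact_sales = [
    {"product_key": 1, "date_key": 10, "amount": 100.0},
    {"product_key": 1, "date_key": 11, "amount": 250.0},
    {"product_key": 2, "date_key": 11, "amount": 40.0},
]

def revenue_by(dim_attr):
    """Join facts to the product dimension, then aggregate by an attribute,
    the shape of a typical star-schema rollup query."""
    totals = {}
    for row in fact_sales:
        group = dim_product[row["product_key"]][dim_attr]
        totals[group] = totals.get(group, 0.0) + row["amount"]
    return totals

print(revenue_by("category"))  # {'hardware': 350.0, 'digital': 40.0}
```

A snowflake schema differs only in that the dimensions themselves are normalized into further lookup tables; the fact table and the join-then-aggregate pattern stay the same.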

Posted 2 weeks ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About SpotDraft
SpotDraft is an end-to-end CLM (contract lifecycle management) platform for high-growth companies. We are building a product to ensure convenient, fast and easy contracting for businesses. We know the potential to be unlocked if legal teams are equipped with the right kind of tools and systems. So here we are, building them. Currently, customers like PhonePe, Chargebee, Unacademy, Meesho and Cred use SpotDraft to streamline contracting within their organisations. On average, SpotDraft saves legal counsels within the company 10 hours per week and helps close deals 25% faster.

Job Summary:
As a Jr. DevOps Engineer, you will be responsible for planning, building and optimizing the cloud infrastructure and CI/CD pipelines for the applications that power SpotDraft. You will work closely with product teams across the organization and help them ship code and reduce manual processes. You will work directly with engineering leaders, including the CTO, to deliver the best experience for users by ensuring high availability of all systems. We follow the GitOps pattern to deploy infrastructure using Terraform and ArgoCD, and we leverage tools like Sentry, DataDog and Prometheus to efficiently monitor our Kubernetes clusters and workloads.

Key Responsibilities:
- Develop and maintain CI/CD workflows on GitHub
- Provision and maintain cloud infrastructure on GCP and AWS using Terraform
- Set up logging, monitoring and alerting of applications and infrastructure using DataDog and GCP
- Automate deployment of applications to Kubernetes using ArgoCD, Helm, Kustomize and Terraform
- Design and promote efficient DevOps processes and practices
- Continuously optimize infrastructure to reduce cloud costs

Requirements:
- Proficiency with Docker and Kubernetes
- Proficiency in Git
- Proficiency in any scripting language (Bash, Python, etc.)
- Experience with any of the major clouds
- Experience working on Linux-based infrastructure
- Experience with open-source monitoring tools like Prometheus
- Experience with any ingress controller (nginx, Traefik, etc.)

Working at SpotDraft
When you join SpotDraft, you will be joining an ambitious team that is passionate about creating a globally recognized legal tech company. We set each other up for success and encourage everyone in the team to play an active role in building the company.
- An opportunity to work alongside one of the most talent-dense teams.
- An opportunity to build your professional network through interacting with influential and highly sought-after founders, investors, venture capitalists and market leaders.
- Hands-on impact and space for complete ownership of end-to-end processes.
We are an outcome-driven organisation and trust each other to drive outcomes whilst being audacious with our goals.

Our Core Values
- Our business is to delight customers
- Be transparent. Be direct
- Be audacious
- Outcomes over everything else
- Be 1% better every day
- Elevate each other
- Be passionate. Take ownership
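The monitoring stack this posting names (Prometheus alongside DataDog) exposes metrics in a simple text format, which makes the alerting idea easy to sketch. The metric name, labels, and the error threshold below are invented; in production the rule would live in Prometheus/Alertmanager configuration rather than Python:

```python
def parse_exposition(text: str) -> dict:
    """Parse simple lines of Prometheus' text exposition format
    into {series: value}."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip HELP/TYPE comment lines
            continue
        name, value = line.rsplit(" ", 1)      # value is the last token
        metrics[name] = float(value)
    return metrics

# Invented scrape output for two labeled series of one counter.
SAMPLE = """\
# HELP http_errors_total Total HTTP 5xx responses.
# TYPE http_errors_total counter
http_errors_total{service="api"} 42
http_errors_total{service="web"} 3
"""

metrics = parse_exposition(SAMPLE)
# Invented alert rule: fire when any series exceeds 10 errors.
firing = [name for name, v in metrics.items() if v > 10]
print(firing)  # ['http_errors_total{service="api"}']
```

Real exposition lines can also carry timestamps and escaped label values; this sketch handles only the common value-at-end case.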

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Infrastructure Architects are the key link between Kyndryl and our clients. You're in a technical leadership role, uniting and guiding stakeholders from clients, governance, and project executives to delivery and sometimes even the vendors who work with the client. You'll be there from the start of a project, understanding what's needed and figuring out the best technical solution. And you'll be there at the finish, delivering the right product on time and within budget.

As an Infrastructure Architect, you'll draw upon the full breadth of your talent and experience. This is a technical leadership role, so we want you to bring your vision, knowledge, and leadership to each project. To the client, you're the subject matter expert: consulting early, gathering inputs, and understanding what they need from our solution. You define what Kyndryl can do to deliver it, you design the best solution for the job, and finally, you're the tech leader for implementation.

At Kyndryl we support all major cloud platforms, so you'll get the chance to use everything you know – and then some. You'll also become expert at knowing when and how to call on other SMEs outside your wheelhouse. Thinking your way around pre-existing limitations will grow your creativity and flexibility. You'll learn a lot here, and if you want to work toward certifications there are plenty of opportunities. The rewards for all this are many: you'll get to influence, create, and deliver something from start to finish, and you will have the power to delight our clients.

Your Future at Kyndryl
This role opens the door to many career paths, both vertical and horizontal, and there may be opportunity to travel. It's a great chance for database administrators or other techs to break into the cloud. It's also a solid path to becoming an enterprise or chief architect or a distinguished engineer. Whatever you see for yourself, you'll find the opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important, you have a growth mindset: keen to drive your own personal and professional development. You are customer-focused, someone who prioritizes customer success in their work. And finally, you're open and borderless, naturally inclusive in how you work with others.

Required Technical and Professional Experience:
- 5+ years of hands-on experience in cloud infrastructure setup, management, and optimization across Azure and AWS, including multi-account/subscription environments.
- Expertise in Infrastructure as Code (IaC) using Terraform, Bicep, and CloudFormation to provision and manage scalable cloud environments.
- Proficiency in CI/CD tools such as GitHub Actions, GitLab CI/CD, and Azure DevOps Pipelines, with a strong understanding of pipeline design, artifacts, and release strategies.
- Strong scripting skills in Python, PowerShell, and Bash for automation, integration, and task orchestration.
- Experience with event-driven compute and serverless technologies, especially Azure Functions and AWS Lambda, for automation and microservices implementation.
- Good understanding of RESTful APIs and integration patterns, with practical experience building and consuming APIs for automation, monitoring, and data flow.
- Working knowledge of containerization and orchestration tools like Docker and Kubernetes (AKS/EKS), with an understanding of deployment, scaling, and monitoring.
- Proficiency in YAML and JSON for configuration management, deployment definitions, and data serialization in IaC and CI/CD contexts.
- Deep understanding of DevOps principles and practices, including shift-left testing, GitOps, infrastructure automation, and feedback loops.
- Experience with secret management tools such as Azure Key Vault and AWS Secrets Manager, and their integration with pipelines and workloads.
- Strong analytical and troubleshooting skills to diagnose and resolve infrastructure, deployment, and performance issues across complex cloud environments.
- Excellent verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams, business stakeholders, and engineering leaders.
- Team-oriented mindset, with a focus on shared ownership, collaborative problem-solving, and knowledge sharing.
- Continuous learning attitude, with a strong interest in staying current with cloud technologies, industry trends, and platform innovations.

Preferred Technical and Professional Experience:
- Experience managing and optimizing multi-cloud environments, including resource governance, policy enforcement, and workload placement strategies.
- In-depth knowledge of serverless architecture, leveraging services like Azure Functions, AWS Lambda, Event Grid, and Step Functions for scalable and cost-efficient solutions.
- Familiarity with API gateway services, such as Azure API Management and AWS API Gateway, including securing, versioning, and monitoring APIs.
- Knowledge of cloud cost optimization tools and techniques, including Azure Cost Management, AWS Cost Explorer, budgets, reservations, and third-party FinOps tools.
- Exposure to infrastructure monitoring and observability tools like Azure Monitor, AWS CloudWatch, Prometheus, Grafana, and Datadog.
- Understanding of cloud governance frameworks, including Azure Policy, AWS Config, Blueprints, and enterprise-scale landing zones.

Being You
Diversity is a whole lot more than what we look like or where we come from; it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you, and everyone next to you, the ability to bring your whole self to work, individually and collectively, and to support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate and build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees, and support you and your family through the moments that matter, wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone who works at Kyndryl, when asked "How Did You Hear About Us" during the application process, select "Employee Referral" and enter your contact's Kyndryl email address.
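One of the preferred skills above, cost optimization with reservations, comes down to a break-even calculation: a reservation bills for every hour whether used or not, so it only pays off when utilization exceeds the discount's break-even point. A hedged arithmetic sketch; the hourly rates and discount are invented, not actual AWS or Azure pricing:

```python
def break_even_utilization(on_demand_rate: float, reserved_rate: float) -> float:
    """Utilization fraction above which the reservation is cheaper."""
    return reserved_rate / on_demand_rate

def reservation_saves(on_demand_rate: float, reserved_rate: float,
                      utilization: float) -> bool:
    """Compare effective cost: a reservation bills 100% of hours,
    on-demand bills only the utilized fraction."""
    on_demand_cost = on_demand_rate * utilization
    return reserved_rate < on_demand_cost

# Invented rates: $0.10/h on-demand vs $0.05/h reserved (50% discount).
print(break_even_utilization(0.10, 0.05))   # 0.5 -> need >50% utilization
print(reservation_saves(0.10, 0.05, 0.80))  # True: busy instance
print(reservation_saves(0.10, 0.05, 0.40))  # False: idle most of the time
```

Tools like AWS Cost Explorer and Azure Cost Management automate this comparison across a whole fleet, but each recommendation they produce is this calculation applied per instance family.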

Posted 2 weeks ago

Apply


5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Who We Are At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities. The Role Infrastructure Architects are the key link between Kyndryl and our clients. You’re in a technical leadership role, uniting and guiding stakeholders from clients, governance, and project executives to delivery and sometimes even the vendors who work with the client. You’ll be there from the start of a project — understanding what’s needed and figuring out the best technical solution. And you’ll be there at the finish, delivering the right product on time and within budget. As an Infrastructure Architect, you’ll draw upon the full breadth of your talent and experience. This is a technical leadership role, so we want you to bring your vision, knowledge, and leadership to each project. To the client, you’re the subject matter expert – consulting early, gathering inputs, understanding what they need from our solution. You define what Kyndryl can do to meet this solution. You design the best solution for the job. And finally, you’re the tech leader for implementation. At Kyndryl we support all major cloud platforms, so you’ll get the chance to use everything you know – and then some. You’ll also become expert at knowing when and how to call on other SMEs outside your wheelhouse. Thinking your way around pre-existing limitations will grow your creativity and flexibility. You’ll learn a lot here, and if you want to work toward certifications there are plenty of opportunities.The rewards for all this are many. You’ll get to influence, create, and deliver something from start to finish. You will have the power to delight our clients. 
Your future at Kyndryl
This role opens the door to many career paths, both vertical and horizontal, and there may be opportunity to travel. It’s a great chance for database administrators or other techs to break into the cloud. It’s also a solid path to becoming an enterprise or chief architect or a distinguished engineer! Whatever you see for yourself, you’ll find the opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset, keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience:
- 5+ years of hands-on experience in cloud infrastructure setup, management, and optimization across Azure and AWS, including multi-account/subscription environments.
- Expertise in Infrastructure as Code (IaC) using Terraform, Bicep, and CloudFormation to provision and manage scalable cloud environments.
- Proficiency in CI/CD tools such as GitHub Actions, GitLab CI/CD, and Azure DevOps Pipelines, with a strong understanding of pipeline design, artifacts, and release strategies.
- Strong scripting skills in Python, PowerShell, and Bash for automation, integration, and task orchestration.
- Experience with event-driven compute and serverless technologies, especially Azure Functions and AWS Lambda, for automation and microservices implementation.
- Good understanding of RESTful APIs and integration patterns, with practical experience in building and consuming APIs for automation, monitoring, and data flow.
- Working knowledge of containerization and orchestration tools like Docker and Kubernetes (AKS/EKS), with an understanding of deployment, scaling, and monitoring.
- Proficiency in YAML and JSON for configuration management, deployment definitions, and data serialization in IaC and CI/CD contexts.
- Deep understanding of DevOps principles and practices, including shift-left testing, GitOps, infrastructure automation, and feedback loops.
- Experience with secret management tools such as Azure Key Vault and AWS Secrets Manager, including their integration with pipelines and workloads.
- Strong analytical and troubleshooting skills to diagnose and resolve infrastructure, deployment, and performance issues across complex cloud environments.
- Excellent verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams, business stakeholders, and engineering leaders.
- Team-oriented mindset, with a focus on shared ownership, collaborative problem-solving, and knowledge sharing.
- Continuous learning attitude, with a strong interest in staying updated with the latest cloud technologies, industry trends, and platform innovations.

Preferred Technical and Professional Experience:
- Experience managing and optimizing multi-cloud environments, including resource governance, policy enforcement, and workload placement strategies.
- In-depth knowledge of serverless architecture, leveraging services like Azure Functions, AWS Lambda, Event Grid, and Step Functions for scalable and cost-efficient solutions.
- Familiarity with API gateway services such as Azure API Management and AWS API Gateway, including securing, versioning, and monitoring APIs.
- Knowledge of cloud cost optimization tools and techniques, including Azure Cost Management, AWS Cost Explorer, budgets, reservations, and third-party FinOps tools.
- Exposure to infrastructure monitoring and observability tools like Azure Monitor, AWS CloudWatch, Prometheus, Grafana, and Datadog.
- Understanding of cloud governance frameworks, including Azure Policy, AWS Config, Blueprints, and enterprise-scale landing zones.
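The listing asks for proficiency in JSON/YAML for pipeline definitions. As a rough illustration of that skill (not any Kyndryl tooling — the pipeline schema and field names here are invented), a pre-flight check might validate a JSON pipeline definition before it is committed:

```python
import json

# Hypothetical pipeline schema: every stage needs these keys.
REQUIRED_STAGE_KEYS = {"name", "image", "script"}

def validate_pipeline(config_text: str) -> list[str]:
    """Return a list of problems found in a JSON pipeline definition."""
    problems = []
    try:
        config = json.loads(config_text)
    except json.JSONDecodeError as exc:
        return [f"invalid JSON: {exc}"]
    for i, stage in enumerate(config.get("stages", [])):
        missing = REQUIRED_STAGE_KEYS - stage.keys()
        if missing:
            problems.append(f"stage {i} missing keys: {sorted(missing)}")
    return problems

example = ('{"stages": [{"name": "build", "image": "python:3.12",'
           ' "script": ["pytest"]}, {"name": "deploy"}]}')
print(validate_pipeline(example))  # stage 1 lacks 'image' and 'script'
```

The same shape of check applies to YAML definitions once parsed (e.g. with PyYAML); only the loader changes.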

Posted 2 weeks ago

Apply

5.0 - 31.0 years

9 - 17 Lacs

Bengaluru/Bangalore

On-site

Job Title: AWS DevOps Engineer – EKS & Mobile CI/CD
Experience: 4–8 years
Employment Type: Full-Time

Job Summary: We are seeking a skilled AWS DevOps Engineer with deep expertise in Amazon EKS, cloud infrastructure, and mobile application CI/CD automation. The ideal candidate will be responsible for building and managing scalable infrastructure and delivery pipelines, especially for containerized microservices and mobile applications.

Key Responsibilities:
- Design, implement, and maintain scalable AWS infrastructure including EKS, S3, API Gateway, DynamoDB, and Aurora.
- Manage IAM roles, VPC configurations, and other networking aspects within AWS.
- Build and manage DevOps pipelines for EKS-based micro frontends and backend services.
- Automate mobile app deployment processes to Apple TestFlight and the Google Play Store.
- Implement monitoring and alerting solutions using tools such as AppDynamics (AppD).
- Write and maintain infrastructure automation scripts using Terraform, CloudFormation, or similar tools.
- Develop scripts and automation using Bash, Python, or similar languages.
- Operate and support containerized workloads using Docker and Kubernetes, with a focus on production-grade environments.
- Ensure best practices in service discovery, service mesh, scaling, and load balancing in EKS.

Required Skills and Qualifications:
- AWS Expertise: Strong hands-on experience with EKS, S3, API Gateway, DynamoDB, and Aurora; in-depth knowledge of IAM, VPC, and AWS networking services.
- DevOps & CI/CD: Proven ability to build and manage CI/CD pipelines for both web and mobile applications; experience automating mobile deployments to TestFlight and Google Play; proficient with monitoring tools like AppDynamics.
- Programming and Scripting: Proficient in Bash, Python, or equivalent scripting languages; experience with Infrastructure as Code using Terraform or CloudFormation.
- Microservices and Containers: Hands-on with Docker and Kubernetes in production environments; knowledge of service discovery, service mesh, scaling, and load balancing concepts within EKS.

Preferred Qualifications:
- AWS certifications (e.g., AWS Certified DevOps Engineer, Solutions Architect)
- Familiarity with GitOps tools and methodologies
- Experience working in Agile/Scrum environments
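One concrete slice of the mobile CI/CD automation this role describes: both TestFlight and the Play Store reject uploads whose build/version code is not strictly higher than anything already published, so a pipeline step typically derives the next code before uploading. A stdlib sketch with invented function names (not a real store API):

```python
def next_version_code(published_codes: list[int]) -> int:
    """Return a build number guaranteed to be higher than anything published."""
    return max(published_codes, default=0) + 1

def tag_for_release(version: str, code: int) -> str:
    """Compose the Git tag a release pipeline might push for traceability."""
    return f"release/v{version}-build{code}"

codes = [41, 42, 40]           # build numbers already in the store
code = next_version_code(codes)
print(code)                            # 43
print(tag_for_release("2.8.1", code))  # release/v2.8.1-build43
```

In a real pipeline the published codes would come from the store APIs (App Store Connect, Google Play Developer API) rather than a hard-coded list.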

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

India

Remote

Company Description
At ZineIQ, we work with governments across the world to create software that impacts human lives. You will collaborate with cross-functional teams to create and improve new and existing infrastructure, troubleshoot and solve issues, and contribute to a robust and scalable platform.

Required Skills – Primary: AWS, Node.js, React.js. This opportunity is only for candidates with expertise in AWS, Node.js, and React.js. ONLY APPLY if you have real experience with AWS DevSecOps, Node.js, and React.js. Only candidates with demonstrable experience in these three technologies will be considered.

Role Description
This is a full-time remote role for a Senior Engineer (DevSecOps - AWS) at ZineIQ. The Senior Engineer will be responsible for implementing and managing security practices within the AWS cloud environment, developing automation tools for security processes, and collaborating with engineering teams to ensure secure development practices.

What you will be working on:
· Design, implement, and maintain CI/CD pipelines using Jenkins and other modern tools
· Architect and manage cloud infrastructure using Infrastructure as Code (IaC) principles with Terraform/Terragrunt
· Implement GitOps workflows to ensure infrastructure and application deployments are version-controlled and automated
· Manage and optimize AWS cloud services and resources
· Design and implement containerization strategies using Docker and container orchestration platforms
· Configure and maintain networking infrastructure, including VPCs, security groups, and load balancers
· Collaborate with development teams to improve deployment processes and system reliability
· Implement monitoring, logging, and alerting solutions
· Troubleshoot and resolve infrastructure and deployment issues
· Document infrastructure, processes, and best practices
· Mentor junior DevOps engineers and provide technical guidance

What we are looking for:
· 5+ years of experience in DevOps, SRE, or similar roles
· Advanced proficiency with Jenkins, including pipeline configuration and administration
· Extensive experience with Terraform and Terragrunt for infrastructure provisioning
· Strong understanding of GitOps principles and implementation strategies
· Deep knowledge of AWS services including but not limited to: EC2, ECS, EKS; VPC, Route53, CloudFront; S3, RDS, DynamoDB; IAM, Security Groups, KMS; CloudFormation, CloudWatch
· Extensive experience with Docker containerization and best practices
· Proficiency with container orchestration platforms (Kubernetes, ECS)
· Strong understanding of networking concepts (TCP/IP, DNS, HTTP/S, load balancing)
· Solid understanding of CI/CD best practices and implementation patterns
· Experience with monitoring and observability tools (Datadog, Prometheus, Grafana, ELK stack)
· Proficient in at least one scripting/programming language (Python, Go, Bash)
· Knowledge of security best practices for cloud environments
· Experience with high-availability and disaster recovery strategies

Preferred Qualifications:
· AWS certifications (Solutions Architect, DevOps Engineer) - much preferred
· Experience with microservices architecture
· Knowledge of service mesh technologies
· Experience with serverless architecture and AWS Lambda
· Familiarity with infrastructure security and compliance requirements
· Contributions to open-source projects
· Experience with multi-cloud environments
· Knowledge of cost optimization strategies for cloud resources

Technical Skills:
· CI/CD: Jenkins, GitHub Actions, GitLab CI
· Infrastructure as Code: Terraform, Terragrunt, CloudFormation
· Version Control: Git, GitHub, GitLab, Bitbucket
· Cloud Platforms: AWS (primary), Azure, GCP
· Containerization: Docker, Docker Compose, Docker Swarm
· Container Orchestration: Kubernetes, ECS
· Monitoring & Logging: Prometheus, Grafana, ELK Stack, CloudWatch
· Networking: VPCs, Subnets, Security Groups, Load Balancers, CDNs
· Scripting/Programming: Python, Bash, Go
· Databases: Experience managing RDS, DynamoDB, MongoDB, PostgreSQL

Soft Skills:
· Strong problem-solving abilities and analytical thinking
· Excellent communication skills and the ability to explain complex technical concepts
· Self-motivated with a passion for learning new technologies
· Ability to work effectively in a team environment
· Attention to detail and commitment to quality
· Strong documentation skills
· Time management and the ability to prioritize multiple tasks

Qualifications:
· Experience with DevSecOps practices, AWS, and cloud security
· Proficiency in scripting languages - Terraform, Python (Boto), and shell scripting
· Strong knowledge of security tools and practices
· Experience in implementing automation tools for security processes
· Excellent problem-solving and analytical skills
· Ability to work independently and in a team environment
· Bachelor's or Master's degree in Computer Science or a related field
· Certifications such as AWS Certified Security - Specialty are a plus

Backend responsibilities:
· Design and Develop: Implement server-side logic in Node.js, define and maintain databases, and ensure high performance and responsiveness.
· API Management: Develop and maintain RESTful APIs to support various client-side applications and integrate with third-party services.
· Database Management: Design, implement, and optimize database schemas; perform migrations, backups, and restorations.
· Performance Optimization: Analyze and enhance application performance, implement caching strategies, and optimize SQL queries.
· Security: Implement data protection measures, secure APIs, and comply with industry best practices for cybersecurity.
· Collaboration: Work closely with front-end developers, product managers, and other stakeholders to deliver high-quality products.
· Testing and Debugging: Write and maintain unit tests, debug issues, and ensure the reliability of backend systems.
· Documentation: Document processes, code, and APIs for internal use and external partners.
· Continuous Improvement: Stay updated with emerging technologies, propose improvements, and continuously seek ways to enhance the backend infrastructure.

Interview Format:
First round - Introductory and technical questions.
Second round - Technical questions, live coding, AWS and security practices.
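A flavour of the DevSecOps automation this role centres on: one common pipeline guard scans commits for leaked AWS access key IDs before they reach the repository. This is a minimal sketch assuming the well-known `AKIA` + 16-character key ID pattern; real scanners such as gitleaks use far richer rule sets:

```python
import re

# AWS access key IDs commonly start with "AKIA" followed by 16
# uppercase alphanumeric characters (a simplifying assumption here).
AWS_KEY_RE = re.compile(r"\bAKIA[0-9A-Z]{16}\b")

def find_leaked_keys(text: str) -> list[str]:
    """Return any substrings of `text` that look like AWS access key IDs."""
    return AWS_KEY_RE.findall(text)

snippet = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"'
print(find_leaked_keys(snippet))  # the documented AWS example key is flagged
```

Wired into a pre-commit hook or CI stage, a non-empty result would fail the build before the secret lands in version control.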

Posted 2 weeks ago

Apply

2.0 years

5 - 7 Lacs

Pune

On-site

Job Applicant Privacy Notice
DGW Kubernetes Expert
Publication Date: Jul 10, 2025
Ref. No: 534528
Location: Pune, IN

Who we are: We are a team of passionate experts with a clear ambition: applying digital technology to advance what matters for our clients and society. Together we create reliable and responsive digital foundations for the world’s businesses, institutions, and communities. Learn more on Advancing what matters.

Atos is seeking a highly skilled and experienced Kubernetes Expert with strong programming skills to join our dynamic team. As a Kubernetes Expert, you will play a crucial role in designing, implementing, and maintaining our Kubernetes infrastructure to ensure scalability, reliability, and efficiency of our services.

Responsibilities:
- Develop and maintain Kubernetes clusters for open-source applications (such as Apache NiFi and Apache Airflow), ensuring high availability, scalability, and security.
- Deploy, configure, and manage clusters on Kubernetes, including setting up leader election, shared state management, and clustering.
- Utilize ArgoCD for GitOps continuous delivery, automating the deployment of applications and resources within the Kubernetes environment.
- Use Crossplane to manage cloud resources and services, ensuring seamless integration and provisioning.
- Implement and manage identity and access management using Keycloak, ensuring secure access to the application.
- Utilize Azure Vault for securely storing and managing sensitive information such as API keys, passwords, and other secrets required for data workflows.
- Manage ingress traffic to the application using Kong, providing features such as load balancing, security, and monitoring of API requests.
- Ensure the availability and management of persistent block storage for various application repositories.
- Set up and manage certificates using Cert-Manager and Trust-Manager to establish secure connections between the applications.
- Implement monitoring and observability solutions to ensure the health and performance of the application and its underlying infrastructure.
- Troubleshoot and resolve issues related to Kubernetes infrastructure, including performance bottlenecks, resource constraints, and network connectivity.
- Implement security best practices for Kubernetes environments, including RBAC, network policies, and secrets management, and define a strategy to integrate security with virtualization environment service providers such as VMware or cloud hyperscalers.
- Stay updated with the latest Kubernetes features, tools, and technologies, and evaluate their applicability to improve our infrastructure and workflows.
- Mentor and train team members on Kubernetes concepts, best practices, and tools.
- Contribute to the development and maintenance of internal documentation, runbooks, and knowledge base articles related to Kubernetes.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field; Master's degree preferred.
- 2+ years of experience in designing, deploying, and managing Kubernetes clusters in production environments.
- Solid experience with infrastructure-as-code tools such as Crossplane.
- Proficiency in Kubernetes and container orchestration.
- Knowledge of Apache NiFi 2.0, including clustering and data flow management.
- Familiarity with GitOps practices and tools like ArgoCD.
- Experience with container monitoring and logging tools such as Prometheus and Grafana.
- Solid understanding of networking principles, including DNS, load balancing, and security in Kubernetes environments.
- Experience with identity and access management tools like Keycloak.
- Proficiency in secrets management using tools like Azure Vault.
- Experience with API gateway management using Kong.
- Knowledge of persistent storage solutions for Kubernetes.
- Experience with certificate management using Cert-Manager and Trust-Manager.
Preferred Qualifications: Kubernetes certification (e.g., Certified Kubernetes Administrator - CKA, Certified Kubernetes Application Developer – CKAD, Certified Kubernetes Security Specialist – CKS). Familiarity with CI/CD pipelines and tools such as GitHub. Knowledge of software-defined networking (SDN) solutions for Kubernetes. Contributions to open-source projects related to Kubernetes or containerization technologies. Join us at Atos and be part of a collaborative team that is shaping the future of hybrid cloud infrastructure with Kubernetes expertise and programming skills. Apply now to embark on an exciting journey of innovation and growth! Learn more about us At Atos, we embrace diversity as the ultimate engine of ingenuity for our clients, and we constantly strive to create a culture where people feel supported and encouraged. Read more about our commitment here. Whether it is fighting climate change, promoting digital inclusion, or ensuring trust in data management – tech for good sits at the core of our identity. With numerous global recognitions for our ESG practices, we are committed to building a better future for all by harnessing the power of technology. Learn more here
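The GitOps model this Atos role relies on (ArgoCD continuously syncing cluster state to what Git declares) boils down to a reconcile loop: diff desired state against observed state and emit the actions needed to converge. A toy sketch with hypothetical application data, not the ArgoCD API:

```python
def reconcile(desired: dict[str, str], observed: dict[str, str]) -> list[str]:
    """Return the actions converging observed app versions to desired ones."""
    actions = []
    for app, image in sorted(desired.items()):
        if app not in observed:
            actions.append(f"create {app} -> {image}")      # app missing: deploy it
        elif observed[app] != image:
            actions.append(f"update {app}: {observed[app]} -> {image}")
    for app in sorted(set(observed) - set(desired)):
        actions.append(f"prune {app}")                      # not in Git: remove it
    return actions

desired = {"nifi": "2.0.0", "airflow": "2.9.1"}             # what Git declares
observed = {"nifi": "1.28.1", "legacy-job": "0.3"}          # what the cluster runs
print(reconcile(desired, observed))
```

ArgoCD runs exactly this kind of loop against Kubernetes resources, with "prune" and automatic sync as opt-in policies.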

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities
- Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
- Define data strategy, standards, and best practices for cloud data engineering and analytics
- Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
- Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
- Architect data lakes, warehouses, and real-time data platforms
- Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
- Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
- Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
- Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications
- 10+ years of experience in data architecture, data engineering, or enterprise data platforms
- Minimum 3-5 years of hands-on experience with GCP data services
- Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL; data modeling (OLTP, OLAP, star/snowflake schema)
- Experience with real-time data processing, streaming architectures, and batch ETL pipelines
- Good understanding of IAM, networking, security models, and cost optimization on GCP
- Prior experience in leading cloud data transformation projects
- Excellent communication and stakeholder management skills

Preferred Qualifications
- GCP Professional Data Engineer / Architect Certification
- Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
- Exposure to AI/ML use cases and MLOps on GCP
- Experience working in agile environments and client-facing roles

What We Offer
- Opportunity to work on large-scale data modernization projects with global clients
- A fast-growing company with a strong tech and people culture
- Competitive salary, benefits, and flexibility
- Collaborative environment that values innovation and leadership
(ref:hirist.tech)
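The star-schema modeling and batch ETL work this listing describes can be illustrated with a tiny stdlib transform: split raw order events into a dimension table (customers, keyed by surrogate IDs) and a fact table referencing them. Field names are invented for the sketch; a production pipeline would do this in BigQuery SQL or Beam:

```python
def to_star_schema(events: list[dict]) -> tuple[dict[str, int], list[tuple]]:
    """Split raw events into a customer dimension and an order fact table."""
    customer_dim: dict[str, int] = {}   # natural key -> surrogate key
    facts = []
    for e in events:
        # Assign a surrogate key on first sight of a customer, reuse it after.
        key = customer_dim.setdefault(e["customer"], len(customer_dim) + 1)
        facts.append((key, e["sku"], e["amount"]))
    return customer_dim, facts

events = [
    {"customer": "acme", "sku": "A-1", "amount": 120},
    {"customer": "globex", "sku": "B-9", "amount": 75},
    {"customer": "acme", "sku": "A-2", "amount": 60},
]
dim, facts = to_star_schema(events)
print(dim)     # {'acme': 1, 'globex': 2}
print(facts)   # [(1, 'A-1', 120), (2, 'B-9', 75), (1, 'A-2', 60)]
```

The payoff of the split is the usual one: facts stay narrow and append-only while descriptive attributes live once in the dimension.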

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Company Description
Confluentis Consulting LLP is a global business consulting firm that helps clients succeed in any business environment through insightful perspectives, focusing on strengths, capabilities, and innovation for the future. We guide clients through exploring their near future, identifying shifts, and defining a roadmap that can guide their transformation.

Role Description
This is a full-time hybrid role for a DevSecOps Engineer located in Bengaluru with flexibility for remote work. We are looking for an experienced and hands-on Cloud DevSecOps engineer for a client-facing engagement. Kindly send your profile to info@confluentis.co.in and specify the job title as “DevSecOps Engineer” in your email subject. The base location will be Bangalore; a hybrid working mode is available. This requirement is more app-down rather than infra-up.

Qualifications
• Bachelor's degree in Computer Science, Information Technology, or a related field. Relevant certifications preferred (Azure).
• At least 3-5 years of experience in managing platforms and operations using Azure PaaS services.
• Must have hands-on experience with Azure DevOps and with Azure Kubernetes Service.
• Release management processes, coordination and planning, and their connecting points with DevOps and CI/CD. Must have experience working with Terraform.
• Linux system admin knowledge. Expert in containerized environments and technologies (Docker, Kubernetes).
• Experience with version control systems (Git, GitLab) and with GitOps. Experience with YAML, JSON, PowerShell, and Bash scripting.
• Exposure to Azure pipeline setup activities such as setting up build agents and deployment monitoring of Azure DevOps agents.
• Hands-on with buildpacks and package managers for code deployment and setting up environment variables.
• Candidates with Microsoft AZ-204 and AZ-400 exam certification will be given preference.
• Candidates with CKA (Certified Kubernetes Administrator) and CKAD (Certified Kubernetes Application Developer) certification will be given preference.
• Knowledge of DevOps principles and experience working with Agile and Scrum methodologies.
• Experience with operational programming tasks such as version control, CI/CD, testing, and quality assurance. Experience with cloud platforms and containerization is a plus.

Key Responsibilities:
• Infrastructure Deployment & Management: Design, deploy, and maintain Azure cloud infrastructure using Terraform. Implement scalable and highly available solutions with a focus on performance optimization. Manage Azure services, including Virtual Machines, AKS, ACR, VNet, Storage Accounts, Key Vault, App Services, and Load Balancers.
• Automation and DevOps: Create and manage CI/CD pipelines using Azure DevOps, GitHub Actions, or similar tools. Automate deployment workflows and infrastructure provisioning processes. Implement monitoring and logging solutions using Azure Monitor, Log Analytics, and Application Insights.
• Security and Compliance: Configure Azure policies, role-based access control (RBAC), and Key Vault integration. Ensure infrastructure adheres to organizational security and compliance standards. Manage Azure Private Endpoints and VNet integration for secure connectivity.
• Collaboration: Collaborate with development, operations, and security teams to deliver end-to-end solutions. Provide support for troubleshooting infrastructure and application issues.
• Documentation: Maintain comprehensive documentation for infrastructure and Terraform configurations. Document standard operating procedures (SOPs) and best practices.
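The role-based access control (RBAC) responsibility in this listing reduces to a simple idea: roles map to allowed actions, and a request is authorized only if one of the principal's roles grants the action. A minimal sketch with invented role names (Azure RBAC itself works on scoped role assignments, which this deliberately ignores):

```python
# Hypothetical role -> permitted-action mapping.
ROLE_ACTIONS = {
    "reader": {"get", "list"},
    "contributor": {"get", "list", "create", "update"},
    "key-vault-admin": {"get", "list", "set-secret", "delete-secret"},
}

def is_authorized(roles: list[str], action: str) -> bool:
    """True if any of the principal's roles grants the requested action."""
    return any(action in ROLE_ACTIONS.get(role, set()) for role in roles)

print(is_authorized(["reader"], "create"))                 # False
print(is_authorized(["reader", "contributor"], "create"))  # True
```

The least-privilege discipline the posting asks for amounts to keeping each principal's role list as small as the actions it genuinely needs.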

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Driven by transformative digital technologies and trends, we are RIB and we’ve made it our purpose to propel the industry forward and make engineering and construction more efficient and sustainable. Built on deep industry expertise and best practice, and with our people at the heart of everything we do, we deliver the world's leading end-to-end lifecycle solutions that empower our industry to build better. With a steadfast commitment to innovation and a keen eye on the future, RIB comprises over 2,500 talented individuals who extend our software’s reach to over 100 countries worldwide. We are experienced experts and professionals from different cultures and backgrounds, and we collaborate closely to provide transformative software products, innovative thinking, and professional services to our global market. Our strong teams across the globe enable sustainable product investment and enhancements, to keep our clients at the cutting edge of engineering, infrastructure, and construction technology. We know our people are our success – join us to be part of a global force that uses innovation to enhance the way the world builds. Find out more at RIB Careers.

Job Description
Job Summary: As a Cloud Platform Architect, you will work under the guidance of the Cloud Infrastructure Architect to design, enhance, and maintain our Azure-based platform architecture. You’ll contribute to the platform’s reliability, scalability, and security, and support a DevSecOps culture through strong collaboration, infrastructure-as-code practices, and continuous improvement of the tooling and automation landscape.

Key Responsibilities
- Assist in the design and implementation of platform components within Azure, including AKS, network configurations, and supporting cloud-native services.
- Maintain and enhance Kubernetes infrastructure and GitOps tooling (e.g., Flux).
- Collaborate with DevOps, Site Reliability Engineers, and Cloud Operations teams to ensure seamless delivery and support of platform capabilities.
- Implement infrastructure-as-code (IaC) using tools such as Bicep or Terraform.
- Participate in architectural reviews and contribute to technical documentation and standards.
- Monitor platform performance and recommend optimizations for cost and reliability.
- Ensure that platform deployments align with security, governance, and compliance frameworks.
- Support incident response, troubleshooting, and root cause analysis for platform-related issues.

Skills and Qualifications
- 3–5 years of experience in cloud engineering or architecture roles.
- Hands-on experience with Azure services, especially AKS, networking, and security configurations.
- Familiarity with GitOps practices using tools like Flux or ArgoCD.
- Experience with IaC tools like Terraform or Bicep.
- Proficiency in scripting languages such as PowerShell, Bash, or Python.
- Understanding of containerization and orchestration (Docker, Kubernetes).
- Basic familiarity with CI/CD pipelines and DevOps workflows.
- Strong problem-solving and communication skills.
- Able to work in a collaborative, fast-paced environment.

RIB may require all successful applicants to undergo and pass a comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, and a credit check. Certain positions dealing with sensitive and/or third-party personal data may involve additional background check criteria.

RIB is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect.
We value diversity and the expertise that people from different backgrounds bring to our business. Come and join RIB to create the transformative technology that enables our customers to build a better world.

Posted 2 weeks ago

Apply

12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
Hiring an SRE Lead for the Hyderabad location.

Qualifications:
- Bachelor’s or Master’s degree in Computer Science, Information Systems, Engineering, or a related technical field.
- 12+ years of total experience in infrastructure, platform engineering, or software development roles, including at least 3–5 years in an SRE or DevOps leadership role.
- Deep understanding of Linux/Unix systems, networking fundamentals, and containerized environments (Docker, Kubernetes).
- Proven experience managing large-scale production systems, including high-availability, distributed, and event-driven architectures.
- Strong hands-on experience with cloud platforms such as AWS, GCP, or Azure and infrastructure-as-code tools (e.g., Terraform, CloudFormation).
- Proficiency in at least one scripting or programming language (Python, Go, Shell, Java, etc.).
- Demonstrated experience building observability solutions (metrics, logs, traces) and integrating them into proactive monitoring and alerting systems.
- Solid understanding of incident response practices, runbook automation, on-call rotation management, and disaster recovery planning.
- Familiarity with modern CI/CD tools (Jenkins, GitLab CI, Argo CD, Spinnaker) and release automation best practices.
- Strong problem-solving and debugging skills, especially in high-pressure, production-critical environments.
- Excellent leadership, communication, and cross-functional collaboration skills.

Responsibilities:
- Lead the SRE function, owning end-to-end service reliability, observability, incident management, capacity planning, and production readiness.
- Establish SLOs, SLIs, and error budgets in collaboration with product and engineering teams to drive service quality goals.
- Build and maintain highly available, fault-tolerant, and self-healing infrastructure leveraging IaC, automation, and scalable architectures.
- Design and implement monitoring, alerting, and observability platforms using tools like Prometheus, Grafana, Datadog, ELK/EFK stack, or equivalent.
- Drive the evolution of CI/CD pipelines, release automation, and safe deployment practices using GitOps or similar methodologies.
- Lead and refine the incident management lifecycle, including root cause analysis (RCA), incident postmortems, and production runbooks.
- Optimize cost, performance, and scalability of cloud infrastructure across hybrid or multi-cloud environments (AWS, GCP, Azure).
- Champion DevSecOps and SRE best practices, advocating for early detection, chaos engineering, and continuous improvement in resilience engineering.
- Mentor and develop a team of SREs and platform engineers; conduct performance reviews and technical coaching.
- Serve as a key advisor in architectural reviews to ensure systems are built with reliability, scalability, and observability in mind.
- Maintain strong partnerships with Security, Product, QA, and Engineering teams to support agile development and delivery.

What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.

Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.

Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world.
As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today. Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way! High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do. About GlobalLogic GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
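A central responsibility in this role is establishing SLOs and error budgets. As an illustrative sketch only (the SLO target, window, and downtime figures below are hypothetical examples, not anything from this posting), the arithmetic behind an availability error budget looks like this:

```python
# Illustrative error-budget arithmetic for an availability SLO.
# The 99.9% target and 30-day window are hypothetical examples.

def error_budget_minutes(slo_target: float, window_days: int) -> float:
    """Total allowed downtime (minutes) in the window for a given SLO."""
    window_minutes = window_days * 24 * 60
    return window_minutes * (1.0 - slo_target)

def budget_remaining(slo_target: float, window_days: int,
                     downtime_so_far_min: float) -> float:
    """Fraction of the error budget still unspent (negative means blown)."""
    budget = error_budget_minutes(slo_target, window_days)
    return 1.0 - downtime_so_far_min / budget

# A 99.9% SLO over 30 days allows about 43.2 minutes of downtime.
print(error_budget_minutes(0.999, 30))
print(budget_remaining(0.999, 30, 10.0))
```

Teams typically gate risky releases on the remaining budget: when it approaches zero, reliability work takes priority over feature deployment.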

Posted 2 weeks ago

Apply

5.0 years

0 Lacs

Nasik, Maharashtra, India

On-site

Driven by transformative digital technologies and trends, we are RIB and we’ve made it our purpose to propel the industry forward and make engineering and construction more efficient and sustainable. Built on deep industry expertise and best practice, and with our people at the heart of everything we do, we deliver the world's leading end-to-end lifecycle solutions that empower our industry to build better.

With a steadfast commitment to innovation and a keen eye on the future, RIB comprises over 2,500 talented individuals who extend our software’s reach to over 100 countries worldwide. We are experienced experts and professionals from different cultures and backgrounds, and we collaborate closely to provide transformative software products, innovative thinking, and professional services to our global market. Our strong teams across the globe enable sustainable product investment and enhancements, to keep our clients at the cutting edge of engineering, infrastructure, and construction technology. We know our people are our success – join us to be part of a global force that uses innovation to enhance the way the world builds. Find out more at RIB Careers.

Job Description
Job Summary: As a Cloud Platform Architect, you will work under the guidance of the Cloud Infrastructure Architect to design, enhance, and maintain our Azure-based platform architecture. You’ll contribute to the platform’s reliability, scalability, and security, and support a DevSecOps culture through strong collaboration, infrastructure-as-code practices, and continuous improvement of the tooling and automation landscape.

Key Responsibilities
Assist in the design and implementation of platform components within Azure, including AKS, network configurations, and supporting cloud-native services
Maintain and enhance Kubernetes infrastructure and GitOps tooling (e.g., Flux)
Collaborate with DevOps, Site Reliability Engineers, and Cloud Operations teams to ensure seamless delivery and support of platform capabilities
Implement infrastructure-as-code (IaC) using tools such as Bicep or Terraform
Participate in architectural reviews and contribute to technical documentation and standards
Monitor platform performance and recommend optimizations for cost and reliability
Ensure that platform deployments align with security, governance, and compliance frameworks
Support incident response, troubleshooting, and root cause analysis for platform-related issues

Skills And Qualifications
3–5 years of experience in cloud engineering or architecture roles
Hands-on experience with Azure services, especially AKS, networking, and security configurations
Familiarity with GitOps practices using tools like Flux or ArgoCD
Experience with IaC tools like Terraform or Bicep
Proficiency in scripting languages such as PowerShell, Bash, or Python
Understanding of containerization and orchestration (Docker, Kubernetes)
Basic familiarity with CI/CD pipelines and DevOps workflows
Strong problem-solving and communication skills
Ability to work in a collaborative, fast-paced environment

RIB may require all successful applicants to undergo and pass a comprehensive background check before they start employment. Background checks will be conducted in accordance with local laws and may, subject to those laws, include proof of educational attainment, employment history verification, proof of work authorization, criminal records, identity verification, and a credit check. Certain positions dealing with sensitive and/or third-party personal data may involve additional background check criteria.

RIB is an Equal Opportunity Employer. We are committed to being an exemplary employer with an inclusive culture, developing a workplace environment where all our employees are treated with dignity and respect. We value diversity and the expertise that people from different backgrounds bring to our business. Come and join RIB to create the transformative technology that enables our customers to build a better world.
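The role above centers on GitOps tooling such as Flux, whose core idea is a controller that continually reconciles a cluster's actual state against the desired state declared in Git. A toy sketch of that reconciliation loop in plain Python (the resource names are invented, and this is a conceptual illustration, not Flux's actual API):

```python
# Toy GitOps-style reconciliation: diff desired state (as declared in Git)
# against actual cluster state and emit the actions needed to converge.
# Resource names and specs below are hypothetical examples.

def reconcile(desired: dict, actual: dict) -> list[str]:
    actions = []
    for name, spec in desired.items():
        if name not in actual:
            actions.append(f"create {name}")        # missing resource
        elif actual[name] != spec:
            actions.append(f"update {name}")        # drifted resource
    for name in actual:
        if name not in desired:
            actions.append(f"delete {name}")        # pruned resource
    return actions

desired = {"web": {"replicas": 3}, "worker": {"replicas": 2}}
actual = {"web": {"replicas": 2}, "legacy": {"replicas": 1}}
print(reconcile(desired, actual))
```

Real controllers run this loop continuously, so any manual change to the cluster is detected as drift and reverted toward what Git declares.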

Posted 2 weeks ago

Apply

10.0 years

4 - 8 Lacs

Madurai

On-site

Job Location: Madurai
Job Experience: 10-20 Years
Model of Work: Work From Office
Technologies: GCP
Functional Area: Software Development

Job Summary:
Job Title: GCP Data Architect
Location: Madurai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client in the USA; this is a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
Define data strategy, standards, and best practices for cloud data engineering and analytics
Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
Architect data lakes, warehouses, and real-time data platforms
Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
10+ years of experience in data architecture, data engineering, or enterprise data platforms
Minimum 3–5 years of hands-on experience with GCP data services
Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL
Data modeling (OLTP, OLAP, Star/Snowflake schema)
Experience with real-time data processing, streaming architectures, and batch ETL pipelines
Good understanding of IAM, networking, security models, and cost optimization on GCP
Prior experience in leading cloud data transformation projects
Excellent communication and stakeholder management skills

Preferred Qualifications:
GCP Professional Data Engineer / Architect Certification
Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
Exposure to AI/ML use cases and MLOps on GCP
Experience working in agile environments and client-facing roles

About our Talent Acquisition Team:
Arumugam Veera leads the Talent Acquisition function for both TechMango and Bautomate (our SaaS platform), driving our mission to build high-performing teams and connect top talent with exciting career opportunities. Feel free to connect with him on LinkedIn: https://www.linkedin.com/in/arumugamv/ Follow our official TechMango LinkedIn page for the latest job updates and career opportunities: https://www.linkedin.com/company/techmango-technology-services-private-limited/ Looking forward to connecting and helping you explore your next great opportunity with us!
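The responsibilities above call for streaming architectures built on Dataflow/Beam. The core pattern those pipelines express declaratively is windowed aggregation; a minimal pure-Python sketch of a tumbling window (the event values and 60-second window size are made-up examples, and real Beam adds watermarks, triggers, and distributed execution):

```python
from collections import defaultdict

# Toy tumbling-window aggregation: bucket timestamped events into
# fixed-size windows and sum the values in each window. This is the
# conceptual core of a Dataflow/Beam streaming pipeline, minus
# watermarks, late data, and distributed execution.

def tumbling_window_sums(events, window_secs=60):
    """events: iterable of (epoch_seconds, value) -> {window_start: sum}."""
    sums = defaultdict(float)
    for ts, value in events:
        window_start = (ts // window_secs) * window_secs  # bucket start
        sums[window_start] += value
    return dict(sums)

events = [(0, 1.0), (59, 2.0), (60, 5.0), (125, 3.0)]
print(tumbling_window_sums(events))  # windows starting at 0, 60, 120
```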

Posted 3 weeks ago

Apply

25.0 years

0 Lacs

Pune, Maharashtra, India

On-site

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for over 25 years through groundbreaking technology and exceptional people. Today, we’re harnessing AI to define the next era of computing, where our GPUs power computers, robots, and self-driving cars that understand the world. Doing what’s never been done takes vision, innovation, and world-class talent. As an NVIDIAN, you’ll thrive in a diverse, supportive environment where everyone is inspired to do their best work. Join us and make a lasting impact.

NVIDIA is seeking a passionate Senior DevOps Engineer to join our Cloud Engineering Team in GeForce NOW. In this role, you’ll help shape the future of cloud gaming. GeForce NOW streams the highest-quality games to any device—PCs, Macs, or mobile—using advanced GPUs and software for a seamless, near-instant experience. Learn more at https://www.nvidia.com/en-us/geforce/products/geforce-now/ . The world’s largest gaming cloud demands top-tier operations, state-of-the-art deployment infrastructure, and tools that empower developers to deliver at speed. If this excites you, read on!

What You’ll Be Doing
You will play a crucial role in ensuring the success of the GeForce NOW (GFN) Cloud platform by helping to build our development and release processes, creating world-class performance, quality measurement, and regression management tools, and maintaining a high standard of excellence in our CI/CD and release engineering tools and processes.
Design, build, and implement scalable cloud-based systems for PaaS/IaaS
Collaborate with cross-time-zone teams to understand requirements and feedback, and develop new products and enhancements to existing ones
Work with development teams to identify and automate operational inefficiencies
Develop, maintain, and improve CI/CD tools for on-prem and cloud deployment of our microservices
Collaborate with developers to establish, refine, and streamline our software release process
Build and maintain automated testing frameworks
Implement robust observability (logs, metrics, tracing) for proactive issue detection
Help define the long-term vision and roadmap of the end-to-end release process, both utilizing state-of-the-art industry standards and pioneering new paradigms

What We Need To See
Bachelor’s or Master’s degree in Computer Science, Software Engineering, or equivalent, with a minimum of 8+ years of relevant industry experience
Strong understanding of cloud design, global deployments, distributed systems, and security compliance
Proven expertise managing large-scale, high-availability production environments with observability, telemetry, and alerting
Skilled in Docker, Kubernetes, and KubeVirt for containerized workloads
Proficient in CI/CD pipelines using Jenkins, CloudBees, GitLab, and GitOps tools such as Flux CD and Argo CD
Experienced with IaC tools like Terraform, configuration management with Kubernetes Config Management and Ansible, and secret management using HashiCorp Vault
Experienced in developing automation frameworks, along with good knowledge of RESTful web services
Collaborative team player with excellent communication skills and a proactive approach, including flexibility to support projects during critical or off-hours as needed

Ways To Stand Out From The Crowd
Proficient in Go, Python, Bash, and Groovy for writing controllers and automation scripts
Experience unifying infrastructure tools, services, and documentation using platforms like Backstage
Strong knowledge of observability tools such as Grafana, Prometheus, and the ELK stack, including implementing telemetry, alerting, and dashboards
Hands-on experience with public cloud platforms (AWS, Azure, or GCP) as well as bare-metal and on-premises infrastructure

NVIDIA is widely considered to be one of the technology world’s most desirable employers. We have some of the most forward-thinking and hardworking people on the planet working for us. If you're creative and autonomous, we want to hear from you! You will also be eligible for equity and benefits.

NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status, or any other characteristic protected by law.

JR2000064
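The posting above stresses observability: telemetry, alerting, and dashboards. A small illustrative sketch of one building block, computing a p95 latency figure of the kind a Grafana dashboard would surface, using only the standard library (the latency samples here are made up):

```python
import statistics

# Toy p95 latency computation of the kind an observability stack
# (Prometheus/Grafana) surfaces on a dashboard. Sample values are made up.

def p95(samples: list[float]) -> float:
    """95th percentile via statistics.quantiles (inclusive method)."""
    return statistics.quantiles(samples, n=100, method="inclusive")[94]

latencies_ms = [12.0, 15.0, 11.0, 240.0, 14.0, 13.0, 16.0, 12.5, 13.5, 14.5]
print(round(p95(latencies_ms), 1))
```

Note how the single 240 ms outlier dominates the p95 while barely moving the mean, which is why percentile-based SLIs are preferred over averages for alerting.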

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

Telangana

On-site

As a Senior Software Engineer II at Marriott Tech Accelerator in Hyderabad, India, you will play a crucial role in leading the design, solutioning, and delivery of large-scale enterprise applications. Your primary focus will be on product development and solving complex problems with innovative solutions.

Your responsibilities will include providing technical leadership by training and mentoring team members, offering financial input on budgets, and identifying opportunities to enhance service delivery processes. You will be responsible for delivering technology by conducting quantitative and qualitative analyses, ensuring project completion within scope, and coordinating with IT and vendor relations teams. In terms of IT governance, you will adhere to defined standards and processes while maintaining a balance between business and operational risk. You will also be involved in service provider management, validating project plans, monitoring outcomes, and resolving service delivery problems promptly.

To excel in this role, you should have 6-8 years of software development experience with a deep understanding of integration patterns, API management platforms, and various design architectures. Your expertise should cover a wide range of technologies, including Java, GraalVM, NoSQL, Spring Boot, Docker, Kubernetes, AWS, and more. Additionally, you should have hands-on experience with DevOps, CI/CD pipelines, infrastructure components, and cloud-native design patterns. Your background should include leading integration solutions development, architecting distributed systems, and working with microservices and serverless technologies. Ideally, you should have a bachelor's degree or equivalent experience/certification. Your ability to work in a hybrid mode and collaborate effectively in an agile development environment with a mix of onshore and offshore teams will be crucial for success in this role.

If you are a results-oriented individual with a passion for cutting-edge technology and a track record of technology leadership, we encourage you to apply for this exciting opportunity at Marriott Tech Accelerator.

Posted 3 weeks ago

Apply

0 years

10 - 15 Lacs

Hyderabad, Telangana, India

On-site

The ideal candidate is a Senior Site Reliability Engineer with strong expertise in CI/CD pipeline design, infrastructure automation, and backend service development. They have hands-on experience with Node.js, Python scripting, and managing large-scale Kubernetes clusters. The candidate is well-versed in AWS cloud infrastructure, including AWS CDK, and has a deep understanding of DevOps and security best practices. Familiarity with ArgoCD, Kustomize, and GitOps workflows is a strong advantage. They should also be capable of monitoring and optimizing system performance, ensuring reliability and scalability across environments, and collaborating with cross-functional teams.

Responsibilities
Lead the design and implementation of CI/CD pipelines to streamline deployment processes
Develop and maintain backend services using Node.js, focusing on security and mitigating cyber vulnerabilities
Automate processes using Python scripting to build utilities that support CI/CD pipelines
Manage large-scale infrastructure and multiple Kubernetes clusters to ensure optimal performance and reliability
Implement AWS infrastructure solutions, utilizing AWS CDK and core AWS services to enhance our cloud capabilities
Collaborate with cross-functional teams to ensure seamless integration of services and infrastructure
Monitor system performance and troubleshoot issues to maintain high availability and reliability

Qualifications we seek in you!
Minimum Qualifications / Skills
Proven experience in a Senior SRE or similar role
Strong expertise in CI/CD deployments
Working knowledge of Python scripting for automation
Experience in developing and maintaining backend services using Node.js
Practical experience with AWS infrastructure, including strong working knowledge of AWS CDK and core AWS services

Preferred Qualifications / Skills
Familiarity with ArgoCD and Kustomize
Hands-on experience in managing large-scale infrastructure and multiple Kubernetes clusters
Strong understanding of security best practices in software development

Skills: Kubernetes, Node.js, AWS
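The preferred qualifications above mention Kustomize, whose core idea is layering an environment overlay onto a base manifest. A toy sketch of that overlay merge in plain Python (the manifest fields are hypothetical, and real Kustomize patching has much richer semantics than this simple deep merge):

```python
# Toy deep merge of a base manifest with an environment overlay, the
# conceptual core of Kustomize overlays. Field names are hypothetical
# and real Kustomize strategic-merge patching is considerably richer.

def overlay_merge(base: dict, overlay: dict) -> dict:
    merged = dict(base)
    for key, value in overlay.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = overlay_merge(merged[key], value)  # recurse into maps
        else:
            merged[key] = value  # overlay wins for scalar fields
    return merged

base = {"spec": {"replicas": 1, "image": "app:1.0"}, "labels": {"app": "web"}}
prod_overlay = {"spec": {"replicas": 5}}
print(overlay_merge(base, prod_overlay))
```

The prod overlay changes only the replica count; the image and labels are inherited from the base, which is what keeps per-environment configuration small.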

Posted 3 weeks ago

Apply

0 years

10 - 15 Lacs

Noida, Uttar Pradesh, India

On-site

The ideal candidate is a Senior Site Reliability Engineer with strong expertise in CI/CD pipeline design, infrastructure automation, and backend service development. They have hands-on experience with Node.js, Python scripting, and managing large-scale Kubernetes clusters. The candidate is well-versed in AWS cloud infrastructure, including AWS CDK, and has a deep understanding of DevOps and security best practices. Familiarity with ArgoCD, Kustomize, and GitOps workflows is a strong advantage. They should also be capable of monitoring and optimizing system performance, ensuring reliability and scalability across environments, and collaborating with cross-functional teams.

Responsibilities
Lead the design and implementation of CI/CD pipelines to streamline deployment processes
Develop and maintain backend services using Node.js, focusing on security and mitigating cyber vulnerabilities
Automate processes using Python scripting to build utilities that support CI/CD pipelines
Manage large-scale infrastructure and multiple Kubernetes clusters to ensure optimal performance and reliability
Implement AWS infrastructure solutions, utilizing AWS CDK and core AWS services to enhance our cloud capabilities
Collaborate with cross-functional teams to ensure seamless integration of services and infrastructure
Monitor system performance and troubleshoot issues to maintain high availability and reliability

Qualifications we seek in you!
Minimum Qualifications / Skills
Proven experience in a Senior SRE or similar role
Strong expertise in CI/CD deployments
Working knowledge of Python scripting for automation
Experience in developing and maintaining backend services using Node.js
Practical experience with AWS infrastructure, including strong working knowledge of AWS CDK and core AWS services

Preferred Qualifications / Skills
Familiarity with ArgoCD and Kustomize
Hands-on experience in managing large-scale infrastructure and multiple Kubernetes clusters
Strong understanding of security best practices in software development

Skills: Kubernetes, Node.js, AWS

Posted 3 weeks ago

Apply

0 years

10 - 15 Lacs

Pune, Maharashtra, India

On-site

The ideal candidate is a Senior Site Reliability Engineer with strong expertise in CI/CD pipeline design, infrastructure automation, and backend service development. They have hands-on experience with Node.js, Python scripting, and managing large-scale Kubernetes clusters. The candidate is well-versed in AWS cloud infrastructure, including AWS CDK, and has a deep understanding of DevOps and security best practices. Familiarity with ArgoCD, Kustomize, and GitOps workflows is a strong advantage. They should also be capable of monitoring and optimizing system performance, ensuring reliability and scalability across environments, and collaborating with cross-functional teams.

Responsibilities
Lead the design and implementation of CI/CD pipelines to streamline deployment processes
Develop and maintain backend services using Node.js, focusing on security and mitigating cyber vulnerabilities
Automate processes using Python scripting to build utilities that support CI/CD pipelines
Manage large-scale infrastructure and multiple Kubernetes clusters to ensure optimal performance and reliability
Implement AWS infrastructure solutions, utilizing AWS CDK and core AWS services to enhance our cloud capabilities
Collaborate with cross-functional teams to ensure seamless integration of services and infrastructure
Monitor system performance and troubleshoot issues to maintain high availability and reliability

Qualifications we seek in you!
Minimum Qualifications / Skills
Proven experience in a Senior SRE or similar role
Strong expertise in CI/CD deployments
Working knowledge of Python scripting for automation
Experience in developing and maintaining backend services using Node.js
Practical experience with AWS infrastructure, including strong working knowledge of AWS CDK and core AWS services

Preferred Qualifications / Skills
Familiarity with ArgoCD and Kustomize
Hands-on experience in managing large-scale infrastructure and multiple Kubernetes clusters
Strong understanding of security best practices in software development

Skills: Kubernetes, Node.js, AWS

Posted 3 weeks ago

Apply

0 years

10 - 15 Lacs

Jaipur Tehsil, Rajasthan, India

On-site

The ideal candidate is a Senior Site Reliability Engineer with strong expertise in CI/CD pipeline design, infrastructure automation, and backend service development. They have hands-on experience with Node.js, Python scripting, and managing large-scale Kubernetes clusters. The candidate is well-versed in AWS cloud infrastructure, including AWS CDK, and has a deep understanding of DevOps and security best practices. Familiarity with ArgoCD, Kustomize, and GitOps workflows is a strong advantage. They should also be capable of monitoring and optimizing system performance, ensuring reliability and scalability across environments, and collaborating with cross-functional teams.

Responsibilities
Lead the design and implementation of CI/CD pipelines to streamline deployment processes
Develop and maintain backend services using Node.js, focusing on security and mitigating cyber vulnerabilities
Automate processes using Python scripting to build utilities that support CI/CD pipelines
Manage large-scale infrastructure and multiple Kubernetes clusters to ensure optimal performance and reliability
Implement AWS infrastructure solutions, utilizing AWS CDK and core AWS services to enhance our cloud capabilities
Collaborate with cross-functional teams to ensure seamless integration of services and infrastructure
Monitor system performance and troubleshoot issues to maintain high availability and reliability

Qualifications we seek in you!
Minimum Qualifications / Skills
Proven experience in a Senior SRE or similar role
Strong expertise in CI/CD deployments
Working knowledge of Python scripting for automation
Experience in developing and maintaining backend services using Node.js
Practical experience with AWS infrastructure, including strong working knowledge of AWS CDK and core AWS services

Preferred Qualifications / Skills
Familiarity with ArgoCD and Kustomize
Hands-on experience in managing large-scale infrastructure and multiple Kubernetes clusters
Strong understanding of security best practices in software development

Skills: Kubernetes, Node.js, AWS

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Who We Are
BCG partners with clients from the private, public, and not-for-profit sectors in all regions of the globe to identify their highest-value opportunities, address their most critical challenges, and transform their enterprises. We work with the most innovative companies globally, many of which rank among the world’s 500 largest corporations. Our global presence makes us one of only a few firms that can deliver a truly unified team for our clients – no matter where they are located. Our ~22,000 employees, located in 90+ offices in 50+ countries, enable us to work in collaboration with our clients, to tailor our solutions to each organization. We value and utilize the unique talents that each of these individuals brings to BCG; the wide variety of backgrounds of our consultants, specialists, and internal staff reflects the importance we place on diversity. Our employees hold degrees across a full range of disciplines – from business administration and economics to biochemistry, engineering, computer science, psychology, medicine, and law.

What You'll Do
BCG X develops innovative and AI-driven solutions for the Fortune 500 in their highest-value use cases. The BCG X Software group productizes repeat use cases, creating both reusable components as well as single-tenant and multi-tenant SaaS offerings that are commercialized through the BCG consulting business. BCG X is currently looking for a Software Engineering Architect to drive impact and change for the firm's engineering and analytics engine and bring new products to BCG clients globally.

This Will Include
Serving as a leader within BCG X and specifically the KEY Impact Management by BCG X Tribe (Transformation and Post-Merger-Integration related software and data products), overseeing the delivery of high-quality software: driving the technical roadmap, making architectural decisions, and mentoring engineers
Influencing and serving as a key decision maker in BCG X technology selection & strategy
An active, hands-on role: building intelligent analytical products to solve problems, writing elegant code, and iterating quickly
Overall responsibility for the engineering and architecture alignment of all solutions delivered within the tribe
Responsibility for the technology roadmap of existing and new components delivered
Architecting and implementing backend and frontend solutions, primarily using .NET, C#, MS SQL Server, and Angular, along with other technologies best suited to the goals, including open source (e.g., Node, Django, Flask, Python) where needed

What You'll Bring
8+ years of technology and software engineering experience in a complex and fast-paced business environment (ideally agile), with exposure to a variety of technologies and solutions, including at least 5 years' experience in an architect role
Experience with a wide range of application and data architectures, platforms, and tools, including: service-oriented architecture, clean architecture, software as a service, web services, object-oriented languages (like C# or Java), SQL databases (like Oracle or SQL Server), relational and non-relational databases, plus hands-on experience with analytics and reporting tools and data science
Thoroughly up to date in technology:
Modern cloud architectures including AWS, Azure, GCP, Kubernetes
Very strong particularly in .NET, C#, MS SQL Server, and Angular technologies
Open-source stacks including NodeJs, React, Angular, and Flask are good to have
CI/CD / DevSecOps / GitOps toolchains and development approaches
Knowledge of machine learning & AI frameworks
Big data pipelines and systems: Spark, Snowflake, Kafka, Redshift, Synapse, Airflow
At least a Bachelor's degree; Master's degree and/or MBA preferred
Team player with excellent work habits and interpersonal skills
Deep care about product quality, reliability, and scalability
Passion for the people and culture side of engineering teams
Outstanding written and oral communication skills
The ability to travel, depending on project requirements
#BCGXjob

Boston Consulting Group is an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, age, religion, sex, sexual orientation, gender identity / expression, national origin, disability, protected veteran status, or any other characteristic protected under national, provincial, or local law, where applicable, and those with criminal histories will be considered in a manner consistent with applicable state and local laws. BCG is an E-Verify Employer. Click here for more information on E-Verify.

Posted 3 weeks ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: GCP Data Architect
Location: Madurai, Chennai
Experience: 12+ Years
Notice Period: Immediate

About TechMango
TechMango is a rapidly growing IT Services and SaaS Product company that helps global businesses with digital transformation, modern data platforms, product engineering, and cloud-first initiatives. We are seeking a GCP Data Architect to lead data modernization efforts for our prestigious client, Livingston, in a highly strategic project.

Role Summary
As a GCP Data Architect, you will be responsible for designing and implementing scalable, high-performance data solutions on Google Cloud Platform. You will work closely with stakeholders to define data architecture, implement data pipelines, modernize legacy data systems, and guide data strategy aligned with enterprise goals.

Key Responsibilities:
Lead end-to-end design and implementation of scalable data architecture on Google Cloud Platform (GCP)
Define data strategy, standards, and best practices for cloud data engineering and analytics
Develop data ingestion pipelines using Dataflow, Pub/Sub, Apache Beam, Cloud Composer (Airflow), and BigQuery
Migrate on-prem or legacy systems to GCP (e.g., from Hadoop, Teradata, or Oracle to BigQuery)
Architect data lakes, warehouses, and real-time data platforms
Ensure data governance, security, lineage, and compliance (using tools like Data Catalog, IAM, DLP)
Guide a team of data engineers and collaborate with business stakeholders, data scientists, and product managers
Create documentation, high-level design (HLD) and low-level design (LLD), and oversee development standards
Provide technical leadership in architectural decisions and future-proofing the data ecosystem

Required Skills & Qualifications:
10+ years of experience in data architecture, data engineering, or enterprise data platforms
Minimum 3–5 years of hands-on experience with GCP data services
Proficient in: BigQuery, Cloud Storage, Dataflow, Pub/Sub, Composer, Cloud SQL/Spanner; Python / Java / SQL
Data modeling (OLTP, OLAP, Star/Snowflake schema)
Experience with real-time data processing, streaming architectures, and batch ETL pipelines
Good understanding of IAM, networking, security models, and cost optimization on GCP
Prior experience in leading cloud data transformation projects
Excellent communication and stakeholder management skills

Preferred Qualifications:
GCP Professional Data Engineer / Architect Certification
Experience with Terraform, CI/CD, GitOps, and Looker / Data Studio / Tableau for analytics
Exposure to AI/ML use cases and MLOps on GCP
Experience working in agile environments and client-facing roles

What We Offer:
Opportunity to work on large-scale data modernization projects with global clients
A fast-growing company with a strong tech and people culture
Competitive salary, benefits, and flexibility
Collaborative environment that values innovation and leadership
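The qualifications above include star/snowflake schema modeling. As a toy illustration of the star-schema idea, a fact table joined to a dimension table via surrogate keys, in plain Python (all table contents and field names are made-up examples, not anything from this posting):

```python
# Toy star-schema lookup: fact rows reference dimension rows by
# surrogate key; aggregation resolves attributes through the dimension.
# All table contents below are hypothetical examples.

dim_product = {1: {"name": "widget", "category": "tools"}}
fact_sales = [
    {"product_key": 1, "date_key": 20240101, "amount": 99.0},
    {"product_key": 1, "date_key": 20240101, "amount": 1.0},
]

def revenue_by(dimension_attr: str) -> dict:
    """Sum fact amounts grouped by an attribute of the product dimension."""
    totals: dict = {}
    for row in fact_sales:
        attr = dim_product[row["product_key"]][dimension_attr]
        totals[attr] = totals.get(attr, 0.0) + row["amount"]
    return totals

print(revenue_by("category"))  # {'tools': 100.0}
```

In a warehouse like BigQuery the same pattern is a JOIN from the fact table to each dimension followed by a GROUP BY on the dimension attribute.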

Posted 3 weeks ago

Apply