Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 - 9.0 years
0 - 0 Lacs
Hyderabad, Chennai
Work from Office
Role & responsibilities: The Data Protection Engineer is a key member of the Data Protection team, responsible for the implementation, maintenance, and operational support of the organization's data protection infrastructure and applications. The Engineer will participate in complex projects, troubleshoot incidents, and contribute to the development and improvement of Data Protection engineering policies, standards, and procedures, working closely with senior engineers to implement and maintain data protection solutions. This role requires expertise in Microsoft Purview, Forcepoint, BigID, Varonis, Windows, Linux, GKE, encryption, and other data loss prevention (DLP) and data security posture management (DSPM) tools.
Posted 6 days ago
6.0 - 9.0 years
8 - 11 Lacs
Pune
Work from Office
We are hiring a DevOps / Site Reliability Engineer for a 6-month full-time onsite role in Pune (with possible extension). The ideal candidate will have 6-9 years of experience in DevOps/SRE roles with deep expertise in Kubernetes (preferably GKE), Terraform, Helm, and GitOps tools such as ArgoCD or Flux. The role involves building and managing cloud-native infrastructure, CI/CD pipelines, and observability systems while ensuring performance, scalability, and resilience. Experience in infrastructure coding, backend optimization (Node.js, Django, Java, Go), and cloud architecture (IAM, VPC, Cloud SQL, Secrets) is essential. Strong communication and hands-on technical ability are musts. Immediate joiners only.
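As a rough illustration of the GitOps-style health checks this kind of role involves, here is a minimal Python sketch using the official kubernetes client (assumed installed, with a kubeconfig already pointing at the GKE cluster) to flag Deployments whose ready replicas lag the desired count. It is illustrative only and not drawn from the employer's tooling.

```python
# Minimal sketch: report Deployments that are not fully rolled out.
# Assumes the `kubernetes` Python package and a working kubeconfig.
from kubernetes import client, config


def report_unready_deployments() -> None:
    config.load_kube_config()  # assumption: local kubeconfig points at the cluster
    apps = client.AppsV1Api()
    for dep in apps.list_deployment_for_all_namespaces(watch=False).items:
        desired = dep.spec.replicas or 0
        ready = dep.status.ready_replicas or 0
        if ready < desired:
            print(f"{dep.metadata.namespace}/{dep.metadata.name}: {ready}/{desired} ready")


if __name__ == "__main__":
    report_unready_deployments()
```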
Posted 1 week ago
10.0 - 20.0 years
25 - 40 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Required Skills & Qualifications:
- Experience: 8+ years in Cloud Engineering, with a focus on GCP.
- Cloud Expertise: Strong knowledge of GCP services (GKE, Compute Engine, IAM, VPC, Cloud Storage, Cloud SQL, Cloud Functions).
- Kubernetes & Containers: Experience with GKE, Docker, GKE networking, Helm.
- DevOps Tools: Hands-on experience with Azure DevOps for CI/CD pipeline automation.
- Infrastructure as Code (IaC): Expertise in Terraform for provisioning cloud resources.
- Scripting & Automation: Proficiency in Python, Bash, or PowerShell for automation.
- Security & Compliance: Knowledge of cloud security principles, IAM, and compliance standards.
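For context on the scripting-and-automation expectation above, here is a minimal Python sketch that inventories GKE clusters through the gcloud CLI; it assumes gcloud is installed and authenticated, and the project ID is a placeholder rather than anything from the posting.

```python
# Minimal sketch: list GKE clusters in a project via the gcloud CLI.
import json
import subprocess


def list_gke_clusters(project_id: str) -> list:
    # gcloud must be installed and authenticated for this to work.
    out = subprocess.run(
        ["gcloud", "container", "clusters", "list",
         "--project", project_id, "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    return json.loads(out.stdout)


if __name__ == "__main__":
    for cluster in list_gke_clusters("my-sample-project"):  # placeholder project ID
        print(cluster["name"], cluster.get("currentMasterVersion"), cluster.get("status"))
```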
Posted 1 week ago
0.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Ready to build the future with AI? At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Lead Consultant, GCP Senior Engineer!
- Strong working experience on GCP and exposure to at least one other public cloud platform (AWS/Azure)
- Solid understanding of the cloud services offered by GCP/AWS/Azure, specifically the infrastructure, security, IAM, and application orchestration services, along with managed services for building SaaS/PaaS applications
- Understands the Cloud Migration Framework; working knowledge of various tools, multi-cloud deployment, cloud-native development, cloud migration patterns, etc.
- Able to participate in platform architecture and design
- Able to independently analyse different cloud attributes and develop proofs of concept
- Very good knowledge of cloud migration patterns (re-host, re-platform, re-factor) and the ability to apply the relevant pattern
- Good understanding of networking on cloud: defining network security groups and policies, configuring load balancers, high availability, API gateways, and encryption at rest and in transit
- Ability to learn and work with emerging technologies, methodologies, and solutions in the Cloud/IT technology space
- Ability to collaborate across organizational boundaries, build relationships, deliver results, and work cross-functionally
Responsibilities:
- Work closely with Cloud Architects and IT, assist with analysis, and contribute to strategic plans to accomplish technical as well as business objectives
- Implement designed solutions, prioritising compliance, security, and cloud-native development best practices
- Exercise independent judgment and discretion in overall project execution by prioritising, planning, helping track project progress, and reporting to the project lead/stakeholders
- Continually experiment with new-age cloud offerings (specifically on GCP), help define best practices for cloud, automation, and DevOps, and be a technology expert across multiple channels
- Use experience across multiple cloud platforms to implement GCP cloud migration; know and prioritize cloud-agnostic tools and methodologies
- Abide by tracking mechanisms, ensure IT standards and methodology are met, and deliver quality results
- Participate in technical reviews of requirements, designs, code/scripts, and other artifacts
Qualifications we seek in you!
Minimum Qualifications:
- A technical graduate (BCA/MCA/B.Tech/M.Tech) with IT experience
- Proven experience on GCP as a Cloud and DevOps engineering team member, and overall experience across cloud or datacentre solutions
- Hands-on experience with Kubernetes, GKE, and Anthos (Anthos configuration management, policy management, cluster management, service management (Istio), etc.); proficient in Docker and containerization technologies, and VMware VMs
- Working experience with Dockerfiles, YAML, Kubernetes manifests, and Helm charts
- Proficient with orchestration platforms such as Docker Compose, Kubernetes, GKE, Anthos, EKS, etc.
- Strong experience in setting up and managing CI/CD pipelines (e.g., Jenkins, GitLab CI/CD, GitHub Actions, Azure DevOps, Argo CD, JFrog Artifactory, etc.), integrating source code quality and security tools
- Strong in scripting (Bash/PowerShell/Groovy/Python) and Linux; knowledge of JavaScript, Golang, and middleware technologies is a definite plus
- IaC expertise desired; Terraform/Terragrunt and Ansible preferred
- Hands-on expertise in the operation and maintenance of public/private cloud infrastructure; experience migrating applications and services from on-prem to cloud, hybrid to cloud, or cloud to cloud is desired
- Experience in application re-hosting, re-platforming, and re-factoring, working closely with development teams where necessary
- Familiarity with cloud migration tools such as StratoZone for Google Cloud and KubeVirt is an added advantage
- Familiarity with APM tools is desired: New Relic, Splunk, Datadog, Cloud Monitoring, etc.
- Familiarity with RDBMS and NoSQL databases, and GCP services such as Firestore, Cloud SQL, and Snowflake
- Result-oriented, with broad IT network security experience to meet challenging business needs in a global environment
- Implementation of appropriate access controls using Identity and Access Management (IAM) in GCP
- Network, infrastructure, and security certifications on GCP (specifically Certified Kubernetes Application Developer (CKAD), GCP Professional Cloud Kubernetes Engine Specialist, GCP Professional Cloud Kubernetes Engine Architect, GCP Professional Cloud Anthos Architect)
- Competent in communication, analytical abilities, presentation, and teamwork skills
Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 week ago
0.0 years
0 Lacs
Hyderabad / Secunderabad, Telangana, India
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to breakthrough solutions, we tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.
Inviting applications for the role of Lead Consultant, GCP Senior Engineer!
- Strong working experience on GCP and exposure to at least one other public cloud platform (AWS/Azure)
- Solid understanding of the cloud services offered by GCP/AWS/Azure, specifically the infrastructure, security, IAM, and application orchestration services, along with managed services for building SaaS/PaaS applications
- Understands the Cloud Migration Framework; working knowledge of various tools, multi-cloud deployment, cloud-native development, cloud migration patterns, etc.
- Able to participate in platform architecture and design
- Able to independently analyse different cloud attributes and develop proofs of concept
- Very good knowledge of cloud migration patterns (re-host, re-platform, re-factor) and the ability to apply the relevant pattern
- Good understanding of networking on cloud: defining network security groups and policies, configuring load balancers, high availability, API gateways, and encryption at rest and in transit
- Ability to learn and work with emerging technologies, methodologies, and solutions in the Cloud/IT technology space
- Ability to collaborate across organizational boundaries, build relationships, deliver results, and work cross-functionally
Responsibilities:
- Work closely with Cloud Architects and IT, assist with analysis, and contribute to strategic plans to accomplish technical as well as business objectives
- Implement designed solutions, prioritising compliance, security, and cloud-native development best practices
- Exercise independent judgment and discretion in overall project execution by prioritising, planning, helping track project progress, and reporting to the project lead/stakeholders
- Continually experiment with new-age cloud offerings (specifically on GCP), help define best practices for cloud, automation, and DevOps, and be a technology expert across multiple channels
- Use experience across multiple cloud platforms to implement GCP cloud migration; know and prioritize cloud-agnostic tools and methodologies
- Abide by tracking mechanisms, ensure IT standards and methodology are met, and deliver quality results
- Participate in technical reviews of requirements, designs, code/scripts, and other artifacts
Qualifications we seek in you!
Minimum Qualifications:
- A technical graduate (BCA/MCA/B.Tech/M.Tech) with IT experience
- Proven experience on GCP as a Cloud and DevOps engineering team member, and overall experience across cloud or datacentre solutions
- Hands-on experience with Kubernetes, GKE, and Anthos (Anthos configuration management, policy management, cluster management, service management (Istio), etc.); proficient in Docker and containerization technologies, and VMware VMs
- Working experience with Dockerfiles, YAML, Kubernetes manifests, and Helm charts
- Proficient with orchestration platforms such as Docker Compose, Kubernetes, GKE, Anthos, EKS, etc.
- Strong experience in setting up and managing CI/CD pipelines (e.g., Jenkins, GitLab CI/CD, GitHub Actions, Azure DevOps, Argo CD, JFrog Artifactory, etc.), integrating source code quality and security tools
- Strong in scripting (Bash/PowerShell/Groovy/Python) and Linux; knowledge of JavaScript, Golang, and middleware technologies is a definite plus
- IaC expertise desired; Terraform/Terragrunt and Ansible preferred
- Hands-on expertise in the operation and maintenance of public/private cloud infrastructure; experience migrating applications and services from on-prem to cloud, hybrid to cloud, or cloud to cloud is desired
- Experience in application re-hosting, re-platforming, and re-factoring, working closely with development teams where necessary
- Familiarity with cloud migration tools such as StratoZone for Google Cloud and KubeVirt is an added advantage
- Familiarity with APM tools is desired: New Relic, Splunk, Datadog, Cloud Monitoring, etc.
- Familiarity with RDBMS and NoSQL databases, and GCP services such as Firestore, Cloud SQL, and Snowflake
- Result-oriented, with broad IT network security experience to meet challenging business needs in a global environment
- Implementation of appropriate access controls using Identity and Access Management (IAM) in GCP
- Network, infrastructure, and security certifications on GCP (specifically Certified Kubernetes Application Developer (CKAD), GCP Professional Cloud Kubernetes Engine Specialist, GCP Professional Cloud Kubernetes Engine Architect, GCP Professional Cloud Anthos Architect)
- Competent in communication, analytical abilities, presentation, and teamwork skills
Why join Genpact?
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
- Make an impact: drive change for global enterprises and solve business challenges that matter
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress
Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.
Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted 1 week ago
6.0 - 9.0 years
18 - 20 Lacs
Pune
Work from Office
Notice Period: Immediate joiners only. Duration: 6 months (possible extension). Shift Timing: 11:30 AM - 9:30 PM IST.
About the Role: We are looking for a highly skilled and experienced DevOps / Site Reliability Engineer to join on a contract basis. The ideal candidate will be hands-on with Kubernetes (preferably GKE), Infrastructure as Code (Terraform/Helm), and cloud-based deployment pipelines. This role demands deep system understanding, proactive monitoring, and infrastructure optimization skills.
Key Responsibilities:
- Design and implement resilient deployment strategies (blue-green, canary, GitOps).
- Configure and maintain observability tools (logs, metrics, traces, alerts).
- Optimize backend service performance through code and infra reviews (Node.js, Django, Go, Java).
- Tune and troubleshoot GKE workloads, HPA configs, ingress setups, and node pools.
- Build and manage Terraform modules for infrastructure (VPC, Cloud SQL, Pub/Sub, Secrets).
- Lead or participate in incident response and root cause analysis using logs, traces, and dashboards.
- Reduce configuration drift and standardize secrets, tagging, and infra consistency across environments.
- Collaborate with engineering teams to enhance CI/CD pipelines and rollout practices.
Required Skills & Experience:
- 5-10 years in DevOps, SRE, Platform, or Backend Infrastructure roles.
- Strong coding/scripting skills and the ability to review production-grade backend code.
- Hands-on experience with Kubernetes in production, preferably on GKE.
- Proficient in Terraform, Helm, GitHub Actions, and GitOps tools (ArgoCD or Flux).
- Deep knowledge of cloud architecture (IAM, VPCs, Workload Identity, Cloud SQL, secret management).
- Systems thinking: understands failure domains, cascading issues, timeout limits, and recovery strategies.
- Strong communication and documentation skills, capable of driving improvements through PRs and design reviews.
Tech Stack & Tools:
- Cloud & Orchestration: GKE, Kubernetes
- IaC & CI/CD: Terraform, Helm, GitHub Actions, ArgoCD/Flux
- Monitoring & Alerting: Datadog, PagerDuty
- Databases & Networking: Cloud SQL, Cloudflare
- Security & Access Control: Secret management, IAM
Driving Results: A good individual contributor and a good team player. Flexible attitude towards work, as per the needs. Proactively identify and communicate issues and risks.
Other Personal Characteristics: Dynamic, engaging, self-reliant developer. Ability to deal with ambiguity. Maintains a collaborative and analytical approach. Self-confident and humble. Open to continuous learning. An intelligent, rigorous thinker who can operate successfully amongst bright people.
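As one hedged example of the HPA tuning work described in this posting, the sketch below uses the kubernetes Python client (assumed installed and configured via kubeconfig) to flag HorizontalPodAutoscalers pinned at their maximum replica count, a common signal that limits or node pools need revisiting. It is an illustration, not the employer's actual process.

```python
# Minimal sketch: find HPAs stuck at their max replica count.
from kubernetes import client, config


def find_saturated_hpas() -> None:
    config.load_kube_config()  # assumption: kubeconfig points at the GKE cluster
    autoscaling = client.AutoscalingV1Api()
    for hpa in autoscaling.list_horizontal_pod_autoscaler_for_all_namespaces().items:
        current = hpa.status.current_replicas or 0
        if current >= hpa.spec.max_replicas:
            cpu = hpa.status.current_cpu_utilization_percentage
            print(f"{hpa.metadata.namespace}/{hpa.metadata.name} at max "
                  f"({current}/{hpa.spec.max_replicas} replicas, CPU {cpu}%)")


if __name__ == "__main__":
    find_saturated_hpas()
```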
Posted 1 week ago
2.0 - 4.0 years
4 - 6 Lacs
Hyderabad
Work from Office
Key Responsibilities:
1. Cloud Infrastructure Management:
- Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP).
- Implement best practices for GCP IAM, VPCs, Cloud Storage, ClickHouse and Apache Superset onboarding, and other GCP services.
2. Kubernetes and Containerization:
- Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications.
- Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines:
- Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD.
- Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance:
- Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption.
- Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support:
- Work closely with development teams to containerize applications and ensure smooth deployment on GCP.
- Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization:
- Monitor and optimize GCP resource usage to ensure cost efficiency.
- Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications:
- Must hold a Google Cloud Professional DevOps Engineer or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise:
- Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools:
- Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build.
- Experience with containerization tools like Docker.
4. Kubernetes Expertise:
- In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets.
- Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting:
- Strong scripting skills in Python, Bash, or Go.
- Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging:
- Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking:
- Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration abilities.
- Ability to work in an agile, fast-paced environment.
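To illustrate the resource-allocation best practice mentioned above, here is a small, assumption-laden Python sketch (official kubernetes client, kubeconfig access assumed) that lists containers running without CPU or memory requests, since missing requests undermine GKE bin-packing and autoscaling.

```python
# Minimal sketch: audit containers missing CPU/memory resource requests.
from kubernetes import client, config


def audit_missing_requests() -> None:
    config.load_kube_config()  # assumption: kubeconfig is already set up
    core = client.CoreV1Api()
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        for container in pod.spec.containers:
            requests = (container.resources.requests or {}) if container.resources else {}
            if "cpu" not in requests or "memory" not in requests:
                print(f"{pod.metadata.namespace}/{pod.metadata.name} "
                      f"container '{container.name}' has incomplete resource requests")


if __name__ == "__main__":
    audit_missing_requests()
```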
Posted 1 week ago
5.0 - 10.0 years
10 - 20 Lacs
Gurugram
Hybrid
We are looking for a highly skilled and experienced engineer with solid experience building Big Data, GCP cloud-based marketing ODL applications. The engineer will play a crucial role in designing, implementing, and optimizing data solutions to support our organization's data-driven initiatives. This role requires expertise in data engineering, strong problem-solving abilities, and a collaborative mindset to work effectively with various stakeholders.
Technical Skills
1. Core Data Engineering Skills
- Proficiency with GCP's big data tools, including BigQuery (for data warehousing and SQL analytics) and Dataproc (for running Spark and Hadoop clusters).
- Expertise in building automated, scalable, and reliable pipelines using custom Python/Scala solutions or Cloud Data Functions.
2. Programming and Scripting
- Strong coding skills in SQL and Java.
- Familiarity with APIs and SDKs for GCP services to build custom data solutions.
3. Cloud Infrastructure
- Understanding of GCP services such as Cloud Storage, Compute Engine, and Cloud Functions.
- Familiarity with Kubernetes (GKE) and containerization for deploying data pipelines (optional but good to have).
4. DevOps and CI/CD
- Experience setting up CI/CD pipelines using Cloud Build, GitHub Actions, or other tools.
- Monitoring and logging tools like Cloud Monitoring and Cloud Logging for production workflows.
Soft Skills
1. Innovation and Problem-Solving
- Ability to think creatively and design innovative solutions for complex data challenges.
- Experience prototyping and experimenting with cutting-edge GCP tools or third-party integrations.
- Strong analytical mindset to transform raw data into actionable insights.
2. Collaboration
- Teamwork: ability to collaborate effectively with data analysts and business stakeholders.
- Communication: strong verbal and written communication skills to explain technical concepts to non-technical audiences.
3. Adaptability and Continuous Learning
- Open to exploring new GCP features and rapidly adapting to changes in cloud technology.
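As a hedged illustration of the BigQuery analytics work described above, the snippet below runs a small aggregation with the google-cloud-bigquery client. The project, dataset, and table names are placeholders, not the employer's actual schema, and application-default credentials are assumed.

```python
# Minimal sketch: run a simple daily-count aggregation in BigQuery.
from google.cloud import bigquery


def daily_event_counts(project_id: str) -> None:
    bq = bigquery.Client(project=project_id)  # assumes ADC credentials are configured
    sql = """
        SELECT DATE(event_ts) AS day, COUNT(*) AS events
        FROM `my_dataset.marketing_events`   -- placeholder dataset and table
        GROUP BY day
        ORDER BY day DESC
        LIMIT 7
    """
    for row in bq.query(sql).result():
        print(row.day, row.events)


if __name__ == "__main__":
    daily_event_counts("my-sample-project")  # placeholder project ID
```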
Posted 1 week ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Hybrid
Key Responsibilities:
1. Cloud Infrastructure Management:
- Design, deploy, and manage scalable and secure infrastructure on Google Cloud Platform (GCP).
- Implement best practices for GCP IAM, VPCs, Cloud Storage, ClickHouse and Apache Superset onboarding, and other GCP services.
2. Kubernetes and Containerization:
- Manage and optimize Google Kubernetes Engine (GKE) clusters for containerized applications.
- Implement Kubernetes best practices, including pod scaling, resource allocation, and security policies.
3. CI/CD Pipelines:
- Build and maintain CI/CD pipelines using tools like Cloud Build, Stratus, GitLab CI/CD, or ArgoCD.
- Automate deployment workflows for containerized and serverless applications.
4. Security and Compliance:
- Ensure adherence to security best practices for GCP, including IAM policies, network security, and data encryption.
- Conduct regular audits to ensure compliance with organizational and regulatory standards.
5. Collaboration and Support:
- Work closely with development teams to containerize applications and ensure smooth deployment on GCP.
- Provide support for troubleshooting and resolving infrastructure-related issues.
6. Cost Optimization:
- Monitor and optimize GCP resource usage to ensure cost efficiency.
- Implement strategies to reduce cloud spend without compromising performance.
Required Skills and Qualifications:
1. Certifications:
- Must hold a Google Cloud Professional DevOps Engineer or Google Cloud Professional Cloud Architect certification.
2. Cloud Expertise:
- Strong hands-on experience with Google Cloud Platform (GCP) services, including GKE, Cloud Functions, Cloud Storage, BigQuery, and Cloud Pub/Sub.
3. DevOps Tools:
- Proficiency in DevOps tools like Terraform, Ansible, Stratus, GitLab CI/CD, or Cloud Build.
- Experience with containerization tools like Docker.
4. Kubernetes Expertise:
- In-depth knowledge of Kubernetes concepts such as pods, deployments, services, ingress, config maps, and secrets.
- Familiarity with Kubernetes tools like kubectl, Helm, and Kustomize.
5. Programming and Scripting:
- Strong scripting skills in Python, Bash, or Go.
- Familiarity with YAML and JSON for configuration management.
6. Monitoring and Logging:
- Experience with monitoring tools like Prometheus, Grafana, or Google Cloud Operations Suite.
7. Networking:
- Understanding of cloud networking concepts, including VPCs, subnets, firewalls, and load balancers.
8. Soft Skills:
- Strong problem-solving and troubleshooting skills.
- Excellent communication and collaboration abilities.
- Ability to work in an agile, fast-paced environment.
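For the cost-optimization responsibility above, one possible starting point is sketched below: a Python wrapper around the gcloud CLI (assumed installed and authenticated) that lists persistent disks with no attached users, a common source of idle spend. The project ID is a placeholder.

```python
# Minimal sketch: find unattached persistent disks via the gcloud CLI.
import json
import subprocess


def unattached_disks(project_id: str) -> list:
    out = subprocess.run(
        ["gcloud", "compute", "disks", "list",
         "--project", project_id, "--format", "json"],
        check=True, capture_output=True, text=True,
    )
    # A disk with no "users" entries is not attached to any instance.
    return [d for d in json.loads(out.stdout) if not d.get("users")]


if __name__ == "__main__":
    for disk in unattached_disks("my-sample-project"):  # placeholder project ID
        print(disk["name"], disk.get("sizeGb"), disk.get("zone", ""))
```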
Posted 1 week ago
15.0 - 20.0 years
35 - 40 Lacs
Hyderabad, Ahmedabad
Hybrid
Role & responsibilities
TechBlocks Website: https://tblocks.com/ | Corporate Video: TechBlocks on YouTube | LinkedIn: https://www.linkedin.com/company/techblocks/about/
Job Description Summary: As DevOps Manager, you will be responsible for leading the DevOps function hands-on in technology while managing people, process improvements, and automation strategy. You will set the vision for DevOps practices at Techblocks India and drive cross-team efficiency.
Experience Required: 12+ years of total experience, with 3-5 years in DevOps leadership roles.
Technical Knowledge and Skills:
Mandatory:
- Cloud: GCP (complete stack, from IAM to GKE)
- CI/CD: end-to-end pipeline ownership (GitHub Actions, Jenkins, Argo CD)
- IaC: Terraform, Helm
- Containers: Docker, Kubernetes
- DevSecOps: Vault, Trivy, OWASP
Nice to Have:
- FinOps exposure for cost optimization
- Familiarity with Big Data tools (BigQuery, Dataflow)
- Familiarity with Kong, Anthos, Istio
Scope:
- Lead the DevOps team across multiple pods and products
- Define the roadmap for automation, security, and CI/CD
- Ensure operational stability of deployment pipelines
Roles and Responsibilities:
- Architect and guide implementation of enterprise-grade CI/CD pipelines that support multi-environment deployments, microservices architecture, and zero-downtime delivery practices.
- Oversee Infrastructure-as-Code initiatives to establish consistent and compliant cloud provisioning using Terraform, Helm, and policy-as-code integrations.
- Champion DevSecOps practices by embedding security controls throughout the pipeline, ensuring image scanning, secrets encryption, policy checks, and runtime security enforcement.
- Lead and manage a geographically distributed DevOps team, setting performance expectations, development plans, and engagement strategies.
- Drive cross-functional collaboration with engineering, QA, product, and SRE teams to establish integrated DevOps governance practices.
- Develop a framework for release readiness, rollback automation, change control, and environment reconciliation processes.
- Monitor deployment health, release velocity, lead time to recovery, and infrastructure cost optimization through actionable DevOps metrics dashboards.
- Serve as the primary point of contact for C-level stakeholders during major infrastructure changes, incident escalations, or audits.
- Own the budgeting and cost management strategy for DevOps tooling, cloud consumption, and external consulting partnerships.
- Identify, evaluate, and onboard emerging DevOps technologies, ensuring team readiness through structured onboarding, POCs, and knowledge sessions.
- Foster a culture of continuous learning, innovation, and ownership, driving internal tech talks, hackathons, and community engagement.
Preferred candidate profile
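As a hedged sketch of the image-scanning side of DevSecOps mentioned above, the script below shells out to Trivy (assumed installed) and fails when HIGH or CRITICAL findings are present. The image name is a placeholder, and the JSON field names are assumptions about Trivy's report format rather than anything specified in this posting.

```python
# Minimal sketch: gate a build on a Trivy image scan.
import json
import subprocess
import sys


def scan_image(image: str) -> int:
    result = subprocess.run(
        ["trivy", "image", "--severity", "HIGH,CRITICAL", "--format", "json", image],
        capture_output=True, text=True, check=True,
    )
    report = json.loads(result.stdout)
    findings = [
        vuln
        for target in report.get("Results", [])
        for vuln in target.get("Vulnerabilities") or []
    ]
    for vuln in findings:
        print(vuln.get("VulnerabilityID"), vuln.get("Severity"), vuln.get("PkgName"))
    return 1 if findings else 0  # non-zero exit fails the pipeline stage


if __name__ == "__main__":
    sys.exit(scan_image("gcr.io/my-project/my-service:latest"))  # placeholder image
```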
Posted 2 weeks ago
4.0 - 6.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Req ID: 327258. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP & GKE - Sr Cloud Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Title / Role: GCP & GKE - Sr Cloud Engineer
Job Description:
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 4+ yrs
Total Experience: 4+ Years
Mandatory Skills:
Technical Qualification / Knowledge:
- Expertise in assessing, designing, and implementing GCP solutions, including compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
- Must have GCP Solution Architect Certification
- Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution
- Prior experience using industry-leading or native discovery, assessment, and migration tools
- Good knowledge of cloud technology, different patterns, deployment methods, and application compatibility
- Good knowledge of GCP technologies and associated components and variations, including the Anthos application platform
- Working knowledge of GCE, GAE, GKE, and GCS
- Hands-on experience creating and provisioning compute instances using the GCP console, Terraform, and the Google Cloud SDK
- Creating databases in GCP and in VMs
- Knowledge of data analysis tooling (BigQuery)
- Knowledge of cost analysis and cost optimization
- Knowledge of Git and GitHub
- Knowledge of Terraform and Jenkins
- Monitoring VMs and applications using Stackdriver
- Working knowledge of VPN and Interconnect setup
- Hands-on experience setting up HA environments
- Hands-on experience creating VM instances in Google Cloud Platform
- Hands-on experience with Cloud Storage and retention policies in storage
- Managing users in Google IAM and granting them appropriate permissions
GKE:
- Install tools: set up Kubernetes tooling
- Administer a cluster
- Configure Pods and containers: perform common configuration tasks for Pods and containers
- Monitoring, logging, and debugging
- Inject data into applications: specify configuration and other data for the Pods that run your workload
- Run applications: run and manage both stateless and stateful applications
- Run Jobs: run Jobs using parallel processing
- Access applications in a cluster
- Extend Kubernetes: understand advanced ways to adapt your Kubernetes cluster to the needs of your work environment
- Manage cluster daemons: perform common tasks for managing a DaemonSet, such as performing a rolling update
- Extend kubectl with plugins: extend kubectl by creating and installing kubectl plugins
- Manage HugePages: configure and manage huge pages as a schedulable resource in a cluster
- Schedule GPUs: configure and schedule GPUs for use as a resource by nodes in a cluster
Certification: GCP Engineer & GKE
Academic Qualification: B.Tech or equivalent, or MCA
Process / Quality Knowledge: Must have clear knowledge of ITIL-based service delivery. ITIL certification is desired.
Knowledge of quality and security processes.
Soft Skills:
- Good communication skills and the ability to work directly with global customers
- Timely and accurate communication
- Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution
- Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.
NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
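To make the DaemonSet-management and cluster-monitoring tasks listed in this posting concrete, here is a minimal, assumption-based Python sketch (official kubernetes client, kubeconfig access assumed) that reports DaemonSets whose ready pod count lags the desired count, for example mid rolling update or after a node pool change. It is illustrative only.

```python
# Minimal sketch: report DaemonSets that are not fully ready.
from kubernetes import client, config


def report_lagging_daemonsets() -> None:
    config.load_kube_config()  # assumption: kubeconfig points at the cluster
    apps = client.AppsV1Api()
    for ds in apps.list_daemon_set_for_all_namespaces().items:
        desired = ds.status.desired_number_scheduled or 0
        ready = ds.status.number_ready or 0
        if ready < desired:
            print(f"{ds.metadata.namespace}/{ds.metadata.name}: {ready}/{desired} ready")


if __name__ == "__main__":
    report_lagging_daemonsets()
```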
Posted 2 weeks ago
5.0 - 10.0 years
6 - 12 Lacs
Kolkata
Work from Office
At Gintaa, we're redefining how India orders food. With our focus on affordability, exclusive restaurant partnerships, and hyperlocal logistics, we aim to scale across India's Tier 1 and Tier 2 cities. We're backed by a mission-driven team and expanding rapidly; now's the time to join the core tech leadership and build something impactful from the ground up.
Job Summary: We are looking for an experienced and motivated DevOps Engineer with 5-7 years of hands-on experience designing, implementing, and managing cloud infrastructure, particularly on Google Cloud Platform (GCP) and Amazon Web Services (AWS). The ideal candidate will have deep expertise in infrastructure as code (IaC), CI/CD pipelines, container orchestration, and cloud-native technologies. This role requires strong analytical skills, attention to detail, and a passion for optimizing cloud infrastructure performance and cost across multi-cloud environments.
Key Responsibilities:
- Multi-Cloud Infrastructure: Design, implement, and maintain scalable, reliable, and secure cloud infrastructure using GCP services (Compute Engine, GKE, Cloud Functions, Pub/Sub, BigQuery, Cloud Storage) and AWS services (EC2, ECS/EKS, Lambda, S3, RDS, CloudFront).
- CI/CD & GitOps: Build and manage CI/CD pipelines using GitHub/GitLab Actions and artifact repositories, and enforce GitOps practices across both GCP and AWS environments.
- Containerization & Serverless: Leverage Docker, Kubernetes (GKE/EKS), and serverless architectures (Cloud Functions, AWS Lambda) to support microservices and modern application deployments.
- Infrastructure as Code: Develop and manage IaC using Terraform (or CloudFormation for AWS) to automate provisioning and drift detection across clouds.
- Observability & Monitoring: Implement observability tools like Prometheus, Grafana, Google Cloud Monitoring, and AWS CloudWatch for real-time system insights.
- Security & Compliance: Ensure best practices in cloud security, including IAM policies (GCP IAM and AWS IAM), encryption standards (KMS), network security (VPCs, security groups, firewalls), and compliance frameworks.
- Service Mesh: Integrate and manage service mesh architectures such as Istio or Linkerd for secure and observable microservices communication.
- Troubleshooting & DR: Troubleshoot and resolve infrastructure issues; ensure high availability, disaster recovery (GCP Backup plus AWS Backup/DR strategies), and performance optimization.
- Cost Management: Drive initiatives for cloud cost management; use tools like GCP Cost Management and AWS Cost Explorer to suggest optimization strategies.
- Documentation & Knowledge Transfer: Document technical architectures, processes, and procedures; ensure smooth knowledge transfer and operational readiness.
- Cross-Functional Collaboration: Collaborate with Development, QA, Security, and Architecture teams to streamline deployment workflows.
Required Skills & Qualifications:
- 5-7 years of DevOps/Cloud Engineering experience, with at least 3 years on GCP and 3 years on AWS.
- Proficiency in Terraform (plus familiarity with CloudFormation), Docker, Kubernetes (GKE/EKS), and other DevOps toolchains.
- Strong experience with CI/CD tools (GitHub/GitLab Actions) and artifact repositories.
- Deep understanding of cloud networking, VPCs, load balancing, security groups, firewalls, and VPNs in both GCP and AWS.
- Expertise in monitoring/logging frameworks such as Prometheus, Grafana, Stackdriver (Cloud Monitoring), and AWS CloudWatch/CloudTrail.
- Strong scripting skills in Python, Bash, or Go for automation tasks.
- Knowledge of data backup, high-availability systems, and disaster recovery strategies across multi-cloud.
- Familiarity with service mesh technologies and microservices-based architecture.
- Excellent analytical, troubleshooting, and documentation skills.
- Effective communication and ability to work in a fast-paced, collaborative environment.
Preferred Qualifications (Good to Have):
- Google Professional Cloud Architect certification and/or AWS Certified Solutions Architect – Professional.
- Experience with multi-cloud or hybrid cloud setups, including VPN/Direct Connect and Interconnect configurations.
- Exposure to agile software development, DevSecOps, and compliance-driven environments (e.g., BFSI, Healthcare).
- Understanding of cost modeling and cloud billing analysis tools.
Why Join Gintaa?
- Be part of a purpose-driven startup revolutionizing food and local commerce in India.
- Build impactful, large-scale mobile applications from scratch.
- Work with a visionary leadership team and a dynamic, entrepreneurial culture.
- Competitive salary and leadership visibility.
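As a small illustration of the multi-cloud inventory work this role implies, the sketch below lists storage buckets in both clouds using google-cloud-storage and boto3. It assumes both SDKs are installed and credentials are already configured (ADC for GCP, the default credential chain for AWS), and it is not drawn from Gintaa's actual tooling.

```python
# Minimal sketch: inventory GCS and S3 buckets side by side.
import boto3
from google.cloud import storage


def list_all_buckets() -> None:
    gcs = storage.Client()  # assumes Application Default Credentials
    for bucket in gcs.list_buckets():
        print("gcp", bucket.name)

    s3 = boto3.client("s3")  # assumes AWS credentials/region are configured
    for bucket in s3.list_buckets()["Buckets"]:
        print("aws", bucket["Name"])


if __name__ == "__main__":
    list_all_buckets()
```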
Posted 2 weeks ago
15.0 - 18.0 years
45 - 50 Lacs
Noida, Mumbai, Pune
Work from Office
Skill & Experience
- 15-18 years of experience in Java is a must, with hands-on experience architecting solutions using cloud-native PaaS services such as databases, messaging, storage, and compute in Google Cloud, in a pre-sales capacity
- Experience in monolith-to-microservices modernization engagements
- Should have worked on multiple engagements involving application assessment as part of re-factoring/containerization and re-architecting cloud journeys
- Should have been part of a large digital transformation project
- Experience building, architecting, designing, and implementing highly distributed global cloud-based systems
- Experience in network infrastructure, security, data, or application development
- Experience with structured Enterprise Architecture practices, hybrid cloud deployments, and on-premise-to-cloud migration deployments and roadmaps
- Architecting microservices/APIs
- Ability to deliver results and work cross-functionally
- Ability to engage/influence audiences and identify expansion engagements
- Certification as a Google Professional Cloud Architect is desirable
- Experience with Agile/Scrum environments; familiar with Agile team management tools (JIRA, Confluence)
- Understand and promote Agile values: FROCC (Focus, Respect, Openness, Commitment, Courage)
- Working with Docker, OpenShift, GKE, and Cloud Run
- Designing databases in Oracle/Cloud SQL/Cloud Spanner
- Designing software with low operational cost and cloud billing in mind
- Contributing to building best practices and defining reference architecture
Posted 2 weeks ago
1.0 - 3.0 years
3 - 6 Lacs
Chennai
Work from Office
What you'll be doing: You will be part of the Network Planning group in the GNT organization, supporting development of deployment automation pipelines and other tooling for the Verizon Cloud Platform. You will be supporting a highly reliable infrastructure running critical network functions. You will be responsible for solving issues that are new and unique, which will provide the opportunity to innovate. You will have a high level of technical expertise and daily hands-on implementation, working in a planning team designing and developing automation. This entails programming and orchestrating the deployment of feature sets into the Kubernetes CaaS platform, along with building containers via a fully automated CI/CD pipeline utilizing Ansible playbooks, Python, and CI/CD tools and processes like JIRA, GitLab, ArgoCD, or other scripting technologies.
- Leveraging monitoring tools such as Redfish, Splunk, and Grafana to monitor system health, detect issues, and proactively resolve them. Designing and configuring alerts to ensure timely responses to critical events.
- Working with the development and operations teams to design, implement, and optimize CI/CD pipelines using ArgoCD for efficient, automated deployment of applications and infrastructure.
- Implementing security best practices for cloud and containerized services and ensuring adherence to security protocols. Configuring IAM roles, VPC security, encryption, and compliance policies.
- Continuously optimizing cloud infrastructure for performance, scalability, and cost-effectiveness. Using tools and third-party solutions to analyze usage patterns and recommend cost-saving strategies.
- Working closely with the engineering and operations teams to design and implement cloud-based solutions.
- Maintaining detailed documentation of cloud architecture and platform configurations, and regularly providing status reports and performance metrics.
What we're looking for...
You'll need to have:
- Bachelor's degree or one or more years of work experience
- Experience in Kubernetes administration
- Hands-on experience with one or more of the following platforms: EKS, Red Hat OpenShift, GKE, AKS, OCI
- GitOps CI/CD workflows (ArgoCD, Flux) and very strong expertise in the following: Ansible, Terraform, Helm, Jenkins, GitLab VCS/Pipelines/Runners, Artifactory
- Strong proficiency with monitoring/observability tools such as New Relic, Prometheus/Grafana, and logging solutions (Fluentd/Elastic/Splunk), including creating/customizing metrics and/or logging dashboards
- Backend development experience with languages including Golang (preferred), Spring Boot, and Python
- Development experience with the Operator SDK, HTTP/RESTful APIs, and microservices
- Familiarity with cloud cost optimization (e.g., Kubecost)
- Strong experience with infra components like Flux, cert-manager, Karpenter, Cluster Autoscaler, VPC CNI, over-provisioning, CoreDNS, and metrics-server
- Familiarity with Wireshark, tshark, dumpcap, etc., capturing network traces and performing packet analysis
- Demonstrated expertise with the K8s ecosystem (inspecting cluster resources, determining cluster health, identifying potential application issues, etc.)
- Strong development of K8s tools/components, which may include standalone utilities/plugins, cert-manager plugins, etc.
- Development and working experience with service mesh lifecycle management, and configuring and troubleshooting applications deployed on a service mesh and service-mesh-related issues
- Expertise in RBAC and Pod Security Standards, Quotas, LimitRanges, and OPA Gatekeeper policies
- Working experience with security tools such as Sysdig, CrowdStrike, Black Duck, etc.
- Demonstrated expertise with the K8s security ecosystem (SCC, network policies, RBAC, CVE remediation, CIS benchmarks/hardening, etc.)
- Networking of microservices; solid understanding of Kubernetes networking and troubleshooting
- Certified Kubernetes Administrator (CKA)
- Demonstrated very strong troubleshooting and problem-solving skills
- Excellent verbal and written communication skills
Even better if you have one or more of the following:
- Certified Kubernetes Application Developer (CKAD)
- Red Hat Certified OpenShift Administrator
- Familiarity with creating custom EnvoyFilters for Istio service mesh and integrating with existing web application portals
- Experience with OWASP rules and mitigating security vulnerabilities using security tools like Fortify, SonarQube, etc.
- Database experience (RDBMS, NoSQL, etc.)
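As a hedged example of the cluster-health checks this posting describes, the Python sketch below (official kubernetes client, kubeconfig assumed) prints nodes whose Ready condition is not True, a typical first step before digging into logs or packet captures. It is illustrative and not part of Verizon's tooling.

```python
# Minimal sketch: flag nodes that are not in the Ready state.
from kubernetes import client, config


def report_unready_nodes() -> None:
    config.load_kube_config()  # assumption: kubeconfig points at the cluster
    core = client.CoreV1Api()
    for node in core.list_node().items:
        ready = next(
            (c for c in node.status.conditions or [] if c.type == "Ready"), None
        )
        if ready is None or ready.status != "True":
            reason = ready.reason if ready else "NoReadyCondition"
            print(f"{node.metadata.name}: NotReady ({reason})")


if __name__ == "__main__":
    report_unready_nodes()
```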
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Req ID: 326833. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP & GKE Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Title / Role: GCP & GKE Staff Engineer
Job Description:
Primary Skill: Cloud-Infrastructure-Google Cloud Platform
Minimum work experience: 8+ yrs
Total Experience: 8+ Years
Must have GCP Solution Architect Certification & GKE
Mandatory Skills:
Technical Qualification / Knowledge:
- Expertise in assessing, designing, and implementing GCP solutions, including compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc.
- Must have GCP Solution Architect Certification
- Prior experience executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning, and migration execution
- Prior experience using industry-leading or native discovery, assessment, and migration tools
- Good knowledge of cloud technology, different patterns, deployment methods, and application compatibility
- Good knowledge of GCP technologies and associated components and variations, including the Anthos application platform
- Compute Engine, Compute Engine managed instance groups, Kubernetes
- Cloud Storage, Cloud Storage for Firebase, Persistent Disk, Local SSD, Filestore, Transfer Service
- Virtual Private Cloud (VPC), Cloud DNS, Cloud Interconnect, Cloud VPN Gateway, Network Load Balancing, global load balancing, firewall rules, Cloud Armor
- Cloud IAM, Resource Manager, multi-factor authentication, Cloud KMS
- Cloud Billing, Cloud Console, Stackdriver
- Cloud SQL, Cloud Spanner, Cloud Bigtable
- Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP
- Solid understanding of and experience in cloud-computing-based services architecture, technical design, and implementation, including IaaS, PaaS, and SaaS
- Design of clients' cloud environments with a focus mainly on GCP, demonstrating technical cloud architectural knowledge
- Playing a vital role in the design of production, staging, QA, and development cloud infrastructures running in 24x7 environments
- Delivery of customer cloud strategies aligned with the customer's business objectives, with a focus on cloud migrations and DR strategies
- Nurture cloud computing expertise internally and externally to drive cloud adoption
- Deep understanding of the IaaS and PaaS services offered on cloud platforms and how to use them together to build complex solutions
- Ensure that all cloud solutions follow security and compliance controls, including data sovereignty
- Deliver cloud platform architecture documents detailing the vision for how GCP infrastructure and platform services support the overall application architecture, interacting with application, database, and testing teams to provide a holistic view to the customer
- Collaborate with application architects and DevOps to modernize infrastructure-as-a-service (IaaS) applications to platform-as-a-service (PaaS)
- Create solutions that support a DevOps approach for delivery and operations of services
- Interact with and advise business representatives of the application regarding functional and non-functional requirements
- Create proofs of concept to demonstrate the viability of solutions under consideration
- Develop enterprise-level conceptual solutions and sponsor consensus/approval for global applications
- Have a working knowledge of other architecture disciplines, including application, database, infrastructure, and enterprise architecture
- Identify and implement best practices, tools, and standards
- Provide consultative support to the DevOps team for production incidents
- Drive and support system reliability, availability, scale, and performance activities
- Evangelize cloud automation; be a thought leader and expert defining standards for building and maintaining cloud platforms
- Knowledgeable about configuration management tools such as Chef/Puppet/Ansible
- Automation skills using CLI scripting in any language (Bash, Perl, Python, Ruby, etc.)
- Ability to develop a robust design to meet customer business requirements with scalability, availability, performance, and cost-effectiveness using GCP offerings
- Ability to identify and gather requirements to define an architectural solution that can be successfully built and operated on GCP
- Ability to conclude high-level and low-level design for the GCP platform, which may also include data center design as necessary
- Ability to provide GCP operations and deployment guidance and best practices throughout the lifecycle of a project
- Understanding of the significance of the different metrics for monitoring and their threshold values, and the ability to take necessary corrective measures based on those thresholds
- Knowledge of automation to reduce the number of incidents or repetitive incidents is preferred
- Good knowledge of cloud center operations, monitoring tools, and backup solutions
GKE:
- Set up monitoring and logging to troubleshoot a cluster or debug a containerized application
- Manage Kubernetes objects: declarative and imperative paradigms for interacting with the Kubernetes API
- Manage Secrets: manage confidential settings data using Secrets
- Configure load balancing, port forwarding, or firewall or DNS configurations to access applications in a cluster
- Configure networking for your cluster
- Hands-on experience with Terraform; ability to write reusable Terraform modules
- Hands-on Python and Unix shell scripting is required
- Understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins, and Docker registries
- Experience with GCP services and writing Cloud Functions
- Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise; ability to write reusable Terraform modules
- Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus
- Experience using Docker within container orchestration platforms such as GKE
- Knowledge of setting up Splunk
- Knowledge of Spark on GKE
Certification: GCP Solution Architect & GKE
Process / Quality Knowledge:
- Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired
- Knowledge of quality and security processes
Soft Skills:
- Excellent communication skills and the ability to work directly with global customers
- Strong technical leadership skills to drive solutions
- Focused on quality/cost/time of deliverables
- Timely and accurate communication
- Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution
- Flexibility to learn and lead other technology areas, such as other public cloud technologies, private cloud, and automation
- Good reporting skills
- Willing to work in different time zones as per project requirements
- Good attitude toward working in a team and as an individual contributor, depending on the project and situation
- Focused, result-oriented, and self-motivated
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future.
NTT DATA endeavors to make its website accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status.
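To illustrate the "set up monitoring and logging to troubleshoot a cluster" item in this posting, here is a minimal Python sketch (official kubernetes client, kubeconfig assumed) that finds containers stuck in CrashLoopBackOff and tails their recent logs. It is illustrative only, not NTT DATA's actual process.

```python
# Minimal sketch: dump recent logs for containers in CrashLoopBackOff.
from kubernetes import client, config


def dump_crashloop_logs(tail_lines: int = 20) -> None:
    config.load_kube_config()  # assumption: kubeconfig points at the cluster
    core = client.CoreV1Api()
    for pod in core.list_pod_for_all_namespaces(watch=False).items:
        for status in pod.status.container_statuses or []:
            waiting = status.state.waiting
            if waiting and waiting.reason == "CrashLoopBackOff":
                print(f"--- {pod.metadata.namespace}/{pod.metadata.name} [{status.name}] ---")
                print(core.read_namespaced_pod_log(
                    name=pod.metadata.name,
                    namespace=pod.metadata.namespace,
                    container=status.name,
                    tail_lines=tail_lines,
                ))


if __name__ == "__main__":
    dump_crashloop_logs()
```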
Posted 2 weeks ago
8.0 - 10.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Req ID: 327246
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP & GKE Staff Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Title / Role: GCP & GKE Staff Engineer
NTT DATA Services strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Lead Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Description:
Primary Skill: Professional Cloud Security Engineer & Cloud Infrastructure - Google Cloud Platform
Related experience: 5+ years of experience in cloud security engineering and automation
Total Experience: 8+ years
Must have GCP Solution Architect Certification & Professional Cloud Security Engineer certification
Mandatory Skills - Technical Qualification/Knowledge: This role supports operational security, control configuration, and secure design practices for GCP workloads.
Roles & Responsibilities:
Implement GCP security controls: IAM, VPC security, VPNs, KMS, Cloud Armor, and secure networking. Manage GCP identity and access, including SSO, MFA, and federated IdP configurations. Monitor workloads using the Cloud Operations Suite and escalate anomalies. Conduct basic threat modelling, vulnerability scanning, and patching processes. Automate security audits and compliance controls using Terraform and Cloud Shell scripting. Assist architects in deploying and maintaining secure-by-default infrastructure. Support audit preparation, policy enforcement, and evidence gathering. Collaborate with cross-functional teams to resolve security alerts.
Expertise in assessing, designing and implementing GCP solutions, including aspects like compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Ansible, etc. Should have prior experience in executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning and migration execution. Should have prior experience in using industry-leading or native discovery, assessment and migration tools. Good knowledge of cloud technology, different patterns, deployment methods, and compatibility of applications. Good knowledge of GCP technologies and their associated components and variations: Anthos Application Platform; Compute Engine, Compute Engine Managed Instance Groups, Kubernetes; Cloud Storage, Cloud Storage for Firebase, Persistent Disk, Local SSD, Filestore, Transfer Service; Virtual Private Cloud (VPC), Cloud DNS, Cloud Interconnect, Cloud VPN Gateway, Network Load Balancing, global load balancing, firewall rules, Cloud Armor; Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS; Cloud Billing, Cloud Console, Stackdriver; Cloud SQL, Cloud Spanner SQL, Cloud Bigtable; Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP.
Solid understanding of and experience in cloud-computing-based services architecture, technical design and implementations, including IaaS, PaaS, and SaaS. Design of clients' Cloud environments with a focus mainly on GCP, demonstrating technical Cloud architectural knowledge.
Playing a vital role in the design of production, staging, QA and development Cloud infrastructures running in 24x7 environments. Delivery of customer Cloud strategies, aligned with customers' business objectives and with a focus on Cloud migrations and DR strategies. Nurture Cloud computing expertise internally and externally to drive Cloud adoption. Should have a deep understanding of IaaS and PaaS services offered on cloud platforms and understand how to use them together to build complex solutions. Ensure that all cloud solutions follow security and compliance controls, including data sovereignty. Deliver cloud platform architecture documents detailing the vision for how GCP infrastructure and platform services support the overall application architecture, interacting with application, database and testing teams to provide a holistic view to the customer. Collaborate with application architects and DevOps to modernize Infrastructure as a Service (IaaS) applications to Platform as a Service (PaaS). Create solutions that support a DevOps approach for delivery and operations of services. Interact with and advise business representatives of the application regarding functional and non-functional requirements. Create proofs of concept to demonstrate the viability of solutions under consideration. Develop enterprise-level conceptual solutions and sponsor consensus/approval for global applications. Have a working knowledge of other architecture disciplines, including application, database, infrastructure, and enterprise architecture. Identify and implement best practices, tools and standards. Provide consultative support to the DevOps team for production incidents. Drive and support system reliability, availability, scale, and performance activities. Evangelize cloud automation and be a thought leader and expert defining standards for building and maintaining cloud platforms. Knowledgeable about configuration management tools such as Chef, Puppet or Ansible. Automation skills using CLI scripting in any language (Bash, Perl, Python, Ruby, etc.). Ability to develop a robust design that meets customer business requirements with scalability, availability, performance and cost effectiveness using GCP offerings. Ability to identify and gather requirements to define an architectural solution that can be successfully built and operated on GCP. Ability to conclude high-level and low-level design for the GCP platform, which may also include data center design as necessary. Capability to provide GCP operations and deployment guidance and best practices throughout the lifecycle of a project. Understanding of the significance of the different metrics for monitoring and their threshold values, and the ability to take necessary corrective measures based on those thresholds. Knowledge of automation to reduce the number of incidents, or repetitive incidents, is preferred. Good knowledge of cloud center operations, monitoring tools and backup solutions.
GKE: Set up monitoring and logging to troubleshoot a cluster or debug a containerized application. Manage Kubernetes objects using declarative and imperative paradigms for interacting with the Kubernetes API. Manage confidential settings data using Secrets. Configure load balancing, port forwarding, or firewall and DNS configurations to access applications in a cluster. Configure networking for your cluster. Hands-on experience with Terraform and the ability to write reusable Terraform modules. Hands-on Python and Unix shell scripting is required.
Understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins and a Docker registry. Experience with GCP services and writing Cloud Functions. Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise, with the ability to write reusable Terraform modules. Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus. Experience using Docker within container orchestration platforms such as GKE. Knowledge of setting up Splunk. Knowledge of Spark in GKE.
Process/Quality Knowledge: Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired. Knowledge of quality and security processes.
Soft Skills: Excellent communication skills and the ability to work directly with global customers. Strong technical leadership skills to drive solutions. Focused on quality, cost and time of deliverables. Timely and accurate communication. Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution. Flexibility to learn and lead other technology areas such as other public cloud technologies, private cloud and automation. Good reporting skills. Willing to work in different time zones as per project requirements. Good attitude to work in a team and as an individual contributor depending on the project and situation. Focused, result-oriented and self-motivated.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.
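One concrete reading of the "automate security audits and compliance controls" responsibility in the posting above: the listing names Terraform and Cloud Shell scripting for this, but the same kind of check can be sketched with the google-cloud-storage Python client, shown here only as an illustration. It flags buckets whose IAM policy grants access to allUsers or allAuthenticatedUsers; Application Default Credentials are assumed and the project ID is a placeholder.

```python
# Illustrative audit sketch (not the posting's prescribed tooling): list Cloud
# Storage buckets whose IAM policy includes a public principal. Assumes
# Application Default Credentials; the project ID is a placeholder.
from google.cloud import storage

PUBLIC_MEMBERS = {"allUsers", "allAuthenticatedUsers"}


def find_public_buckets(project_id):
    """Return (bucket_name, role) pairs where a public principal holds a role."""
    client = storage.Client(project=project_id)
    findings = []
    for bucket in client.list_buckets():
        policy = bucket.get_iam_policy(requested_policy_version=3)
        for binding in policy.bindings:
            if PUBLIC_MEMBERS & set(binding.get("members", [])):
                findings.append((bucket.name, binding["role"]))
    return findings


if __name__ == "__main__":
    for name, role in find_public_buckets("my-gcp-project"):  # placeholder project
        print(f"PUBLIC ACCESS: bucket={name} role={role}")
```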
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Req ID: 326830
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Security Architect to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Title / Role: GCP & GKE Staff Engineer
NTT DATA Services strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Digital Engineering Lead Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN).
Job Description:
Primary Skill: Professional Cloud Security Engineer & Cloud Infrastructure - Google Cloud Platform
Related experience: 5+ years of experience in cloud security engineering and automation
Total Experience: 8+ years
Must have GCP Solution Architect Certification & Professional Cloud Security Engineer certification
Mandatory Skills - Technical Qualification/Knowledge: This role supports operational security, control configuration, and secure design practices for GCP workloads.
Roles & Responsibilities:
Implement GCP security controls: IAM, VPC security, VPNs, KMS, Cloud Armor, and secure networking. Manage GCP identity and access, including SSO, MFA, and federated IdP configurations. Monitor workloads using the Cloud Operations Suite and escalate anomalies. Conduct basic threat modelling, vulnerability scanning, and patching processes. Automate security audits and compliance controls using Terraform and Cloud Shell scripting. Assist architects in deploying and maintaining secure-by-default infrastructure. Support audit preparation, policy enforcement, and evidence gathering. Collaborate with cross-functional teams to resolve security alerts.
Expertise in assessing, designing and implementing GCP solutions, including aspects like compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Ansible, etc. Should have prior experience in executing large, complex cloud transformation programs, including discovery, assessment, business case creation, design, build, migration planning and migration execution. Should have prior experience in using industry-leading or native discovery, assessment and migration tools. Good knowledge of cloud technology, different patterns, deployment methods, and compatibility of applications. Good knowledge of GCP technologies and their associated components and variations: Anthos Application Platform; Compute Engine, Compute Engine Managed Instance Groups, Kubernetes; Cloud Storage, Cloud Storage for Firebase, Persistent Disk, Local SSD, Filestore, Transfer Service; Virtual Private Cloud (VPC), Cloud DNS, Cloud Interconnect, Cloud VPN Gateway, Network Load Balancing, global load balancing, firewall rules, Cloud Armor; Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS; Cloud Billing, Cloud Console, Stackdriver; Cloud SQL, Cloud Spanner SQL, Cloud Bigtable; Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP.
Solid understanding of and experience in cloud-computing-based services architecture, technical design and implementations, including IaaS, PaaS, and SaaS. Design of clients' Cloud environments with a focus mainly on GCP, demonstrating technical Cloud architectural knowledge.
Playing a vital role in the design of production, staging, QA and development Cloud infrastructures running in 24x7 environments. Delivery of customer Cloud strategies, aligned with customers' business objectives and with a focus on Cloud migrations and DR strategies. Nurture Cloud computing expertise internally and externally to drive Cloud adoption. Should have a deep understanding of IaaS and PaaS services offered on cloud platforms and understand how to use them together to build complex solutions. Ensure that all cloud solutions follow security and compliance controls, including data sovereignty. Deliver cloud platform architecture documents detailing the vision for how GCP infrastructure and platform services support the overall application architecture, interacting with application, database and testing teams to provide a holistic view to the customer. Collaborate with application architects and DevOps to modernize Infrastructure as a Service (IaaS) applications to Platform as a Service (PaaS). Create solutions that support a DevOps approach for delivery and operations of services. Interact with and advise business representatives of the application regarding functional and non-functional requirements. Create proofs of concept to demonstrate the viability of solutions under consideration. Develop enterprise-level conceptual solutions and sponsor consensus/approval for global applications. Have a working knowledge of other architecture disciplines, including application, database, infrastructure, and enterprise architecture. Identify and implement best practices, tools and standards. Provide consultative support to the DevOps team for production incidents. Drive and support system reliability, availability, scale, and performance activities. Evangelize cloud automation and be a thought leader and expert defining standards for building and maintaining cloud platforms. Knowledgeable about configuration management tools such as Chef, Puppet or Ansible. Automation skills using CLI scripting in any language (Bash, Perl, Python, Ruby, etc.). Ability to develop a robust design that meets customer business requirements with scalability, availability, performance and cost effectiveness using GCP offerings. Ability to identify and gather requirements to define an architectural solution that can be successfully built and operated on GCP. Ability to conclude high-level and low-level design for the GCP platform, which may also include data center design as necessary. Capability to provide GCP operations and deployment guidance and best practices throughout the lifecycle of a project. Understanding of the significance of the different metrics for monitoring and their threshold values, and the ability to take necessary corrective measures based on those thresholds. Knowledge of automation to reduce the number of incidents, or repetitive incidents, is preferred. Good knowledge of cloud center operations, monitoring tools and backup solutions.
GKE: Set up monitoring and logging to troubleshoot a cluster or debug a containerized application. Manage Kubernetes objects using declarative and imperative paradigms for interacting with the Kubernetes API. Manage confidential settings data using Secrets. Configure load balancing, port forwarding, or firewall and DNS configurations to access applications in a cluster. Configure networking for your cluster. Hands-on experience with Terraform and the ability to write reusable Terraform modules. Hands-on Python and Unix shell scripting is required.
Understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins and a Docker registry. Experience with GCP services and writing Cloud Functions. Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise, with the ability to write reusable Terraform modules. Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus. Experience using Docker within container orchestration platforms such as GKE. Knowledge of setting up Splunk. Knowledge of Spark in GKE.
Process/Quality Knowledge: Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired. Knowledge of quality and security processes.
Soft Skills: Excellent communication skills and the ability to work directly with global customers. Strong technical leadership skills to drive solutions. Focused on quality, cost and time of deliverables. Timely and accurate communication. Demonstrates ownership of technical issues and engages the right stakeholders for timely resolution. Flexibility to learn and lead other technology areas such as other public cloud technologies, private cloud and automation. Good reporting skills. Willing to work in different time zones as per project requirements. Good attitude to work in a team and as an individual contributor depending on the project and situation. Focused, result-oriented and self-motivated.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at NTT DATA endeavors to make accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click . If you'd like more information on your EEO rights under the law, please click . For Pay Transparency information, please click.
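For the "set up monitoring and logging to troubleshoot a cluster" and Cloud Operations Suite items in the posting above, the hedged sketch below pulls recent ERROR-level entries for one GKE container through the Cloud Logging API. The google-cloud-logging client is an assumption, as are the project, cluster and container names.

```python
# Illustrative troubleshooting sketch: fetch recent ERROR-severity Cloud Logging
# entries for a single GKE container. Project, cluster and container names are
# placeholders; Application Default Credentials are assumed.
from google.cloud import logging as cloud_logging


def recent_gke_errors(project_id, cluster, container, limit=20):
    client = cloud_logging.Client(project=project_id)
    log_filter = (
        'resource.type="k8s_container" '
        f'resource.labels.cluster_name="{cluster}" '
        f'resource.labels.container_name="{container}" '
        "severity>=ERROR"
    )
    entries = client.list_entries(filter_=log_filter, order_by=cloud_logging.DESCENDING)
    for i, entry in enumerate(entries):
        if i >= limit:
            break
        print(entry.timestamp, entry.severity, entry.payload)


if __name__ == "__main__":
    recent_gke_errors("my-gcp-project", "prod-cluster", "api")
```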
Posted 2 weeks ago
8.0 - 10.0 years
27 - 32 Lacs
Chennai
Work from Office
Job Summary
This position provides input and support for, and performs, full systems life cycle management activities (e.g., analyses, technical requirements, design, coding, testing, implementation of systems and applications software, etc.). He/She participates in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements. This position provides input to applications development project plans and integrations. He/She collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives. This position provides knowledge and support for applications development, integration, and maintenance. He/She provides input to department and project teams on decisions supporting projects.
Responsibilities: Performs systems analysis and design. Designs and develops moderate to highly complex applications. Develops application documentation. Produces integration builds. Performs maintenance and support. Supports emerging technologies and products.
Qualifications: Bachelor's Degree or international equivalent; Bachelor's Degree or international equivalent in Computer Science, Information Systems, Mathematics, Statistics, or a related field preferred.
Primary Skills: Java
Skill Requirements - IDE: Eclipse. Languages and Frameworks: Java, Spring Boot, Web Services, GKE. Logging: Log4j. Version Control: Azure DevOps. DevOps: Azure. Others: WinSCP. Interfaces: SQL Server, DB2, SMTP, Spanner.
Additional Information for Internal Candidates: This role will be in-office 3 days a week in Chennai, India. The last day to apply is February 25th, 2024.
Posted 3 weeks ago
1.0 - 3.0 years
3 - 7 Lacs
Mumbai
Work from Office
About the role: We are looking for Cloud Engineers with experience in managing, planning, architecting, monitoring, and automating large-scale deployments to the public cloud. You will be part of a team of talented engineers solving some of the most complex and exciting challenges in IT automation and hybrid cloud deployments.
Key responsibilities: Consistently strive to acquire new skills in Cloud, DevOps, Big Data, AI and ML technologies. Design, deploy and maintain Cloud infrastructure for clients, domestic and international. Develop tools and automation to make platform operations more efficient, reliable and reproducible. Create container orchestration (Kubernetes, Docker), strive for fully automated solutions, and ensure the uptime and security of all cloud platform systems and infrastructure. Stay up to date on relevant technologies, plug into user groups, and ensure our clients are using the best techniques and tools. Provide business, application, and technology consulting in feasibility discussions with technology team members, customers and business partners. Take the initiative to lead, drive and solve during challenging scenarios.
Preferred qualifications: 1-3 years of experience in Cloud Infrastructure and Operations domains. Experience with Linux systems and/or Windows servers. Specialization in one or two cloud deployment platforms: AWS, GCP, Azure. Hands-on experience with AWS and GCP services (EKS, ECS, EC2, VPC, RDS, Lambda, GKE, Compute Engine). Experience with one or more programming languages (Python, JavaScript, Ruby, Java, .NET). Good understanding of Apache Web Server, Nginx, MySQL, MongoDB, Nagios. Logging and monitoring tools (ELK, Stackdriver, CloudWatch). DevOps technologies: knowledge of configuration management tools such as Ansible, Terraform, Puppet, Chef. Experience working with deployment and orchestration technologies (such as Docker, Kubernetes, Mesos). Deep experience in customer-facing roles with a proven track record of effective verbal and written communication. Dependable and a good team player. Desire to learn and work with new technologies. Automation in your blood.
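To ground the AWS and monitoring items in the qualifications above, here is a small, hedged sketch in Python (one of the languages the posting lists) using boto3: it lists running EC2 instances and prints each one's average CloudWatch CPU utilisation over the last hour. Credentials and region are assumed to be configured already; nothing here is specific to any particular environment.

```python
# Illustrative sketch: running EC2 instances plus their average CPU utilisation
# from CloudWatch over the last hour. Assumes boto3 with credentials and a
# default region already configured.
from datetime import datetime, timedelta, timezone

import boto3


def running_instance_cpu(period_minutes=60):
    ec2 = boto3.client("ec2")
    cloudwatch = boto3.client("cloudwatch")
    end = datetime.now(timezone.utc)
    start = end - timedelta(minutes=period_minutes)

    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            instance_id = instance["InstanceId"]
            stats = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
                StartTime=start,
                EndTime=end,
                Period=300,
                Statistics=["Average"],
            )
            points = stats["Datapoints"]
            avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
            print(f"{instance_id}: avg CPU {avg:.1f}% over last {period_minutes} min")


if __name__ == "__main__":
    running_instance_cpu()
```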
Posted 3 weeks ago
3.0 - 7.0 years
15 - 20 Lacs
Pune
Work from Office
About the job
Sarvaha would like to welcome a Kafka Platform Engineer (or a seasoned backend engineer aspiring to move into platform architecture) with a minimum of 4 years of solid experience in building, deploying, and managing Kafka infrastructure on Kubernetes platforms. Sarvaha is a niche software development company that works with some of the best-funded startups and established companies across the globe. Please visit our website at
What You'll Do
- Deploy and manage scalable Kafka clusters on Kubernetes using Strimzi, Helm, Terraform, and StatefulSets
- Tune Kafka for performance, reliability, and cost-efficiency
- Implement Kafka security: TLS, SASL, ACLs, Kubernetes Secrets, and RBAC
- Automate deployments across AWS, GCP, or Azure
- Set up monitoring and alerting with Prometheus, Grafana, JMX Exporter
- Integrate Kafka ecosystem components: Connect, Streams, Schema Registry
- Define autoscaling, resource limits, and network policies for Kubernetes workloads
- Maintain CI/CD pipelines (ArgoCD, Jenkins) and container workflows
You Bring
- BE/BTech/MTech (CS/IT or MCA), with an emphasis in Software Engineering
- Strong foundation in the Apache Kafka ecosystem and internals (brokers, ZooKeeper/KRaft, partitions, storage)
- Proficiency in Kafka setup, tuning, scaling, and topic/partition management
- Skill in managing Kafka on Kubernetes using Strimzi, Helm, Terraform
- Experience with CI/CD, containerization, and GitOps workflows
- Monitoring expertise using Prometheus, Grafana, JMX
- Experience on EKS, GKE, or AKS preferred
- Strong troubleshooting and incident-response mindset
- High sense of ownership and automation-first thinking
- Excellent collaboration with SREs, developers, and platform teams
- Clear communicator, documentation-driven, and eager to mentor and share knowledge
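As a hedged illustration of the Kafka security items above (TLS plus SASL against a SASL_SSL listener), the sketch below produces a single probe message and waits for its delivery report. The confluent-kafka Python library, broker address, credentials, certificate path and topic are all assumptions for the example, not details from the posting; in a real cluster the credentials would come from a Kubernetes Secret.

```python
# Illustrative client-side check of a SASL_SSL Kafka listener: produce one
# probe message and wait for the delivery report. Broker, credentials, CA path
# and topic are placeholders.
from confluent_kafka import Producer

conf = {
    "bootstrap.servers": "kafka-bootstrap.example.internal:9093",  # placeholder
    "security.protocol": "SASL_SSL",
    "sasl.mechanism": "SCRAM-SHA-512",
    "sasl.username": "platform-probe",          # placeholder credentials;
    "sasl.password": "read-from-a-k8s-secret",  # never hard-code these in real use
    "ssl.ca.location": "/etc/kafka/certs/ca.crt",
}


def delivery_report(err, msg):
    """Called once per message to confirm delivery or surface broker errors."""
    if err is not None:
        print(f"delivery failed: {err}")
    else:
        print(f"delivered to {msg.topic()} [{msg.partition()}] @ offset {msg.offset()}")


producer = Producer(conf)
producer.produce("platform.healthcheck", value=b"ping", callback=delivery_report)
producer.flush(10)  # block up to 10 s for outstanding delivery reports
```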
Posted 3 weeks ago
2 - 4 years
5 - 8 Lacs
Pune
Work from Office
We are seeking talented and motivated AI Engineers to join our dynamic team and contribute to the development of next-generation AI/GenAI-based products and solutions. This role will provide you with the opportunity to work on cutting-edge SaaS technologies and impactful projects that are used by enterprises and users worldwide. As a Senior Software Engineer, you will be involved in the design, development, testing, deployment, and maintenance of software solutions. You will work in a collaborative environment, contributing to the technical foundation behind our flagship products and services.
Responsibilities:
Software Development: Write clean, maintainable, and efficient code for various software applications and systems.
GenAI Product Development: Participate in the entire AI development lifecycle, including data collection, preprocessing, model training, evaluation, and deployment. Assist in researching and experimenting with state-of-the-art generative AI techniques to improve model performance and capabilities.
Design and Architecture: Participate in design reviews with peers and stakeholders.
Code Review: Review code developed by other developers, providing feedback that adheres to industry-standard best practices such as coding guidelines.
Testing: Build testable software, define tests, participate in the testing process, and automate tests using tools (e.g., JUnit, Selenium) and design patterns, leveraging the test automation pyramid as the guide.
Debugging and Troubleshooting: Triage defects or customer-reported issues, and debug and resolve them in a timely and efficient manner.
Service Health and Quality: Contribute to the health and quality of services and incidents, promptly identifying and escalating issues. Collaborate with the team in utilizing service health indicators and telemetry for action. Assist in conducting root cause analysis and implementing measures to prevent future recurrences.
DevOps Model: Understanding of working in a DevOps model. Begin to take ownership of working with product management on requirements to design, develop, test, deploy and maintain the software in production.
Documentation: Properly document new features, enhancements or fixes to the product, and also contribute to training materials.
Basic Qualifications: Bachelor's degree in Computer Science, Engineering, or a related technical field, or equivalent practical experience. 2+ years of professional software development experience. Proficiency as a developer using Python, FastAPI, PyTest, Celery and other Python frameworks. Experience with software development practices and design patterns. Familiarity with version control systems like Git/GitHub and bug/work tracking systems like JIRA. Basic understanding of cloud technologies and DevOps principles. Strong analytical and problem-solving skills, with a proven track record of building and shipping successful software products and services.
Preferred Qualifications: Experience with object-oriented programming, concurrency, design patterns, and REST APIs. Experience with CI/CD tooling such as Terraform and GitHub Actions. High-level familiarity with AI/ML, GenAI, and MLOps concepts. Familiarity with frameworks like LangChain and LangGraph. Experience with SQL and NoSQL databases such as MongoDB, MSSQL, or Postgres. Experience with testing tools such as PyTest, PyMock, xUnit, mocking frameworks, etc. Experience with GCP technologies such as Vertex AI, BigQuery, GKE, GCS, Dataflow, and Kubeflow. Experience with Docker and Kubernetes. Experience with Java and Scala a plus.
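Since the posting names Python, FastAPI and PyTest explicitly, here is a minimal sketch of that stack: a health endpoint plus an in-process test. The module layout, route and names are illustrative only, not taken from the posting.

```python
# Minimal FastAPI app with a health endpoint, plus a PyTest-style test that
# exercises it in-process via TestClient. Names and routes are illustrative.
from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()


@app.get("/health")
def health():
    """Liveness probe used by the deployment platform."""
    return {"status": "ok"}


# --- test (normally lives in tests/test_health.py and runs under `pytest`) ---
client = TestClient(app)


def test_health_returns_ok():
    response = client.get("/health")
    assert response.status_code == 200
    assert response.json() == {"status": "ok"}
```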
Posted 2 months ago
8 - 13 years
25 - 40 Lacs
Bengaluru
Remote
Senior GCP Cloud Administrator Experience: 8 - 12 Years Exp Salary : Competitive Preferred Notice Period : Within 30 Days Shift : 10:00AM to 7:00PM IST Opportunity Type: Remote Placement Type: Permanent (*Note: This is a requirement for one of Uplers' Clients) Must have skills required : GCP, Identity and Access Management (IAM), BigQuery, SRE, GKE, GCP certification Good to have skills : Terraform, Cloud Composer, Dataproc, Dataflow, AWS Forbes Advisor (One of Uplers' Clients) is Looking for: Senior GCP Cloud Administrator who is passionate about their work, eager to learn and grow, and who is committed to delivering exceptional results. If you are a team player, with a positive attitude and a desire to make a difference, then we want to hear from you. Role Overview Description Senior GCP Cloud Administrator Forbes Advisor is a new initiative for consumers under the Forbes Marketplace umbrella that provides journalist- and expert-written insights, news and reviews on all things personal finance, health, business, and everyday life decisions. We do this by providing consumers with the knowledge and research they need to make informed decisions they can feel confident in, so they can get back to doing the things they care about most. We are looking for an experienced GCP Administrator to join our team. The ideal candidate will have strong hands-on experience with IAM Administration, multi-account management, Big Query administration, performance optimization, monitoring and cost management within Google Cloud Platform (GCP). Responsibilities: Manages and configures roles/permissions in GCP IAM by following the principle of least privileged access Manages Big Query service by way of optimizing slot assignments and SQL Queries, adopting FinOps practices for cost control, troubleshooting and resolution of critical data queries, etc. Collaborate with teams like Data Engineering, Data Warehousing, Cloud Platform Engineering, SRE, etc. for efficient Data management and operational practices in GCP Create automations and monitoring mechanisms for GCP Data-related services, processes and tasks Work with development teams to design the GCP-specific cloud architecture Provisioning and de-provisioning GCP accounts and resources for internal projects. Managing, and operating multiple GCP subscriptions Keep technical documentation up to date Proactively being up to date on GCP announcements, services and developments. Requirements: Must have 5+ years of work experience on provisioning, operating, and maintaining systems in GCP Must have a valid certification of either GCP Associate Cloud Engineer or GCP Professional Cloud Architect. Must have hands-on experience on GCP services such as Identity and Access Management (IAM), BigQuery, Google Kubernetes Engine (GKE), etc. Must be capable to provide support and guidance on GCP operations and services depending upon enterprise needs Must have a working knowledge of docker containers and Kubernetes. Must have strong communication skills and the ability to work both independently and in a collaborative environment. Fast learner, Achiever, sets high personal goals Must be able to work on multiple projects and consistently meet project deadlines Must be willing to work on shift-basis based on project requirements. Good to Have: Experience in Terraform Automation over GCP Infrastructure provisioning Experience in Cloud Composer, Dataproc, Dataflow Storage and Monitoring services Experience in building and supporting any form of data pipeline. 
Multi-Cloud experience with AWS. New-Relic monitoring. Perks: Day off on the 3rd Friday of every month (one long weekend each month) Monthly Wellness Reimbursement Program to promote health well-being Paid paternity and maternity leaves How to apply for this opportunity: Easy 3-Step Process: 1. Click On Apply! And Register or log in on our portal 2. Upload updated Resume & Complete the Screening Form 3. Increase your chances to get shortlisted & meet the client for the Interview! About Our Client: Forbes Advisor is a global platform dedicated to helping consumers make the best financial choices for their individual lives. We support your pursuit of success by making smart financial decisions simple, to help you get back to doing the things you care about most. About Uplers: Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
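One way to picture the BigQuery FinOps responsibility described above (optimizing slot assignments and SQL queries for cost control) is a pre-flight dry run that estimates how much data a query would scan. The sketch below uses the google-cloud-bigquery client under Application Default Credentials; the table, project ID and the on-demand price per TiB are placeholders to adjust for your region and contract.

```python
# Illustrative FinOps check: dry-run a query to estimate scanned bytes and a
# rough on-demand cost before allowing it to run. Project, table and the
# per-TiB rate are placeholders.
from google.cloud import bigquery

ON_DEMAND_USD_PER_TIB = 6.25  # adjust to your region/contract


def estimate_query_cost(sql, project_id):
    client = bigquery.Client(project=project_id)
    job_config = bigquery.QueryJobConfig(dry_run=True, use_query_cache=False)
    job = client.query(sql, job_config=job_config)  # dry run: nothing is billed
    tib_scanned = job.total_bytes_processed / 1024**4
    return tib_scanned * ON_DEMAND_USD_PER_TIB


if __name__ == "__main__":
    sql = "SELECT page, COUNT(*) AS hits FROM `my-project.analytics.events` GROUP BY page"
    print(f"Estimated on-demand cost: ${estimate_query_cost(sql, 'my-project'):.4f}")
```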
Posted 2 months ago
4 - 6 years
12 - 20 Lacs
Chennai
Hybrid
About the Role
We are hiring a DevOps / Production Support Engineer to take full ownership of the production infrastructure. We're looking for a technically sharp and strategically minded engineer who can quickly understand the existing functions.
Responsibilities: Design and manage production infrastructure across Vercel, AWS, and Kinde. Build and maintain CI/CD pipelines to enable automated, zero-downtime production deployments. Review and replicate UAT setups to create a robust and resilient production environment. Implement best practices for infrastructure security, secrets management, and access control. Set up monitoring, alerting, and logging to ensure platform reliability and performance. Manage and back up MySQL/PostgreSQL databases with clear recovery procedures. Own incident response processes, including triage, root cause analysis, and post-incident automation.
Must-Have Skills & Experience: CI/CD Pipelines: Experience with GitHub Actions, GitLab CI/CD, or equivalent. AWS: Hands-on with services like EC2, Lambda, ECS, RDS (MySQL/PostgreSQL), IAM, and networking. Vercel: Strong understanding of deploying modern frontends (e.g., React/Next.js) on Vercel. Frontend CI/CD Lifecycle: Experience building and automating the deployment of frontend apps. Authentication: Knowledge of Kinde, Auth0, or similar OAuth-based identity providers. Database Ops: Experience managing and backing up MySQL or PostgreSQL in production environments. Infrastructure-as-Code (IaC): Proven experience using Terraform, AWS CDK, or CloudFormation. Secrets Management: Familiarity with AWS Secrets Manager, SSM Parameter Store, or HashiCorp Vault. Automation: Proficiency in Bash, Python, or Node.js for scripting and automation. Monitoring/Observability: Comfortable setting up tools like CloudWatch, Datadog, or Prometheus/Grafana.
Who You Are: Honest: You own up to mistakes, communicate transparently, and value integrity above all. Humble: You work with others without ego, respect different viewpoints, and always stay curious. Hungry: You're self-driven, eager to learn new systems, and motivated to deliver the right solutions, not just quick fixes. Collaborative: You can work across vendors, product teams, and internal stakeholders without friction. Detail-Oriented: You don't leave loose ends; you make sure things are done properly the first time. Reliable: You follow through on commitments and take pride in your craftsmanship.
Preferred Qualifications: A DevSecOps approach to security, automation, and compliance. Prior experience in fast-paced or startup-like environments.
What You'll Love: Full ownership of a modern production infrastructure. Opportunity to shape scalable and secure DevOps practices from the ground up. Work on a high-impact product in the sports tech space. Collaborative and innovative culture focused on delivery and quality.
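For the secrets-management requirement above (AWS Secrets Manager or SSM Parameter Store), here is a hedged sketch of the usual pattern in Python, one of the scripting languages the posting lists: fetch a database credential at startup instead of baking it into the environment. It assumes boto3 with credentials configured; the secret name and its JSON shape are placeholders.

```python
# Illustrative sketch: read a JSON database credential from AWS Secrets Manager.
# The secret ID and key names are placeholders.
import json

import boto3


def get_db_credentials(secret_id="prod/app/db"):
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_id)
    # Secrets Manager returns a string or binary payload; JSON is assumed here.
    return json.loads(response["SecretString"])


if __name__ == "__main__":
    creds = get_db_credentials()
    # Never log the password itself; host and username are enough for a smoke check.
    print(f"connecting as {creds['username']} to {creds['host']}")
```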
Posted 2 months ago
14 - 20 years
40 - 50 Lacs
Bengaluru
Work from Office
Role Description Join the AI team at Salesforce and make a real impact with your software designs and code! This position requires technical skills, outstanding analytical and influencing skills, and extraordinary business insight. It is a multi-functional role that requires building alignment and communication with several engineering organisations. We work in a highly collaborative environment, and you will partner with a highly cross functional team comprised of Data Scientists, Software Engineers, Machine learning engineers, UX experts, and product managers to build upon Agentforce, our innovative new AI framework. We value execution, clear communication, feedback and making learning fun. Your impact - You will: Architect, design, implement, test and deliver highly scalable AI solutions: Agents, AI Copilots/assistants, Chatbots, AI Planners, RAG solutions. Be accountable for defining and driving software architecture and enterprise capabilities (scalability, fault tolerance, extensibility, maintainability, etc.) Independently design sophisticated software systems for high-end solutions, while working in a consultative fashion with other senior engineers and architects in AI Cloud and across the company Determine overall architectural principles, frameworks, and standards to craft vision and roadmaps Analyze and provide feedback on product strategy and technical feasibility Drive long-term design strategies that span multiple sophisticated projects, deliver technical reports and performance presentations to customers and at industry events Actively communicate with, encourage and motivate all levels of staff. Be a domain expert for multiple products, while writing code and working closely with other developers, PM, and UX to ensure features are delivered to meet business and quality requirements Troubleshoot complex production issues and work with support and customers as needed Drives long-term design strategies that span multiple sophisticated projects, deliver technical reports and performance presentations to customers and at industry events Required Skills: 14+ years of experience in building highly scalable Software-as-a-Service applications/ platform Experience building technical architectures that address complex performance issues Thrive in dynamic environments, working on cutting edge projects that often come with ambiguity. Innovation/startup mindset to be able to adapt Deep knowledge of object oriented programming and experience with at least one object oriented programming language, preferably Java Proven ability to mentor team members to support their understanding and growth of software engineering architecture concepts and aid in their technical development High proficiency in at least one high-level programming language and web framework (NodeJS, Express, Hapi, etc.) Proven understanding of web technologies, such as JavaScript, CSS, HTML5, XML, JavaScript, JSON, and/or Ajax Data model design, database technologies (RDBMS & NoSQL), and languages such as SQL and PL/SQL Experience delivering or partnering with teams that ship AI products at high scale. 
Experience in automated testing including unit and functional testing using Java, JUnit, JSUnit, Selenium Demonstrated ability to drive long-term design strategies that span multiple complex projects Experience delivering technical reports and presentations to customers and at industry events Demonstrated track record of cultivating strong working relationships and driving collaboration across multiple technical and business teams to resolve critical issues Experience with the full software lifecycle in highly agile and ambiguous environments Excellent interpersonal and communication skills. Preferred Skills: Solid experience in API development, API lifecycle management and/or client SDKs development Experience with machine learning or cloud technology platforms like AWS sagemaker, terraform, spinnaker, EKS, GKE Experience with AI/ML and Data science, including Predictive and Generative AI Experience with data engineering, data pipelines or distributed systems Experience with continuous integration (CI) and continuous deployment (CD), and service ownership Familiarity with Salesforce APIs and technologies Ability to support/resolve production customer escalations with excellent debugging and problem solving skills
Posted 2 months ago