
33 Cloud Run Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 13.0 years

0 Lacs

pune, maharashtra

On-site

You are a highly skilled and experienced Cloud Architect/Engineer with deep expertise in Google Cloud Platform (GCP). Your primary responsibility is to design, build, and manage scalable and reliable cloud infrastructure on GCP. You will leverage various GCP services such as Compute Engine, Cloud Run, BigQuery, Pub/Sub, Cloud Functions, Dataflow, Dataproc, IAM, and Cloud Storage to ensure high-performance cloud solutions. Your role also includes developing and maintaining CI/CD pipelines, automating infrastructure deployment using Infrastructure as Code (IaC) principles, and implementing best practices in cloud security, monitoring, performance tuning, and logging. Collaboration with cross-functional teams to deliver cloud solutions aligned with business objectives is essential.

You should have 5+ years of hands-on experience in cloud architecture and engineering, with at least 3 years of practical experience on Google Cloud Platform (GCP). In-depth expertise in the GCP services mentioned above is required. Strong understanding of networking, security, containerization (Docker, Kubernetes), and CI/CD pipelines is essential. Experience with monitoring, performance tuning, and logging in cloud environments is preferred. Familiarity with DevSecOps practices and tools such as HashiCorp Vault is a plus.

Your role as a GCP Cloud Architect/Engineer will contribute to ensuring system reliability, backup, and disaster recovery strategies. This hybrid role is based out of Pune and requires a total of 10 to 13 years of relevant experience.
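The reliability work this posting describes (Pub/Sub messaging, monitoring, fault tolerance) often comes down to small patterns like retrying transient failures with exponential backoff. A minimal sketch in plain Python, assuming a caller-supplied publish callable rather than any specific Pub/Sub client version:

```python
import random
import time

def publish_with_backoff(publish, message, max_attempts=5, base_delay=0.1,
                         sleep=time.sleep):
    """Retry a publish call with exponential backoff and jitter.

    `publish` is any callable that raises on transient failure; in a real
    GCP setup it would wrap the Pub/Sub client's publish() call
    (hypothetical wiring, not tied to a specific library version).
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return publish(message)
        except Exception:
            if attempt == max_attempts:
                raise  # budget exhausted: surface the failure
            # Backoff doubles each attempt (0.1s, 0.2s, 0.4s, ...) plus jitter.
            sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, base_delay))
```

In production the retryable exception types, backoff ceiling, and jitter would be tuned to the service's error semantics; the official Pub/Sub client libraries also ship their own configurable retry settings.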

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

About GlobalLogic
GlobalLogic, a leader in digital product engineering with over 30,000 employees, helps brands worldwide in designing and developing innovative products, platforms, and digital experiences. By integrating experience design, complex engineering, and data expertise, GlobalLogic assists clients in envisioning possibilities and accelerating their transition into the digital businesses of tomorrow. Operating design studios and engineering centers globally, GlobalLogic extends its deep expertise to customers in various industries such as communications, financial services, automotive, healthcare, technology, media, manufacturing, and semiconductor. GlobalLogic is a Hitachi Group Company.

Requirements

Leadership & Strategy
As a part of GlobalLogic, you will lead and mentor a team of cloud engineers, providing technical guidance and support for career development. You will define cloud architecture standards and best practices across the organization and collaborate with senior leadership to develop a cloud strategy and roadmap aligned with business objectives. Your responsibilities will include driving technical decision-making for complex cloud infrastructure projects and establishing and maintaining cloud governance frameworks and operational procedures.

Leadership Experience
With a minimum of 3 years in technical leadership roles managing engineering teams, you should have a proven track record of successfully delivering large-scale cloud transformation projects. Experience in budget management, resource planning, and strong presentation and communication skills for executive-level reporting are essential for this role.

Certifications (Preferred)
Preferred certifications include Google Cloud Professional Cloud Architect, Google Cloud Professional Data Engineer, and additional relevant cloud or security certifications.

Technical Excellence
You should have over 10 years of experience in designing and implementing enterprise-scale cloud solutions using GCP services. As a technical expert, you will architect and oversee the development of sophisticated cloud solutions using Python and advanced GCP services. Your role will involve leading the design and deployment of solutions utilizing Cloud Functions, Docker containers, Dataflow, and other GCP services. Additionally, you will design complex integrations with multiple data sources and systems, implement security best practices, and troubleshoot and resolve technical issues while establishing preventive measures.

Job Responsibilities

Technical Skills
Your expertise should include expert-level proficiency in Python and experience in additional languages such as Java, Go, or Scala. Deep knowledge of GCP services like Dataflow, Compute Engine, BigQuery, Cloud Functions, and others is required. Advanced knowledge of Docker, Kubernetes, and container orchestration patterns, along with experience in cloud security, infrastructure as code, and CI/CD practices, will be crucial for this role.

Cross-functional Collaboration
Collaborating with C-level executives, senior architects, and product leadership to translate business requirements into technical solutions, leading cross-functional project teams, presenting technical recommendations to executive leadership, and establishing relationships with GCP technical account managers are key aspects of this role.

What We Offer
At GlobalLogic, we prioritize a culture of caring, continuous learning and development, interesting and meaningful work, balance and flexibility, and a high-trust organization. Join us to experience an inclusive culture, opportunities for growth and advancement, impactful projects, work-life balance, and a safe, reliable, and ethical global company.

About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner known for creating innovative digital products and experiences since 2000. Collaborating with forward-thinking companies globally, GlobalLogic continues to transform businesses and redefine industries through intelligent products, platforms, and services.

Posted 1 week ago

Apply

3.0 - 10.0 years

0 Lacs

pune, maharashtra

On-site

As a Senior Google Cloud Architect in Pune (Hybrid) with over 10 years of experience, including 3+ years specifically on GCP, you will play a crucial role in leading the design and delivery of comprehensive cloud solutions on Google Cloud Platform. Your responsibilities will involve collaborating with data engineering, DevOps, and architecture teams to create scalable, secure, and cost-effective cloud platforms.

Your key responsibilities will include designing scalable data and application architectures utilizing tools such as BigQuery, Dataflow, Composer, Cloud Run, Pub/Sub, and other related GCP services. You will lead cloud migration, modernization, and CI/CD automation through the use of technologies like Terraform, Jenkins, GitHub, and Cloud Build. Additionally, you will be responsible for implementing real-time and batch data pipelines, chatbot applications using LLMs (Gemini, Claude), and automating reconciliation and monitoring processes. Your role will also involve collaborating closely with stakeholders to ensure technical solutions align with business objectives.

The ideal candidate should have a minimum of 3 years of experience working with GCP and possess strong proficiency in key tools such as BigQuery, Dataflow, Cloud Run, Airflow, GKE, and Cloud Functions. Hands-on experience with Terraform, Kubernetes, Jenkins, GitHub, and cloud-native CI/CD is essential. In addition, you should have a solid understanding of DevSecOps practices, networking, and data architecture concepts like Data Lake, Lakehouse, and Mesh. Proficiency in Python, SQL, and ETL frameworks such as Ab Initio is also required.

Preferred qualifications include GCP certifications (Cloud Architect, DevOps, ML Engineer), experience with Azure or hybrid environments, and domain expertise in sectors like Banking, Telecom, or Retail.
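"Automating reconciliation" in postings like this usually means comparing a source system against a warehouse load and flagging disagreements. A toy sketch of that check; the record shape and field names are illustrative only, not from the posting:

```python
def reconcile(source, target, key="id", field="amount"):
    """Compare two record sets and report keys that are missing from the
    target, present only in the target, or whose values disagree.

    Simplified stand-in for a data-pipeline reconciliation job; real jobs
    would read both sides from BigQuery/source extracts and alert on drift.
    """
    src = {r[key]: r for r in source}
    tgt = {r[key]: r for r in target}
    missing = sorted(set(src) - set(tgt))       # loaded nowhere
    extra = sorted(set(tgt) - set(src))         # unexpected in target
    mismatched = sorted(
        k for k in set(src) & set(tgt) if src[k][field] != tgt[k][field]
    )
    return {"missing": missing, "extra": extra, "mismatched": mismatched}
```

A monitoring wrapper would typically run this on a schedule (e.g. via Composer/Airflow, which the posting lists) and page when any bucket is non-empty.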

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

delhi

On-site

Apply Digital is a global experience transformation partner, driving AI-powered change and measurable impact across complex, multi-brand ecosystems. With expertise spanning the customer experience lifecycle from strategy, design to engineering and beyond, Apply Digital enables clients to modernize organizations and maximize value for business and customers. The team of 750+ members has successfully transformed global companies such as Kraft Heinz, NFL, Moderna, Lululemon, Dropbox, Atlassian, A+E Networks, and The Very Group. Apply Digital, founded in 2016 in Vancouver, Canada, has expanded to ten cities across North America, South America, the UK, Europe, and India. At Apply Digital, the One Team approach is embraced, operating within a pod structure where senior leadership, subject matter experts, and cross-functional skill sets collaborate within a common tech and delivery framework. This structure is supported by scrum and sprint cadences, facilitating frequent releases and retrospectives to ensure progress towards desired outcomes. Apply Digital strives to create a safe, empowered, respectful, and fun community for its global workforce, embodying SHAPE values - smart, humble, active, positive, and excellent. Apply Digital is seeking a Technology Director to provide leadership and technical guidance to engineering teams. The role involves acting as a key technical stakeholder, ensuring quality delivery, fostering a strong engineering culture, defining architectural direction, supporting project execution, and enabling teams to deliver innovative solutions at scale. The Technology Director will oversee multiple development squads, collaborate with product, business, and account leadership, drive technical excellence and execution, ensure alignment with business goals, and contribute to technical strategy while fostering a collaborative and solution-oriented environment. 
The preferred candidate for the role should have 7+ years of software engineering experience, including 3+ years of leadership experience managing and mentoring engineering delivery teams. Strong expertise in modern web, cloud, and platform technologies, agile development environments, business-oriented technical strategies, cross-functional collaboration, and stakeholder management is required. Experience with Composable principles and related architectural approaches is beneficial. Excellent communication skills and proficiency in English are essential, as the role involves working with remote teams across North America and Latin America.

Apply Digital offers a hybrid-friendly work environment with remote options available, comprehensive benefits including private healthcare coverage, contributions to Provident fund, gratuity bonus, flexible vacation policy, engaging projects with international brands, and a commitment to building an inclusive and safe workplace. The organization values equal opportunity and nurtures an inclusive environment where individual differences are celebrated and valued. Learning opportunities are abundant with generous training budgets, partner tech certifications, custom learning plans, workshops, mentorship, and peer support. Apply Digital invites passionate technical leaders who thrive in collaborative, fast-paced environments to join the team and drive innovation and best practices.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

As a Senior Data Scientist in the Global Data Science & Advanced Analytics team at Colgate-Palmolive, your role will involve leading projects within the Analytics Continuum. You will be responsible for conceptualizing and developing machine learning, predictive modeling, simulations, and optimization solutions to address business questions with clear dollar objectives. Your work will have a significant impact on revenue growth management, price elasticity, promotion analytics, and marketing mix modeling.

Your responsibilities will include:
- Conceptualizing and building predictive modeling solutions to address business use cases
- Applying machine learning and AI algorithms to develop scalable solutions for business deployment
- Developing end-to-end business solutions from data extraction to statistical modeling
- Conducting model validations and continuous improvement of algorithms
- Deploying models using Airflow and Docker on Google Cloud Platforms
- Leading pricing, promotion, and marketing mix initiatives from scoping to delivery
- Studying large datasets to discover trends and patterns
- Presenting insights in a clear and interpretable manner to business teams
- Developing visualizations using frameworks like Looker, PyDash, Flask, PlotLy, and streamlit
- Collaborating closely with business partners across different geographies

To qualify for this position, you should have:
- A degree in Computer Science, Information Technology, Business Analytics, Data Science, Economics, or Statistics
- 5+ years of experience in building statistical models and deriving insights
- Proficiency in Python and SQL for coding and statistical modeling
- Hands-on experience with statistical models such as linear regression, random forest, SVM, logistic regression, clustering, and Bayesian regression
- Knowledge of GitHub, Airflow, and visualization frameworks
- Understanding of Google Cloud and related services like Kubernetes and Cloud Build

Preferred qualifications include experience with revenue growth management, pricing, marketing mix models, and third-party data. Knowledge of machine learning techniques and Google Cloud products will be advantageous for this role.

Colgate-Palmolive is committed to fostering an inclusive environment where diversity is valued, and every individual is treated with respect. As an Equal Opportunity Employer, we encourage applications from candidates with diverse backgrounds and perspectives. If you require accommodation during the application process due to a disability, please complete the request form provided. Join us in building a brighter, healthier future for all.
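Price elasticity, one of the focus areas above, is commonly estimated as the slope of a log-log regression of quantity on price. A self-contained sketch with a hand-rolled OLS slope (synthetic setup; real marketing-mix models add controls for promotion, seasonality, and media):

```python
import math

def price_elasticity(prices, quantities):
    """Estimate own-price elasticity as the slope b of a log-log OLS fit:
    log(quantity) = a + b * log(price).

    A minimal building block of pricing analytics; in practice this would
    be fit with statsmodels/scikit-learn on panel data, not four points.
    """
    x = [math.log(p) for p in prices]
    y = [math.log(q) for q in quantities]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    # OLS slope = cov(x, y) / var(x)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var
```

With demand generated exactly as q = 100 * p^(-1.5), the estimator recovers an elasticity of -1.5, i.e. a 1% price increase loses about 1.5% of volume.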

Posted 2 weeks ago

Apply

1.0 - 5.0 years

0 Lacs

karnataka

On-site

Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses. In this role, you should have developed/worked on at least one Gen AI project and have experience in data pipeline implementation with cloud providers such as AWS, Azure, or GCP. You should also be familiar with cloud storage, cloud database, cloud data warehousing, and Data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Additionally, a good understanding of cloud compute services, load balancing, identity management, authentication, and authorization in the cloud is essential. Your profile should include a good knowledge of infrastructure capacity sizing, costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. You should be able to contribute to making architectural choices using various cloud services and solution methodologies. Proficiency in programming using Python is required along with expertise in cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud. Understanding networking, security, design principles, and best practices in the cloud is also important. At Capgemini, we value flexible work arrangements to provide support for maintaining a healthy work-life balance. You will have opportunities for career growth through various career growth programs and diverse professions tailored to support you in exploring a world of opportunities. Additionally, you can equip yourself with valuable certifications in the latest technologies such as Generative AI. Capgemini is a global business and technology transformation partner with a rich heritage of over 55 years. 
We have a diverse team of 340,000 members in more than 50 countries, working together to accelerate the dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. Trusted by clients to unlock the value of technology, we deliver end-to-end services and solutions leveraging strengths from strategy and design to engineering, fueled by market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and a strong partner ecosystem. Our global revenues in 2023 were reported at €22.5 billion.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

noida, uttar pradesh

On-site

As a Cloud Engineering Team Leader at GlobalLogic, you will be responsible for providing technical guidance and career development support to a team of cloud engineers. You will define cloud architecture standards and best practices across the organization, collaborating with senior leadership to develop a cloud strategy aligned with business objectives. Your role will involve driving technical decision-making for complex cloud infrastructure projects, establishing and maintaining cloud governance frameworks, and operational procedures. With a background in technical leadership roles managing engineering teams, you will have a proven track record of successfully delivering large-scale cloud transformation projects. Experience in budget management, resource planning, and strong presentation and communication skills for executive-level reporting are essential. Preferred certifications include Google Cloud Professional Cloud Architect, Google Cloud Professional Data Engineer, and additional relevant cloud or security certifications. You will leverage your 10+ years of experience in designing and implementing enterprise-scale Cloud Solutions using GCP services to architect sophisticated cloud solutions using Python and advanced GCP services. Leading the design and deployment of solutions utilizing Cloud Functions, Docker containers, Dataflow, and other GCP services will be part of your responsibilities. Ensuring optimal performance and scalability of complex integrations with multiple data sources and systems, implementing security best practices and compliance frameworks, and troubleshooting and resolving technical issues will be key aspects of your role. 
Your technical skills will include expert-level proficiency in Python with experience in additional languages, deep expertise with GCP services such as Dataflow, Compute Engine, BigQuery, Cloud Functions, and others, advanced knowledge of Docker, Kubernetes, and container orchestration patterns, extensive experience in cloud security, proficiency in Infrastructure as Code tools like Terraform and Cloud Deployment Manager, and CI/CD experience with advanced deployment pipelines and GitOps practices. As part of the GlobalLogic team, you will benefit from a culture of caring, continuous learning and development opportunities, interesting and meaningful work, balance and flexibility in work arrangements, and being part of a high-trust organization. You will have the chance to work on impactful projects, engage with collaborative teammates and supportive leaders, and contribute to shaping cutting-edge solutions in the digital engineering domain.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

maharashtra

On-site

As a Senior Specialist in Global Data Science at Colgate-Palmolive, you will play a crucial role in the GLOBAL DATA SCIENCE & ADVANCED ANALYTICS vertical. This department focuses on working on business cases with significant financial impacts for the company, providing solutions to business questions, recommended actions, and scalability options across markets. Your position as a Data Scientist will involve leading projects within the Analytics Continuum, where you will be responsible for conceptualizing and developing machine learning, predictive modeling, simulations, and optimization solutions aimed at achieving clear financial objectives for Colgate-Palmolive. Your responsibilities will include building predictive modeling solutions, applying ML and AI algorithms to analytics, developing end-to-end business solutions from data extraction to building business presentations, conducting model validations, and continuous improvement of algorithms. You will also deploy models using Airflow and Docker on Google Cloud Platforms, own Pricing and Promotion, Marketing Mix projects, and present insights to business teams in an easily interpretable manner. To qualify for this role, you should have a BE/BTECH in Computer Science or Information Technology, an MBA or PGDM in Business Analytics or Data Science, additional certifications in Data Science, or an MSC/MSTAT in Economics or Statistics. You should have at least 5 years of experience in building statistical models, hands-on experience with coding languages such as Python and SQL, and knowledge of visualization frameworks like PyDash, Flask, and PlotLy. Understanding of Cloud Frameworks like Google Cloud and Snowflake is essential. Preferred qualifications include experience in managing statistical models for Revenue Growth Management or Marketing Mix models, familiarity with third-party data sources, knowledge of machine learning techniques, and experience with Google Cloud products. 
At Colgate-Palmolive, we are committed to fostering an inclusive environment where individuals with diverse backgrounds and perspectives can thrive. Our goal is to develop talent that best serves our consumers globally and ensure that everyone feels a sense of belonging within our organization. We are an Equal Opportunity Employer dedicated to empowering all individuals to contribute meaningfully to our business.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

NTT DATA is a global company striving to hire exceptional, innovative, and passionate individuals to grow with the organization. If you wish to be part of an inclusive, adaptable, and forward-thinking team, then this opportunity is for you. Currently, we are looking for a GCP BigQuery Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN). As a Senior Application Developer in GCP, you should have mandatory skills in ETL, Google Cloud Platform BigQuery, SQL, and Linux. Additionally, experience with Cloud Run and Cloud Functions would be desirable. We are seeking a Senior ETL Development professional with strong hands-on experience in Linux and SQL. Hands-on experience with GCP BigQuery is preferred; at a minimum, a solid conceptual understanding is expected.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We cater to 75% of the Fortune Global 100 and are dedicated to assisting clients in innovating, optimizing, and transforming for long-term success. Being a Global Top Employer, we have a diverse team of experts in over 50 countries and a robust partner ecosystem. Our services range from business and technology consulting to data and artificial intelligence solutions, industry-specific services, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is at the forefront of digital and AI infrastructure globally and is part of the NTT Group, which invests over $3.6 billion annually in R&D to facilitate a confident and sustainable transition into the digital future. Visit us at us.nttdata.com.
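To make the SQL/ETL expectation concrete, here is the kind of aggregation such a role involves daily, run against Python's built-in sqlite3 as a local stand-in (BigQuery has its own SQL dialect and is queried through the google-cloud-bigquery client, so treat this as shape guidance, not BigQuery syntax):

```python
import sqlite3

# Tiny ETL-style rollup: load raw rows, then aggregate per region.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (region TEXT, amount REAL);
    INSERT INTO orders VALUES
        ('south', 120.0), ('south', 80.0), ('north', 50.0);
""")
rows = conn.execute("""
    SELECT region, COUNT(*) AS n, SUM(amount) AS total
    FROM orders
    GROUP BY region
    ORDER BY total DESC
""").fetchall()
# rows holds (region, row_count, total) tuples, largest total first
```

The same GROUP BY shape translates directly to BigQuery, where partitioning and clustering choices then dominate cost and performance.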

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

indore, madhya pradesh

On-site

As a GCP Cloud Engineer at Ascentt, you will play a crucial role in designing, deploying, and managing cloud infrastructure on Google Cloud Platform to provide scalable solutions for our development teams. Your expertise will contribute to turning enterprise data into real-time decisions using advanced machine learning and GenAI, with a focus on solving hard engineering problems with real-world industry impact. Your key responsibilities will include designing and managing GCP infrastructure such as Compute Engine, GKE, Cloud Run, and networking components. You will be expected to implement CI/CD pipelines and infrastructure as code, preferably using Terraform, and configure monitoring, logging, and security using Cloud Operations Suite. Automation of deployments and maintenance of disaster recovery procedures will be essential aspects of your role, along with collaborating closely with development teams on architecture and troubleshooting. To excel in this role, you should possess at least 5 years of GCP experience with core services like Compute, Storage, Cloud SQL, and BigQuery. Strong knowledge of Kubernetes, Docker, and networking is essential, along with proficiency in Terraform and scripting languages such as Python and Bash. Experience with CI/CD tools, cloud migrations, and GitHub is required, and holding GCP Associate/Professional certification would be advantageous. A Bachelor's degree or equivalent experience is also necessary to succeed in this position. Additionally, preferred skills for this role include experience with multi-cloud environments like AWS/Azure, familiarity with configuration management tools such as Ansible and Puppet, database administration knowledge, and expertise in cost optimization strategies. 
If you are a passionate builder looking to shape the future of industrial intelligence through cutting-edge data analytics and AI/ML solutions, Ascentt welcomes your application to join our team and make a significant impact in the automotive and manufacturing industries.
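Disaster-recovery maintenance of the kind the posting mentions usually includes snapshot retention. A hedged sketch of one simple policy (the parameters and policy shape are hypothetical, not from the posting): drop backups older than a cutoff, but never delete the newest few.

```python
from datetime import date, timedelta

def prune_backups(backup_dates, retention_days, min_keep=3, today=None):
    """Return the backup dates that a simple retention policy would delete:
    anything older than `retention_days`, except that the `min_keep` most
    recent snapshots are always kept as a safety floor.
    """
    today = today or date.today()
    ordered = sorted(backup_dates, reverse=True)   # newest first
    keep = set(ordered[:min_keep])                 # never touch the newest few
    cutoff = today - timedelta(days=retention_days)
    return sorted(d for d in ordered if d < cutoff and d not in keep)
```

In a real GCP setup this selection would drive deletions of Cloud SQL backups or disk snapshots via their respective APIs, with the deletions logged and alerting on failures.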

Posted 2 weeks ago

Apply

8.0 - 11.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Job Description: Job Profile for Python Developer - Microservices (GCP Cloud Run)

Key Responsibilities:
- Design, develop, and maintain Python-based microservices that are scalable, efficient, and secure.
- Deploy and manage containerized applications on GCP Cloud Run.
- Collaborate with cross-functional teams to define, design, and ship new features.
- Optimize applications for maximum speed and scalability.
- Implement CI/CD pipelines for automated testing and deployment.
- Monitor and troubleshoot production applications to ensure high availability and performance.
- Write clean, maintainable code and ensure adherence to coding standards.
- Stay updated with industry trends and emerging technologies related to microservices and cloud computing.

Requirements:
- Proven experience as a Python developer, specifically in developing microservices.
- Strong understanding of containerization and orchestration (Docker, Kubernetes).
- Experience with Google Cloud Platform, specifically Cloud Run, Cloud Functions, and other related services.
- Familiarity with RESTful APIs and microservices architecture.
- Knowledge of database technologies (SQL and NoSQL) and data modeling.
- Proficiency in version control systems (Git).
- Experience with CI/CD tools and practices.
- Strong problem-solving skills and the ability to work independently and collaboratively.
- Excellent communication skills, both verbal and written.

Preferred Qualifications:
- Experience with Google Cloud Platform is a plus.
- Familiarity with Python frameworks (Flask, FastAPI, Django).
- Understanding of DevOps practices and tools (Terraform, Jenkins).
- Knowledge of monitoring and logging tools (Prometheus, Grafana, Stackdriver).
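Cloud Run's container contract is simple: serve HTTP on the port given by the PORT environment variable (default 8080). A dependency-free sketch of a minimal service honoring that contract; a production service would typically use one of the frameworks the posting prefers (Flask, FastAPI) behind gunicorn or uvicorn instead of the stdlib server:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Minimal JSON health/hello response for any GET path.
        body = b'{"status": "ok"}'
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep container logs quiet; Cloud Run captures stdout anyway

def make_server():
    # Cloud Run injects the port to listen on via the PORT env var.
    port = int(os.environ.get("PORT", "8080"))
    return HTTPServer(("0.0.0.0", port), Handler)

if __name__ == "__main__":
    make_server().serve_forever()
```

Containerized, this is a one-line Dockerfile CMD; Cloud Run then handles TLS, autoscaling (including to zero), and request-based billing.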

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

As a Full-Stack AI App Developer at EMO Energy, you will play a key role in reimagining urban mobility, energy, and fleet operations through our AI-driven super app. You will have the opportunity to take full ownership of building and deploying a cutting-edge energy infrastructure startup in India. Your responsibilities will include architecting and developing a full-stack AI-enabled application, designing modular frontend views using React.js or React Native, creating intelligent agent interfaces, building secure backend APIs for managing energy and fleet operations, integrating real-time data workflows, implementing fleet tracking dashboards, and optimizing performance across various platforms. Collaboration with the founding team, ops team, and hardware teams will be essential to iterate fast and solve real-world logistics problems.

The ideal candidate for this role should have a strong command of front-end frameworks such as React.js, experience with back-end technologies like FastAPI, Node.js, or Django, proficiency in TypeScript or Python, familiarity with GCP services, Docker, GitHub Actions, and experience with mobile integrations and AI APIs. End-to-end ownership of previous applications, strong UI/UX product sensibility, and experience in building dashboards or internal tools will be valuable assets. Additionally, the ability to adapt to ambiguity, communicate technical decisions to non-engineers, and a passion for clean code and impactful work are crucial for success in this role.

If you are a highly motivated individual with a passion for AI-driven applications and a desire to lead the development of a cutting-edge fleet/energy platform, then this role at EMO Energy is the perfect opportunity for you. Join us in revolutionizing the future of urban mobility and energy infrastructure in India.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

You will be joining NTT DATA as a GCP BigQuery Developer in Hyderabad, Telangana, India. As a Sr. Application Developer for GCP, your primary responsibilities will include utilizing your ETL experience, expertise in Google Cloud Platform BigQuery, SQL, and Linux. In addition to the mandatory skills, having experience in Cloud Run and Cloud Functions would be beneficial for this role. We are looking for a Senior ETL Developer with a strong hands-on background in Linux and SQL. While experience in GCP BigQuery is preferred, a solid conceptual understanding is required at a minimum.

NTT DATA is a trusted global innovator providing business and technology services to 75% of the Fortune Global 100 companies. As a Global Top Employer, we have experts in over 50 countries and a robust partner ecosystem. Our services cover consulting, data and artificial intelligence, industry solutions, as well as application, infrastructure, and connectivity development, implementation, and management. Join us to be a part of our commitment to helping clients innovate, optimize, and transform for long-term success. Visit us at us.nttdata.com to learn more about our contributions to digital and AI infrastructure worldwide.

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

karnataka

On-site

You should have 6 months to 3 years of IT experience. You must have knowledge of BigQuery, SQL, or similar tools. It is essential to be aware of ETL and data warehouse concepts. Your oral and written communication skills should be good. Being a great team player who can work efficiently with minimal supervision is crucial. You should also have good knowledge of Java or Python to conduct data cleansing.

Preferred qualifications include good communication and problem-solving skills. Experience with Spring Boot would be an added advantage. Experience as an Apache Beam developer with Google Cloud Bigtable and Google BigQuery is desirable. Experience in Google Cloud Platform (GCP) is preferred. Skills in writing batch and stream processing jobs using the Apache Beam framework (Dataflow) are a plus. Knowledge of microservices, Pub/Sub, Cloud Run, and Cloud Functions would be beneficial.
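Beam's batch and stream jobs, mentioned above, both revolve around windowing. A pure-Python illustration of fixed (tumbling) windows, the simplest Beam windowing strategy; an actual pipeline would express this declaratively with beam.WindowInto(window.FixedWindows(60)) rather than by hand:

```python
def assign_fixed_windows(events, window_secs):
    """Group (timestamp, value) events into fixed, non-overlapping windows.

    Each event lands in the window whose start is the largest multiple of
    `window_secs` not exceeding its timestamp, mirroring what FixedWindows
    does inside a Dataflow job (minus watermarks, triggers, and late data).
    """
    windows = {}
    for ts, value in events:
        start = ts - (ts % window_secs)   # window this event falls into
        windows.setdefault(start, []).append(value)
    return dict(sorted(windows.items()))
```

The hard parts Beam adds on top of this — watermarks, triggers, allowed lateness — are exactly what distinguishes stream from batch processing in practice.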

Posted 2 weeks ago

Apply

3.0 - 5.0 years

13 - 14 Lacs

Hyderabad

Work from Office

Job Summary
We are looking for a DevOps Engineer (GCP) with 3 to 5 years of experience for a full-time 1-year contract role. The ideal candidate will have strong hands-on experience with Google Cloud Platform, CI/CD pipelines, Docker/Kubernetes, and Infrastructure-as-Code. You'll work closely with backend and QA teams to build scalable, reliable cloud infrastructure and streamline deployments in a growth-oriented, tech-driven environment.

Role & Responsibilities
- Build, configure, and manage cloud infrastructure on Google Cloud Platform (GCP).
- Implement and maintain CI/CD pipelines for smooth and automated deployments.
- Set up and manage monitoring and logging solutions to ensure system visibility.
- Ensure high levels of application scalability, reliability, and performance.
- Collaborate with backend developers and QA teams to streamline the deployment process.
- Follow Infrastructure-as-Code practices using tools like Terraform or GCP Deployment Manager.
- Manage containerized applications using Docker and orchestrate with Kubernetes (GKE).

Mandatory Skills:
- Strong hands-on experience with Google Cloud Platform (GCP) services such as Compute, IAM, VPC, and Cloud Run/Cloud Functions.
- Experience with Infrastructure-as-Code (IaC) tools like Terraform or GCP Deployment Manager.
- Proficiency in setting up and managing CI/CD pipelines using tools like GitHub Actions, Jenkins, or similar.
- Experience with Docker for containerization and Kubernetes (GKE) for orchestration.

Application Process
Submit your resume with the subject line "DevOps Engineer Application - [Your Name]" to recruitmentdesk@compileinfy.com.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

27 - 35 Lacs

Bengaluru

Work from Office

Job Overview We are hiring a seasoned Site Reliability Engineer with strong experience in building and operating scalable systems on Google Cloud Platform (GCP). You will be responsible for ensuring system availability, performance, and security in a complex microservices ecosystem, while collaborating cross-functionally to improve infrastructure reliability and developer velocity. Key Responsibilities - Design and maintain highly available, fault-tolerant systems on GCP using SRE best practices. - Implement SLIs/SLOs, monitor error budgets, and lead post-incident reviews with RCA documentation. - Automate infrastructure provisioning (Terraform/Deployment Manager) and CI/CD workflows. - Operate and optimize Kubernetes (GKE) clusters including autoscaling, resource tuning, and HPA policies. - Integrate observability across microservices using Prometheus, Grafana, Stackdriver, and OpenTelemetry. - Manage and fine-tune databases (MySQL/Postgres/BigQuery/Firestore) for performance and cost. - Improve API reliability and performance through Apigee (proxy tuning, quota/policy handling, caching). - Drive container best practices including image optimization, vulnerability scanning, and registry hygiene. - Participate in on-call rotations, capacity planning, and infrastructure cost reviews. Must-Have Skills - Minimum 8 years of total experience, with at least 3 years in SRE, DevOps, or Platform Engineering roles. - Strong expertise in GCP services (GKE, IAM, Cloud Run, Cloud Functions, Pub/Sub, VPC, Monitoring). - Advanced Kubernetes knowledge: pod orchestration, secrets management, liveness/readiness probes. - Experience in writing automation tools/scripts in Python, Bash, or Go. - Solid understanding of incident response frameworks and runbook development. - CI/CD expertise with GitHub Actions, Cloud Build, or similar tools. Good to Have - Apigee hands-on experience: API proxy lifecycle, policies, debugging, and analytics. 
- Database optimization: index tuning, slow query analysis, horizontal/vertical sharding. - Distributed monitoring and tracing: familiarity with Jaeger, Zipkin, or GCP Trace. - Service Mesh (Istio/Linkerd) and secure workload identity configurations. - Exposure to BCP/DR planning, infrastructure threat modeling, and compliance (ISO/SOC2). Educational & Certification Requirements - B.Tech / M.Tech / MCA in Computer Science or equivalent. - GCP Professional Cl
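The SLO and error-budget bookkeeping this SRE role references reduces to simple arithmetic. A sketch for illustration — the 99.9% target and 30-day window are invented numbers, not from the posting:

```python
# Error-budget math for an availability SLO: the budget is the total
# time a service may be unavailable over the window while still
# meeting its SLO.

def error_budget_minutes(slo, window_days):
    """Minutes of allowed unavailability for an availability SLO."""
    total_minutes = window_days * 24 * 60
    return (1.0 - slo) * total_minutes

def budget_remaining(slo, window_days, downtime_minutes):
    """Fraction of the error budget still unspent (negative = blown)."""
    budget = error_budget_minutes(slo, window_days)
    return (budget - downtime_minutes) / budget

print(error_budget_minutes(0.999, 30))    # ~43.2 minutes per 30 days
print(budget_remaining(0.999, 30, 10.8))  # ~0.75, i.e. 75% budget left
```

Teams typically alert on the burn *rate* of this budget rather than raw downtime, so a fast-burning incident pages sooner than a slow drip.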

Posted 3 weeks ago

Apply

8.0 - 13.0 years

22 - 27 Lacs

Chennai

Work from Office

Role & responsibilities As a Software Engineer on our team, you will be instrumental in developing and maintaining key features for our applications. You'll be involved in all stages of the software development lifecycle, from design and implementation to testing and deployment. P

Posted 3 weeks ago

Apply

18.0 - 22.0 years

25 - 30 Lacs

Pune

Work from Office

Treasury Technology is responsible for the design, build and operation of Deutsche Bank's Treasury trading, balance-sheet management and liquidity reporting ecosystem. In partnership with the Treasury business, we look to deliver innovative technology solutions that enable the business to gain competitive edge and operational efficiency. This is a global role leading the Engineering function for the Treasury Engineering product portfolio. The aim is to develop a best-in-class portfolio consisting of the following products: Liquidity Measurement and Management, Issuance and Securitization, Risk in Banking Book, and Funds Transfer Pricing. Treasury is about managing the money and financial risks in a business. This involves making sure the business has the capital it needs to manage its day-to-day obligations, while helping develop its long-term financial strategy and policies. Economic factors such as interest rate rises, changes in regulations and volatile foreign exchange rates can have a serious impact on any business. Treasury is responsible for monitoring and assessing market conditions and putting strategies in place to mitigate any potential financial risks to the business. As a senior leader in Software Engineering, you will lead a highly inspired and inquisitive team of technologists to develop applications to the highest standards. You will be expected to solve complex business and technical challenges while managing a large group of senior business stakeholders. You will build an effective and trusted global engineering capability that can deliver consistently against the business ambitions. You are expected to take ownership of the quality of the platform, dev automation, agile processes and production resiliency. Position Specific Responsibilities and Accountabilities: Lead the Global Engineering function across our strategic locations in Pune, Bucharest, London and New York. Communicate with senior business stakeholders regarding the vision and business goals.
Provide transparency into program status, and manage risks and issues. Lead a culture of innovation and experimentation; support a full software development lifecycle that incorporates the best of technology approaches and delivery methodologies. Ensure on-time product releases of high quality, enabling the core vision of next-generation trade processing systems compliant with regulatory requirements. Lead development of the next generation of cloud-enabled platforms, including modern web frameworks and complex transaction processing systems leveraging a broad set of technology stacks. Experience in building fault-tolerant, low-latency, scalable solutions that perform at a global enterprise scale. Implement the talent strategy for engineering aligned to the broader Treasury Technology talent strategy & operating model. Develop applications using industry best practices, with DevOps and automated deployment and testing frameworks. Skills Matrix: Education Qualifications: Degree from an accredited college or university (or equivalent certification and/or relevant work experience). Business Analysis and SME Experience: 18+ years of experience in the following areas: Well-developed requirements analysis skills, including good communication abilities (both speaking and listening) and stakeholder management (all levels up to Managing Director). Experience working with Front Office business teams is highly desirable. Experience in IT delivery or architecture, including experience as an Application Developer and people manager. Strong object-oriented design skills. Previous experience hiring, motivating and managing internal and vendor teams.
Technical Experience Mandatory Skills: Java, ideally with Spark and Scala. Oracle, PostgreSQL and other database technologies. Experience developing microservices-based architectures. UI design and implementation. Business process management tools (e.g. JBPM, IBM BPM). Experience with a range of BI technologies including Tableau. Experience with DevOps best practices (DORA) and CI/CD. Experience in application security, scalability, performance tuning and optimization (NFRs). Experience in API design, sound knowledge of microservices, containerization (Docker), and exposure to federated and NoSQL databases. Experience in database query tuning and optimization. Experience implementing DevOps best practices, including CI/CD and API testing automation. Experience working in an Agile-based team, ideally Scrum. Desirable skills: Experience with cloud services platforms, in particular Google Cloud, and internal cloud-based development (Cloud Run, Cloud Composer, Cloud SQL, Docker, K8s). Industry Domain Experience: Hands-on knowledge of enterprise technology platforms supporting Front Office, Finance and/or Risk domains would be a significant advantage, as would experience or interest in Sustainable Finance. For example: Knowledge of the Finance/controlling domain and end-to-end workflow for banking & trading businesses. High-level understanding of financial products across Investment, Corporate and Private/Retail banking, in particular Loans. Knowledge of the investment banking, sales & trading, asset management and similar industries is a strong advantage. Clear Thought & Leadership: A mindset built on simplicity. A clear understanding of the concept of re-use in software development, and the drive to apply it relentlessly. Proficiency in talking in functional and data terms to clients, embedded architects and senior managers. Technical leadership skills. Ability to work in a fast-paced environment with competing and alternating priorities, with a constant focus on delivery.
Proven ability to balance business demands and IT fulfillment in terms of standardisation, reducing risk and increasing IT flexibility. Logical & structured approach to problem-solving in both near-term (tactical) and mid-to-long-term (strategic) horizons. Communication: Good verbal as well as written communication and presentation capabilities. Good team player, facilitator, negotiator and networker. Able to lead senior managers towards common goals and build consensus across a diverse group. Able to lead and influence a diverse team from a range of technical and non-technical backgrounds.

Posted 1 month ago

Apply

4.0 - 8.0 years

3 - 12 Lacs

Hyderabad, Telangana, India

On-site

4 to 8 years of experience as a DevOps Engineer. We will align with the Client's existing agile methodology, terminology and backlog management tool. DevOps tools have been chosen (Jenkins, GitHub, etc.) and agreed with developers; some specific tools, e.g. scanning, may need to be selected and acquired. Focus is on native GCP services (GKE; no 3rd-party or OSS platforms). GCP will be provisioned as code using Terraform. The initial dev sandbox for October will have limited platform capabilities (enough to enable development of an MVP). Security testing & scanning tools will be decided during discovery. Databases will be re-platformed to Cloud SQL. The environment will be specified for up to 8 initial microservices (release plan finalized before the end of Discovery). The design will include multi-zone, not multi-region. Essential functions: The primary objective of this project is to unify the e-commerce experience into a single, best-of-breed platform that not only caters to current needs but also sets the stage for seamless migration of other e-commerce experiences in the future. Qualifications: GCP, Kubernetes, Terraform, Cloud Run, Ansible, GKE (would be a plus).

Posted 1 month ago

Apply

3.0 - 5.0 years

14 - 20 Lacs

Bengaluru

Work from Office

Strong in Python with libraries such as polars, pandas, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers
• Must have: Deep understanding of modern recommendation systems including two-tower, multi-tower, and cross-encoder architectures
• Must have: Hands-on experience with deep learning for recommender systems using TensorFlow, Keras, or PyTorch
• Must have: Experience generating and using text and image embeddings (e.g., CLIP, ViT, BERT, Sentence Transformers) for content-based recommendations
• Must have: Experience with semantic similarity search and vector retrieval for matching user-item representations
• Must have: Proficiency in building embedding-based retrieval models, ANN search, and re-ranking strategies
• Must have: Strong understanding of user modeling, item representations, and temporal/contextual personalization
• Must have: Experience with Vertex AI for training, tuning, deployment, and pipeline orchestration
• Must have: Experience designing and deploying machine learning pipelines on Kubernetes (e.g., using Kubeflow Pipelines, Kubeflow on GKE, or custom Kubernetes orchestration)
• Should have experience with Vertex AI Matching Engine, or deploying Qdrant, FAISS, or ScaNN on GCP for large-scale retrieval
• Should have experience working with Dataproc (Spark/PySpark) for feature extraction, large-scale data prep, and batch scoring
• Should have a strong grasp of cold-start problem solving using metadata and multi-modal embeddings
• Good to have: Familiarity with multi-modal retrieval models combining text, image, and tabular features
• Good to have: Experience building ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate re-ranking
• Must have: Knowledge of recommender metrics (Recall@K, nDCG, HitRate, MAP) and offline evaluation frameworks
• Must have: Experience running A/B tests and interpreting results for model impact
• Should be familiar with real-time inference using Vertex AI, Cloud Run, or TF Serving
• Should understand feature store concepts, embedding versioning, and serving pipelines
• Good to have: Experience with streaming ingestion (Pub/Sub, Dataflow) for updating models or embeddings in near real-time
• Good to have: Exposure to LLM-powered ranking or personalization, or hybrid recommender setups
• Must follow MLOps practices: version control, CI/CD, monitoring, and infrastructure automation
GCP Tools Experience: ML & AI: Vertex AI, Vertex Pipelines, Vertex AI Matching Engine, Kubeflow on GKE, AI Platform. Embedding & Retrieval: Matching Engine, FAISS, ScaNN, Qdrant, GKE-hosted vector DBs (Milvus). Storage: BigQuery, Cloud Storage, Firestore. Processing: Dataproc (PySpark), Dataflow (batch & stream). Ingestion: Pub/Sub, Cloud Functions, Cloud Run. Serving: Vertex AI Online Prediction, TF Serving, Kubernetes-based custom APIs, Cloud Run. CI/CD & IaC: GitHub Actions, GitLab CI
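The offline metrics this role names (Recall@K, nDCG) are small pure functions. A sketch with toy data — the item ids and relevance sets are made up for illustration, and a real evaluation would average these over many users:

```python
import math

# Offline recommender metrics on a single ranked list: Recall@K and
# binary-relevance nDCG@K (log2 discount, positions start at 1).

def recall_at_k(ranked, relevant, k):
    """Fraction of this user's relevant items recovered in the top-k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant)

def ndcg_at_k(ranked, relevant, k):
    """Binary-relevance nDCG@K: DCG of the list over the ideal DCG."""
    dcg = sum(1.0 / math.log2(i + 2)
              for i, item in enumerate(ranked[:k]) if item in relevant)
    ideal_hits = min(len(relevant), k)
    idcg = sum(1.0 / math.log2(i + 2) for i in range(ideal_hits))
    return dcg / idcg if idcg > 0 else 0.0

ranked = ["b", "a", "d", "c", "e"]   # model's ranking for one user
relevant = {"a", "c"}                # this user's held-out positives

print(recall_at_k(ranked, relevant, 3))  # 0.5 -> only "a" in the top-3
```

Unlike Recall@K, nDCG rewards putting hits *earlier* in the list, which is why re-ranking stages are evaluated with it.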

Posted 1 month ago

Apply

4.0 - 6.0 years

5 - 8 Lacs

Gurugram

Work from Office

Required Skills: Strong expertise in the NestJS framework. Proficient in building and managing microservices architecture. Hands-on experience with Apache Kafka for real-time data streaming and messaging. Experience with Google Cloud Platform (GCP) services, including but not limited to Cloud Functions, Cloud Run, Pub/Sub, BigQuery, and Kubernetes Engine. Familiarity with RESTful APIs, database systems (SQL/NoSQL), and performance optimization. Solid understanding of version control systems, particularly Git. Preferred Skills: Knowledge of containerization using Docker. Experience with automated testing frameworks and methodologies. Understanding of monitoring, logging, and observability tools and practices. Responsibilities: Design, develop, and maintain backend services using NestJS within a microservices architecture. Implement robust messaging and event-driven architectures using Kafka. Deploy, manage, and optimize applications and services on Google Cloud Platform. Ensure high performance, scalability, reliability, and security of backend services. Collaborate closely with front-end developers, product managers, and DevOps teams. Write clean, efficient, and maintainable code, adhering to best practices and coding standards. Perform comprehensive testing and debugging, addressing production issues promptly. The job location is in-office, based out of Gurgaon. The selected candidate needs to have their own laptop.

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 15 Lacs

Pune

Work from Office

DevOps Engineer (Google Cloud Platform) About Us IntelligentDX is a dynamic and innovative company dedicated to changing the Software landscape in the Healthcare industry. We are looking for a talented and experienced DevOps Engineer to join our growing team and help us build and maintain our scalable, reliable, and secure cloud infrastructure on Google Cloud Platform. Job Summary We are seeking a highly skilled DevOps Engineer with 4 years of hands-on experience, specifically with Google Cloud technologies. The ideal candidate will be responsible for designing, implementing, and maintaining our cloud infrastructure, ensuring the scalability, reliability, and security of our microservices-based software services. You will play a crucial role in automating our development and deployment pipelines, managing cloud resources, and supporting our engineering teams in delivering high-quality applications. Responsibilities Design, implement, and manage robust, scalable, and secure cloud infrastructure on Google Cloud Platform (GCP). Implement and enforce best practices for GCP Identity and Access Management (IAM) to ensure secure access control. Deploy, manage, and optimize applications leveraging Google Cloud Run for serverless deployments. Configure and maintain Google Cloud API Gateway for efficient and secure API management. Implement and monitor security measures across our GCP environment, including network security, data encryption, and vulnerability management. Manage and optimize cloud-based databases, primarily Google Cloud SQL, ensuring data integrity, performance, and reliability. Lead the setup and implementation of new applications and services within our GCP environment. Troubleshoot and resolve issues related to Cross-Origin Resource Sharing (CORS) configurations and other API connectivity problems. Provide ongoing API support to development teams, ensuring smooth integration and operation. 
Continuously work on improving the scalability and reliability of our software services, which are built as microservices. Develop and maintain CI/CD pipelines to automate software delivery and infrastructure provisioning. Monitor system performance, identify bottlenecks, and implement solutions to optimize resource utilization. Collaborate closely with development, QA, and product teams to ensure seamless deployment and operation of applications. Participate in on-call rotations to provide timely support for critical production issues. Qualifications Required Skills & Experience Minimum of 4 years of hands-on experience as a DevOps Engineer with a strong focus on Google Cloud Platform (GCP). Proven expertise in GCP services, including: GCP IAM: Strong understanding of roles, permissions, service accounts, and best practices. Cloud Run: Experience deploying and managing containerized applications. API Gateway: Experience in setting up and managing APIs. Security: Solid understanding of cloud security principles, network security (VPC, firewall rules), and data protection. Cloud SQL: Hands-on experience with database setup, management, and optimization. Demonstrated experience with the setup and implementation of cloud-native applications. Familiarity with addressing and resolving CORS issues. Experience providing API support and ensuring API reliability. Deep understanding of microservices architecture and best practices for their deployment and management. Strong commitment to building scalable and reliable software services. Proficiency in scripting languages (e.g., Python, Bash) and automation tools. Experience with Infrastructure as Code (IaC) tools (e.g., Terraform, Cloud Deployment Manager). Familiarity with containerization technologies (e.g., Docker, Kubernetes). Excellent problem-solving skills and a proactive approach to identifying and resolving issues. Strong communication and collaboration abilities. 
Preferred Qualifications GCP certification (e.g., Professional Cloud DevOps Engineer, Professional Cloud Architect). Experience with monitoring and logging tools (e.g., Cloud Monitoring, Cloud Logging, Prometheus, Grafana). Knowledge of other cloud platforms (AWS, Azure) is a plus. Experience with Git and CI/CD platforms (e.g., GitLab CI, Jenkins, Cloud Build). What We Offer Health insurance, paid time off, and professional development opportunities. Fun working environment Flattened hierarchy, where everyone has a say Free snacks, games, and happy hour outings If you are a passionate DevOps Engineer with a proven track record of building and managing robust systems on Google Cloud Platform, we encourage you to apply!

Posted 1 month ago

Apply

8.0 - 10.0 years

19 - 34 Lacs

Bengaluru

Work from Office

Greetings from TATA Consultancy Services!! Thank you for expressing your interest in exploring a career possibility with the TCS Family. Role: Python with microservices developer Experience: 8 to 10 years Interview Location: Bangalore Key Responsibilities: Design, develop, and maintain Python-based microservices that are scalable, efficient, and secure. Deploy and manage containerized applications Collaborate with cross-functional teams to define, design, and ship new features. Optimize applications for maximum speed and scalability. Implement CI/CD pipelines for automated testing and deployment. Monitor and troubleshoot production applications to ensure high availability and performance. Write clean, maintainable code and ensure adherence to coding standards. Stay updated with industry trends and emerging technologies related to microservices and cloud computing. Requirements: Proven experience as a Python developer, specifically in developing microservices. Strong understanding of containerization and orchestration (Docker, Kubernetes). Experience with Google Cloud Platform, specifically Cloud Run, Cloud Functions, and other related services. Familiarity with RESTful APIs and microservices architecture. Knowledge of database technologies (SQL and NoSQL) and data modelling. Proficiency in version control systems (Git). Experience with CI/CD tools and practices. Strong problem-solving skills and the ability to work independently and collaboratively. Excellent communication skills, both verbal and written. Preferred Qualifications: Experience with cloud platforms is a plus. Familiarity with Python frameworks (Flask, FastAPI, Django). Understanding of DevOps practices and tools (Terraform, Jenkins). Knowledge of monitoring and logging tools (Prometheus, Grafana, Stackdriver).
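Monitoring and high availability, as described in this role, usually start with retrying transient downstream failures. A minimal sketch of capped exponential backoff with jitter — `flaky_call` is a hypothetical stand-in for a real service call, and the sleep function is injectable so the logic can be exercised without waiting:

```python
import random

# Capped exponential backoff with full jitter for transient failures
# between microservices. sleep_fn is injectable for tests; production
# code would pass time.sleep.

def call_with_retry(fn, attempts=5, base=0.1, cap=2.0, sleep_fn=None):
    """Retry fn() on exception, sleeping up to base*2^i (capped)."""
    sleep_fn = sleep_fn or (lambda s: None)
    for attempt in range(attempts):
        try:
            return fn()
        except Exception:
            if attempt == attempts - 1:
                raise                      # budget exhausted: propagate
            delay = min(cap, base * (2 ** attempt))
            sleep_fn(random.uniform(0, delay))  # full jitter

calls = {"n": 0}
def flaky_call():          # hypothetical downstream call: fails twice
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

print(call_with_retry(flaky_call))  # succeeds on the third attempt
```

The jitter matters in a microservices fleet: without it, many clients retry in lockstep and re-overload the recovering service.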

Posted 1 month ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Bengaluru

Hybrid

Responsibilities: Design and implement secure architecture on Google Cloud Platforms (GCP) using IAM, SDLC, CI/CD pipelines with Python or Java.

Posted 1 month ago

Apply

9.0 - 12.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Excellent knowledge of GCP Cloud Run, Cloud Tasks, Cloud Pub/Sub & Cloud Storage. Hands-on with Python or Node.js coding for application development. Understanding of GCP service choices based on SLAs, scalability, compliance, and integration needs. Proficiency in reasoning about the trade-offs between services (e.g., why Cloud Run with 4 vCPUs over GKE with a GPU). Deep understanding of concurrency settings, DB pool strategies, and scaling. Proficiency in implementing resilient, cost-optimized, low-latency, enterprise-grade cloud solutions. Proficient in suggesting configuration, predictive autoscaling, concurrency, cold start mitigation, failover, etc. for the different GCP services as per business needs. Experience in microservices architecture and development. Able to configure and build systems, not just stitch them together. Must be able to root-cause pipeline latency, scaling issues, errors and downtime. Cross-functional leadership during architecture definition, implementation, and rollout.
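The concurrency and scaling trade-offs this role describes can be sanity-checked with Little's law before touching any autoscaler settings. A rough sketch — the traffic numbers and 1.5x headroom factor are invented for illustration, and real Cloud Run autoscaling also accounts for cold starts and CPU throttling:

```python
import math

# Back-of-envelope serverless sizing: Little's law says requests in
# flight = arrival rate x latency; divide by per-instance concurrency
# to estimate how many instances a steady load needs.

def estimate_instances(rps, p50_latency_s, concurrency, headroom=1.5):
    """Rough steady-state instance count, padded for bursts."""
    in_flight = rps * p50_latency_s      # concurrent requests in flight
    return math.ceil(headroom * in_flight / concurrency)

# 1000 rps at 200 ms latency = 200 requests in flight; with a
# concurrency limit of 80 per instance and 1.5x headroom -> 4 instances.
print(estimate_instances(1000, 0.2, 80))  # 4
```

The same arithmetic explains why lowering the concurrency setting (e.g. for CPU-bound handlers or small DB connection pools) directly multiplies the instance count and the bill.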

Posted 1 month ago

Apply