
130 Cloud Functions Jobs

Set up a job alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will have the opportunity to work at Capgemini, a company that empowers you to shape your career according to your preferences. You will be part of a collaborative community of colleagues worldwide, where you can reimagine what is achievable and contribute to unlocking the value of technology for leading organizations to build a more sustainable and inclusive world.

Your Role:
- A very good understanding of the current work, tools, and technologies in use.
- Comprehensive knowledge of and clarity on BigQuery, ETL, GCS, Airflow/Composer, SQL, and Python.
- Experience with fact and dimension tables and slowly changing dimensions (SCD).
- Minimum 3 years of experience in GCP data engineering.
- Proficiency in Java, Python, or Spark on GCP, with programming experience in Python, Java, or PySpark and SQL.
- Hands-on experience with GCS (Cloud Storage), Composer (Airflow), and BigQuery.
- Ability to handle big data efficiently.

Your Profile:
- Strong data engineering experience using Java or Python, or Spark on Google Cloud.
- Experience in pipeline development using Dataflow or Dataproc (Apache Beam, etc.).
- Familiarity with other GCP services and databases such as Datastore, Bigtable, Spanner, Cloud Run, Cloud Functions, etc.
- Proven analytical skills and a problem-solving attitude.
- Excellent communication skills.

What you'll love about working here:
- You can shape your career with a range of career paths and internal opportunities within the Capgemini group.
- Access to comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, or new parent support via flexible work.
- The opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. With a diverse team of over 340,000 members in more than 50 countries, Capgemini leverages its over 55-year heritage to unlock the value of technology for clients across the entire breadth of their business needs. The company delivers end-to-end services and solutions, combining strengths from strategy and design to engineering, fueled by market-leading capabilities in AI, generative AI, cloud, and data, along with deep industry expertise and a strong partner ecosystem.
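As an illustration of the GCS-to-BigQuery loading and SCD-style merging this role describes, here is a minimal Cloud Composer/Airflow DAG sketch. The bucket, dataset, and table names are hypothetical placeholders, and the MERGE assumes a simple SCD Type 1 overwrite.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Placeholder names -- replace with real project resources.
STAGING_TABLE = "my_project.analytics.customer_staging"
DIM_TABLE = "my_project.analytics.dim_customer"

with DAG(
    dag_id="gcs_to_bigquery_dim_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Load the daily CSV extract from Cloud Storage into a staging table.
    load_staging = GCSToBigQueryOperator(
        task_id="load_staging",
        bucket="my-landing-bucket",
        source_objects=["customers/{{ ds }}/*.csv"],
        destination_project_dataset_table=STAGING_TABLE,
        source_format="CSV",
        skip_leading_rows=1,
        autodetect=True,
        write_disposition="WRITE_TRUNCATE",
    )

    # SCD Type 1: overwrite attributes of existing keys, insert new ones.
    merge_dim = BigQueryInsertJobOperator(
        task_id="merge_dim_customer",
        configuration={
            "query": {
                "query": f"""
                    MERGE `{DIM_TABLE}` d
                    USING `{STAGING_TABLE}` s
                    ON d.customer_id = s.customer_id
                    WHEN MATCHED THEN
                      UPDATE SET d.name = s.name, d.city = s.city
                    WHEN NOT MATCHED THEN
                      INSERT (customer_id, name, city)
                      VALUES (s.customer_id, s.name, s.city)
                """,
                "useLegacySql": False,
            }
        },
    )

    load_staging >> merge_dim
```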

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Telangana

On-site

As a Data Engineer at our company, you will play a crucial role in managing and optimizing data processes. Your responsibilities will include:
- Designing and developing data pipelines using Python
- Leveraging GCP services such as Dataflow, Dataproc, BigQuery, Cloud Storage, and Cloud Functions
- Implementing data warehousing concepts and technologies
- Performing data modeling and ETL processes
- Ensuring data quality and adhering to data governance principles

To excel in this role, you should possess the following qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5-7 years of experience in data engineering
- Proficiency in Python programming
- Extensive experience with GCP services
- Familiarity with data warehousing and ETL processes
- Strong understanding of SQL and database technologies
- Experience in data quality and governance
- Excellent problem-solving and analytical skills
- Strong communication and collaboration abilities
- Ability to work independently and in a team environment
- Familiarity with version control systems like Git

If you are looking to join a dynamic team and work on cutting-edge data projects, this position is perfect for you.
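As an illustration of the Dataflow-style pipeline work this role describes, here is a minimal Apache Beam sketch that reads newline-delimited JSON events from GCS, cleans them, and writes rows to BigQuery. All bucket, project, and table names are hypothetical, and the pipeline runs locally with the DirectRunner unless Dataflow options are supplied.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_event(line: str) -> dict:
    """Parse one JSON line and keep only the fields the warehouse needs."""
    event = json.loads(line)
    return {
        "user_id": event["user_id"],
        "event_type": event.get("event_type", "unknown"),
        "ts": event["timestamp"],
    }


def run() -> None:
    # Pass --runner=DataflowRunner plus project/region/temp_location flags
    # to execute this on Dataflow instead of locally.
    options = PipelineOptions()
    with beam.Pipeline(options=options) as p:
        (
            p
            | "ReadFromGCS" >> beam.io.ReadFromText("gs://my-bucket/events/*.json")
            | "Parse" >> beam.Map(parse_event)
            | "DropNullUsers" >> beam.Filter(lambda row: row["user_id"] is not None)
            | "WriteToBQ" >> beam.io.WriteToBigQuery(
                "my_project:analytics.events",
                write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
                create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
            )
        )


if __name__ == "__main__":
    run()
```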

Posted 4 days ago

Apply

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Join Us
At Vodafone, we're not just shaping the future of connectivity for our customers - we're shaping the future for everyone who joins our team. When you work with us, you're part of a global mission to connect people, solve complex challenges, and create a sustainable and more inclusive world. If you want to grow your career whilst finding the perfect balance between work and life, Vodafone offers the opportunities to help you belong and make a real impact.

What you'll do
- Conduct end-to-end impact assessments across all subject areas for new demands.
- Create and maintain comprehensive data architecture documentation, including data models, flow diagrams, and technical specifications.
- Design and implement data pipelines integrating multiple sources, ensuring consistency and quality.
- Collaborate with business stakeholders to align data strategies with organisational goals.
- Support software migration and perform production checks.
- Govern the application of architecture principles within projects.
- Manage database refresh and decommissioning programmes while maintaining service availability.
- Ensure correct database configuration and documentation of infrastructure changes.
- Support third-level supplier engineering teams in root cause analysis and remediation.
- Propose system enhancements and innovative solutions.

Who you are
You are a detail-oriented and collaborative professional with a strong foundation in data architecture and cloud technologies. You possess excellent communication skills and are comfortable working with both technical and non-technical stakeholders. You are passionate about creating scalable data solutions and contributing to a culture of continuous improvement.

What skills you need
- Strong knowledge of Teradata systems and related products.
- Proficiency in SQL and data modelling concepts.
- Experience with GCP tools including Cloud Composer, BigQuery, Pub/Sub, and Cloud Functions.
- Proven ability to communicate complex data concepts effectively.
- Experience in IT infrastructure management environments.
- Ability to influence stakeholders and drive customer satisfaction.

What skills you will learn
- Advanced cloud architecture and data governance practices.
- Cross-functional collaboration and stakeholder engagement.
- Innovation in data pipeline design and optimisation.
- Exposure to global BI projects and scalable data solutions.
- Enhanced leadership and decision-making capabilities.

Not a perfect fit?
Worried that you don't meet all the desired criteria exactly? At Vodafone we are passionate about empowering people and creating a workplace where everyone can thrive, whatever their personal or professional background. If you're excited about this role but your experience doesn't align exactly with every part of the job description, we encourage you to still apply, as you may be the right candidate for this role or another opportunity.

Who we are
We are a leading international Telco, serving millions of customers. At Vodafone, we believe that connectivity is a force for good. If we use it for the things that really matter, it can improve people's lives and the world around us. Through our technology we empower people, connecting everyone regardless of who they are or where they live, and we protect the planet whilst helping our customers do the same. Belonging at Vodafone isn't a concept - it's lived, breathed, and cultivated through everything we do. You'll be part of a global and diverse community, with many different minds, abilities, backgrounds and cultures. We're committed to increasing diversity, ensuring equal representation, and making Vodafone a place everyone feels safe, valued and included. If you require any reasonable adjustments or have an accessibility request as part of your recruitment journey, for example extended time or breaks in between online assessments, please refer to for guidance. Together we can.

Posted 5 days ago

Apply

2.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Senior Data Engineer with 5-8 years of IT experience, including 2-3 years focused on GCP data services, you will be a valuable addition to our dynamic data and analytics team. Your primary responsibility will be to design, develop, and implement robust and insightful data-intensive solutions using GCP Cloud services. The role requires a deep understanding of data engineering, proficiency in SQL, and extensive experience with GCP services such as BigQuery, Dataflow, Datastream, Pub/Sub, Dataproc, Cloud Storage, and other key GCP services for data pipeline orchestration. You will be instrumental in the construction of a GCP-native cloud data platform.

Key Responsibilities:
- Lead and contribute to the development, deployment, and lifecycle management of applications on GCP, utilizing services like Compute Engine, Kubernetes Engine (GKE), Cloud Functions, Cloud Run, Pub/Sub, BigQuery, Cloud SQL, Cloud Storage, and more.

Required Skills and Qualifications:
- Bachelor's degree in Computer Science, Information Technology, Data Analytics, or a related field.
- 5-8 years of overall IT experience, with hands-on experience in designing and developing data applications on GCP Cloud.
- In-depth expertise in GCP services and architectures, including Compute, Storage & Databases, Data & Analytics, and Operations & Monitoring.
- Proven ability to translate business requirements into technical solutions.
- Strong analytical, problem-solving, and critical thinking skills.
- Effective communication and interpersonal skills for collaboration with technical and non-technical stakeholders.
- Experience in Agile development methodology.
- Ability to work independently, manage multiple priorities, and meet deadlines.

Preferred Skills (Nice to Have):
- Experience with other hyperscalers.
- Proficiency in Python or other scripting languages for data manipulation and automation.

If you are a highly skilled and experienced Data Engineer with a passion for leveraging GCP data services to drive innovation, we invite you to apply for this exciting opportunity in Gurugram or Hyderabad.

Posted 6 days ago

Apply

0.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Title: Python Support Engineer

Must-have skills:
- Monitor and maintain the availability of GKE-based applications in a high-pressure production environment.
- Respond to and resolve incidents and service requests related to application functionality and performance.
- Collaborate with development teams to troubleshoot and resolve technical issues in a timely manner.
- Document support processes, procedures, and troubleshooting steps for future reference.
- Participate in an on-call rotation, including off-hours, to provide after-hours support as needed.
- Communicate effectively with stakeholders to provide updates on issue resolution and status.
- Experience with monitoring tools and incident management systems.
- Ability to analyze logs, identify patterns, and trace system failures.
- Solid experience in SQL and database querying for debugging and reporting.
- Experience with monitoring/alerting tools on GCP.

Good to have:
- Strong in Python, with production-level experience.
- Strong in FastAPI development and deployment practices.
- Experience with Google Kubernetes Engine (GKE), including workload deployment, autoscaling, and tuning.
- Must have GCP experience in Cloud Functions, Pub/Sub, Dataflow, Composer, Bigtable, and BigQuery.
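As an illustration of the kind of FastAPI service on GKE this support role works with, here is a minimal sketch exposing liveness and readiness endpoints that Kubernetes probes can call. The dependency check is a hypothetical placeholder; a real service would verify its actual backends (database, Pub/Sub, etc.).

```python
from fastapi import FastAPI, Response, status

app = FastAPI(title="orders-api")


def dependencies_ready() -> bool:
    # Hypothetical dependency check; replace with real checks (DB ping, Pub/Sub, ...).
    return True


@app.get("/healthz")
def liveness() -> dict:
    # Liveness probe: the process is up and able to serve requests.
    return {"status": "ok"}


@app.get("/readyz")
def readiness(response: Response) -> dict:
    # Readiness probe: only report ready when downstream dependencies respond.
    if not dependencies_ready():
        response.status_code = status.HTTP_503_SERVICE_UNAVAILABLE
        return {"status": "not ready"}
    return {"status": "ready"}
```

Run locally with `uvicorn main:app --port 8080`; in GKE the same paths would be wired to the pod's livenessProbe and readinessProbe.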

Posted 6 days ago

Apply

1.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a GCP Data Engineer, you will play a crucial role in the development, optimization, and maintenance of data pipelines and infrastructure. Your proficiency in SQL and Python will be pivotal in the management and transformation of data, and your familiarity with cloud technologies will be highly beneficial as we strive to improve our data engineering processes.

You will be responsible for building scalable data pipelines. This involves designing, implementing, and maintaining end-to-end data pipelines to efficiently extract, transform, and load (ETL) data from various sources, and ensuring these pipelines are reliable, scalable, and performance-oriented. Your expertise in SQL will be put to use as you write and optimize complex SQL queries for data extraction, transformation, and reporting purposes. Collaboration with analysts and data scientists will be necessary to provide structured data for analysis.

Experience with cloud platforms, particularly GCP services such as BigQuery, Dataflow, GCS, and Postgres, will be valuable. Leveraging cloud services to enhance data processing and storage capabilities, as well as integrating tools into the data ecosystem, will be part of your responsibilities. Documenting data pipelines, procedures, and best practices will be essential for knowledge sharing within the team. You will collaborate closely with cross-functional teams to understand data requirements and deliver effective solutions.

The ideal candidate for this role should have at least 3 years of experience with SQL and Python, along with a minimum of 1 year of experience with GCP services like BigQuery, Dataflow, GCS, and Postgres. Additionally, 2+ years of experience in building data pipelines from scratch in a highly distributed and fault-tolerant manner is required. Comfort with a variety of relational and non-relational databases is essential. Proven experience in building applications in a data-focused role, in both cloud and traditional data warehouse environments, is preferred. Familiarity with Cloud SQL, Cloud Functions, Pub/Sub, Cloud Composer, and a willingness to learn new tools and techniques are desired qualities. Being comfortable with big data and machine learning tools and platforms, including open-source technologies like Apache Spark, Hadoop, and Kafka, will be advantageous. Strong oral, written, and interpersonal communication skills are crucial for effective collaboration in a dynamic environment with undefined problems.

If you are an inquisitive, proactive individual with a passion for data engineering and a desire to continuously learn and grow, we invite you to join our team in Chennai, Tamil Nadu, India.
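As an illustration of the Python-plus-SQL work this role describes, here is a minimal sketch that runs a parameterized BigQuery query and iterates over the results. The project, dataset, and table names are hypothetical.

```python
from google.cloud import bigquery

# Uses Application Default Credentials; the project name is a placeholder.
client = bigquery.Client(project="my-project")

query = """
    SELECT customer_id, SUM(amount) AS total_spend
    FROM `my-project.sales.transactions`
    WHERE transaction_date >= @start_date
    GROUP BY customer_id
    ORDER BY total_spend DESC
    LIMIT 100
"""

job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("start_date", "DATE", "2024-01-01"),
    ]
)

# Run the query and stream the result rows.
for row in client.query(query, job_config=job_config).result():
    print(row.customer_id, row.total_spend)
```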

Posted 6 days ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our Client in India is one of the leading providers of risk, financial services and business advisory, internal audit, corporate governance, and tax and regulatory services. Our Client was established in India in September 1993 and has rapidly built a significant competitive presence in the country. The firm operates from its offices in Mumbai, Pune, Delhi, Kolkata, Chennai, Bangalore, Hyderabad, Kochi, Chandigarh and Ahmedabad, and offers its clients a full range of services, including financial and business advisory, tax and regulatory. Our client has a client base of over 2700 companies. Their global approach to service delivery helps provide value-added services to clients. The firm serves leading information technology companies and has a strong presence in the financial services sector in India, while serving a number of market leaders in other industry segments.

Key Responsibilities:

Azure Responsibilities:
- Automate deployment of Azure IaaS and PaaS services using Terraform.
- Develop and maintain modular, reusable Terraform modules for Azure infrastructure.
- Implement and manage CI/CD pipelines using GitHub Actions for Azure deployments.
- Automate operational tasks using PowerShell, Python, and Bash.
- Deploy and manage services such as Azure VMs, App Services, Azure App Gateway, Azure Functions, SQLDB, Azure SQL, Postgres DB, Mongo DB, AKS, Key Vault, APIM, and Azure OpenAI.
- Integrate Azure OpenAI capabilities into cloud-native applications and workflows.

GCP Responsibilities:
- Automate deployment of GCP IaaS and PaaS services using Terraform.
- Build and maintain Terraform modules/libraries for scalable GCP infrastructure.
- Deploy and manage services like Cloud Run, Compute Engine, Cloud Functions, App Engine, GKE, BigQuery, and Cloud Storage.
- Integrate GCP services into CI/CD pipelines using GitHub Actions.
- Automate infrastructure and service provisioning using scripting languages.

Required Skills & Qualifications:
- 3-5 years of experience with Azure infrastructure (IaaS & PaaS) automation and deployment.
- 3-5 years of experience with GCP infrastructure (IaaS & PaaS) automation and deployment.
- Proficiency in Terraform, including module/library development.
- Experience with GitHub Actions or similar CI/CD tools.
- Scripting skills in PowerShell, Python, and Bash.
- Hands-on experience with Azure API Management (APIM).
- Experience with GCP-native services.
- Understanding of cloud networking, security, and identity management.
- Strong problem-solving and communication skills.

Posted 1 week ago

Apply

4.0 - 8.0 years

3 - 6 Lacs

Gurugram

Work from Office

This is a part-time role that you can do alongside your current job.

Responsibilities:
- Design, develop, and maintain GCP data pipelines using BigQuery, Cloud Functions, and Cloud Run.
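As an illustration of the pipeline style mentioned above, here is a minimal HTTP-triggered Cloud Function sketch that accepts a JSON payload and appends it as a row to a BigQuery table via the streaming API. The table name is a hypothetical placeholder.

```python
import functions_framework
from google.cloud import bigquery

# Hypothetical destination table; replace with a real one.
TABLE_ID = "my-project.events.raw_events"
client = bigquery.Client()


@functions_framework.http
def ingest_event(request):
    """HTTP Cloud Function: write one JSON event into BigQuery."""
    payload = request.get_json(silent=True)
    if not payload:
        return ("Missing JSON body", 400)

    # Streaming insert; errors come back as a list of per-row problems.
    errors = client.insert_rows_json(TABLE_ID, [payload])
    if errors:
        return (f"BigQuery insert failed: {errors}", 500)
    return ("ok", 200)
```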

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

DevOps and Full Stack Engineer, AS

Position Overview
Job Title: DevOps and Full Stack Engineer, AS
Location: Bangalore, India

Role Description
The Infra & DevOps team within DWS India sits horizontally over project delivery and is committed to providing best-in-class shared services across the build, release and QA automation space. Its main functional areas encompass environment build, integration of the QA automation suite, release and deployment automation management, technology management and compliance management. This role will be key to our programme delivery and includes working closely with stakeholders including client Product Owners, the Digital Design Organisation, Business Analysts, Developers and QA to advise and contribute from an Infra and DevOps capability perspective: building and maintaining non-prod and prod environments, setting up end-to-end alerting and monitoring for ease of operation, and overseeing transition of the project to L2 support teams as part of go-live. We are looking for the best people to help create the next big thing in digital banking.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 yrs. and above

Your key responsibilities
- Develop and maintain automated test frameworks for APIs, UI, and integration workflows.
- Develop microservices using Spring Boot and Python-based services in Cloud/GCP.
- Implement and manage CI/CD pipelines on Google Cloud Platform (GCP) using Cloud Build, Cloud Functions, and related services.
- Utilize Gemini, GitHub Copilot, and OpenRewrite to accelerate test development, modernize codebases, and enforce best practices.
- Integrate tools like Dependabot, SonarQube, Veracode, and CodeQL to drive secure, high-quality code.
- Promote and apply shift-left testing strategies and DevSecOps principles across all stages of the SDLC.
- Collaborate cross-functionally to deliver scalable, intelligent automation capabilities embedded within engineering workflows.

Your skills and experience
- Experience with Spring Boot-based services in Cloud/GCP.
- Experience with test automation frameworks such as Selenium, Cypress, REST Assured, or Playwright.
- Deep understanding of DevOps and cloud-native delivery pipelines, especially using GCP.
- Hands-on with AI/ML tools in the development lifecycle, including Gemini, GitHub Copilot, and OpenRewrite.
- Familiar with DevSecOps tools: SonarQube, Veracode, CodeQL, Dependabot.
- Proficient in scripting (Python, Shell) and using version control systems like Git.
- Knowledge of Agile methodologies (Scrum, Kanban), TDD, and BDD.
- Experience with Infrastructure-as-Code (Terraform, GCP Deployment Manager).

Nice to have:
- Experience with GCP services such as Dataflow, Pub/Sub, Cloud Storage, BigQuery.
- Experience in Python, Pandas and AI libraries (LangChain, LangGraph, OpenAI, etc.).
- Stakeholder communication: ability to explain AI concepts to non-technical audiences and collaborate cross-functionally.
- Adaptability and innovation: flexibility in learning new tools and developing innovative solutions.
- Experience in GCP Vertex AI.
- Exposure to GKE, Docker, or Kubernetes.
- Knowledge of performance/load testing tools (e.g., JMeter, k6).
- Relevant certifications in GCP, DevOps, Test Automation, or AI/ML.

How we'll support you
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.

About us and our teams
Please visit our company website for further information:
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 week ago

Apply

4.0 - 7.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Key Responsibilities:

Azure Responsibilities:
- Automate deployment of Azure IaaS and PaaS services using Terraform.
- Develop and maintain modular, reusable Terraform modules for Azure infrastructure.
- Implement and manage CI/CD pipelines using GitHub Actions for Azure deployments.
- Automate operational tasks using PowerShell, Python, and Bash.
- Deploy and manage services such as Azure VMs, App Services, Azure App Gateway, Azure Functions, SQLDB, Azure SQL, Postgres DB, Mongo DB, AKS, Key Vault, APIM, and Azure OpenAI.
- Integrate Azure OpenAI capabilities into cloud-native applications and workflows.

GCP Responsibilities:
- Automate deployment of GCP IaaS and PaaS services using Terraform.
- Build and maintain Terraform modules/libraries for scalable GCP infrastructure.
- Deploy and manage services like Cloud Run, Compute Engine, Cloud Functions, App Engine, GKE, BigQuery, and Cloud Storage.
- Integrate GCP services into CI/CD pipelines using GitHub Actions.
- Automate infrastructure and service provisioning using scripting languages.

Required Skills & Qualifications:
- 3-5 years of experience with Azure infrastructure (IaaS & PaaS) automation and deployment.
- 3-5 years of experience with GCP infrastructure (IaaS & PaaS) automation and deployment.
- Proficiency in Terraform, including module/library development.
- Experience with GitHub Actions or similar CI/CD tools.
- Scripting skills in PowerShell, Python, and Bash.
- Hands-on experience with Azure API Management (APIM).
- Experience with GCP-native services.
- Understanding of cloud networking, security, and identity management.
- Strong problem-solving and communication skills.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Coimbatore, Tamil Nadu

On-site

You should have at least 3+ years of experience working as a Full Stack Developer. Your primary focus will be on backend development using Python, specifically Django, FastAPI, or Flask. In addition, you should have strong proficiency in frontend development using React.js and JavaScript/TypeScript. It is essential that you have experience with AI scripting, ML model integration, or working with AI APIs such as OpenAI, TensorFlow, PyTorch, etc. You should also possess knowledge of RESTful API design and implementation, as well as database management with both SQL and NoSQL databases. Familiarity with Docker, Kubernetes, and CI/CD pipelines is required for this role. A strong understanding of software development principles, security best practices, and performance optimization is also necessary.

Preferred skills for this position include knowledge of GraphQL for API development, experience with serverless architecture and cloud functions, and an understanding of natural language processing (NLP) and AI automation. Experience in data engineering, ETL pipelines, or big data frameworks would be a plus.

Posted 1 week ago

Apply

3.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Description:

Qualifications:
- Bachelor's degree in Engineering or a related technical field with at least 3 years (L30) / 6 years (L35) / 9 years (L40) of hands-on experience as a Data Engineer, Data Architect or related roles
- Experience working on Snowflake or Google Cloud Platform (GCP), especially services like BigQuery, Cloud Storage, Dataflow, Cloud Functions, and Pub/Sub
- Proficiency in Talend for complex ETL workflows and Fivetran for automated data pipeline builds, with an understanding of modern ELT patterns and real-time data streaming concepts
- Advanced SQL skills, including complex queries, stored procedures, etc.; Python with experience in data manipulation libraries, and PySpark for large-scale data processing
- Understanding of REST APIs, building and consuming APIs for data ingestion, and knowledge of API authentication methods
- Hands-on experience with Databricks for collaborative analytics, or notebooks in similar interactive development environments
- Understanding of data governance, quality, and lineage concepts; data security and compliance requirements (GDPR, CCPA); and knowledge of data warehouse modeling techniques

Location: DGS India - Bengaluru - Manyata N1 Block
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
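As an illustration of the API-to-warehouse ingestion pattern this role mentions, here is a minimal sketch that pulls JSON records from a REST endpoint with token authentication and loads them into BigQuery. The endpoint URL, token handling, and table name are hypothetical placeholders, and pagination is not handled.

```python
import requests
from google.cloud import bigquery

# Hypothetical source API and destination table.
API_URL = "https://api.example.com/v1/orders"
TABLE_ID = "my-project.raw.orders"


def fetch_orders(token: str) -> list[dict]:
    """Pull one page of records from the REST API."""
    resp = requests.get(
        API_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["results"]


def load_to_bigquery(rows: list[dict]) -> None:
    client = bigquery.Client()
    job_config = bigquery.LoadJobConfig(
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
        autodetect=True,  # infer the schema from the JSON rows
    )
    job = client.load_table_from_json(rows, TABLE_ID, job_config=job_config)
    job.result()  # wait for the load job to finish


if __name__ == "__main__":
    load_to_bigquery(fetch_orders(token="REPLACE_ME"))
```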

Posted 1 week ago

Apply

5.0 - 8.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Job Description
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express.

How will you make an impact in this role?
There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing:
- Building file-based and API-based integration between systems using secure transmission.
- Designing microservices and integration patterns to securely communicate with backend services and clients.
- Building data pipelines to ingest data into the GCP data warehouse.
- Functioning as a member of an Agile team by contributing to software builds through consistent development practices.
- Participating in code reviews.
- Quickly debugging basic software components and identifying code defects for remediation.
- Enabling the deployment, support, and monitoring of software across test, integration, and production environments.
- Ensuring timely completion and quality of the product, including documentation and other deliverables produced by the engineering team.
- Identifying opportunities to adopt innovative and new technologies to solve existing business needs and predict future challenges.
- Must have experience collaborating with Product Owners on business process enhancements.

Minimum Qualifications
- Bachelor's Degree in CS or CSE or equivalent.
- 5-8 years of experience in Google Cloud Platform (GCP).
- Expertise in Dataproc, Dataflow, PySpark, BigQuery and Airflow/Cloud Composer.
- Strong understanding and expertise in Google Cloud Platform (GCP) with Python, having worked with Cloud Storage, Cloud Functions and Cloud Run.
- Experience in using GCP functions, Dataproc, Dataflow, and DAGs for data transformation.
- Experience in building data pipelines and data transformations.
- Develop and implement integration strategies to connect on-premises systems and applications with GCP services.
- Monitor and optimize the performance of integration solutions to ensure efficient data flow and minimal latency.
- Collaborate with migration teams to address integration challenges and ensure smooth transitions to GCP.
- Build and configure APIs, connectors, and middleware to facilitate seamless data and application integration between GCP and existing systems.
- Create and maintain documentation for integration designs, configurations, and workflows.
- Experience with web services, open API development and its concepts.

Preferred Qualifications
- Knowledge of collaboration tools (GitHub, Confluence, Rally).
- Experience in Continuous Integration and Deployment (Jenkins).
- Knowledge of GCP and cloud hosting platforms.
- Good to have knowledge of Oracle Financials, including General Ledger and Procure to Pay.
- Agile/SAFe practices in building software.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally:
- Competitive base salaries
- Bonus incentives
- Support for financial well-being and retirement
- Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location)
- Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need
- Generous paid parental leave policies (depending on your location)
- Free access to global on-site wellness centers staffed with nurses and doctors (depending on location)
- Free and confidential counseling support through our Healthy Minds program
- Career development and training opportunities

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

We are seeking a Senior Data Architect with over 7 years of experience in the field, specifically in data architecture roles. As a Senior Data Architect, you will design and implement scalable, secure, and cost-effective data architectures on Google Cloud Platform (GCP). You will lead the design and development of data pipelines using tools such as BigQuery, Dataflow, and Cloud Storage, and will architect and implement data lakes, data warehouses, and real-time data processing solutions on GCP. It will be your duty to ensure that the data architecture is aligned with business objectives, governance, and compliance requirements, collaborating with stakeholders to define the data strategy and roadmap.

You will design and deploy BigQuery solutions for optimized performance and cost efficiency, and build and maintain ETL/ELT pipelines for large-scale data processing. Utilizing Cloud Pub/Sub, Dataflow, and Cloud Functions for real-time data integration, you will implement best practices for data security, privacy, and compliance in cloud environments. Integration of machine learning workflows with data pipelines and analytics tools will also be within your scope of work.

Your expertise will be crucial in defining data governance frameworks and managing data lineage. You will lead data modeling efforts to ensure consistency, accuracy, and performance across systems, optimize cloud infrastructure for scalability, performance, and reliability, mentor junior team members, and ensure adherence to architectural standards. You will collaborate with DevOps teams on the implementation of Infrastructure as Code (Terraform, Cloud Deployment Manager), ensure high availability and disaster recovery solutions are integrated into data systems, conduct technical reviews, audits, and performance tuning for data solutions, and design multi-region and multi-cloud data architecture solutions. Staying updated on emerging technologies and trends in data engineering and GCP will be crucial to driving innovation in data architecture, including recommending new tools and services on GCP.

Preferred qualifications include a Google Cloud Certification. Primary skills: 7+ years of data architecture experience; expertise in BigQuery, Cloud Dataflow, Cloud Pub/Sub, Cloud Storage, and related GCP services; strong proficiency in SQL, Python, or other data processing languages; experience in cloud security, data governance, and compliance frameworks; strong problem-solving skills and the ability to architect solutions for complex data environments. Leadership experience and excellent communication and collaboration skills are also highly valued.

Role: Senior Data Architect
Location: Trivandrum/Bangalore
Close Date: 14-03-2025
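As an illustration of the real-time integration pattern mentioned above, here is a minimal sketch of a Pub/Sub-triggered Cloud Function (2nd gen, CloudEvents style) that decodes a message and appends it to BigQuery. The table name is a hypothetical placeholder.

```python
import base64
import json

import functions_framework
from google.cloud import bigquery

# Hypothetical destination table for the decoded events.
TABLE_ID = "my-project.streaming.events"
client = bigquery.Client()


@functions_framework.cloud_event
def on_message(cloud_event):
    """Triggered by a Pub/Sub message; writes the payload to BigQuery."""
    # Pub/Sub delivers the payload base64-encoded inside the CloudEvent.
    raw = cloud_event.data["message"]["data"]
    event = json.loads(base64.b64decode(raw).decode("utf-8"))

    errors = client.insert_rows_json(TABLE_ID, [event])
    if errors:
        # Raising makes Pub/Sub retry delivery, subject to the subscription's policy.
        raise RuntimeError(f"BigQuery insert failed: {errors}")
```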

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

The GCP Architect is responsible for designing and implementing scalable, robust, and secure cloud solutions on Google Cloud Platform. You need a deep understanding of cloud architecture and services, strong problem-solving skills, and the ability to lead cross-functional teams in delivering complex cloud projects. You should possess excellent leadership, communication, and interpersonal skills, strong problem-solving abilities, attention to detail, and the capability to work in a fast-paced, dynamic environment while managing multiple priorities.

In terms of technical skills, you must have a strong understanding of GCP services such as Compute Engine, App Engine, Kubernetes Engine, Cloud Functions, and BigQuery. Proficiency in infrastructure-as-code tools like Terraform, as well as configuration tools such as Chef, Puppet, Ansible, and Salt, is required. Experience in deploying and managing applications with Kubernetes (GKE) and Docker, as well as serverless architectures, is crucial. Knowledge of API management, CI/CD pipelines, DevOps practices, networking, security, and database management in a cloud environment is necessary. You should also have experience in building ETL pipelines using Dataflow, Dataproc, and BigQuery, as well as using Pub/Sub, Dataflow, and other real-time data processing services; experience in implementing backup solutions and disaster recovery plans; designing and deploying applications with high availability and fault tolerance; and designing solutions that span multiple cloud providers and on-premises infrastructure.

Key Responsibilities:

Architectural Leadership:
- Lead the design and development of cloud solutions on GCP.
- Define and maintain the architectural vision to ensure alignment with business objectives.
- Evaluate and recommend tools, technologies, and processes for the highest quality solutions.

Solution Design:
- Design scalable, secure, and cost-effective cloud architectures.
- Develop proof-of-concept projects to validate proposed solutions.
- Collaborate with stakeholders to understand business requirements and translate them into technical solutions.

Implementation and Migration:
- Oversee the implementation of multi-cloud solutions while meeting performance and reliability targets.
- Lead heterogeneous cloud migration projects, ensuring minimal downtime and seamless transition with cloud-agnostic tools as well as third-party toolsets.
- Provide guidance and best practices for deploying and managing applications in GCP.

Team Leadership and Collaboration:
- Ensure no customer escalations.
- Mentor and guide technical teams, fostering a culture of innovation and continuous improvement.
- Collaborate with DevOps, Security, and Development teams to integrate cloud solutions.
- Conduct training sessions and workshops to upskill teams on GCP services and best practices.

Security and Compliance:
- Ensure cloud solutions comply with security and regulatory requirements.
- Implement and maintain security best practices, including identity and access management, data protection, and network security.

Continuous Improvement:
- Stay updated with the latest GCP services, features, and industry trends.
- Continuously evaluate and improve cloud processes and architectures to enhance performance, reliability, and cost-efficiency.

Posted 1 week ago

Apply

15.0 - 17.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Lead Engineer, VP

Position Overview
Job Title: Lead Engineer - VP
Location: Pune, India

Role Description
The Engineer is responsible for managing or performing work across multiple areas of the bank's overall IT platform/infrastructure, including analysis, development, and administration. It may also involve taking functional oversight of engineering delivery for specific departments. Responsibilities include:
- Planning and developing entire engineering solutions to accomplish business goals.
- Building reliability and resiliency into solutions with appropriate testing and reviewing throughout the delivery lifecycle.
- Ensuring maintainability and reusability of engineering solutions.
- Ensuring solutions are well architected and can be integrated successfully into the end-to-end business process flow.
- Reviewing engineering plans and quality to drive re-use and improve engineering capability.
- Participating in industry forums to drive adoption of innovative technologies, tools and solutions in the Bank.

Deutsche Bank's Corporate Bank division is a leading provider of cash management, trade finance and securities finance. We complete green-field projects that deliver the best Corporate Bank - Securities Services products in the world. Our team is diverse, international, and driven by a shared focus on clean code and valued delivery. At every level, agile minds are rewarded with competitive pay, support, and opportunities to excel. You will work as part of a cross-functional agile delivery team. You will bring an innovative approach to software development, focusing on using the latest technologies and practices, as part of a relentless focus on business value. You will be someone who sees engineering as a team activity, with a predisposition to open code, open discussion and creating a supportive, collaborative environment. You will be ready to contribute to all stages of software delivery, from initial analysis right through to production support.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 yrs. and above

Your key responsibilities
As lead engineer you will be responsible for designing, building, implementing, and maintaining software applications using Java and related technologies. You will be responsible for developing back-end services with Java, integrating APIs, ensuring cross-platform optimization, and collaborating with team members to deliver high-quality software solutions. You should have proficiency in Java, Spring Boot and the related tech stack, as well as strong problem-solving skills and the ability to work in an Agile development environment.

Your skills and experience
- 15+ years' experience in implementing software applications using Java technologies.
- Have led a team of 5+ engineers.
- Hands-on experience with Java: a popular object-oriented programming language used for building scalable and robust backend services.
- Spring Boot: a framework for building Java-based enterprise applications, providing features such as dependency injection, MVC architecture, and RESTful web services.
- Hibernate or Spring Data JPA: object-relational mapping (ORM) frameworks for simplifying database interactions and managing entity relationships.
- Spring Security: a framework for implementing authentication and authorization mechanisms in Spring-based applications.
- RESTful APIs: an architectural style for designing networked applications, allowing communication between the front-end and backend components.
- Databases: SQL/PL-SQL for commonly used databases such as Oracle, PostgreSQL, or MongoDB, depending on the specific requirements of the application.
- Developer tools and practices: an Integrated Development Environment (IDE) such as IntelliJ IDEA or Eclipse for Java development; version control with Git for managing source code and collaborating with team members; build tools such as Maven or Gradle for managing dependencies and building Java projects; testing frameworks such as JUnit for unit testing Java code and Selenium for automated browser testing; Agile methodologies such as Scrum or Kanban for iterative and collaborative software development; Continuous Integration/Continuous Deployment (CI/CD) tools like Jenkins, Travis CI, or GitLab CI/CD for automating the build, testing, and deployment processes.
- Good working knowledge of various async messaging streams such as Kafka, RabbitMQ, IBM MQ, etc.
- Experience with building distributed, large-scale, low-latency applications is desirable.
- Good understanding of implementing various design patterns to improve application performance.
- Good understanding of various object-oriented design principles such as SOLID, DRY, KISS, etc.
- Knowledge of Compute Engine for virtual machines, Cloud Storage for object storage, Cloud Functions for serverless computing, and GKE on GCP is desirable.
- Experience with container platform management services such as Docker is desirable.
- Knowledge of workflow management tools such as Camunda or IBM BPM is nice to have.
- Strong stakeholder management skills and the ability to communicate at a senior level.
- Proven experience of delivering results in matrixed organizations under pressure and tight timescales.
- Excellent verbal, interpersonal and written communication skills.
- Bachelor's or Master's degree in computer science or a related field.

How we'll support you
- Training and development to help you excel in your career.
- Coaching and support from experts in your team.
- A culture of continuous learning to aid progression.
- A range of flexible benefits that you can tailor to suit your needs.

About us and our teams
Please visit our company website for further information:
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 week ago

Apply

4.0 - 7.0 years

7 - 17 Lacs

Chennai, Bengaluru, Mumbai (all areas)

Hybrid

Job Title: GCP Data Engineer - Senior Associate
Experience: 4 to 7 years
Location: Pan India
Notice period: Immediate to 30 days

Job Title: Azure Data Engineer
Experience: 4 to 7 years
Location: Pan India
Notice period: Immediate to 30 days

Job Description
Designing, building and deploying cloud solutions for enterprise applications, with expertise in Cloud Platform Engineering.
- Expertise in application migration projects, including optimizing technical reliability and improving application performance.
- Good understanding of cloud security frameworks and cloud security standards.
- Solid knowledge and extensive experience of GCP and its cloud services.
- Experience with GCP services such as Compute Engine, Dataproc, Dataflow, BigQuery, Secret Manager, Kubernetes Engine, etc.
- Experience with Google storage products like Cloud Storage, Persistent Disk, Nearline, Coldline and Cloud Filestore.
- Experience with database products like Datastore, Cloud SQL, Cloud Spanner & Cloud Bigtable.
- Experience with implementing containers using cloud-native container orchestrators in GCP.
- Strong cloud programming skills, with experience in API and Cloud Functions development using Python.
- Hands-on experience with enterprise config & DevOps tools including Ansible, Bitbucket, Git, Jira, and Confluence.
- Strong knowledge of cloud security practices and Cloud IAM policy preparation for GCP.
- Knowledge and experience in API development, AI/ML, data lakes, data analytics, and cloud monitoring tools like Stackdriver.
- Ability to participate in fast-paced DevOps and System Engineering teams within Scrum agile processes.
- Understanding of data modelling and data warehousing concepts.
- Understand the current application infrastructure and suggest changes to it.

Job Description (Azure Data Engineer) - 4 to 7 years
Roles and Responsibilities: Responsible for data management activities related to the migration of on-prem sources to cloud systems using Microsoft Azure PaaS services and Azure cloud technologies, including the creation and use of ingestion pipelines, data lakes, cloud-based data marts & data warehouses, and a cloud-based semantic data services layer.

Desired Candidate Profile:
- 8 to 12 years of professional experience with working knowledge in a Data and Analytics role with a global organization
- 2+ years of experience in Azure cloud infrastructure for data services
- Experience in leading development of Data and Analytics products, from requirement gathering through driving user adoption
- Hands-on experience of developing, deploying, and running cloud solutions on Azure services like Azure Data Lake Storage, Azure Data Lake Analytics, Azure Data Factory, Synapse
- Candidates with strong data transformation experience on ADF and ADB (PySpark/Delta) are preferred
- Strong proficiency in writing and optimizing SQL queries and working with databases
- Ability to acquire specialized domain knowledge required to be more effective in all work activities
- BI & data-warehousing concepts are a must
- Exposure to Azure Synapse will be good
- Microsoft Azure Data Engineer certified candidates are preferred

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Cloud Site Reliability Engineer (SRE) at our esteemed organization, you will be responsible for ensuring the reliability, scalability, and performance of our cloud infrastructure. Working with a team of highly skilled professionals, you will contribute to the maintenance and optimization of our cloud-based applications. This role offers a unique opportunity to work in a dynamic environment that encourages continuous learning and innovation.

With a total of 6-10 years of experience in the IT industry, including a minimum of 2-3 years as a Cloud SRE/Engineer, you possess a Bachelor's or Master's degree in Computer Science, Engineering, or a related field. Your expertise lies in managing applications that utilize services such as Cloud Build, Cloud Functions, GKE (Google Kubernetes Engine), Logging, Monitoring, GCS (Google Cloud Storage), Cloud SQL, and IAM (Identity and Access Management).

Your proficiency in Python, along with experience in a secondary language like Golang or Java, enables you to effectively manage codebases and configurations. You have a strong background in GKE/Kubernetes, Docker, and implementing CI/CD pipelines using GCP Cloud Build and other cloud-native services. Your familiarity with Infrastructure as Code (IaC) tools like Terraform, coupled with knowledge of security best practices and RBAC, sets you apart in this role.

In this position, you will play a crucial role in defining, monitoring, and achieving Service Level Objectives (SLOs) and Service Level Agreements (SLAs). Your experience with source control tools like GitHub Enterprise and monitoring tools such as Grafana, Prometheus, Splunk, and GCP-native logging solutions will be instrumental in maintaining the reliability of our cloud infrastructure. Your commitment to continuous improvement and automation of manual tasks, as well as your willingness to provide additional support when necessary, will be highly valued in our organization. Additionally, experience in secrets management using tools like HashiCorp Vault and knowledge of tracing tools like Google Cloud Trace and Honeycomb are considered advantageous.

If you are a dedicated professional with a passion for cloud infrastructure and a keen interest in ensuring the reliability and performance of cloud-based applications, we encourage you to apply for this full-time position. Join us in our journey towards innovation and excellence in cloud engineering.

Benefits include health insurance, Provident Fund, and a work schedule aligned with US shift timings. The work location is in person, with a hybrid work mode of 2 days in the office. We look forward to welcoming talented individuals who are ready to make a significant impact in the cloud engineering domain.
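As an illustration of the SLO and error-budget arithmetic this role involves, here is a small sketch that computes remaining error budget for a 99.9% availability target over a 30-day window. The request counts are hypothetical; in practice they would come from Prometheus or Cloud Monitoring queries.

```python
# Error-budget math for an availability SLO.
SLO_TARGET = 0.999          # 99.9% of requests must succeed
WINDOW_DAYS = 30

# Hypothetical traffic numbers; in practice, query Prometheus/Cloud Monitoring.
total_requests = 12_500_000
failed_requests = 6_200

error_budget_fraction = 1.0 - SLO_TARGET          # 0.1% of requests may fail
allowed_failures = total_requests * error_budget_fraction
budget_consumed = failed_requests / allowed_failures

print(f"Allowed failures over {WINDOW_DAYS} days: {allowed_failures:,.0f}")
print(f"Error budget consumed: {budget_consumed:.1%}")
print(f"Error budget remaining: {1 - budget_consumed:.1%}")

# A common rule of thumb: raise a flag once a large share of the budget is gone
# well before the window ends (burn-rate alerting refines this further).
if budget_consumed > 0.5:
    print("Warning: over half the error budget is spent - slow down risky releases.")
```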

Posted 1 week ago

Apply

6.0 - 10.0 years

20 - 22 Lacs

Hyderabad, Ahmedabad, Bengaluru

Work from Office

We are seeking a highly skilled Senior Data Scientist / Conversational AI Architect to lead the design, development, and deployment of advanced conversational AI solutions using Google Cloud's Vertex AI and Dialogflow CX. This role requires deep expertise in building, testing, and productionizing voice and chat agents, with hands-on experience in creating complex flows, playbooks, and evaluation frameworks. You will be responsible for driving agent development from concept to production, ensuring high-quality user experiences across both voice and chat channels. A strong foundation in Python (Cloud Functions, Colab) and SQL (preferably BigQuery) is essential for building scalable solutions and performing data-driven evaluations. The ideal candidate will also possess excellent analytical skills to interpret user behavior and optimize agent performance. As a team lead, you will guide a team of developers and data scientists, manage stakeholder expectations, and ensure effective communication across business and technical teams. Familiarity with Agile program management methodologies is a plus. This role is best suited for someone with a blend of technical, leadership, and strategic thinking capabilities, able to translate complex requirements into impactful conversational AI solutions.
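As an illustration of the Dialogflow CX agent work described above, here is a minimal webhook sketch implemented as an HTTP Cloud Function: it reads the webhook tag and session parameters from the request and returns a fulfillment message. The tag name and parameter are hypothetical.

```python
import functions_framework


@functions_framework.http
def cx_webhook(request):
    """Dialogflow CX webhook: build a reply from session parameters."""
    body = request.get_json(silent=True) or {}
    tag = body.get("fulfillmentInfo", {}).get("tag", "")
    params = body.get("sessionInfo", {}).get("parameters", {})

    # Hypothetical flow: an order-status lookup keyed by a collected parameter.
    if tag == "order-status":
        order_id = params.get("order_id", "unknown")
        reply = f"Order {order_id} is being processed."
    else:
        reply = "Sorry, I can't help with that yet."

    # Standard CX webhook response shape: fulfillment_response.messages[].text.text[]
    return {
        "fulfillment_response": {
            "messages": [{"text": {"text": [reply]}}]
        }
    }
```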

Posted 1 week ago

Apply

6.0 - 10.0 years

20 - 22 Lacs

Indore, Pune, Chennai

Work from Office

We are seeking a highly skilled Senior Data Scientist / Conversational AI Architect to lead the design, development, and deployment of advanced conversational AI solutions using Google Cloud's Vertex AI and Dialogflow CX. This role requires deep expertise in building, testing, and productionizing voice and chat agents, with hands-on experience in creating complex flows, playbooks, and evaluation frameworks. You will be responsible for driving agent development from concept to production, ensuring high-quality user experiences across both voice and chat channels. A strong foundation in Python (Cloud Functions, Colab) and SQL (preferably BigQuery) is essential for building scalable solutions and performing data-driven evaluations. The ideal candidate will also possess excellent analytical skills to interpret user behavior and optimize agent performance. As a team lead, you will guide a team of developers and data scientists, manage stakeholder expectations, and ensure effective communication across business and technical teams. Familiarity with Agile program management methodologies is a plus. This role is best suited for someone with a blend of technical, leadership, and strategic thinking capabilities, able to translate complex requirements into impactful conversational AI solutions.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

This role is for a GCP Data Engineer who can build cloud analytics platforms to meet expanding business requirements with speed and quality using lean Agile practices. You will work on analysing and manipulating large datasets supporting the enterprise by activating data assets to support Enabling Platforms and Analytics on GCP. You will be responsible for designing the transformation and modernization on GCP. Experience with large-scale solutions and operationalizing data warehouses, data lakes and analytics platforms on Google Cloud Platform or another cloud environment is a must. We are looking for candidates who have a broad set of technology skills across these areas and who can demonstrate an ability to design the right solutions, with the appropriate combination of GCP and third-party technologies, for deployment on the Google Cloud Platform.

Responsibilities
- Develop technical solutions for Data Engineering and work between 1 PM and 10 PM IST to enable more overlap time with European and North American counterparts. This role will work closely with teams in the US as well as Europe to ensure robust, integrated migration aligned with Global Data Engineering patterns and standards.
- Design and deploy data pipelines with automated data lineage.
- Develop reusable Data Engineering patterns.
- Design and build production data engineering solutions to deliver pipeline patterns using Google Cloud Platform (GCP) services: BigQuery, Dataflow, Pub/Sub, Bigtable, Data Fusion, Dataproc, Cloud Composer, Cloud SQL, Compute Engine, Cloud Functions, and App Engine.
- Ensure timely migration of the Ford Credit Europe (FCE) Teradata warehouse to GCP and enable Teradata platform decommissioning by end of 2025, with a strong focus on ensuring continued, robust, and accurate regulatory reporting capability.

Position Opportunities
The Data Engineer role within FC Data Engineering supports the following opportunities for successful individuals:
- Key player in a high-priority program to unlock the potential of Data Engineering products and services and secure operational resilience for Ford Credit Europe.
- Explore and implement leading-edge technologies, tooling and software development best practices.
- Experience of managing data warehousing and product delivery within a financially regulated environment.
- Experience of collaborative development practices within an open-plan, team-designed environment.
- Experience of working with third-party suppliers / supplier management.
- Continued personal and professional development with support and encouragement for further certification.

Qualifications

Essential:
- 5+ years of experience in data engineering, with a focus on data warehousing and ETL development (including data modelling, ETL processes, and data warehousing principles).
- 5+ years of SQL development experience.
- 3+ years of cloud experience (GCP preferred) with solutions designed and implemented at production scale.
- Strong understanding of key GCP services, especially those related to data processing (batch/real time) leveraging Terraform, BigQuery, Dataflow, Data Fusion, Dataproc, Cloud Build, Airflow, and Pub/Sub, alongside storage services including Cloud Storage, Bigtable, and Cloud Spanner.
- Excellent problem-solving skills, with the ability to design and optimize complex data pipelines.
- Strong communication and collaboration skills, capable of working effectively with both technical and non-technical stakeholders as part of a large, global and diverse team.
- Experience developing with microservice architecture from a container orchestration framework.
- Designing pipelines and architectures for data processing.
- Strong evidence of self-motivation to continuously develop own engineering skills and those of the team.
- Proven record of working autonomously in areas of high ambiguity, without day-to-day supervisory support.
- Evidence of a proactive mindset to problem solving and willingness to take the initiative.
- Strong prioritization, coordination, organizational and communication skills, and a proven ability to balance workload and competing demands to meet deadlines.

Desired:
- Professional certification in GCP (e.g., Professional Data Engineer).
- Data engineering or development experience gained in a regulated, financial environment.
- Experience with Teradata to GCP migrations is a plus.
- Strong expertise in SQL and experience with programming languages such as Python, Java, and/or Apache Beam.
- Experience of coaching and mentoring Data Engineers.
- Experience with data security, governance, and compliance best practices in the cloud.
- An understanding of current architecture standards and digital platform services strategy.

Posted 1 week ago

Apply

8.0 - 12.0 years

20 - 35 Lacs

hyderabad

Hybrid

Job Overview: We are looking for a skilled and motivated Lead Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for requirements gathering, solution design and architecture, and developing and maintaining robust, scalable ETL (Extract, Transform, Load) and ELT data pipelines. The role involves working directly with customers through requirements gathering and the discovery phase, designing and architecting the solution using various GCP services, implementing data transformations and data ingestion, ensuring data quality and consistency across systems, and providing post-delivery support.

Experience Level: 10 to 12 years of relevant IT experience

Key Responsibilities:
- Design, develop, test, and maintain scalable ETL data pipelines using Python (an illustrative orchestration sketch follows this posting).
- Architect enterprise solutions with technologies such as Kafka, multi-cloud services, auto-scaling with GKE, load balancers, Apigee API proxy management, dbt, LLMs where needed in the solution, redaction of sensitive information, and DLP (Data Loss Prevention).
- Work extensively with Google Cloud Platform (GCP) services such as:
  - Dataflow for real-time and batch data processing
  - Cloud Functions for lightweight serverless compute
  - BigQuery for data warehousing and analytics
  - Cloud Composer (built on Apache Airflow) for orchestration of data workflows
  - Google Cloud Storage (GCS) for managing data at scale
  - IAM for access control and security
  - Cloud Run for containerized applications
- Experience in the following areas: API framework (Python FastAPI), processing engine (Apache Spark), messaging and streaming (Kafka), storage (MongoDB, Redis/Bigtable), and orchestration (Airflow).
- Experience with deployments on GKE and Cloud Run.
- Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
- Implement and enforce data quality checks, validation rules, and monitoring.
- Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
- Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
- Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
- Document pipeline designs, data flow diagrams, and operational support procedures.

Required Skills:
- 8 to 12 years of hands-on experience in Python for backend or data engineering projects.
- Strong understanding of and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
- Solid understanding of data pipeline architecture, data integration, and transformation techniques.
- Experience working with version control systems like GitHub and knowledge of CI/CD practices.
- Experience with Apache Spark, Kafka, Redis, FastAPI, Airflow, and Cloud Composer DAGs.
- Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).
- Experience with data migrations from on-premise data sources to cloud platforms.

Good to Have (Optional Skills):
- Experience working with the Snowflake cloud data platform.
- Hands-on knowledge of Databricks for big data processing and analytics.
- Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.

Additional Details:
- Excellent problem-solving and analytical skills.
- Strong communication skills and ability to collaborate in a team environment.
Education: Bachelor's degree in Computer Science, a related field, or equivalent experience.
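
As a hedged illustration of the Cloud Composer orchestration work listed above, the sketch below is a minimal Airflow DAG that loads a daily file from GCS into BigQuery and then builds a summary table. The DAG id, schedule, bucket, dataset, and table names are assumptions made for the example, not details from the posting.

```python
# Illustrative only: a minimal Cloud Composer (Airflow 2.x) DAG -- load a daily
# CSV drop from GCS into BigQuery, then run a transformation query.
# Bucket, dataset, and table names are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.transfers.gcs_to_bigquery import GCSToBigQueryOperator
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run daily at 02:00
    catchup=False,
    tags=["etl", "gcp"],
) as dag:

    load_raw = GCSToBigQueryOperator(
        task_id="load_raw_orders",
        bucket="my-landing-bucket",                        # hypothetical bucket
        source_objects=["orders/{{ ds }}/*.csv"],          # one folder per run date
        destination_project_dataset_table="analytics.raw_orders",
        source_format="CSV",
        skip_leading_rows=1,
        write_disposition="WRITE_TRUNCATE",
        autodetect=True,
    )

    transform = BigQueryInsertJobOperator(
        task_id="build_daily_summary",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_order_summary AS
                    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
                    FROM analytics.raw_orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )

    load_raw >> transform
```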

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

You are a Java Full Stack Developer with over 5 years of hands-on experience, and you have the opportunity to join our engineering team. Your role will involve working on Java-based backend development, modern front-end frameworks, and Google Cloud Platform (GCP) services to build scalable applications that support our digital transformation journey.

Your responsibilities will include designing, developing, and maintaining Java-based microservices using Spring Boot. You will also build interactive and responsive web applications using front-end frameworks like React.js or Angular, develop RESTful APIs, and integrate with third-party services. Additionally, you will leverage GCP services such as Cloud Functions, Cloud Run, Pub/Sub, BigQuery, and Firestore for application development. Working with CI/CD tools for automated deployment, testing, and monitoring will be an essential part of your role. Collaborating with DevOps, QA, and Product teams in Agile sprints, writing unit, integration, and automated tests, ensuring application performance, security, and scalability, and participating in code reviews, documentation, and technical discussions are also key aspects of the job.

The skills required for this position include at least 5 years of experience in Java/J2EE, Spring Boot, and microservices architecture. You should also possess strong front-end development skills with React.js, Angular, or Vue.js; solid experience with Google Cloud Platform (GCP) services such as Cloud Functions, Pub/Sub, BigQuery, and Cloud Run; and familiarity with SQL/NoSQL databases like PostgreSQL, Firestore, and MongoDB. Proficiency in CI/CD pipelines (Jenkins, GitLab CI, etc.), version control systems (Git/GitHub), and RESTful API development, along with testing experience with JUnit, Mockito, Selenium, or similar frameworks, is required to excel in this role.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

chennai, tamil nadu

On-site

As a GCP Data Engineer specialized in data migration and transformation, you will be responsible for designing and constructing robust, scalable data pipelines and architectures on Google Cloud Platform (GCP), with a particular focus on BigQuery. Your primary tasks will involve migrating and transforming large-scale data systems and datasets to GCP while emphasizing performance, scalability, and reliability. It will be crucial for you to automate data lineage extraction and ensure data integrity across various systems and platforms. Collaborating closely with architects and stakeholders, you will play a key role in implementing GCP-native and third-party tools for data ingestion, integration, and transformation.

Additionally, your role will include the development and optimization of complex SQL queries in BigQuery for data analysis and transformation. You will be expected to operationalize data pipelines using tools such as Apache Airflow (Cloud Composer), Dataflow, and Pub/Sub, enabling machine learning capabilities through well-structured, ML-friendly data pipelines. Participating in Agile processes and contributing to technical design discussions, code reviews, and documentation will be integral parts of your responsibilities.

Your background should include at least 5 years of experience in Data Warehousing, Data Engineering, or similar roles, with a minimum of 2 years of hands-on experience working with GCP BigQuery. Proficiency in Python, SQL, Apache Airflow, and various GCP services including BigQuery, Dataflow, Cloud Composer, Pub/Sub, and Cloud Functions is essential. You should possess experience in data pipeline automation, data modeling, and building reusable data products. A solid understanding of data lineage, metadata integration, and data cataloging, preferably using tools like GCP Data Catalog and Informatica EDC, will be beneficial. Demonstrated ability to analyze complex datasets, derive actionable insights, and build and deploy analytics platforms on cloud environments, preferably GCP, is required.

Preferred skills for this role include strong analytical and problem-solving capabilities, exposure to machine learning pipeline architecture and model deployment workflows, excellent communication skills, and the ability to collaborate effectively with cross-functional teams. Familiarity with Agile methodologies and DevOps best practices, a self-driven and innovative mindset, and experience in documenting complex data engineering systems and developing test plans will be advantageous for this position.
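
As one small, hedged example of the "data integrity across systems" responsibility mentioned above, the snippet below uses the google-cloud-bigquery client to run a row-count reconciliation between a legacy-export table and its migrated counterpart. The project, dataset, and table names are hypothetical, and a production migration check would usually also compare checksums or column-level aggregates.

```python
# Illustrative only: a small data-integrity check of the kind used during a
# migration -- compare row counts between a legacy-export table and the
# migrated BigQuery table. Project, dataset, and table names are hypothetical.
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project id

CHECK_SQL = """
SELECT
  (SELECT COUNT(*) FROM `my-gcp-project.legacy_export.orders`) AS source_rows,
  (SELECT COUNT(*) FROM `my-gcp-project.analytics.orders`)     AS target_rows
"""


def reconcile_row_counts() -> bool:
    """Return True when source and target row counts match."""
    row = next(iter(client.query(CHECK_SQL).result()))
    if row.source_rows != row.target_rows:
        print(f"Mismatch: source={row.source_rows}, target={row.target_rows}")
        return False
    print(f"OK: {row.target_rows} rows in both tables")
    return True


if __name__ == "__main__":
    reconcile_row_counts()
```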

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

pune, maharashtra, india

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities: Cloud Data Engineer (AWS/Azure/Databricks/GCP)
Experience: 2-4 years in Data Engineering

Job Description: We are seeking skilled and dynamic Cloud Data Engineers specializing in AWS, Azure, Databricks, and GCP. The ideal candidate will have a strong background in data engineering, with a focus on data ingestion, transformation, and warehousing. They should also possess excellent knowledge of PySpark or Spark, and a proven ability to optimize performance in Spark job executions.

- Design, build, and maintain scalable data pipelines for a variety of cloud platforms including AWS, Azure, Databricks, and GCP.
- Implement data ingestion and transformation processes to facilitate efficient data warehousing.
- Utilize cloud services to enhance data processing capabilities:
  - AWS: Glue, Athena, Lambda, Redshift, Step Functions, DynamoDB, SNS.
  - Azure: Data Factory, Synapse Analytics, Functions, Cosmos DB, Event Grid, Logic Apps, Service Bus.
  - GCP: Dataflow, BigQuery, Dataproc, Cloud Functions, Bigtable, Pub/Sub, Data Fusion.
- Optimize Spark job performance to ensure high efficiency and reliability (a minimal PySpark sketch follows this posting).
- Stay proactive in learning and implementing new technologies to improve data processing frameworks.
- Collaborate with cross-functional teams to deliver robust data solutions.
- Work on Spark Streaming for real-time data processing as necessary.

Qualifications:
- 2-4 years of experience in data engineering with a strong focus on cloud environments.
- Proficiency in PySpark or Spark is mandatory.
- Proven experience with data ingestion, transformation, and data warehousing.
- In-depth knowledge and hands-on experience with cloud services (AWS/Azure/GCP).
- Demonstrated ability in performance optimization of Spark jobs.
- Strong problem-solving skills and the ability to work independently as well as in a team.
- Cloud Certification (AWS, Azure, or GCP) is a plus.
- Familiarity with Spark Streaming is a bonus.

Mandatory skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Preferred skill sets: Python, PySpark, SQL with (AWS or Azure or GCP)
Years of experience required: 2-4 years
Education qualification: BE/BTech, ME/MTech, MBA, MCA

Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering, Bachelor of Technology
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)

Required Skills: PySpark, Python (Programming Language), Structured Query Language (SQL)
Optional Skills: Accepting Feedback, Active Listening, Artificial Intelligence, Big Data, C++ Programming Language, Communication, Complex Data Analysis, Data-Driven Decision Making (DIDM), Data Engineering, Data Lake, Data Mining, Data Modeling, Data Pipeline, Data Quality, Data Science, Data Science Algorithms, Data Science Troubleshooting, Data Science Workflows, Deep Learning, Emotional Regulation, Empathy, Inclusion, Intellectual Curiosity, Machine Learning + 12 more
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship: No
Government Clearance Required: No
Job Posting End Date:
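
As flagged in the responsibilities above, here is a minimal PySpark sketch of the ingest-transform-load work this role involves. Paths, schema, and column names are hypothetical, and the explicit schema and output partitioning are common performance choices rather than a prescribed standard.

```python
# Illustrative only: a minimal PySpark batch transformation -- ingest raw CSV,
# apply a cleansing/aggregation step, and write partitioned Parquet.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_aggregation")
    .getOrCreate()
)

# Ingest: an explicit schema avoids the extra pass over the files that
# schema inference would otherwise trigger.
orders = spark.read.csv(
    "gs://my-raw-bucket/orders/",   # hypothetical path; s3:// or abfss:// work the same way
    header=True,
    schema="order_id STRING, order_date DATE, country STRING, amount DOUBLE",
)

# Transform: drop bad rows, then aggregate per day and country.
daily = (
    orders
    .where(F.col("amount").isNotNull() & (F.col("amount") > 0))
    .groupBy("order_date", "country")
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

# Load: partition output by date so downstream queries prune files efficiently.
(
    daily
    .repartition("order_date")
    .write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("gs://my-curated-bucket/daily_orders/")
)

spark.stop()
```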

Posted 2 weeks ago

Apply