
607 Dataflow Jobs - Page 7

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

Role: Azure Databricks Engineer
Location: Kolkata
Experience: 7+ years

Must-Have:
- Build solutions for optimal extraction, transformation, and loading of data from a wide variety of data sources using Azure data ingestion and transformation components.
- Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL), and working familiarity with a variety of databases.
- Experience with ADF and Dataflow.
- Experience with big data tools like Delta Lake and Azure Databricks.
- Experience with Synapse.
- Skills in designing an Azure data solution.
- Assemble large, complex data sets that meet functional and non-functional business requirements.

Good-to-Have:
- Working knowledge of Azure DevOps.

Responsibilities / Expectations from the Role:
1. Customer centric: Work closely with client teams to understand project requirements and translate them into technical design; experience working in Scrum or with Scrum teams.
2. Internal collaboration: Work with project teams and guide the end-to-end project lifecycle; resolve technical queries; work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data needs.
3. Soft skills: Good communication skills; ability to interact with various internal groups and CoEs.
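For orientation, here is a minimal PySpark sketch of the kind of Delta Lake ingestion this role describes: read raw files, apply simple cleansing, and write a Delta table. The storage paths and column names are hypothetical placeholders, not the employer's actual pipeline.

```python
# Minimal sketch: ingest raw CSV, apply a simple transformation, and write a
# Delta Lake table. Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("abfss://landing@examplestore.dfs.core.windows.net/orders/"))

cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .dropDuplicates(["order_id"])
           .filter(F.col("amount").isNotNull()))

(cleaned.write
 .format("delta")
 .mode("overwrite")
 .save("abfss://curated@examplestore.dfs.core.windows.net/orders_delta/"))
```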

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Role: Java SB MS
Required Technical Skill Set: Java SB MS
Desired Experience Range: 4 - 10 years
Notice Period: Immediate to 30 days only
Location of Requirement: Pune / Trivandrum
Virtual interview date: 11th June 2025 (Wednesday)

Job Description - Desired Skills (Technical/Behavioral):
- Primary skill: Java Spring Boot, GCP services
- Secondary skill: BigQuery, Jenkins, Dataflow, Cloud SQL

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

Remote

Source: LinkedIn

Capgemini Invent:
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role:
- Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, S3.
- Good knowledge of cloud compute services and load balancing.
- Good knowledge of cloud identity management, authentication and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, Azure Functions.
- Experience in using cloud data integration services for structured, semi-structured and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, Dataproc.

Your Profile:
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, balancing infra investment against performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles and best practices in cloud.

What You Will Love About Working Here:
We recognize the significance of flexible work arrangements. Be it remote work or flexible work hours, you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth; our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini:
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
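As an illustration of the cloud utility functions this role mentions, here is a minimal sketch of a first-generation, GCS-triggered Cloud Function that loads a newly arrived CSV file into BigQuery. The project, dataset, and table names are hypothetical placeholders.

```python
# Minimal sketch: a GCS-triggered Cloud Function that loads a newly arrived
# CSV file into a BigQuery table. Names are hypothetical placeholders.
from google.cloud import bigquery

def load_to_bq(event, context):
    client = bigquery.Client()
    uri = f"gs://{event['bucket']}/{event['name']}"
    job_config = bigquery.LoadJobConfig(
        source_format=bigquery.SourceFormat.CSV,
        skip_leading_rows=1,
        autodetect=True,
        write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
    )
    load_job = client.load_table_from_uri(
        uri, "example-project.raw_zone.orders", job_config=job_config
    )
    load_job.result()  # wait for the load job to complete
```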

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Job Title: Senior GCP Data Engineer
Location: Chennai
Experience: 8-15 years

Job Summary:
We are looking for a Senior GCP Data Engineer with deep expertise in Google Cloud Platform (GCP) services and modern data engineering tools. The ideal candidate will be responsible for designing, building, and optimizing scalable data pipelines and solutions across the GCP ecosystem. This role is ideal for someone who thrives in Agile environments, understands infrastructure-as-code, and can work closely with data scientists, analysts, and DevOps teams.

Required Skills & Experience:
- 8-15 years of overall experience in data engineering, with at least 3 years of strong hands-on experience with GCP.
- Expert-level programming in Python for data manipulation and orchestration.
- Hands-on experience with Google Cloud services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Run, Cloud SQL.
- Proficiency with Terraform for managing GCP infrastructure.
- Experience using Dataform or similar data modeling/orchestration tools.
- Strong knowledge of SQL and performance optimization techniques.
- Solid understanding of Agile software development practices and tools (e.g., JIRA, Confluence).
- Experience with real-time and batch data processing patterns.
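For context, a minimal Apache Beam batch pipeline of the kind run on Dataflow: read CSV lines from Cloud Storage, parse them, and write rows to BigQuery. All project, bucket, and table names are hypothetical placeholders, not a reference implementation for this employer.

```python
# Minimal sketch: an Apache Beam batch pipeline for Dataflow that reads CSV
# lines from GCS and writes parsed rows to BigQuery. Names are hypothetical.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_line(line):
    user_id, event, ts = line.split(",")
    return {"user_id": user_id, "event": event, "ts": ts}

options = PipelineOptions(runner="DataflowRunner",
                          project="example-project",
                          region="us-central1",
                          temp_location="gs://example-bucket/tmp")

with beam.Pipeline(options=options) as p:
    (p
     | "Read" >> beam.io.ReadFromText("gs://example-bucket/events/*.csv",
                                      skip_header_lines=1)
     | "Parse" >> beam.Map(parse_line)
     | "Write" >> beam.io.WriteToBigQuery(
         "example-project:analytics.events",
         schema="user_id:STRING,event:STRING,ts:TIMESTAMP",
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```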

Posted 1 week ago

Apply

7.0 - 10.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

Source: Indeed

Locations: Indore, Madhya Pradesh, India; Noida, Uttar Pradesh, India

Job Description:
We are looking for a GCP Data Engineer and SQL Programmer with good working experience in PostgreSQL and PL/SQL programming and the following technical skills:
- PL/SQL and PostgreSQL programming: ability to write complex SQL queries and stored procedures.
- Migration: working experience in migrating database structure and data from Oracle to PostgreSQL, preferably on GCP AlloyDB or Cloud SQL.
- Working experience on Cloud SQL / AlloyDB.
- Working experience tuning autovacuum in PostgreSQL.
- Working experience tuning AlloyDB / PostgreSQL for better performance.
- Working experience on BigQuery, Firestore, Memorystore, Spanner, and bare-metal setups for PostgreSQL.
- Experience with the GCP Database Migration Service.
- Working experience on MongoDB.
- Working experience on Cloud Dataflow.
- Working experience on database disaster recovery, database job scheduling, and database logging techniques.
- Knowledge of OLTP and OLAP.

Desirable: GCP Database Engineer certification.

Other Skills:
- Out-of-the-box thinking and problem-solving skills.
- Ability to make tech choices (build vs. buy).
- Performance management (profiling, benchmarking, testing, fixing).
- Enterprise architecture.
- Project management / delivery capability / quality mindset: scope management, planning (phasing, critical path, risk identification), schedule management and estimations.
- Leadership skills.
- Other soft skills: learning ability, innovative / initiative.

Skills Required: PostgreSQL, PL/SQL, BigQuery

Role:
- Develop, construct, test, and maintain data architectures.
- Migrate enterprise Oracle databases from on-premise to GCP cloud.
- Tune autovacuum in PostgreSQL and tune AlloyDB / PostgreSQL for better performance.
- Performance-tune PostgreSQL stored procedure code and queries.
- Convert Oracle stored procedures and queries to PostgreSQL stored procedures and queries.
- Create a hybrid data store with data warehouse and NoSQL GCP solutions alongside PostgreSQL.
- Migrate Oracle table data from Oracle to AlloyDB.
- Lead the database team.

Experience: 7 to 10 years
Job Reference Number: 12779
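To illustrate the autovacuum tuning this posting calls out, here is a minimal Python sketch using psycopg2: inspect dead-tuple buildup, then tighten per-table autovacuum storage parameters. The connection details and table name are hypothetical placeholders.

```python
# Minimal sketch: inspect dead-tuple counts and tighten per-table autovacuum
# settings in PostgreSQL. Connection details and names are hypothetical.
import psycopg2

conn = psycopg2.connect(host="localhost", dbname="appdb",
                        user="dba", password="secret")
conn.autocommit = True
cur = conn.cursor()

# Find tables with heavy dead-tuple buildup relative to live tuples.
cur.execute("""
    SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
    FROM pg_stat_user_tables
    ORDER BY n_dead_tup DESC
    LIMIT 10;
""")
for row in cur.fetchall():
    print(row)

# Make autovacuum fire earlier on a hot, frequently updated table.
cur.execute("""
    ALTER TABLE orders SET (
        autovacuum_vacuum_scale_factor = 0.02,
        autovacuum_vacuum_threshold = 1000
    );
""")
cur.close()
conn.close()
```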

Posted 1 week ago

Apply

16.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Experience: 16+ years
Location: Hybrid

Job Description:
We are seeking a dynamic and experienced Enterprise Solution Architect to lead the design and implementation of innovative solutions that align with our organization's strategic objectives. The Enterprise Solution Architect will play a key role in defining the architecture vision, establishing technical standards, and driving the adoption of best practices across the enterprise. The ideal candidate will have a deep understanding of enterprise architecture principles, business processes, and technology trends, with a focus on delivering scalable, flexible, and secure solutions.

Responsibilities:
- Drive client conversations and solutions and build strong relationships with the client, acting as a trusted advisor and technical expert.
- Lay down the architectural roadmap, guidelines, and high-level design covering the end-to-end lifecycle of the data value chain, from ingestion and integration to consumption (visualization, AI capabilities), data governance, and non-functionals (including data security).
- Deliver large-scale data platform implementations for telecom clients; telecom domain understanding is a must.
- Implement data applications and platforms on GCP.
- Execute a comprehensive data migration strategy for our telecom client, involving multiple source systems moving to GCP.
- Deep dive into client requirements to understand their data needs and challenges; proactively propose solutions that leverage GCP's capabilities or integrate with external tools for optimal results.
- Spearhead solution calls with the client, translating complex data architecture and engineering concepts into clear, actionable plans for data engineers; demonstrate flexibility and adaptability to accommodate evolving needs.
- Develop a robust data model for the telecom client, ensuring data is organized, consistent, and readily available for analysis.
- Leverage your expertise in Data, AI, and ML to create a future-proof blueprint for the client's data landscape, enabling advanced analytics and insights generation.
- Develop architectural principles, standards, and guidelines to ensure consistency, interoperability, and scalability across systems and applications.
- Lead the design and implementation of end-to-end solutions that leverage emerging technologies and industry best practices to address business challenges and opportunities.
- Conduct architectural reviews and assessments to validate design decisions, identify risks, and recommend mitigation strategies.
- Collaborate with vendors, partners, and external consultants to evaluate and select technology solutions that meet business requirements and align with enterprise architecture standards.
- Drive the adoption of cloud computing, microservices architecture, API management, and other emerging technologies to enable digital transformation and innovation.
- Communicate the enterprise architecture vision, principles, and roadmap to stakeholders at all levels of the organization, and advocate for architectural decisions and investments.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Total experience of 18+ years in data analytics implementations.
- Minimum 10+ years of extensive experience as a Principal Solution Architect or in a similar senior role.
- Proven success in leading large-scale data migrations, particularly to GCP.
- In-depth knowledge of data architecture principles and best practices.
- Strong understanding of data modeling techniques and the ability to create efficient data models.
- Experience working with GCP and its various data management services (e.g., BigQuery, Cloud Storage, Dataflow, dbt).
- Experience with at least one programming language commonly used in data processing (e.g., Python, Java).
- A demonstrable understanding of Data Science, Artificial Intelligence, and Machine Learning concepts.

Job Category: Solution Designer
Job Type: Contract
Job Location: Hybrid

Posted 1 week ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

Company Description:
Sutherland is a global leader in driving business and digital transformation and exceptional experiences along the entire journey of our clients' engagement with their customers. With over 35 years of experience, we combine deep domain expertise and extensive knowledge in proven optimization with both proprietary and partnered tools and platforms to drive growth, efficiency, and productivity across organizations. Sutherland brings together our people, processes, products and platforms across cognitive artificial intelligence (AI), intelligent automation, advanced analytics and digital services to create unique solutions for the industries that we serve. The core values of remaining agile, outside-the-box thinking, uncompromising integrity and flawless execution are key pillars of the company. We serve marquee brands across Healthcare, Insurance, Banking and Financial Services, Communications, Media and Entertainment, Technology, Travel and Logistics, and Retail. Sutherland has 212 unique and independent inventions associated with several patent grants in critical technologies in the US and UK.

Job Description - Role Overview:
We are looking for a motivated Power BI Reporting / Analytics Specialist to join our team and help transform raw data into actionable insights within the context of SAP implementations. The ideal candidate will have 10+ years of experience working with SAP, as well as hands-on experience in creating reports, dashboards, and analytics using Power BI. In this role, you will collaborate with SAP functional teams to gather data from various SAP modules and develop business intelligence solutions that empower data-driven decision-making.

Key Responsibilities:
- Data collection and integration: Collaborate with SAP functional consultants and business stakeholders to gather and understand data requirements for reporting and analytics. Extract and integrate data from various SAP modules (e.g., SAP FICO, MM, SD, HR) to prepare datasets for reporting and analysis. Work with data engineering teams to ensure clean, accurate, and reliable data pipelines for Power BI reports.
- Power BI report development: Design, develop, and maintain interactive and visually appealing Power BI reports and dashboards based on business requirements. Create custom Power BI visualizations to present key metrics, KPIs, and trends derived from SAP data. Implement drill-down capabilities, dynamic filtering, and other Power BI features to enhance the user experience and provide more granular insights.
- Data analysis and insights: Perform data analysis on SAP data to identify key trends, anomalies, and business performance indicators. Work closely with business users to understand their analytical needs and provide actionable insights using Power BI. Provide ongoing analysis and reporting for continuous monitoring of business performance.
- Collaboration with SAP functional teams: Work closely with SAP functional consultants (e.g., SAP FICO, MM, SD) to ensure accurate extraction of relevant data from SAP systems. Assist in defining data models and ensuring that data from SAP is represented appropriately for reporting and analytics. Support functional teams in implementing data governance processes to ensure data integrity and consistency across reports.
- Report optimization and performance tuning: Continuously optimize Power BI reports for performance, ensuring fast loading times and efficient data refreshes. Troubleshoot and resolve performance issues in reports or dashboards to maintain smooth user experiences. Implement best practices for report design, data model optimization, and visual consistency.
- User support and training: Provide training and support to end-users, ensuring they understand how to navigate Power BI reports and interpret the data. Create user manuals or documentation for Power BI reports and dashboards, ensuring that business users can independently generate insights. Incorporate user feedback, ensuring reports meet their needs and making necessary adjustments.
- Continuous improvement: Stay up to date with the latest features and capabilities of Power BI, and implement new functionalities to improve the reporting experience. Suggest improvements to existing reporting structures and processes to enhance reporting efficiency and accuracy.

Required Skills & Qualifications:
- Experience: 10+ years of hands-on experience with Power BI, ideally with exposure to SAP data reporting and analytics.
- Technical skills: Proficiency in Power BI Desktop, Power BI Service, and DAX (Data Analysis Expressions); understanding of data extraction techniques (e.g., SAP HANA, SAP BW) and integration with Power BI; familiarity with SAP modules (FICO, MM, SD, HR) and their data structures; ability to design and implement effective data models and relationships in Power BI.
- Data visualization: Strong skills in creating effective and visually compelling reports and dashboards, ensuring clarity of insights.
- SQL skills: Knowledge of SQL for data extraction and transformation purposes.
- Analytical skills: Strong analytical mindset, capable of identifying patterns and trends within data to provide actionable insights.
- Collaboration skills: Ability to work cross-functionally with SAP teams, business users, and IT teams to ensure the success of reporting initiatives.
- Communication skills: Strong verbal and written communication skills, with the ability to present complex data in a simple, user-friendly manner.

Preferred Skills:
- SAP experience: Exposure to SAP systems and understanding of how data flows within SAP (SAP FICO, MM, SD, etc.).
- Power BI certification: Certification in Power BI or other relevant BI tools.
- Data warehousing knowledge: Familiarity with data warehousing concepts and the integration of data sources into reporting tools.
- Advanced Power BI features: Experience with advanced Power BI features like Power Query, Dataflow, custom visuals, and data transformations.
- Agile methodology: Experience working in Agile/Scrum project environments.

Additional Information:
All your information will be kept confidential according to EEO guidelines.

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. We believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity.
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Master of Business Administration, Bachelor of Engineering, Master of Engineering
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
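As a companion to the batch example earlier on this page, here is a minimal streaming variant: a Dataflow pipeline reading JSON messages from Pub/Sub and appending them to BigQuery. Topic and table names are hypothetical placeholders.

```python
# Minimal sketch: a streaming Dataflow pipeline reading JSON messages from
# Pub/Sub and appending them to BigQuery. Names are hypothetical placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (p
     | "ReadPubSub" >> beam.io.ReadFromPubSub(
         topic="projects/example-project/topics/clickstream")
     | "Decode" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
     | "WriteBQ" >> beam.io.WriteToBigQuery(
         "example-project:analytics.clickstream",
         schema="user_id:STRING,page:STRING,ts:TIMESTAMP",
         write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND))
```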

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. We believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity.
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. We believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
- Design and implement scalable, efficient, and secure data pipelines on GCP, utilizing tools such as BigQuery, Dataflow, Dataproc, Pub/Sub, and Cloud Storage.
- Collaborate with cross-functional teams (data scientists, analysts, and software engineers) to understand business requirements and deliver actionable data solutions.
- Develop and maintain ETL/ELT processes to ingest, transform, and load data from various sources into GCP-based data warehouses.
- Build and manage data lakes and data marts on GCP to support analytics and business intelligence initiatives.
- Implement automated data quality checks, monitoring, and alerting systems to ensure data integrity.
- Optimize and tune performance for large-scale data processing jobs in BigQuery, Dataflow, and other GCP tools.
- Create and maintain data pipelines to collect, clean, and transform data for analytics and machine learning purposes.
- Ensure data governance and compliance with organizational policies, including data security, privacy, and access controls.
- Stay up to date with new GCP services and features and make recommendations for improvements and new implementations.

Mandatory Skill Sets: GCP, BigQuery, Dataproc
Preferred Skill Sets: GCP, BigQuery, Dataproc, Airflow
Years of Experience Required: 4-7
Education Qualification: B.Tech / M.Tech / MBA / MCA
Degrees/Field of Study required: Bachelor of Engineering, Master of Business Administration, Master of Engineering
Required Skills: Google Cloud Platform (GCP)
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline, Data Quality, Data Transformation, Data Validation {+ 18 more}
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No

Posted 1 week ago

Apply

6.0 years

5 - 10 Lacs

Noida

On-site

Country/Region: IN
Requisition ID: 26218
Location: INDIA - NOIDA - BIRLASOFT OFFICE
Title: Technical Specialist - Data Engineering

Description - Area(s) of Responsibility: JD - Snowflake Developer
- 6 years of experience, with hands-on development experience in DBT, Aptitude, and Snowflake on the Azure platform: Dataflow, data ingestion, data storage and security.
- Expertise in the ETL tool DBT; design data integration (ETL) projects using DBT.
- Strong hands-on experience in building custom data models / a semantic reporting layer in Snowflake to support customer reporting and current platform requirements.
- Good to have experience in any other ETL tool.
- Participate in the entire project lifecycle, including design and development of ETL solutions.
- Design data integration and conversion strategy, exception handling mechanism, and data retention and archival strategy.
- Ability to communicate platform features/development effectively to customer SMEs and technical teams.
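For illustration, here is a minimal Python sketch using the Snowflake connector to build the kind of semantic/reporting-layer view this posting describes; in a DBT project the same SELECT would typically live in a model file instead. The account, warehouse, and object names are hypothetical placeholders.

```python
# Minimal sketch: create a simple reporting-layer view in Snowflake via the
# Python connector. Account and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="secret",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="REPORTING",
)
cur = conn.cursor()
cur.execute("""
    CREATE OR REPLACE VIEW REPORTING.V_DAILY_SALES AS
    SELECT order_date,
           region,
           SUM(amount) AS total_sales,
           COUNT(DISTINCT order_id) AS order_count
    FROM RAW.SALES_ORDERS
    GROUP BY order_date, region
""")
cur.close()
conn.close()
```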

Posted 1 week ago

Apply

6.0 years

0 Lacs

Greater Hyderabad Area

On-site

Source: LinkedIn

Area(s) of Responsibility: JD - Snowflake Developer
- 6 years of experience, with hands-on development experience in DBT, Aptitude, and Snowflake on the Azure platform: Dataflow, data ingestion, data storage and security.
- Expertise in the ETL tool DBT; design data integration (ETL) projects using DBT.
- Strong hands-on experience in building custom data models / a semantic reporting layer in Snowflake to support customer reporting and current platform requirements.
- Good to have experience in any other ETL tool.
- Participate in the entire project lifecycle, including design and development of ETL solutions.
- Design data integration and conversion strategy, exception handling mechanism, and data retention and archival strategy.
- Ability to communicate platform features/development effectively to customer SMEs and technical teams.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad

On-site

Summary:
We are seeking a highly skilled and motivated GCP Data Engineering Manager to join our dynamic team. As a Data Engineering Manager specializing in Google Cloud Platform (GCP), you will play a crucial role in designing, implementing, and maintaining scalable data pipelines and systems. You will leverage your expertise in Google BigQuery, SQL, Python, and analytics to drive data-driven decision-making processes and support various business functions.

About the Role - Key Responsibilities:
- Data pipeline development: Design, develop, and maintain robust data pipelines using GCP services like Dataflow and Dataproc, ensuring high performance and scalability.
- Google BigQuery expertise: Utilize your hands-on experience with Google BigQuery to manage and optimize data storage, retrieval, and processing.
- SQL proficiency: Write and optimize complex SQL queries to transform and analyze large datasets, ensuring data accuracy and integrity.
- Python programming: Develop and maintain Python scripts for data processing, automation, and integration with other systems and tools.
- Data integration: Collaborate with data analysts and other stakeholders to integrate data from various sources, ensuring seamless data flow and consistency.
- Data quality and governance: Implement data quality checks, validation processes, and governance frameworks to maintain high data standards.
- Performance tuning: Monitor and optimize the performance of data pipelines, queries, and storage solutions to ensure efficient data processing.
- Documentation: Create comprehensive documentation for data pipelines, processes, and best practices to facilitate knowledge sharing and team collaboration.

Minimum Qualifications:
- Proven experience (minimum 6-8 years) as a data engineer, with significant hands-on experience in Google Cloud Platform (GCP) and Google BigQuery.
- Proficiency in SQL for data transformation, analysis, and performance optimization.
- Strong programming skills in Python, with experience in developing data processing scripts and automation.
- Proven analytical skills with the ability to interpret complex data and provide actionable insights.
- Excellent problem-solving abilities and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in a team environment.

Desired Skills:
- Experience with Google Analytics data and understanding of digital marketing data.
- Familiarity with other GCP services such as Cloud Storage, Dataflow, Pub/Sub, and Dataproc.
- Knowledge of data visualization tools such as Looker, Tableau, or Data Studio.
- Experience with machine learning frameworks and libraries.

Why Novartis: Helping people with disease and their families takes more than innovative science. It takes a community of smart, passionate people like you. Collaborating, supporting and inspiring each other. Combining to achieve breakthroughs that change patients' lives. Ready to create a brighter future together? https://www.novartis.com/about/strategy/people-and-culture

Join our Novartis Network: Not the right Novartis role for you? Sign up to our talent community to stay connected and learn about suitable career opportunities as soon as they come up: https://talentnetwork.novartis.com/network

Benefits and Rewards: Read our handbook to learn about all the ways we'll help you thrive personally and professionally: https://www.novartis.com/careers/benefits-rewards

Division: US | Business Unit: Universal Hierarchy Node | Location: India | Site: Hyderabad (Office) | Company / Legal Entity: IN10 (FCRS = IN010) Novartis Healthcare Private Limited | Functional Area: Marketing | Job Type: Full time | Employment Type: Regular | Shift Work: No

Accessibility and accommodation: Novartis is committed to working with and providing reasonable accommodation to individuals with disabilities. If, because of a medical condition or disability, you need a reasonable accommodation for any part of the recruitment process, or in order to perform the essential functions of a position, please send an e-mail to [email protected] and let us know the nature of your request and your contact information. Please include the job requisition number in your message. Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities we serve.
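To illustrate the BigQuery-plus-Python work this role centres on, here is a minimal sketch of a parameterized BigQuery query from Python. The project, dataset, and column names are hypothetical placeholders.

```python
# Minimal sketch: run a parameterized BigQuery query from Python and print
# the results. Project, dataset, and column names are hypothetical.
import datetime
from google.cloud import bigquery

client = bigquery.Client(project="example-project")

query = """
    SELECT channel, COUNT(*) AS sessions
    FROM `example-project.marketing.web_events`
    WHERE event_date >= @start_date
    GROUP BY channel
    ORDER BY sessions DESC
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter(
            "start_date", "DATE", datetime.date(2025, 1, 1)),
    ]
)
for row in client.query(query, job_config=job_config).result():
    print(row.channel, row.sessions)
```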

Posted 1 week ago

Apply

8.0 - 13.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We are looking for a skilled Lead Data Engineer to join our dynamic team. In this role, you will take charge of designing, building, and maintaining data integration solutions for our clients, and will lead a team of engineers in delivering high-quality, scalable, and efficient solutions. This role presents an exciting challenge for an experienced data integration expert who is enthusiastic about technology and excels in a fast-paced, evolving setting.

Responsibilities:
- Design, build, and maintain data integration solutions for clients.
- Lead a team of engineers to guarantee high-quality, scalable, and efficient data integration solutions.
- Collaborate with multidisciplinary teams to understand business needs and devise appropriate data integration solutions.
- Ensure the security, reliability, and efficiency of data integration solutions.
- Create and update documentation, such as technical specifications, data flow diagrams, and data mappings.
- Continuously update knowledge and skills related to the latest data integration methods and tools.

Requirements:
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 8-13 years of experience in data engineering, data integration, or a related field.
- Proficiency in cloud-native or Spark-based ETL tools like AWS Glue, Azure Data Factory, or GCP Dataflow.
- Strong knowledge of SQL for data querying and manipulation.
- Background in Snowflake for cloud data warehousing.
- Familiarity with at least one cloud platform such as AWS, Azure, or GCP.
- Experience leading a team of engineers on data integration projects.
- Good verbal and written communication skills in English at a B2 level.

Nice to have: Background in ETL using Python.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Source: LinkedIn

We are seeking a highly skilled and motivated Lead DS/ML Engineer with a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning. The role is critical to the development of a cutting-edge reporting, insights, and recommendations platform designed to measure and optimize online marketing campaigns, and focuses on building scalable data pipelines, developing ML models, and deploying solutions in production. The ideal candidate should be comfortable working across data engineering, the ML model lifecycle, and cloud-native technologies.

Key Responsibilities:
- Data engineering & pipeline development: Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data. Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect. Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads. Implement data modeling and feature engineering for ML use cases.
- Machine learning model development & validation: Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization. Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations. Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis. Optimize models for scalability, efficiency, and interpretability.
- MLOps & model deployment: Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving. Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining. Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms.
- Cloud & infrastructure optimization: Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow. Work on containerized deployment (Docker, Kubernetes) for scalable model inference. Implement cost-efficient, serverless data solutions where applicable.
- Business impact & cross-functional collaboration: Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives. Translate complex model insights into actionable business recommendations. Present findings and performance metrics to both technical and non-technical stakeholders.

Qualifications & Skills:
Educational qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field. Certification in Google Cloud (Professional Data Engineer, ML Engineer) is a plus.

Must-have skills:
- Experience: 5-10 years with the mentioned skill set and relevant hands-on experience.
- Data engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
- ML model development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
- Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
- Cloud & infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
- MLOps & deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
- Data warehousing & real-time processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-have skills:
- Experience with graph ML, reinforcement learning, or causal inference modeling.
- Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
- Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
- Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent
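To illustrate the model-development and MLOps loop described above, here is a minimal sketch that trains a toy campaign-response classifier with scikit-learn and logs it with MLflow. The experiment name, features, and hyperparameters are hypothetical placeholders on synthetic data.

```python
# Minimal sketch: train a toy campaign-response model on synthetic data and
# log parameters, a metric, and the model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, n_features=12, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("campaign_response")
with mlflow.start_run():
    model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_auc", auc)
    mlflow.sklearn.log_model(model, "model")
```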

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

FORD Requirement - Order Number: 34170-23 L PA Chennai - Contract
Position Title: Architect Senior
Target Start Date: 01-JUL-2025
Original Duration: 334 days
Notice Period: Immediate joiners / serving up to 30 days
Work Hours: 02:00 PM to 11:30 PM (standard night shift)
Travel Required? No (0%)

Position Description:
Materials Management Platform (MMP) is a multi-year transformation initiative aimed at transforming Ford's Materials Requirement Planning and Inventory Management capabilities, as part of a larger Industrial Systems IT transformation effort. This position is responsible for designing and deploying a data-centric architecture in GCP for the Materials Management Platform, which exchanges data with multiple modern and legacy applications across Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration.

Skills Required: GCP, Data Architecture
Skills Preferred: Cloud Architecture

Experience Required: 8 to 12 years
- Requires a bachelor's or foreign equivalent degree in computer science, information technology, or a technology-related field.
- 8 years of professional experience in data engineering, data product development, and software product launches, including at least three of the following languages: Java, Python, Spark, Scala, SQL, with performance-tuning experience.
- 4 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
  - Data warehouses like Google BigQuery.
  - Workflow orchestration tools like Airflow.
  - Relational database management systems like MySQL, PostgreSQL, and SQL Server.
  - Real-time data streaming platforms like Apache Kafka and GCP Pub/Sub.
  - Microservices architecture to deliver large-scale real-time data processing applications.
  - REST APIs for compute, storage, operations, and security.
  - DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, and Docker.
  - Project management tools like Atlassian JIRA.
Experience Preferred: Automotive experience; support in an onshore/offshore model; excellence at problem solving and prevention; knowledge and practical experience of agile delivery.
Education Required: Bachelor's degree
Education Preferred: Certification program

Additional Information:
- Design and implement data-centric solutions on Google Cloud Platform (GCP) using various GCP tools such as BigQuery, Google Cloud Storage, Cloud SQL, Memorystore, Dataflow, Dataproc, Artifact Registry, Cloud Build, Cloud Run, Vertex AI, Pub/Sub, and GCP APIs.
- Build ETL pipelines to ingest data from heterogeneous sources into the system.
- Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data.
- Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets.
- Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project requirements.
- Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure.
- Implement version control and CI/CD practices for data engineering workflows to ensure reliable and efficient deployments.
- Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures.
- Troubleshoot and resolve issues related to data processing, storage, and retrieval.
- Promptly address code-quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle.
- Implement security measures and data governance policies to ensure the integrity and confidentiality of data.
- Collaborate with stakeholders to gather and define data requirements, ensuring alignment with business objectives.
- Develop and maintain documentation for data engineering processes, ensuring knowledge transfer and ease of system maintenance.
- Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems.
- Provide mentorship and guidance to junior team members, fostering a collaborative and knowledge-sharing environment.

Skills: Airflow, data warehouses, cloud architecture, RDBMS, Spark, PostgreSQL, real-time data streaming, microservices architecture, GCP, Python, Tekton, cloud, Java, data architecture, Terraform, management, REST APIs, Git, data, Docker, GitHub Actions, SQL, Google BigQuery, Atlassian JIRA, SQL Server, workflow orchestration, Scala, DevOps tools, GitHub, Apache Kafka, MySQL, GCP Pub/Sub
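For context on the real-time streaming platforms this posting names, here is a minimal Python sketch producing and consuming messages with Apache Kafka via confluent-kafka. The broker address, topic, and message payload are hypothetical placeholders, not Ford's actual systems.

```python
# Minimal sketch: produce and consume messages with Apache Kafka using
# confluent-kafka. Broker, topic, and payload are hypothetical placeholders.
from confluent_kafka import Producer, Consumer

producer = Producer({"bootstrap.servers": "broker:9092"})
producer.produce("material-movements", key="plant-01",
                 value='{"part": "A123", "qty": 40}')
producer.flush()

consumer = Consumer({
    "bootstrap.servers": "broker:9092",
    "group.id": "mmp-ingest",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["material-movements"])
msg = consumer.poll(timeout=10.0)
if msg is not None and msg.error() is None:
    print(msg.key(), msg.value())
consumer.close()
```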

Posted 1 week ago

Apply

8.0 - 10.0 years

20 - 30 Lacs

Chennai

Hybrid

Source: Naukri

Role & responsibilities:
- GCP services: BigQuery, Dataflow, Dataproc, Dataplex, Data Fusion, Terraform, Tekton, Cloud SQL, Memorystore for Redis, Airflow, Cloud Storage.
- 2+ years in data transfer utilities.
- 2+ years in Git or any other version control tool.
- 2+ years in Confluent Kafka.
- 1+ years of experience in API development.
- 2+ years in an Agile framework.
- 4+ years of strong experience in Python and PySpark development.
- 4+ years of shell scripting to develop ad hoc jobs for data importing/exporting.

Preferred candidate profile: Python, Dataflow, Dataproc, GCP Cloud Run, Dataform, Agile software development, BigQuery, Terraform, Data Fusion, Cloud SQL, GCP, Kafka, Java.

Please note that only immediate joiners will be considered for this position due to project urgency.

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Career Opportunity with Burckhardt Compression

Role:
We are seeking a motivated and experienced professional who can effectively contribute to the deliverables connected with the position below. In this position you can actively participate in our growth and make a significant impact in a fast-paced environment as:

Position: Data Engineer
Location: Pune

Your Contributions to the Organisation's Growth:
- Maintain and develop data platforms based on Microsoft Fabric for business intelligence and Databricks for real-time data analytics.
- Design, implement and maintain standardized production-grade data pipelines using modern data transformation processes and workflows for SAP and MS Dynamics, on-premise or cloud.
- Develop an enterprise-scale cloud-based data lake for business intelligence solutions.
- Translate business and customer needs into data collection, preparation and processing requirements.
- Optimize the performance of algorithms developed by data scientists.
- General administration and monitoring of the data platforms.

Competencies:
- Working with structured and unstructured data.
- Experienced in various database technologies (RDBMS, OLAP, time series, etc.).
- Solid programming skills (Python, SQL; Scala is a plus).
- Experience in Microsoft Fabric (incl. Warehouse, Lakehouse, Data Factory, Dataflow Gen2, Semantic Model) and/or Databricks (Spark).
- Proficient in Power BI.
- Experienced working with APIs.
- Proficient in security best practices.
- Data-centered Azure know-how is a plus (storage, networking, security, billing).

Expertise you have to bring in:
- Bachelor's or Master's degree in business informatics, computer science, or equivalent.
- A background in software engineering (e.g., agile programming, project organization) and experience with human-centered design would be desirable.
- Extensive experience in handling large data sets.
- At least 5 years of experience as a data engineer, preferably in an industrial company.
- Analytical problem-solving skills and the ability to assimilate complex information.
- Programming experience in modern data-oriented languages (SQL, Python).
- Experience with Apache Spark and DevOps.
- Proven ability to synthesize complex data; advanced technical skills related to data modelling, data mining, database design and performance tuning.
- English language proficiency.

Special Requirements:
- High-quality mindset paired with strong customer orientation, critical thinking, and attention to detail.
- Understanding of data processing at scale.
- Influence without authority.
- Willingness to acquire additional system/technical knowledge as needed.
- Problem solver.
- Experience working in an international organization and in multicultural teams.
- Proactive, creative and innovative.

We Offer:
We have a very open culture, inspiring employees to get involved in various activities of their interest. Our flexible working models allow you to combine private interests with work. Employee Connect, engagement events and a feedback culture enhance our reach and give us an opportunity to continuously improve. Performance and appreciation awards. Sports activities and the Klib Library to energize you. We proudly encourage diversity and inclusion in thought and in spirit. As a winner of GreenCo Gold and various other ISO certifications, we encourage you to contribute to a much greener tomorrow! We aspire to be a Great Place to Work, to provide you an enticing career with us.

HR Team, Burckhardt Compression India

Posted 1 week ago

Apply

3.0 - 5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Vertex AI Developer
Experience: 3 - 5 Years
Location: Chennai / Hyderabad
Notice Period: Immediate Joiners Preferred
Employment Type: Full-Time

Job Description: We are looking for a passionate and skilled Vertex AI Developer with hands-on experience in Google Cloud's Vertex AI, Python, Machine Learning (ML), and Generative AI. The ideal candidate will play a key role in designing, developing, and deploying scalable ML/GenAI models and workflows using GCP Vertex AI services.

Key Responsibilities:
Develop, deploy, and manage ML/GenAI models using Vertex AI on Google Cloud Platform (GCP).
Work with structured and unstructured data to create and train predictive and generative models.
Integrate AI models into scalable applications using Python APIs and GCP components.
Collaborate with data scientists, ML engineers, and DevOps teams to implement end-to-end ML pipelines.
Monitor model performance and iterate on improvements as necessary.
Document solutions, best practices, and technical decisions.

Mandatory Skills:
3 to 5 years of experience in Machine Learning/AI development.
Strong proficiency in Python and ML libraries such as TensorFlow, PyTorch, and Scikit-learn.
Hands-on experience with Vertex AI, including AutoML, Pipelines, Model Deployment, and Monitoring.
Experience with GenAI frameworks (e.g., PaLM, LangChain, LLMOps).
Proficiency in using Google Cloud Platform tools and services.
Strong understanding of MLOps, CI/CD, and model lifecycle management.

Preferred Skills:
Experience with containerization tools like Docker and orchestration tools like Kubernetes.
Exposure to Natural Language Processing (NLP) and LLMs.
Familiarity with data engineering concepts (BigQuery, Dataflow).
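For context on what day-to-day Vertex AI work can look like, here is a minimal, hedged sketch using the google-cloud-aiplatform SDK. The project, bucket, model name and serving image are placeholders, not details from this posting.

```python
from google.cloud import aiplatform

# Placeholders: project, region, model artifact location and serving image.
aiplatform.init(project="my-gcp-project", location="us-central1")

# Register a trained model artifact with the Vertex AI Model Registry.
model = aiplatform.Model.upload(
    display_name="demand-forecaster",
    artifact_uri="gs://my-bucket/models/demand-forecaster/",
    serving_container_image_uri=(
        "us-docker.pkg.dev/vertex-ai/prediction/sklearn-cpu.1-0:latest"
    ),
)

# Deploy to an online endpoint and request a prediction.
endpoint = model.deploy(machine_type="n1-standard-4")
prediction = endpoint.predict(instances=[[12.5, 3, 0.7]])
print(prediction.predictions)
```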

Posted 1 week ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Overview: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.

Key Responsibilities:
Design, develop, test, and maintain scalable ETL data pipelines using Python.
Work extensively on Google Cloud Platform (GCP) services such as:
  Dataflow for real-time and batch data processing
  Cloud Functions for lightweight serverless compute
  BigQuery for data warehousing and analytics
  Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  Google Cloud Storage (GCS) for managing data at scale
  IAM for access control and security
  Cloud Run for containerized applications
Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
Implement and enforce data quality checks, validation rules, and monitoring.
Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
Document pipeline designs, data flow diagrams, and operational support procedures.

Required Skills:
4-6 years of hands-on experience in Python for backend or data engineering projects.
Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
Solid understanding of data pipeline architecture, data integration, and transformation techniques.
Experience working with version control systems like GitHub and knowledge of CI/CD practices.
Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).

Good to Have (Optional Skills):
Experience working with the Snowflake cloud data platform.
Hands-on knowledge of Databricks for big data processing and analytics.
Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.
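To make the Dataflow/BigQuery responsibilities concrete, here is a minimal Apache Beam sketch of a batch ETL pipeline that could be submitted to Dataflow. The project, bucket, table and field names are hypothetical.

```python
import json

import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

# Hypothetical project, bucket and table; use "DirectRunner" to test locally.
options = PipelineOptions(
    runner="DataflowRunner",
    project="my-gcp-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

def clean(record: dict) -> dict:
    """Basic transformation/cleansing step."""
    record["amount"] = float(record.get("amount") or 0.0)
    return record

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadFromGCS" >> beam.io.ReadFromText("gs://my-bucket/raw/*.json")
        | "Parse" >> beam.Map(json.loads)
        | "Clean" >> beam.Map(clean)
        | "WriteToBigQuery" >> beam.io.WriteToBigQuery(
            "my-gcp-project:analytics.orders",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_NEVER,
        )
    )
```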

Posted 1 week ago

Apply

10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.

HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Associate Director, Data and Analytics.

In this role, you will:
Engineer the data transformations and analysis for the Cash Equities Trading platform.
Act as technology SME on the real-time stream processing paradigm.
Bring your experience in low-latency, high-throughput, auto-scaling platform design and implementation.
Implement an end-to-end platform service, assessing the operational and non-functional needs clearly.
Drive and document technical and functional decisions with appropriate diligence.
Provide operational support and manage incidents.

Requirements

To be successful in this role, you should meet the following requirements:
10+ years of experience in data engineering technology and tools.
Preferably, experience with Java/Scala based implementations for enterprise-wide platforms.
Experience with Apache Beam, Google Dataflow and Apache Kafka for the real-time stream processing technology stack.
Complex stateful processing of events with partitioning for higher throughput.
Experience fine-tuning throughput and improving the performance of data pipelines.
Experience with analytical data store optimizations, querying and managing them.
Experience with alternate data engineering tools (Apache Flink, Apache Spark, etc.).
Automated CI/CD or operational concerns on the engineering platforms.
Interpreting problems from a functional context and transforming them into technology solutions.

You'll achieve more when you join HSBC. www.hsbc.com/careers

HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.

Issued by – HSBC Software Development India
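As a rough sketch of the stream processing stack named above (Apache Beam over Kafka), here is a minimal keyed, windowed aggregation. Brokers, topic and keys are hypothetical, and ReadFromKafka is a cross-language transform that needs a Beam expansion service at runtime.

```python
import apache_beam as beam
from apache_beam.io.kafka import ReadFromKafka
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.transforms.window import FixedWindows

options = PipelineOptions(streaming=True)

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadTrades" >> ReadFromKafka(
            consumer_config={"bootstrap.servers": "broker:9092"},  # hypothetical broker
            topics=["equity-trades"],                              # hypothetical topic
        )
        # ReadFromKafka yields (key, value) byte pairs; assume the key is a symbol.
        | "OnePerTrade" >> beam.Map(lambda kv: (kv[0], 1))
        | "OneMinuteWindows" >> beam.WindowInto(FixedWindows(60))
        | "TradesPerSymbol" >> beam.CombinePerKey(sum)
        | "Emit" >> beam.Map(print)  # stand-in for a real sink
    )
```

Fixed windows keep per-key state bounded; higher throughput typically comes from increasing key cardinality (partitioning), which both Kafka and Beam runners exploit for parallelism.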

Posted 1 week ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In data analysis at PwC, you will focus on utilising advanced analytical techniques to extract insights from large datasets and drive data-driven decision-making. You will leverage skills in data manipulation, visualisation, and statistical modelling to support clients in solving complex business problems.

Years of Experience: Candidates with 4+ years of hands-on experience.
Position: Senior Associate
Industry: Supply Chain/Forecasting/Financial Analytics

Required Skills: Successful candidates will have demonstrated the following skills and characteristics.

Must Have:
Strong supply chain domain knowledge (inventory planning, demand forecasting, logistics).
Well-versed, hands-on experience with optimization methods such as linear programming, mixed integer programming and scheduling optimization. An understanding of third-party optimization solvers like Gurobi is an added advantage.
Proficiency in forecasting techniques (e.g., Holt-Winters, ARIMA, ARIMAX, SARIMA, SARIMAX, FBProphet, NBeats) and machine learning techniques (supervised and unsupervised).
Experience using at least one major cloud platform (AWS, Azure, GCP), such as:
  AWS: AWS SageMaker, Redshift, Glue, Lambda, QuickSight
  Azure: Azure ML Studio, Synapse Analytics, Data Factory, Power BI
  GCP: BigQuery, Vertex AI, Dataflow, Cloud Composer, Looker
Experience developing, deploying, and monitoring ML models on cloud infrastructure.
Expertise in Python, SQL, data orchestration, and cloud-native data tools.
Hands-on experience with cloud-native data lakes and lakehouses (e.g., Delta Lake, BigLake).
Familiarity with infrastructure-as-code (Terraform/CDK) for cloud provisioning.
Knowledge of visualization tools (Power BI, Tableau, Looker) integrated with cloud backends.
Strong command of statistical modeling, testing, and inference.
Advanced capabilities in data wrangling, transformation, and feature engineering.
Familiarity with MLOps, containerization (Docker, Kubernetes), and orchestration tools (e.g., Airflow).
Strong communication and stakeholder engagement skills at the executive level.

Roles and Responsibilities:
Assist analytics projects within the supply chain domain, driving design, development, and delivery of data science solutions.
Develop and execute project and analysis plans under the guidance of the Project Manager.
Interact with and advise consultants/clients in the US as a subject matter expert, formalizing the data sources to be used, the datasets to be acquired, and the data and use-case clarifications needed to get a strong hold on the data and the business problem to be solved.
Drive and conduct analysis using advanced analytics tools and coach junior team members.
Implement the necessary quality control measures to ensure deliverable integrity, such as data quality, model robustness, and explainability for deployments.
Validate analysis outcomes and recommendations with all stakeholders, including the client team.
Build storylines and make presentations to the client team and/or PwC project leadership team.
Contribute to knowledge- and firm-building activities.

Professional and Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / Master's Degree / MBA from a reputed institute.
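To ground the forecasting techniques this listing names, here is a minimal statsmodels sketch of Holt-Winters exponential smoothing on synthetic monthly demand data; it is an illustration, not part of the posting.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Synthetic monthly demand with a trend and yearly seasonality.
idx = pd.date_range("2020-01-01", periods=48, freq="MS")
demand = pd.Series(
    100 + 2 * np.arange(48) + 10 * np.sin(2 * np.pi * np.arange(48) / 12),
    index=idx,
)

# Holt-Winters: additive trend and additive 12-period seasonality.
fit = ExponentialSmoothing(
    demand, trend="add", seasonal="add", seasonal_periods=12
).fit()

print(fit.forecast(6))  # demand forecast for the next six months
```

The same series could equally be fit with SARIMA via statsmodels' SARIMAX class; Holt-Winters is shown here because it needs the least configuration.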

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary: At PwC, our people in data and analytics focus on leveraging data to drive insights and make informed business decisions. They utilise advanced analytics techniques to help clients optimise their operations and achieve their strategic goals. In business intelligence at PwC, you will focus on leveraging data and analytics to provide strategic insights and drive informed decision-making for clients. You will develop and implement innovative solutions to optimise business performance and enhance competitive advantage.

Why PwC: At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
Analyse current business practices, processes, and procedures, and identify future business opportunities for leveraging Microsoft Azure Data & Analytics Services.
Provide technical leadership and thought leadership as a senior member of the Analytics Practice in areas such as data access and ingestion, data processing, data integration, data modeling, database design and implementation, data visualization, and advanced analytics.
Engage and collaborate with customers to understand business requirements/use cases and translate them into detailed technical specifications.
Develop best practices, including reusable code, libraries, patterns, and consumable frameworks for cloud-based data warehousing and ETL.
Maintain best-practice standards for the development of cloud-based data warehouse solutions, including naming standards.
Design and implement highly performant data pipelines from multiple sources using Apache Spark and/or Azure Databricks.
Integrate the end-to-end data pipeline to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained.
Work with other members of the project team to support delivery of additional project components (API interfaces).
Evaluate the performance and applicability of multiple tools against customer requirements.
Work within an Agile delivery / DevOps methodology to deliver proof of concept and production implementations in iterative sprints.
Integrate Databricks with other technologies (ingestion tools, visualization tools).

Requirements:
Proven experience working as a data engineer.
Highly proficient in using the Spark framework (Python and/or Scala).
Extensive knowledge of data warehousing concepts, strategies and methodologies.
Direct experience of building data pipelines using Azure Data Factory and Apache Spark (preferably in Databricks).
Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, Azure Data Lake, Azure Cosmos DB and Azure Stream Analytics.
Experience in designing and hands-on development of cloud-based analytics solutions.
Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service is required.
Designing and building of data pipelines using API ingestion and streaming ingestion methods.
Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is essential.
Thorough understanding of Azure cloud infrastructure offerings.
Strong experience in common data warehouse modeling principles, including Kimball.
Working knowledge of Python is desirable.
Experience developing security models.
Databricks and Azure Big Data Architecture certification would be a plus.

Mandatory Skill Sets: ADE, ADB, ADF
Preferred Skill Sets: ADE, ADB, ADF
Years of Experience Required: 3-7 years
Education Qualification: BE, B.Tech, MCA, M.Tech
Education (if blank, degree and/or field of study not specified). Degrees/Field of Study required: Bachelor of Technology, Bachelor of Engineering, Master of Business Administration. Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Data Engineering, GCP Dataflow
Optional Skills: Accepting Feedback, Active Listening, Business Case Development, Business Data Analytics, Business Intelligence and Reporting Tools (BIRT), Business Intelligence Development Studio, Communication, Competitive Advantage, Continuous Process Improvement, Data Analysis and Interpretation, Data Architecture, Database Management System (DBMS), Data Collection, Data Pipeline, Data Quality, Data Science, Data Visualization, Emotional Regulation, Empathy, Inclusion, Industry Trend Analysis, Intellectual Curiosity, Java (Programming Language), Market Development {+ 7 more}
Desired Languages (if blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date
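As a hedged example of the streaming-ingestion pattern this listing mentions, here is a minimal Databricks sketch using Auto Loader to land ADLS Gen2 files into a Delta table. Storage paths and table names are hypothetical, and `spark` is the session Databricks provides.

```python
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 source path; `spark` is the Databricks session.
source = "abfss://raw@mystorageacct.dfs.core.windows.net/events/"

stream = (
    spark.readStream.format("cloudFiles")                 # Databricks Auto Loader
         .option("cloudFiles.format", "json")
         .option("cloudFiles.schemaLocation", "/mnt/chk/events/schema")
         .load(source)
         .withColumn("_ingested_at", F.current_timestamp())
)

(
    stream.writeStream
          .format("delta")
          .option("checkpointLocation", "/mnt/chk/events/checkpoint")
          .trigger(availableNow=True)  # incremental, batch-style run
          .toTable("bronze.events")
)
```

The checkpoint location gives the pipeline exactly-once, resumable ingestion; an Azure Data Factory pipeline would typically trigger a job like this on a schedule.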

Posted 1 week ago

Apply

5.0 years

1 - 3 Lacs

Gurgaon

On-site

Senior Data Engineer (GCP, Python)
Gurgaon, India | Information Technology | 314204

Job Description

About the Role: Grade Level (for internal use): 10

S&P Global Mobility
The Role: Senior Data Engineer

Department overview: Automotive Insights at S&P Mobility leverages technology and data science to provide unique insights, forecasts and advisory services spanning every major market and the entire automotive value chain, from product planning to marketing, sales and the aftermarket. We provide the most comprehensive data spanning the entire automotive lifecycle: past, present and future. With over 100 years of history, unmatched credentials, and a larger customer base than any other provider, we are the industry benchmark for clients around the world, helping them make informed decisions to capitalize on opportunity and avoid risk. Our solutions are used by nearly every major OEM, 90% of the top 100 tier-one suppliers, media agencies, governments, insurance companies, and financial stakeholders to provide actionable insights that enable better decisions and better results.

Position summary: S&P Global is seeking an experienced and driven Senior Data Engineer who is passionate about delivering high-value, high-impact solutions to the world's most demanding, high-profile clients. The ideal candidate must have at least 5 years of experience in developing and deploying data pipelines on Google Cloud Platform (GCP) and should be passionate about building high-quality, reusable pipelines using cutting-edge technologies. This role involves designing, building, and maintaining scalable data pipelines, optimizing workflows, and ensuring data integrity across multiple systems. The candidate will collaborate with data scientists, analysts, and software engineers to develop robust and efficient data solutions.

Responsibilities:
Design, develop, and maintain scalable ETL/ELT pipelines.
Optimize and automate data ingestion, transformation, and storage processes.
Work with structured and unstructured data sources, ensuring data quality and consistency.
Develop and maintain data models, warehouses, and databases.
Collaborate with cross-functional teams to support data-driven decision-making.
Ensure data security, privacy, and compliance with industry standards.
Troubleshoot and resolve data-related issues in a timely manner.
Monitor and improve system performance, reliability, and scalability.
Stay up to date with emerging data technologies and recommend improvements to our data architecture and engineering practices.

What you will need:
Strong programming skills in Python.
5+ years of experience in data engineering, ETL development, or a related role.
Proficiency in SQL and experience with relational (PostgreSQL, MySQL, etc.) and NoSQL (DynamoDB, MongoDB, etc.) databases.
Proficiency building data pipelines on Google Cloud Platform (GCP) using services such as Dataflow, Cloud Batch, BigQuery, BigTable, Cloud Functions, Cloud Workflows and Cloud Composer.
Strong understanding of data modeling, data warehousing, and data governance principles.
Ability to mentor junior data engineers and assist them with technical challenges.
Familiarity with orchestration tools like Apache Airflow.
Familiarity with containerization and orchestration.
Experience with version control systems (Git) and CI/CD pipelines.
Excellent problem-solving skills and ability to work in a fast-paced environment.
Excellent communication skills.
Hands-on experience with Snowflake is a plus.
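To illustrate the Cloud Composer orchestration mentioned in the requirements, here is a minimal, hedged Airflow DAG sketch; the project, dataset and SQL are placeholders, and the `schedule` argument assumes Airflow 2.4 or later.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_orders_transform",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    build_mart = BigQueryInsertJobOperator(
        task_id="build_orders_mart",
        configuration={
            "query": {
                # Placeholder SQL; a real job would hold the transformation here.
                "query": "SELECT ... FROM `my-project.raw.orders`",
                "useLegacySql": False,
                "destinationTable": {
                    "projectId": "my-project",
                    "datasetId": "mart",
                    "tableId": "orders_daily",
                },
                "writeDisposition": "WRITE_TRUNCATE",
            }
        },
    )
```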
Also a plus:
Experience with big data technologies.
Experience in AWS.
Ability to convert business queries into technical documentation.

Education and Experience:
Bachelor's degree in Computer Science, Information Systems, Information Technology, or a similar major, or a Certified Development Program.
5+ years of experience building data pipelines using Python and GCP (Google Cloud Platform).

About Company Statement: S&P Global delivers essential intelligence that powers decision making. We provide the world's leading organizations with the right data, connected technologies and expertise they need to move ahead. As part of our team, you'll help solve complex challenges that equip businesses, governments and individuals with the knowledge to adapt to a changing economic landscape.

S&P Global Mobility turns invaluable insights captured from automotive data into help for our clients to understand today's market, reach more customers, and shape the future of automotive mobility.

About S&P Global Mobility: At S&P Global Mobility, we provide invaluable insights derived from unmatched automotive data, enabling our customers to anticipate change and make decisions with conviction. Our expertise helps them to optimize their businesses, reach the right consumers, and shape the future of mobility. We open the door to automotive innovation, revealing the buying patterns of today and helping customers plan for the emerging technologies of tomorrow. For more information, visit www.spglobal.com/mobility.

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it, we are changing the way people see things and empowering them to make an impact on the world we live in. We're committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We're constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
Our benefits include:
Health & Wellness: Health care coverage designed for the mind and body.
Flexible Downtime: Generous time off helps keep you energized for your time on.
Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment.

If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 314204
Posted On: 2025-05-30
Location: Gurgaon, Haryana, India

Posted 1 week ago

Apply

4.0 - 6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Experience Level: 4 to 6 years of relevant IT experience

Job Overview: We are looking for a skilled and motivated Data Engineer with strong experience in Python programming and Google Cloud Platform (GCP) to join our data engineering team. The ideal candidate will be responsible for designing, developing, and maintaining robust and scalable ETL (Extract, Transform, Load) data pipelines. The role involves working with various GCP services, implementing data ingestion and transformation logic, and ensuring data quality and consistency across systems.

Key Responsibilities:
● Design, develop, test, and maintain scalable ETL data pipelines using Python.
● Work extensively on Google Cloud Platform (GCP) services such as:
  ○ Dataflow for real-time and batch data processing
  ○ Cloud Functions for lightweight serverless compute
  ○ BigQuery for data warehousing and analytics
  ○ Cloud Composer for orchestration of data workflows (based on Apache Airflow)
  ○ Google Cloud Storage (GCS) for managing data at scale
  ○ IAM for access control and security
  ○ Cloud Run for containerized applications
● Perform data ingestion from various sources and apply transformation and cleansing logic to ensure high-quality data delivery.
● Implement and enforce data quality checks, validation rules, and monitoring.
● Collaborate with data scientists, analysts, and other engineering teams to understand data needs and deliver efficient data solutions.
● Manage version control using GitHub and participate in CI/CD pipeline deployments for data projects.
● Write complex SQL queries for data extraction and validation from relational databases such as SQL Server, Oracle, or PostgreSQL.
● Document pipeline designs, data flow diagrams, and operational support procedures.

Required Skills:
● 4-6 years of hands-on experience in Python for backend or data engineering projects.
● Strong understanding and working experience with GCP cloud services (especially Dataflow, BigQuery, Cloud Functions, Cloud Composer, etc.).
● Solid understanding of data pipeline architecture, data integration, and transformation techniques.
● Experience working with version control systems like GitHub and knowledge of CI/CD practices.
● Strong experience in SQL with at least one enterprise database (SQL Server, Oracle, PostgreSQL, etc.).

Good to Have (Optional Skills):
● Experience working with the Snowflake cloud data platform.
● Hands-on knowledge of Databricks for big data processing and analytics.
● Familiarity with Azure Data Factory (ADF) and other Azure data engineering tools.

Additional Details:
● Excellent problem-solving and analytical skills.
● Strong communication skills and ability to collaborate in a team environment.
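Since this listing emphasizes data-quality checks and SQL validation, here is a minimal, hedged sketch of such a check using the google-cloud-bigquery client; the project, table and rule are hypothetical.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-gcp-project")  # hypothetical project

# Hypothetical rule: orders must have an id and a non-negative amount.
sql = """
    SELECT COUNT(*) AS bad_rows
    FROM `my-gcp-project.analytics.orders`
    WHERE order_id IS NULL OR amount < 0
"""

bad_rows = next(iter(client.query(sql).result())).bad_rows
if bad_rows:
    raise ValueError(f"Data quality check failed: {bad_rows} bad rows")
print("Data quality check passed")
```

A check like this typically runs as one task in a Cloud Composer DAG, failing the pipeline before bad data propagates downstream.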

Posted 1 week ago

Apply

Exploring Dataflow Jobs in India

The dataflow job market in India is experiencing a surge in demand for skilled professionals. With the increasing reliance on data-driven decision-making across industries, the need for individuals proficient in building, managing, and analyzing dataflows is on the rise. This article gives job seekers a practical snapshot of the dataflow job landscape in India.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Delhi

These cities are known for their thriving tech ecosystems and are home to numerous companies actively hiring for dataflow roles.

Average Salary Range

The average salary range for dataflow professionals in India varies by experience level. Entry-level positions can expect to earn between INR 4-6 lakhs per annum, while experienced professionals can command salaries of INR 12-15 lakhs per annum or more.

Career Path

In the dataflow domain, a typical career path may involve starting as a Junior Data Analyst or Data Engineer, progressing to roles such as Senior Data Scientist or Data Architect, and eventually reaching positions like Tech Lead or Data Science Manager.

Related Skills

In addition to expertise in dataflow tools and technologies, dataflow professionals are often expected to have proficiency in programming languages such as Python or R, knowledge of SQL and relational databases, and familiarity with data visualization tools like Tableau or Power BI.

Interview Questions

  • What is dataflow and how is it different from data streaming? (basic)
  • Explain the difference between batch processing and real-time processing. (medium)
  • How do you handle missing or null values in a dataset? (basic; see the pandas sketch after this list)
  • Can you explain the concept of data lineage? (medium)
  • What is the importance of data quality in dataflow processes? (basic)
  • How do you optimize dataflow pipelines for performance? (medium)
  • Describe a time when you had to troubleshoot a dataflow issue. (medium)
  • What are some common challenges faced in dataflow projects? (medium)
  • How do you ensure data security and compliance in dataflow processes? (medium)
  • What are the key components of a dataflow architecture? (medium)
  • Explain the concept of data partitioning in dataflow. (advanced)
  • How would you handle a sudden increase in data volume in a dataflow pipeline? (advanced)
  • What role does data governance play in dataflow processes? (medium)
  • Can you discuss the advantages and disadvantages of using cloud-based dataflow solutions? (medium)
  • How do you stay updated with the latest trends and technologies in dataflow? (basic)
  • What is the significance of metadata in dataflow management? (medium)
  • Walk us through a dataflow project you have worked on from start to finish. (medium)
  • How do you ensure data quality and consistency across different data sources in a dataflow pipeline? (medium)
  • What are some best practices for monitoring and troubleshooting dataflow pipelines? (medium)
  • How do you handle data transformations and aggregations in a dataflow process? (basic)
  • What are the key performance indicators you would track in a dataflow project? (medium)
  • How do you collaborate with cross-functional teams in a dataflow project? (basic)
  • Can you explain the concept of data replication in dataflow management? (advanced)
  • How do you approach data modeling in a dataflow project? (medium)
  • Describe a challenging dataflow problem you encountered and how you resolved it. (advanced)
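For the basic missing-values question flagged above, here is a minimal pandas sketch showing the two common strategies, dropping versus imputing, on a toy DataFrame:

```python
import pandas as pd

df = pd.DataFrame({"qty": [10, None, 7], "price": [2.5, 3.0, None]})

dropped = df.dropna()  # strategy 1: drop rows containing any nulls
imputed = df.fillna({"qty": 0, "price": df["price"].median()})  # strategy 2: impute

print(dropped)
print(imputed)
```

In an interview, explaining when each strategy is appropriate (dropping for sparse noise, imputing when rows carry other valuable signals) matters more than the code itself.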

Closing Remark

As you navigate the dataflow job market in India, remember to showcase your skills and experiences confidently during interviews. Stay updated with the latest trends in dataflow and continuously upskill to stand out in a competitive job market. Best of luck in your job search journey!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
