1586 Data Pipeline Jobs - Page 46

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 6.0 years

9 - 13 Lacs

Noida

Work from Office

About the job : - As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. - You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. - This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What You'll Do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solu...
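As an illustration of the kind of Databricks pipeline work this posting describes, here is a minimal PySpark sketch that reads raw events, aggregates them, and writes a partitioned Delta table. The paths, table names, and columns are hypothetical, and it assumes a Databricks workspace (or any Spark environment with Delta Lake available).

```python
# Minimal sketch of a Databricks-style batch pipeline: read raw data,
# apply a transformation, and persist the result as a Delta table.
# Paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-daily-aggregation").getOrCreate()

# Read raw event data (placeholder path); schema is inferred here for brevity.
raw = spark.read.json("/mnt/raw/orders/")

# Basic cleansing plus a daily aggregate per customer.
daily = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
       .groupBy("customer_id", "order_date")
       .agg(F.sum("amount").alias("total_amount"),
            F.count("*").alias("order_count"))
)

# Persist as a Delta table (Delta is the default table format on Databricks);
# partitioning by date keeps downstream queries efficient.
(daily.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.daily_customer_orders"))
```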

Posted 5 months ago

AI Match Score
Apply

4.0 - 9.0 years

18 - 32 Lacs

Noida, Kolkata, Pune

Work from Office

Description: Genpact (NYSE: G) is a global professional services and solutions firm delivering outcomes that shape the future. Our 125,000+ people across 30+ countries are driven by our innate curiosity, entrepreneurial agility, and desire to create lasting value for clients. Powered by our purpose, the relentless pursuit of a world that works better for people, we serve and transform leading enterprises, including the Fortune Global 500, with our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI. Inviting applications for the role of AIML Engineer! In this role, we are looking for candidates who have relevant years of experi...

Posted 5 months ago

AI Match Score
Apply

1.0 - 7.0 years

3 - 9 Lacs

Pune

Work from Office

Required Skills and Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. Hands-on experience in data pipeline testing, preferably in a cloud environment. Strong experience with Google Cloud Platform services, especially BigQuery. Proficient in working with Kafka, Hive, Parquet files, and Snowflake. Expertise in data quality testing and metrics calculations for both batch and streaming data. Excellent programming skills in Python and experience with test automation. Strong analytical and problem-solving abilities. Excellent communication and teamwork skills.
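For context on the data-quality testing this role calls for, below is an illustrative pytest sketch over a pandas DataFrame. The column names, rules, and thresholds are hypothetical; a real suite would read from BigQuery, Snowflake, Kafka topics, or Parquet files rather than an inline frame.

```python
# Illustrative batch data-quality checks written as pytest tests over a
# pandas DataFrame. Column names and rules are hypothetical placeholders.
import pandas as pd
import pytest


@pytest.fixture
def orders() -> pd.DataFrame:
    # In a real suite this would load from BigQuery, Snowflake, or a Parquet
    # export; a small inline frame keeps the sketch self-contained.
    return pd.DataFrame(
        {
            "order_id": [1, 2, 3],
            "amount": [10.0, 25.5, 7.25],
            "status": ["COMPLETED", "COMPLETED", "CANCELLED"],
        }
    )


def test_primary_key_is_unique(orders):
    assert orders["order_id"].is_unique


def test_no_nulls_in_required_columns(orders):
    required = ["order_id", "amount", "status"]
    assert orders[required].notna().all().all()


def test_amount_is_non_negative(orders):
    assert (orders["amount"] >= 0).all()
```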

Posted 5 months ago

AI Match Score
Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions. Required candidate profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 5 months ago

AI Match Score
Apply

6.0 - 10.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Role Description: As a Data Engineering Lead, you will play a crucial role in overseeing the design, development, and maintenance of our organization's data architecture and infrastructure. You will be responsible for designing and developing the architecture for the data platform that ensures the efficient and effective processing of large volumes of data, enabling the business to make informed decisions based on reliable and high-quality data. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a proven track record of successfully managing complex data projects. Responsibilities : Data Architecture and Design : Design and implement scala...

Posted 5 months ago

AI Match Score
Apply

5.0 - 10.0 years

5 - 9 Lacs

Ahmedabad, Remote

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...
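A minimal sketch of the nested-XML ingestion pattern described above, assuming an XML-capable Spark reader (for example the spark-xml package) is installed on the cluster. Element names, the schema, and paths are hypothetical.

```python
# Minimal sketch of parsing deeply nested XML with PySpark. Assumes an XML
# reader such as com.databricks:spark-xml (or Spark's native XML support) is
# available on the cluster. Element names, schema, and paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, ArrayType

spark = SparkSession.builder.appName("xml-ingest").getOrCreate()

# An explicit schema avoids an expensive inference pass over large XML files.
schema = StructType([
    StructField("id", StringType()),
    StructField("items", ArrayType(StructType([
        StructField("sku", StringType()),
        StructField("qty", StringType()),
    ]))),
])

orders = (
    spark.read.format("xml")
         .option("rowTag", "order")        # one DataFrame row per <order> element
         .schema(schema)
         .load("/data/raw/orders_xml/")    # placeholder path
)

# Flatten the nested <items> array into one row per line item.
flat = (
    orders.select("id", F.explode_outer("items").alias("item"))
          .select("id",
                  F.col("item.sku").alias("sku"),
                  F.col("item.qty").cast("int").alias("qty"))
)

# Repartition before writing to keep file sizes even at large scale.
flat.repartition("id").write.mode("overwrite").parquet("/data/curated/order_items/")
```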

Posted 5 months ago

AI Match Score
Apply

4.0 - 9.0 years

8 - 13 Lacs

Pune / Multiple Locations

Work from Office

Role: Senior Databricks Engineer. As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and sto...

Posted 5 months ago

AI Match Score
Apply

6.0 - 9.0 years

8 - 11 Lacs

Pune

Work from Office

About the job : Experience : 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, E...

Posted 5 months ago

AI Match Score
Apply

3.0 - 6.0 years

9 - 13 Lacs

Ahmedabad

Work from Office

About the job : - As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. - You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. - This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What You'll Do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solu...

Posted 5 months ago

AI Match Score
Apply

7.0 - 10.0 years

9 - 12 Lacs

Pune

Work from Office

About the Job : We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic team. In this pivotal role, you will be instrumental in driving our data engineering initiatives, with a strong emphasis on leveraging Dataiku's capabilities to enhance data processing and analytics. You will be responsible for designing, developing, and optimizing robust data pipelines, ensuring seamless integration of diverse data sources, and maintaining high data quality and accessibility to support our business intelligence and advanced analytics projects. This role requires a unique blend of expertise in traditional data engineering principles, advanced data modeling, and a forward-...

Posted 5 months ago

AI Match Score
Apply

4.0 - 9.0 years

6 - 11 Lacs

Ahmedabad

Work from Office

Role: Senior Databricks Engineer. As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and st...

Posted 5 months ago

AI Match Score
Apply

3.0 - 6.0 years

9 - 13 Lacs

Pune

Work from Office

About the job : - As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. - You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. - This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What You'll Do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solu...

Posted 5 months ago

AI Match Score
Apply

6.0 - 9.0 years

9 - 13 Lacs

Ahmedabad

Work from Office

About the job : Role : Microsoft Fabric Data Engineer. Experience : 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundat...

Posted 5 months ago

AI Match Score
Apply

7.0 - 12.0 years

19 - 25 Lacs

Bengaluru

Remote

Hi candidates, we have job openings with one of our MNC clients. Interested candidates can apply here and share details to chandrakala.c@i-q.co. Note: notice period of 0-15 days (or currently serving notice) only. Role & responsibilities: Complete data modelling tasks: initiate and manage gap analysis and source-to-target mapping exercises; gain a comprehensive understanding of the EA extract; map the SAP sources used in EA extracts to the AWS Transform Zone, AWS Conform Zone, and AWS Enrich Zone. Develop a matrix view of all Excel/Tableau reports to identify any missing fields or tables from SAP in the Transform Zone. Engage with SMEs to finalize the Data Model (DM). Obtain email confirmation and approval f...

Posted 5 months ago

AI Match Score
Apply

6.0 - 8.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Role Description: As a Data Engineering Lead, you will play a crucial role in overseeing the design, development, and maintenance of our organization's data architecture and infrastructure. You will be responsible for designing and developing the architecture for the data platform that ensures the efficient and effective processing of large volumes of data, enabling the business to make informed decisions based on reliable and high-quality data. The ideal candidate will have a strong background in data engineering, excellent leadership skills, and a proven track record of successfully managing complex data projects. Responsibilities : Data Architecture and Design: Design and implement scalab...

Posted 5 months ago

AI Match Score
Apply

0.0 - 5.0 years

3 - 8 Lacs

Pune

Work from Office

Job Title: Data Engineer Location: Pune Experience: Fresher to 5 years Employment Type: Full-Time About the Role: We are looking for talented Data Engineers to join our expanding team. You will be responsible for integrating, managing, and optimizing data systems, contributing to innovative and challenging projects. This is a great opportunity to enhance your technical skills and become part of a successful team. Key Responsibilities: Develop, customize, and manage data integration tools, databases (MySQL), and data warehouses using Python, Java, and other relevant technologies. Write and execute complex queries and automation scripts for processing operational data. Collaborate with the ...

Posted 5 months ago

AI Match Score
Apply

3.0 - 5.0 years

2 - 3 Lacs

Kolkata

Work from Office

Qualification: BCA; MCA preferable. Required skill set: 5+ years in Data Engineering, with at least 2 years on GCP/BigQuery; strong Python and SQL expertise (Airflow, dbt, or similar); deep understanding of ETL patterns, change-data-capture, and data-quality frameworks; experience with IoT or time-series data pipelines a plus; excellent communication skills and a track record of leading cross-functional teams. Job description / responsibilities: Design, build, and maintain scalable ETL/ELT pipelines in Airflow and BigQuery. Define and enforce data-modeling standards, naming conventions, and testing frameworks. Develop and review core transformations: IoT enrichment (batch-ID assignment, stage tagging) Tra...
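As an illustration of the Airflow-plus-BigQuery orchestration this posting mentions, here is a hedged DAG sketch using the Google provider's BigQueryInsertJobOperator. The dataset, table, and column names are placeholders, and it assumes Airflow 2.4+ with apache-airflow-providers-google installed.

```python
# Hypothetical sketch of an Airflow DAG orchestrating a BigQuery transformation
# (IoT enrichment with a batch id and stage tag). Dataset, table, and column
# names are placeholders; requires apache-airflow-providers-google.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="iot_enrichment_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    # Tag raw IoT readings with a batch id and processing stage in BigQuery.
    enrich_iot = BigQueryInsertJobOperator(
        task_id="enrich_iot_readings",
        configuration={
            "query": {
                "query": """
                    INSERT INTO `analytics.iot_enriched`
                    SELECT *, GENERATE_UUID() AS batch_id, 'enriched' AS stage
                    FROM `raw.iot_readings`
                    WHERE DATE(event_ts) = '{{ ds }}'
                """,
                "useLegacySql": False,
            }
        },
    )
```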

Posted 5 months ago

AI Match Score
Apply

6.0 - 10.0 years

0 Lacs

Hyderabad, Telangana

On-site

The successful candidate for the Full Stack Developer position at U.S. Pharmacopeial Convention (USP) will have a demonstrated understanding of the organization's mission and a commitment to excellence through inclusive and equitable behaviors and practices. They should possess the ability to quickly build credibility with stakeholders. As a Full Stack Developer, you will be part of the Digital & Innovation group at USP, responsible for building innovative digital products using cutting-edge cloud technologies. Your role will be crucial in creating an amazing digital experience for customers. Your responsibilities will include building scalable applications and platforms using the latest clo...

Posted 5 months ago

AI Match Score
Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As a Data Engineer 2 at GoKwik, you will have the opportunity to closely collaborate with product managers, data scientists, business intelligence teams, and SDEs to develop and implement data-driven strategies. Your role will involve identifying, designing, and executing process improvements to enhance data models, architectures, pipelines, and applications. You will play a vital role in continuously optimizing data processes, overseeing data management, governance, security, and analysis to ensure data quality and security across all product verticals. Additionally, you will design, create, and deploy new data models and pipelines as necessary to achieve high performance, operational excell...

Posted 5 months ago

AI Match Score
Apply

0.0 - 4.0 years

0 Lacs

Karnataka

On-site

We are looking for someone who is enthusiastic to contribute to the implementation of a metadata-driven platform managing the full lifecycle of batch and streaming Big Data pipelines. This role involves applying ML and AI techniques in data management, such as anomaly detection for identifying and resolving data quality issues and data discovery. The platform facilitates the delivery of Visa's core data assets to both internal and external customers. You will provide Platform-as-a-Service offerings that are easy to consume, scalable, secure, and reliable using open source-based Cloud solutions for Big Data technologies. Working at the intersection of infrastructure and software engineering, ...
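To make the anomaly-detection idea mentioned above concrete, here is a toy z-score check over daily pipeline row counts. The numbers and threshold are hypothetical; a production system would use richer signals and models.

```python
# Toy sketch of anomaly detection for data quality: flag days whose pipeline
# row counts deviate sharply from recent history using a z-score.
# The data and threshold are hypothetical placeholders.
from statistics import mean, stdev


def is_anomalous(history: list[int], today: int, threshold: float = 3.0) -> bool:
    """Return True if today's row count is a statistical outlier."""
    if len(history) < 2:
        return False  # not enough history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu
    z_score = abs(today - mu) / sigma
    return z_score > threshold


# Example: a sudden drop in ingested rows gets flagged for investigation.
recent_counts = [10_120, 9_980, 10_240, 10_055, 10_190]
print(is_anomalous(recent_counts, today=4_300))  # True
```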

Posted 5 months ago

AI Match Score
Apply

3.0 - 5.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Position summary: We are seeking a Senior Software Development Engineer – Data Engineering with 3-5 years of experience to design, develop, and optimize data pipelines and analytics workflows using Snowflake, Databricks, and Apache Spark. The ideal candidate will have a strong background in big data processing, cloud data platforms, and performance optimization to enable scalable data-driven solutions. Key Responsibilities: Work with cloud-based data solutions (Azure, AWS, GCP). Implement data modeling and warehousing solutions. Developing and maintaining data pipelines for efficient data extraction, transformation, and loading (ETL) processes. Designing and optimizing data storage solutions...
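As a small illustration of the Snowflake side of this stack, below is a hedged sketch that bulk-loads a pandas DataFrame with the write_pandas helper from snowflake-connector-python (a reasonably recent version is assumed). The connection parameters and table name are placeholders.

```python
# Hedged sketch of loading a transformed pandas DataFrame into Snowflake via
# snowflake-connector-python's pandas helper. Connection parameters and the
# table name are placeholders.
import pandas as pd
import snowflake.connector
from snowflake.connector.pandas_tools import write_pandas

df = pd.DataFrame({"EVENT_ID": [1, 2], "AMOUNT": [12.5, 7.0]})

conn = snowflake.connector.connect(
    account="my_account",      # placeholder
    user="etl_user",           # placeholder
    password="***",            # use a secrets manager in practice
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    # write_pandas bulk-loads the frame via an internal stage and COPY INTO.
    success, nchunks, nrows, _ = write_pandas(
        conn, df, table_name="EVENTS", auto_create_table=True
    )
    print(f"loaded={success} rows={nrows} chunks={nchunks}")
finally:
    conn.close()
```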

Posted 5 months ago

AI Match Score
Apply

3.0 - 4.0 years

10 - 14 Lacs

Pune

Work from Office

Role & responsibilities Design and implement AI agent workflows. Develop end-to-end intelligent pipelines and multi-agent systems (e.g., LangGraph/LangChain workflows) that coordinate multiple LLM-powered agents to solve complex tasks. Create graph-based or state-machine architectures for AI agents, chaining prompts and tools as needed. Build and fine-tune generative models. Develop, train, and fine-tune advanced generative models (transformers, diffusion models, VAEs, GANs, etc.) on domain-specific data. Deploy and optimize foundation models (such as GPT, LLaMA, Mistral) in production, adapting them to our use cases through prompt engineering and supervised fine-tuning. Develop data pipelin...
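To illustrate the multi-agent, graph/state-machine idea in this posting without depending on a specific framework, here is a framework-free Python sketch. The agent functions are stubs standing in for LLM calls; LangGraph/LangChain would supply this plumbing (nodes, edges, state) in practice.

```python
# Framework-free sketch of a multi-agent workflow as a small state machine.
# Each "agent" is a stub function; in a real system it would call an LLM,
# possibly with tools. Node and field names are hypothetical.
from typing import Callable, Dict

State = Dict[str, str]


def planner(state: State) -> State:
    # Stub: a planning agent would decompose the task with an LLM call.
    return {**state, "plan": f"steps for: {state['task']}", "next": "worker"}


def worker(state: State) -> State:
    # Stub: a worker agent would execute the plan, possibly using tools.
    return {**state, "result": f"executed {state['plan']}", "next": "end"}


# The workflow graph: node name -> agent function.
NODES: Dict[str, Callable[[State], State]] = {"planner": planner, "worker": worker}


def run(state: State, entry: str = "planner") -> State:
    node = entry
    while node != "end":
        state = NODES[node](state)
        node = state["next"]
    return state


print(run({"task": "summarise pipeline failures"})["result"])
```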

Posted 5 months ago

AI Match Score
Apply

4.0 - 9.0 years

6 - 11 Lacs

Hyderabad

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databrick...

Posted 5 months ago

AI Match Score
Apply

3.0 - 4.0 years

5 - 7 Lacs

Chandigarh

Work from Office

Key Responsibilities: Design, develop, and maintain scalable ETL workflows using Cloud Data Fusion and Apache Airflow. Configure and manage various data connectors (e.g., Cloud Storage, Pub/Sub, JDBC, SaaS APIs) for batch and streaming data ingestion. Implement data transformations, cleansing, and enrichment logic in Python (and SQL) to meet analytic requirements. Optimize BigQuery data models (fact/dimension tables, partitioning, clustering) for performance and cost-efficiency. Monitor, troubleshoot, and tune pipeline performance; implement robust error-handling and alerting mechanisms. Collaborate with data analysts, BI developers, and architects to understand data requirements and deliver...
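As a concrete example of the BigQuery partitioning and clustering mentioned above, here is a hedged sketch using the google-cloud-bigquery client. The project, dataset, and column names are placeholders.

```python
# Hedged sketch of BigQuery data modelling: create a fact table that is
# date-partitioned and clustered, via the google-cloud-bigquery client.
# Project, dataset, and column names are placeholders.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

table_id = "my-project.analytics.fact_orders"  # placeholder
schema = [
    bigquery.SchemaField("order_id", "STRING"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
    bigquery.SchemaField("order_date", "DATE"),
]

table = bigquery.Table(table_id, schema=schema)
# Partition by the date column so queries scan only the days they need,
# and cluster by customer_id so filters on it prune storage blocks cheaply.
table.time_partitioning = bigquery.TimePartitioning(field="order_date")
table.clustering_fields = ["customer_id"]

table = client.create_table(table, exists_ok=True)
print(f"Created {table.full_table_id}")
```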

Posted 5 months ago

AI Match Score
Apply

8.0 - 10.0 years

40 - 45 Lacs

Bengaluru

Hybrid

Position: Senior Data Engineer Location: Bangalore, India About Dodge Dodge Construction Network exists to deliver the comprehensive data and connections the construction industry needs to build thriving communities. Our legacy is deeply rooted in empowering our customers with transformative insights, igniting their journey towards unparalleled business expansion and success. We serve decision-makers who seek reliable growth and who value relationships built on trust and quality. By combining our proprietary data with cutting-edge software, we deliver to our customers the essential intelligence needed to excel within their respective landscapes. We propel the construction industry forward by...

Posted 5 months ago

AI Match Score
Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
