874 Data Pipeline Jobs - Page 29

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

20 - 22 Lacs

Bengaluru

Remote

Collaborate with senior stakeholders to gather requirements, address constraints, and craft adaptable data architectures. Convert business needs into blueprints, guide agile teams, maintain quality data pipelines, and drive continuous improvements. Required Candidate profile: 7+ years in data roles (Data Architect/Engineer). Skilled in modelling (incl. Data Vault 2.0), Snowflake, SQL/Python, ETL/ELT, CI/CD, data mesh, governance, and APIs. Agile; strong stakeholder and communication skills. Perks and benefits: As per industry standards.

Posted 4 months ago

AI Match Score
Apply

8.0 - 13.0 years

15 - 25 Lacs

Bengaluru

Hybrid

We are looking for a seasoned Data Architect (Senior Engineer) to lead the design and implementation of scalable, secure, and privacy-compliant data architectures that support feedback-driven AI systems.

Posted 4 months ago

AI Match Score
Apply

8.0 - 13.0 years

15 - 25 Lacs

Chennai

Work from Office

Required Skills and Qualifications: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field. Proven experience as a Solution Architect or in a similar role. Expertise in programming languages and frameworks: Java, Angular, Python, C++. Proficiency in AI/ML frameworks and libraries such as TensorFlow, PyTorch, Scikit-learn, or Keras. Experience in deploying AI models in production, including optimizing for performance and scalability. Understanding of deep learning, NLP, computer vision, or generative AI techniques. Hands-on experience with model fine-tuning, transfer learning, and hyperparameter optimization. Strong knowledge of enterprise architecture frameworks...

Posted 4 months ago

AI Match Score
Apply

6.0 - 10.0 years

15 - 30 Lacs

Indore, Jaipur, Bengaluru

Work from Office

Experience in dashboard story development, dashboard creation, and data engineering pipelines. Manage and organize large volumes of application log data using Google BigQuery. Experience with log analytics, user engagement metrics, and product performance metrics. Required Candidate profile: Experience with tools like Tableau, Power BI, or ThoughtSpot AI. Understand log data generated by Python-based applications. Ensure data integrity, consistency, and accessibility for analytical purposes.

Posted 4 months ago

AI Match Score
Apply

2.0 - 4.0 years

3 - 7 Lacs

Bengaluru

Work from Office

We need a proficient Data Engineer with experience monitoring and fixing jobs for data pipelines written in Azure Data Factory and Python. Design and implement data models for Snowflake to support analytical solutions. Develop ETL processes to integrate data from various sources into Snowflake. Optimize data storage and query performance in Snowflake. Collaborate with cross-functional teams to gather requirements and deliver scalable data solutions. Monitor and maintain Snowflake environments, ensuring optimal performance and data security. Create documentation for data architecture, processes, and best practices. Provide support and training for teams utilizing S...

Posted 4 months ago

AI Match Score
Apply

10.0 - 15.0 years

12 - 16 Lacs

Pune, Bengaluru

Work from Office

We are seeking a talented and experienced Kafka Architect with migration experience to Google Cloud Platform (GCP) to join our team. As a Kafka Architect, you will be responsible for designing, implementing, and managing our Kafka infrastructure to support our data processing and messaging needs, while also leading the migration of our Kafka ecosystem to GCP. You will work closely with our engineering and data teams to ensure seamless integration and optimal performance of Kafka on GCP. Responsibilities: Discovery, analysis, planning, design, and implementation of Kafka deployments on GKE, with a specific focus on migrating Kafka from AWS to GCP. Design, architect and implement scalable, hig...

Posted 4 months ago

AI Match Score
Apply

4.0 - 9.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Data Transformation: Utilize Data Build Tool (dbt) to transform raw data into curated data models according to business requirements. Implement data transformations and aggregations to support analytical and reporting needs. Orchestration and Automation: Design and implement automated workflows using Google Cloud Composer to orchestrate data pipelines and ensure timely data delivery. Monitor and troubleshoot data pipelines, identifying and resolving issues proactively. Develop and maintain documentation for data pipelines and workflows. GCP Expertise: Leverage GCP services, including BigQuery, Cloud Storage, and Pub/Sub, to build a robust and scalable data platform. Optimize BigQuery perform...
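The core idea behind Cloud Composer (managed Airflow) orchestration that this listing describes is dependency-ordered task execution. As a rough standard-library-only illustration of that idea (the task names are hypothetical, and a real Composer pipeline would define an Airflow DAG with operators instead):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: extract -> dbt transform -> load -> report,
# expressed as task -> set of upstream dependencies.
PIPELINE = {
    "extract_raw": set(),
    "dbt_transform": {"extract_raw"},
    "load_bigquery": {"dbt_transform"},
    "publish_report": {"load_bigquery"},
}

def run_pipeline(dag):
    """Execute tasks in dependency order, as a scheduler would."""
    order = list(TopologicalSorter(dag).static_order())
    for task in order:
        # A real orchestrator would invoke an operator or job here.
        print(f"running {task}")
    return order

order = run_pipeline(PIPELINE)
```

The same dependency declarations map directly onto Airflow's `>>` operator chaining; Composer adds scheduling, retries, and monitoring on top of this ordering.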

Posted 4 months ago

AI Match Score
Apply

3.0 - 7.0 years

10 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 8 to 24 LPA. Exp: 3 to 7 years. Location: Gurgaon (Hybrid). Notice: Immediate to 30 days. Job Title: Senior Data Engineer. Job Summary: We are looking for an experienced Senior Data Engineer with 5+ years of hands-on experience in cloud data engineering platforms, specifically AWS, Databricks, and Azure. The ideal candidate will play a critical role in designing, building, and maintaining scalable data pipelines and infrastructure to support our analytics and business intelligence initiatives. Key Responsibilities: Design, develop, and optimize scalable data pipelines using AWS services (e.g., S3, Glue, Redshift, Lambda). Build and maintain ETL/ELT workflows leveraging Databricks and ...

Posted 4 months ago

AI Match Score
Apply

3.0 - 7.0 years

10 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 8 to 24 LPA. Exp: 3 to 7 years. Location: Gurgaon (Hybrid). Notice: Immediate to 30 days. Job Profile: An experienced Data Engineer with a strong foundation in designing, building, and maintaining scalable data pipelines and architectures. Skilled in transforming raw data into clean, structured formats for analytics and business intelligence. Proficient in modern data tools and technologies such as SQL, T-SQL, Python, Databricks, and cloud platforms (Azure). Adept at data wrangling, modeling, ETL/ELT development, and ensuring data quality, integrity, and security. A collaborative team player with a track record of enabling data-driven decision-making across business units. As a Data engin...

Posted 4 months ago

AI Match Score
Apply

8.0 - 13.0 years

10 - 15 Lacs

Bengaluru

Work from Office

In this role, you will play a key part in designing, building, and optimizing scalable data products within the Telecom Analytics domain. You will collaborate with cross-functional teams to implement AI-driven analytics, autonomous operations, and programmable data solutions. This position offers the opportunity to work with cutting-edge Big Data and Cloud technologies, enhance your data engineering expertise, and contribute to advancing Nokia's data-driven telecom strategies. If you are passionate about creating innovative data solutions, mastering cloud and big data platforms, and working in a fast-paced, collaborative environment, this role is for you! You have: A Bachelor's or Master's degree...

Posted 4 months ago

AI Match Score
Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Employment Type: Full Time, Permanent. Working mode: Regular. Job Description: Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. Key Responsibilities: A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective. - Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric). - Utilize ...

Posted 4 months ago

AI Match Score
Apply

7.0 - 12.0 years

5 - 15 Lacs

Bengaluru

Remote

Role & responsibilities: Design, develop, and maintain Collibra workflows tailored to our project's specific needs. Collaborate with cross-functional teams to ensure seamless integration of Collibra with other systems. Educate team members (or yourself) on Collibra's features and best practices. Engage with customers to gather requirements and provide solutions that meet their needs. Stay updated with the latest developments in Collibra and data engineering technologies. Must-Haves: Excellent communication skills in English (reading, writing, and speaking). Background in Data Engineering or related disciplines. Eagerness to l...

Posted 4 months ago

AI Match Score
Apply

9.0 - 14.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design. A Senior Data Engineer designs and oversees the entire data infrastructure, data products, and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective. Key Responsibilities: Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency. - Mentor junior data engineers within the organization. - Design, develop, and maintain data pipelines and ETL...

Posted 4 months ago

AI Match Score
Apply

7.0 - 12.0 years

15 - 22 Lacs

Gurugram

Work from Office

Data Scientist: 7+ years' experience in AI/ML and Big Data. Proficient in Python, SQL, TensorFlow, PyTorch, Scikit-learn, and Spark MLlib. Cloud proficiency (GCP, AWS/Azure). Strong analytical and communication skills. Location: Gurgaon. Salary: 22 LPA. Immediate joiners.

Posted 4 months ago

AI Match Score
Apply

5.0 - 7.0 years

13 - 15 Lacs

Pune

Work from Office

About us: We are building a modern, scalable, fully automated on-premises data platform designed to handle complex data workflows, including data ingestion, ETL processes, physics-based calculations, and machine learning predictions. Orchestrated using Dagster, our platform integrates with multiple data sources, edge devices, and storage systems. A core principle of our architecture is self-service: granting data scientists, analysts, and engineers granular control over the entire journey of their data assets, as well as empowering teams to modify and extend their data pipelines with minimal friction. We're looking for a hands-on Data Engineer to help develop, maintain, and optimize this platf...

Posted 4 months ago

AI Match Score
Apply

5.0 - 10.0 years

8 - 14 Lacs

Bengaluru

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...
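The recursive-parsing technique this listing describes can be sketched, outside Spark, with Python's standard-library ElementTree; the XML snippet and field names below are hypothetical, and a production pipeline would do this in PySpark (typically with a package such as spark-xml) rather than plain Python:

```python
import xml.etree.ElementTree as ET

# Hypothetical deeply nested order document, standing in for real XML data.
DOC = """
<orders>
  <order id="1">
    <customer><name>Asha</name><address><city>Pune</city></address></customer>
    <items><item sku="A1" qty="2"/><item sku="B2" qty="1"/></items>
  </order>
</orders>
"""

def flatten(elem, prefix=""):
    """Recursively flatten an element's attributes and leaf text into one dict."""
    row = {f"{prefix}{k}": v for k, v in elem.attrib.items()}
    for child in elem:
        key = f"{prefix}{child.tag}."
        if len(child) == 0 and not child.attrib:
            row[key.rstrip(".")] = (child.text or "").strip()
        else:
            row.update(flatten(child, key))
    return row

def parse_orders(xml_text):
    """Emit one flat row per <item>, joined with its enclosing <order> context."""
    rows = []
    for order in ET.fromstring(xml_text).iter("order"):
        base = {k: v for k, v in flatten(order).items()
                if not k.startswith("items.")}
        for item in order.iter("item"):
            rows.append({**base,
                         **{f"item.{k}": v for k, v in item.attrib.items()}})
    return rows

rows = parse_orders(DOC)
```

The same explode-nested-children-into-rows shape carries over to Spark, where the flattened dicts become DataFrame rows and the recursion is replaced by a declared schema plus `explode` on the repeated elements.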

Posted 4 months ago

AI Match Score
Apply

5.0 - 10.0 years

8 - 14 Lacs

Gurugram

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted 4 months ago

AI Match Score
Apply

7.0 - 11.0 years

20 - 30 Lacs

Hyderabad, Bengaluru

Hybrid

Responsibilities include working on MDM platforms (ETL, data modelling, data warehousing) and managing complex database analysis, design, implementation, and support for moderate to large databases. The role helps provide production support, enhance existing data assets, and design and develop ETL processes. Job Description: You will be responsible for the design and development of ETL processes for a large data warehouse. Required Qualifications: Experience with Master Data Management platforms (ETL or EAI), data warehousing concepts, code management, and automated testing. Experience in developing ETL design guidelines, standards, and procedures to ensure a manageable ETL infrastructu...

Posted 4 months ago

AI Match Score
Apply

5.0 - 10.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted 4 months ago

AI Match Score
Apply

5.0 - 10.0 years

8 - 14 Lacs

Mumbai

Remote

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted 4 months ago

AI Match Score
Apply

5.0 - 10.0 years

8 - 14 Lacs

Jaipur

Remote

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted 4 months ago

AI Match Score
Apply

5.0 - 10.0 years

8 - 14 Lacs

Chennai

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted 4 months ago

AI Match Score
Apply

14.0 - 24.0 years

35 - 55 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid

About the role: We are seeking a Sr. Practice Manager. With Insight, you will be involved in different phases of the Software Development Lifecycle, including Analysis, Design, Development, and Deployment. We will count on you to be proficient in software design and development, data modelling, data processing, and data visualization. Along the way, you will get to: Help customers leverage existing data resources and implement new technologies and tooling to enable data science and data analytics. Track the performance of our resources and related capabilities. Mentor and manage other data engineers and ensure data engineering best practices are being followed. Constantly evolve...

Posted 4 months ago

AI Match Score
Apply

4.0 - 8.0 years

1 - 4 Lacs

New Delhi, Bengaluru

Work from Office

Role Overview: We're hiring a top-tier AI Solution Architect who thrives on building scalable, open-source-first frameworks in Python for industrial AI applications. You will lead the architecture, design, and deployment of Neuralix's proprietary DLT (Data Lifecycle Templatization) engine, turning complex industrial problems into elegant, reusable code artifacts. This is a career-defining opportunity to build the foundational platform for one of the most audacious AI startups in the world. What You'll Do: Architect and build modular, reusable Python frameworks that streamline industrial data pipelines, signal processing, and AI model deployment. Collaborate directly with product teams, data scient...

Posted 4 months ago

AI Match Score
Apply

9.0 - 13.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Senior Software Engineer (12-15 yrs), Supply Chain Retail Technology. Who We Are: Wayfair runs the largest custom e-commerce large-parcel network in the United States, approximately 1.6 million square meters of logistics space. The nature of the network is inherently a highly variable ecosystem that requires flexible, reliable, and resilient systems to operate efficiently. We are looking for a passionate Backend Software Engineer to join the Fulfilment Optimisation team. What You'll Do: Partner with your business stakeholders to provide them with transparency, data, and resources to make informed decisions. Be a technical leader within and across the teams you work with. Drive high-impact architectura...

Posted 4 months ago

AI Match Score
Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies