874 Data Pipeline Jobs - Page 34

Set up a Job Alert
JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

4.0 - 9.0 years

22 - 37 Lacs

Pune

Hybrid

Role & responsibilities: 3+ years of experience in security engineering or technical infrastructure roles. A minimum of 3 years of cyber security experience in one of the following areas: Cloud (AWS and Azure), Infrastructure (IAM, network, endpoint, etc.), or Data (DLP, data lifecycle management, etc.). Deep, hands-on experience designing security architectures and solutions for reliable and scalable data infrastructure, cloud, and data products in complex environments. Development experience in one or more object-oriented programming languages (e.g., Python, Scala, Java, C#) and/or in one or more cloud environments (including AWS, Azure, Alibaba, etc.). Exposure/exp...

Posted Date not available


3.0 - 5.0 years

5 - 14 Lacs

Hyderabad

Work from Office

ETL Engineer, 3-5 years. Python + SQL + ETL in Python + PySpark ETL jobs + SQL/NoSQL. Optional: Kafka/RabbitMQ or other streaming frameworks; Azure Databricks or other data lake exposure; data lake/data mart concepts. Must be strong and clear in coding; required coding-skill rating of 7/10 in Python + PySpark + SQL. Databases: MySQL/MongoDB/PostgreSQL/MS SQL.
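
For context on the skill mix above, here is a minimal PySpark ETL sketch: read a raw file, apply a transformation, and write a curated output. The paths and column names are hypothetical placeholders, not details from the posting.

```python
# Minimal PySpark ETL sketch: extract from CSV, transform, load to Parquet.
# All paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read a raw CSV dataset (schema inferred for brevity).
orders = (
    spark.read.option("header", True)
    .option("inferSchema", True)
    .csv("/data/raw/orders.csv")
)

# Transform: basic cleansing plus a daily aggregate per customer.
daily_totals = (
    orders
    .filter(F.col("amount").isNotNull())
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("customer_id", "order_date")
    .agg(F.sum("amount").alias("total_amount"))
)

# Load: write the curated output, partitioned by date.
daily_totals.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/daily_totals")
```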

Posted Date not available


5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Remote

Strong expertise in Microsoft Fabric, Python/PySpark, SQL, and data warehousing/lakehouse concepts. Excellent analytical, problem-solving, and communication skills, with experience in agile environments; certifications ...

Posted Date not available


6.0 - 10.0 years

14 - 24 Lacs

Pune, Chennai

Work from Office

Mandatory - Experience and knowledge in designing, implementing, and managing non-relational data stores (e.g., MongoDB, Cassandra, DynamoDB), focusing on flexible schema design, scalability, and performance optimization for handling large volumes of unstructured or semi-structured data. The client mainly needs a NoSQL DB, either MongoDB or HBase. Data Pipeline Development: Design, develop, test, and deploy robust, high-performance, and scalable ETL/ELT data pipelines using Scala and Apache Spark to ingest, process, and transform large volumes of structured and unstructured data from diverse sources. Big Data Expertise: Leverage expertise in the Hadoop ecosystem (HDFS, Hive, etc.) and distributed ...
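
To ground the flexible-schema point above, here is a small pymongo sketch: documents with differing fields land in the same collection, and an index supports the common query key. The connection string, database, collection, and fields are assumptions for illustration; the posting itself leaves the MongoDB-versus-HBase choice open.

```python
# Sketch: flexible-schema writes and an index in MongoDB via pymongo.
# Connection string, database, collection, and fields are hypothetical.
from pymongo import ASCENDING, MongoClient

client = MongoClient("mongodb://localhost:27017")
events = client["analytics"]["events"]

# Documents in one collection may carry different fields (flexible schema).
events.insert_many([
    {"event_id": 1, "type": "click", "page": "/home"},
    {"event_id": 2, "type": "purchase", "amount": 49.99, "currency": "USD"},
])

# Index the common query key so lookups stay fast as volume grows.
events.create_index([("event_id", ASCENDING)])

print(events.find_one({"event_id": 2}))
```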

Posted Date not available


1.0 - 5.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Key Responsibilities - Synthetic Data Generation & Quality Assurance: Design and implement scalable synthetic data generation systems to support model training; develop and maintain data quality validation pipelines ensuring synthetic data meets training requirements; build automated testing frameworks for synthetic data generation workflows; collaborate with ML teams to optimize synthetic data for model performance. APIs & Integration: Develop and maintain REST API integrations across multiple enterprise platforms; implement robust data exchange, transformation, and synchronisation logic between systems; ensure error handling, retries, and monitoring for all integration workflows. Data Quality & Testi...
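
As a rough illustration of the synthetic-data-plus-validation workflow described above, the Python sketch below generates a synthetic tabular dataset and runs basic quality checks before it would be handed to a training pipeline. The schema, value ranges, and checks are assumptions made for the example.

```python
# Sketch: generate a synthetic tabular dataset and validate it before use.
# Schema, ranges, and thresholds are hypothetical.
import numpy as np
import pandas as pd

def generate_synthetic_users(n_rows: int, seed: int = 42) -> pd.DataFrame:
    rng = np.random.default_rng(seed)
    return pd.DataFrame({
        "user_id": np.arange(n_rows),
        "age": rng.integers(18, 90, size=n_rows),
        "signup_channel": rng.choice(["web", "mobile", "partner"], size=n_rows),
        "monthly_spend": rng.gamma(shape=2.0, scale=50.0, size=n_rows).round(2),
    })

def validate(df: pd.DataFrame) -> list:
    """Return a list of quality issues; an empty list means the batch passes."""
    issues = []
    if df["user_id"].duplicated().any():
        issues.append("duplicate user_id values")
    if not df["age"].between(18, 90).all():
        issues.append("age outside expected range")
    if (df["monthly_spend"] < 0).any():
        issues.append("negative monthly_spend")
    if df.isna().any().any():
        issues.append("missing values present")
    return issues

batch = generate_synthetic_users(10_000)
print("quality issues:", validate(batch) or "none")
```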

Posted Date not available


11.0 - 21.0 years

35 - 80 Lacs

Hyderabad

Hybrid

Job Title: Principal Data Engineer, Logistics. Employment Type: Full Time. Experience: 12+ Years. About the Role: We are looking for a Principal Data Engineer to lead the design and delivery of scalable data solutions using Azure Data Factory and Azure Databricks. This is a consulting-focused role that requires strong technical expertise, stakeholder engagement, and architectural thinking. You will work closely with business, functional, and technical teams to define data strategies, design robust pipelines, and ensure smooth delivery in an Agile environment. Responsibilities: Collaborate with business and technology stakeholders to gather and understand data needs; translate functional requireme...

Posted Date not available


10.0 - 16.0 years

27 - 42 Lacs

Bengaluru

Work from Office

Primary Skill: GCP, Airflow, Python (developer). Location: Noida. Experience: 7 to 16 years. Detailed JD: 5+ years of experience in Airflow & ETL processes (Python); data engineering experience in the cloud (preferably GCP); data pipeline development experience; knowledge of Cloud Storage and other data engineering cloud services; strong Python, PySpark, SQL/BigQuery, and other databases; Test Driven Development experience; experience building libraries; Pandas, NumPy, GCP, Elasticsearch, BigQuery; advanced SQL preferred, basic required.
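
Since the role centers on Airflow-driven ETL in Python, here is a minimal Airflow 2.x DAG sketch with two dependent Python tasks. The DAG id, schedule, and task bodies are illustrative assumptions; in a real GCP setup the tasks would more likely use BigQuery or Cloud Storage operators.

```python
# Minimal Airflow 2.x DAG sketch: two dependent Python tasks on a daily schedule.
# DAG id, schedule, and task bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull data from a source system (e.g., an API or a GCS bucket).
    print("extracting for", context["ds"])

def load(**context):
    # Placeholder: load transformed data into the warehouse (e.g., BigQuery).
    print("loading for", context["ds"])

with DAG(
    dag_id="daily_etl_sketch",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```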

Posted Date not available


3.0 - 7.0 years

10 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 10 to 20 LPA. Exp: 3 to 7 years. Location: Gurgaon. Notice: Immediate to 30 days. Key Responsibilities & Skillsets - Common skillsets: 3+ years of experience in data engineering, ETL, and data warehousing; experience managing Python code and collaborating with the customer on model evolution; good knowledge of data warehousing; experience in data modelling; good communication skills for client interaction. Data management skillsets: ability to understand data models and identify ETL optimization opportunities; exposure to ETL tools is preferred; strong grasp of advanced SQL functionality (joins, nested queries, and procedures); strong ability to translate functional specifications / requi...

Posted Date not available


6.0 - 11.0 years

1 - 3 Lacs

Hyderabad

Work from Office

SUMMARY - Required Skills & Qualifications: Experience: 8-12 years of overall professional experience, with a minimum of 7 years of hands-on experience in data engineering. Snowflake Expertise: Proven, in-depth experience with the Snowflake data warehouse platform, including a strong understanding of its architecture, features, and best practices. SQL Mastery: Expert-level SQL skills for data manipulation, complex query writing, and performance tuning. ETL/ELT: Extensive experience with ETL/ELT tools and techniques, with a focus on building scalable data pipelines. Scripting: Proficiency in at least one scripting language (Python is highly preferred) for data processing and automation. Cloud...
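
To make the Snowflake-plus-Python combination concrete, here is a small ELT-style sketch using the snowflake-connector-python package: staged files are loaded into a raw table and then transformed inside Snowflake with SQL. Connection parameters, the stage, and the table names are placeholders, not details from the posting.

```python
# ELT sketch with the Snowflake Python connector: load staged files, then
# transform in-database with SQL. All identifiers and credentials are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Load: copy staged files into a raw table (stage and table assumed to exist).
    cur.execute(
        "COPY INTO raw_orders FROM @orders_stage "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # Transform: build a curated table with SQL that runs inside Snowflake.
    cur.execute("""
        CREATE OR REPLACE TABLE ANALYTICS.CURATED.DAILY_TOTALS AS
        SELECT customer_id, order_date, SUM(amount) AS total_amount
        FROM raw_orders
        GROUP BY customer_id, order_date
    """)
finally:
    conn.close()
```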

Posted Date not available


8.0 - 13.0 years

40 - 50 Lacs

Hyderabad

Work from Office

Join our dynamic CRM data engineering team as a full-time engineer, where you will have the opportunity to work on big data ecosystems that support the eCommerce and CRM domain for Fanatics. This role focuses on developing, expanding, and optimizing our data pipelines and architecture, ensuring efficient data flow and effective data collection across cross-functional teams. What are we looking for? - B.Tech/M.Tech in Computer Science or a related field, or an equivalent combination of education and experience. - A minimum of 5-6 years of experience in software engineering, with hands-on experience in building data pipelines and big data technologies. - Proficiency with Big Data technologies s...

Posted Date not available


7.0 - 10.0 years

20 - 25 Lacs

Noida

Remote

Focus: Build and maintain data pipelines and infrastructure to feed AI models. Responsibilities: Architect and implement efficient ETL/ELT pipelines to ingest data from diverse internal and external sources (APIs, databases, streaming platforms). Optimize data storage, indexing, and retrieval to support high-volume, low-latency AI workloads. Ensure data quality by implementing validation, cleansing, and anomaly detection mechanisms. Manage cloud data infrastructure (AWS Glue, Databricks, Snowflake, Kafka, etc.) or equivalent on-prem tools.
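
The validation and anomaly-detection responsibility above can start as simple rule checks plus a statistical outlier screen; a small pandas sketch follows. The column names and the z-score threshold are assumptions for illustration.

```python
# Sketch: rule-based validation plus a simple z-score anomaly screen on a batch.
# Column names and thresholds are hypothetical.
import numpy as np
import pandas as pd

def validate_batch(df: pd.DataFrame) -> dict:
    report = {
        "rows": len(df),
        "missing_values": int(df.isna().sum().sum()),
        "duplicate_ids": int(df["record_id"].duplicated().sum()),
    }
    # Flag numeric outliers: values more than 3 standard deviations from the mean.
    values = df["metric_value"].astype(float)
    z_scores = (values - values.mean()) / values.std(ddof=0)
    report["anomalies"] = int((z_scores.abs() > 3).sum())
    return report

batch = pd.DataFrame({
    "record_id": range(1000),
    "metric_value": np.random.default_rng(0).normal(100, 10, 1000),
})
print(validate_batch(batch))
```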

Posted Date not available


11.0 - 19.0 years

20 - 35 Lacs

Faridabad

Remote

We are seeking an experienced and highly skilled Senior Data Engineer to drive data-driven decision-making and innovation. In this role, you will leverage your expertise in advanced analytics, machine learning, and big data technologies to solve complex business challenges. You will be responsible for designing predictive models, building scalable data pipelines, and uncovering actionable insights from structured and unstructured datasets. Collaborating with cross-functional teams, your work will empower strategic decision-making and foster a data-driven culture across the organization. Role & responsibilities - Position Overview: We are seeking an experienced and highly skilled Senior Data ...

Posted Date not available


5.0 - 9.0 years

19 - 27 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Position Overview Annalect is currently seeking a data engineer to join our technology team. In this role you will build Annalect products which sit atop cloud-based data infrastructure. We are looking for people who have a shared passion for technology, design & development, data, and fusing these disciplines together to build cool things. In this role, you will work on one or more software and data products in the Annalect Engineering Team. You will participate in technical architecture, design and development of software products as well as research and evaluation of new technical solutions. Key Responsibilities: Steward data and compute environments to facilitate usage of data assets Des...

Posted Date not available


5.0 - 8.0 years

20 - 30 Lacs

Bengaluru

Work from Office

Cloud Data Engineer. Req number: R5934. Employment type: Full time. Worksite flexibility: Remote. Who we are: CAI is a global technology services firm with over 8,500 associates worldwide and a yearly revenue of $1 billion+. We have over 40 years of excellence in uniting talent and technology to power the possible for our clients, colleagues, and communities. As a privately held company, we have the freedom and focus to do what is right—whatever it takes. Our tailor-made solutions create lasting results across the public and commercial sectors, and we are trailblazers in bringing neurodiversity to the enterprise. Job Summary: We are seeking a motivated Cloud Data Engineer who has experience in bu...

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Pune

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...
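
One common way to approach the nested-XML requirement described above is the spark-xml data source: read with a row tag, then explode nested arrays into flat columns, as in the sketch below. The file path, row tag, and field names are hypothetical, and the listing itself does not name a specific library.

```python
# Sketch: parse nested XML with the spark-xml data source and flatten one level.
# Requires the spark-xml package (e.g., com.databricks:spark-xml) on the cluster.
# Path, rowTag, and field names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("xml-flatten-sketch").getOrCreate()

orders = (
    spark.read.format("xml")
    .option("rowTag", "order")        # each <order> element becomes one row
    .load("/data/raw/orders.xml")
)

# Explode a nested repeating element (<items><item>...</item></items>) into rows,
# then project nested struct fields into flat columns.
flat = (
    orders
    .withColumn("item", F.explode("items.item"))
    .select(
        F.col("_id").alias("order_id"),  # XML attributes surface with a leading underscore
        F.col("item.sku").alias("sku"),
        F.col("item.qty").cast("int").alias("qty"),
    )
)
flat.show(truncate=False)
```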

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Gurugram

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Surat

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted Date not available


5.0 - 8.0 years

11 - 20 Lacs

Noida, Indore, Pune

Hybrid

Responsible for SAS-to-Python code conversion; feature engineering, EDA, pipeline creation, model training, and hyperparameter tuning with structured and unstructured datasets.
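
As a sketch of the feature-engineering, training, and tuning portion of this role, below is a minimal scikit-learn pipeline with grid-search hyperparameter tuning. The dataset, features, and parameter grid are invented for illustration; the SAS-to-Python conversion itself would depend entirely on the original SAS programs.

```python
# Sketch: feature pipeline + model + hyperparameter tuning with scikit-learn.
# Data and parameter grid are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 8))            # stand-in feature matrix
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # stand-in binary target

pipeline = Pipeline([
    ("scale", StandardScaler()),              # simple feature-engineering step
    ("model", RandomForestClassifier(random_state=0)),
])

param_grid = {
    "model__n_estimators": [100, 300],
    "model__max_depth": [None, 5, 10],
}

search = GridSearchCV(pipeline, param_grid, cv=5, scoring="roc_auc")
search.fit(X, y)
print("best params:", search.best_params_, "best AUC:", round(search.best_score_, 3))
```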

Posted Date not available


8.0 - 10.0 years

25 - 35 Lacs

Pune

Hybrid

Hiring a Technical Lead (Python, Full Stack) with 8-10 years of experience for a hybrid role. Lead the team and projects; work on Python/Django/Flask, AWS, REST APIs, CI/CD, and databases. Strong leadership, client-handling, and communication skills required.
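
Since the stack named above includes Flask and REST APIs, here is a minimal Flask 2.x sketch of a JSON endpoint pair. The routes, payload fields, and in-memory store are assumptions for illustration only.

```python
# Minimal Flask REST API sketch; routes and payloads are hypothetical.
from flask import Flask, jsonify, request

app = Flask(__name__)
_projects = {}  # in-memory stand-in for a real database

@app.post("/projects")
def create_project():
    payload = request.get_json(force=True)
    project_id = len(_projects) + 1
    _projects[project_id] = {"id": project_id, "name": payload.get("name", "unnamed")}
    return jsonify(_projects[project_id]), 201

@app.get("/projects/<int:project_id>")
def get_project(project_id: int):
    project = _projects.get(project_id)
    if project is None:
        return jsonify({"error": "not found"}), 404
    return jsonify(project), 200

if __name__ == "__main__":
    app.run(debug=True)
```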

Posted Date not available


7.0 - 8.0 years

2 - 5 Lacs

Hyderabad

Work from Office

Job Title: Databricks Engineer. Experience Required: Minimum 7+ Years (Mandatory). Job Type: Full-time | Day Shift (IST). We are looking for a skilled and experienced Databricks Engineer to join our team at our Hyderabad (Madhapur) office. The ideal candidate will have a strong background in data engineering, hands-on experience with Databricks, and a solid understanding of cloud platforms (AWS, Azure, or GCP). This is a great opportunity to work on cutting-edge data projects in a collaborative and fast-paced environment. Key Responsibilities: - Design and implement scalable and efficient data pipelines, ETL/ELT processes, and data integration solutions using Databricks. - Work with large-s...
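
A small illustration of the kind of Databricks pipeline step this posting describes: read raw files, apply light cleansing, and append to a Delta table. The paths and table name are placeholders, and the cloud platform (AWS, Azure, or GCP) is left unspecified, as in the listing.

```python
# Sketch: load raw JSON into a Delta table on a Databricks-style cluster.
# Paths and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-load-sketch").getOrCreate()

raw = spark.read.json("/mnt/raw/events/")  # raw landing zone (assumed mounted)

cleaned = (
    raw
    .dropDuplicates(["event_id"])
    .withColumn("ingest_ts", F.current_timestamp())
)

# Append to a managed Delta table; Databricks clusters include Delta Lake support.
cleaned.write.format("delta").mode("append").saveAsTable("bronze.events")
```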

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Chennai

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Kolkata

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted Date not available


5.0 - 10.0 years

5 - 9 Lacs

Ahmedabad

Work from Office

Key Responsibilities : - Design and develop scalable PySpark pipelines to ingest, parse, and process XML datasets with extreme hierarchical complexity. - Implement efficient XPath expressions, recursive parsing techniques, and custom schema definitions to extract data from nested XML structures. - Optimize Spark jobs through partitioning, caching, and parallel processing to handle terabytes of XML data efficiently. - Transform raw hierarchical XML data into structured DataFrames for analytics, machine learning, and reporting use cases. - Collaborate with data architects and analysts to define data models for nested XML schemas. - Troubleshoot performance bottlenecks and ensure reliability in...

Posted Date not available
