867 Data Pipeline Jobs - Page 16

JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office

Job Description: Job Title: ETL Testing | Experience: 5-8 Years | Location: Chennai, Bangalore | Employment Type: Full Time | Job Type: Work from Office (Monday-Friday) | Shift Timing: 12:30 PM to 9:30 PM. Required Skills: analytical skills to understand requirements and develop test cases; ability to understand and manage data; strong SQL skills; hands-on testing of data pipelines built using Glue, S3, Redshift, and Lambda; collaborating with developers to build automated testing where appropriate; understanding of data concepts such as data lineage, data integrity, and data quality; experience testing financial data is a plus.
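The SQL-centric pipeline-testing skills this listing asks for usually boil down to automated reconciliation checks between staging and target tables. A minimal sketch, using Python's stdlib sqlite3 as a stand-in for Redshift; the table and column names are hypothetical:

```python
import sqlite3

# Stand-in for a staging/target pair in Redshift; names are illustrative.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE staging_orders (order_id INTEGER PRIMARY KEY, amount REAL);
    CREATE TABLE fact_orders    (order_id INTEGER PRIMARY KEY, amount REAL);
    INSERT INTO staging_orders VALUES (1, 10.0), (2, 20.0), (3, 30.0);
    INSERT INTO fact_orders    VALUES (1, 10.0), (2, 20.0), (3, 30.0);
""")

def row_count(table):
    # Row-count reconciliation: staging and fact should agree after a load.
    return con.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

def orphan_count():
    # Referential integrity: every fact row must trace back to staging.
    return con.execute("""
        SELECT COUNT(*) FROM fact_orders f
        LEFT JOIN staging_orders s USING (order_id)
        WHERE s.order_id IS NULL
    """).fetchone()[0]

# Typical post-load assertions a pipeline tester would automate.
assert row_count("staging_orders") == row_count("fact_orders")
assert orphan_count() == 0
```

In practice the same queries would run against the warehouse from a test harness (e.g., pytest) after each pipeline run.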

Posted 2 months ago

5.0 - 8.0 years

10 - 16 Lacs

Bengaluru

Work from Office

Perform gap analysis and assess the impact of AI implementations on business processes. Develop prototypes and proof-of-concept AI solutions using tools like Python, TensorFlow, or R. Support UAT. Required candidate profile: experience in AI/ML or data analytics projects; familiarity with AI/ML concepts, data pipelines, and statistical modeling; proficiency in Python, R, or SQL preferred; AI tools: Azure AI, AWS SageMaker, Google AI, OpenAI.

Posted 2 months ago

8.0 - 13.0 years

25 - 40 Lacs

Hyderabad

Work from Office

Key Responsibilities: Design conformed star and snowflake schemas; implement SCD2 dimensions and fact tables. Lead Spark (PySpark/Scala) or AWS Glue ELT pipelines from RDS Zero-ETL/S3 into Redshift. Tune RA3 clusters (sort/dist keys, WLM queues, Spectrum partitions) for sub-second BI queries. Establish data-quality, lineage, and cost-governance dashboards using CloudWatch and Terraform/CDK. Collaborate with Product & Analytics to translate HR KPIs into self-service data marts. Mentor junior engineers; drive documentation and coding standards. Must-Have Skills: Amazon Redshift (sort & dist keys, RA3, Spectrum); Spark on EMR/Glue (PySpark or Scala); dimensional modelling (Kimball), star schema, SCD2; advanced SQL ...
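The SCD2 technique named above keeps full history in a dimension table: instead of overwriting an attribute, the current row is closed out and a new version is inserted. A minimal sketch in plain Python with sqlite3 standing in for Redshift; the table and column names are illustrative, not from the posting:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""
    CREATE TABLE dim_employee (
        emp_id INTEGER, dept TEXT,
        valid_from TEXT, valid_to TEXT, is_current INTEGER
    )
""")
con.execute(
    "INSERT INTO dim_employee VALUES (101, 'Finance', '2023-01-01', '9999-12-31', 1)"
)

def scd2_update(emp_id, new_dept, change_date):
    """Type 2 change: close the current row, then insert the new version."""
    con.execute(
        "UPDATE dim_employee SET valid_to = ?, is_current = 0 "
        "WHERE emp_id = ? AND is_current = 1",
        (change_date, emp_id),
    )
    con.execute(
        "INSERT INTO dim_employee VALUES (?, ?, ?, '9999-12-31', 1)",
        (emp_id, new_dept, change_date),
    )

scd2_update(101, 'HR', '2024-06-01')
rows = con.execute(
    "SELECT dept, is_current FROM dim_employee ORDER BY valid_from"
).fetchall()
# History preserved: Finance row closed out, HR row current.
# rows == [('Finance', 0), ('HR', 1)]
```

In Redshift this pattern is typically expressed as an UPDATE-then-INSERT (or MERGE) keyed on the business key plus the is_current flag.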

Posted 2 months ago

10.0 - 16.0 years

50 - 100 Lacs

Bengaluru

Hybrid

Title: Principal Data Engineer Keywords: Java | AWS | Spark | Kafka | MySQL | ElasticSearch Office location: Bangalore, EGL - Domlur Experience: 10 to 16 years Responsibilities: As a Principal Data Engineer, you will be responsible for: Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads. Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions. Integrating big data frameworks (like Spark, Kafka, Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases. Driving CI/CD best practices, infrastructure automation, and performance t...

Posted 2 months ago

8.0 - 13.0 years

0 Lacs

Pune

Work from Office

Responsibilities: * Design, develop & maintain data pipelines using SQL, AWS & Snowflake. * Collaborate with cross-functional teams on data warehousing projects.

Posted 2 months ago

3.0 - 8.0 years

15 - 27 Lacs

Pune, Bengaluru

Work from Office

Velotio Technologies is a product engineering company working with innovative startups and enterprises. We have provided full-stack product development for 110+ startups across the globe, building products in the cloud-native, data engineering, B2B SaaS, IoT & Machine Learning space. Our team of 400+ elite software engineers solves hard technical problems while transforming customer ideas into successful products. Requirements Implement a cloud-native analytics platform with high performance and scalability Build an API-first infrastructure for data in and data out Build data ingestion capabilities for internal data, as well as external spend data. Leverage data classification AI algorithms ...

Posted 2 months ago

4.0 - 7.0 years

10 - 20 Lacs

Hyderabad, Gurugram, Bengaluru

Work from Office

Proven experience as a Data Engineer. Strong experience with SQL and ETL, Hadoop and its ecosystem. Expertise in full and incremental data loading techniques. Contact / WhatsApp: 8712691790 / navya.k@liveconnections.in *JOB IN BANGALORE* Required candidate profile: • Design, develop, and maintain scalable data pipelines and systems for data processing. • Utilize Hadoop and related technologies to manage large-scale data processing.
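The full vs. incremental loading distinction this listing asks about is commonly implemented with a high-watermark: each run pulls only rows changed since the last successful load. A minimal in-memory sketch; the table names and timestamp column are hypothetical:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE source_events (id INTEGER, updated_at TEXT);
    CREATE TABLE target_events (id INTEGER, updated_at TEXT);
    INSERT INTO source_events VALUES
        (1, '2024-01-01'), (2, '2024-01-02'), (3, '2024-01-03');
""")

def incremental_load(watermark):
    """Copy only rows changed after the watermark; return the new watermark."""
    rows = con.execute(
        "SELECT id, updated_at FROM source_events WHERE updated_at > ?",
        (watermark,),
    ).fetchall()
    con.executemany("INSERT INTO target_events VALUES (?, ?)", rows)
    return max((r[1] for r in rows), default=watermark)

wm = incremental_load('1970-01-01')   # first run: watermark so old it is a full load
con.execute("INSERT INTO source_events VALUES (4, '2024-01-04')")
wm = incremental_load(wm)             # second run: only the new row moves
```

A production pipeline would persist the watermark (e.g., in a control table) so each run resumes where the previous one ended.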

Posted 2 months ago

9.0 - 14.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Key Responsibilities: Oversee the entire data infrastructure to ensure scalability, operational efficiency, and resiliency. - Mentor junior data engineers within the organization. - Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric). - Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen 2 & Azure Blob storage). - Collaborate with data scientists, data analysts, data architects and other stakeholders to understand data requirements and deliver high-quality data solutions. - Optimize data pipelines in the Azu...

Posted 2 months ago

7.0 - 12.0 years

8 - 18 Lacs

Bengaluru

Hybrid

Role - Cyber Data Pipeline Engineer Exp 7-14 Years Location – Bengaluru Description Overview We are seeking a skilled and motivated Data Pipeline Engineer to join our team. In this role, you will manage and maintain critical data pipeline platforms that collect, transform, and transmit cyber events data to downstream platforms, such as ElasticSearch and Splunk. You will be responsible for ensuring the reliability, scalability, and performance of the pipeline infrastructure while building complex integrations with cloud and on-premises cyber systems. Our key stakeholders are cyber teams including security response, investigations and insider threat. Role Profile A successful applicant will co...

Posted 2 months ago

2.0 - 4.0 years

4 - 5 Lacs

Hyderabad

Work from Office

Job description: As a key member of the data team, you will be responsible for developing and managing business intelligence solutions, creating data visualizations, and providing actionable insights to support decision-making processes. You will work closely with stakeholders to understand business requirements, translate them into technical tasks, and develop robust data analytics and BI solutions while ensuring high-quality deliverables and client satisfaction. Desired Skills and Experience Essential skills: 3-4 years of experience in development and deployment of high-performance, complex Tableau dashboards Experienced in writing complex Tableau calculations and using Alteryx so...

Posted 2 months ago

8.0 - 12.0 years

35 - 45 Lacs

Hyderabad, Bengaluru

Work from Office

We are seeking a hands-on and forward-thinking Principal or Lead Engineer with deep expertise in Java-based backend development and a strong grasp of Generative AI technologies. You will lead the design, development, and deployment of Gen AI-based solutions, working across data, ML engineering, and software engineering teams to integrate AI capabilities into our core platforms and client-facing products. Responsibilities Lead end-to-end architecture and implementation of Generative AI solutions integrated with Java-based applications Evaluate, fine-tune, and deploy foundation models and LLMs (e.g., GPT, LLaMA, Claude, Gemini) for use cases such as code generation, summarization, intelligen...

Posted 2 months ago

8.0 - 10.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Must have: - Strong programming skills in languages like Python and Java - Hands-on experience with one cloud (GCP preferred) - Experience working with Docker - Environment management (e.g., venv, pip, poetry) - Experience with orchestrators like Vertex AI Pipelines, Airflow, etc. - Understanding of the full ML cycle end-to-end - Data engineering and feature engineering techniques - Experience with ML modelling and evaluation metrics - Experience with TensorFlow, PyTorch, or another framework - Experience with model monitoring - Advanced SQL knowledge - Awareness of streaming concepts like windowing, late arrival, triggers, etc. Good to have: - Hyperparameter tuning experience. - Proficient in either Apache Spa...
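The streaming concepts named in this listing (windowing, late arrival) can be illustrated with a toy tumbling-window counter in plain Python. This is a conceptual sketch of the idea, not how Beam or Flink implement it; the window size, lateness allowance, and event timestamps are made up:

```python
from collections import defaultdict

WINDOW = 60            # tumbling window size, in seconds of event time
ALLOWED_LATENESS = 30  # how far behind the watermark an event may arrive

def assign_window(ts):
    """Map an event timestamp to the start of its tumbling window."""
    return ts - (ts % WINDOW)

counts = defaultdict(int)  # per-window event counts
watermark = 0              # highest event time seen so far
dropped = []               # events too late to count

def process(ts):
    global watermark
    watermark = max(watermark, ts)
    window = assign_window(ts)
    # Drop only if the event's window closed more than ALLOWED_LATENESS ago.
    if window + WINDOW <= watermark - ALLOWED_LATENESS:
        dropped.append(ts)
        return
    counts[window] += 1

# Out-of-order stream: 10 arrives late but within lateness; 20 arrives too late.
for t in [5, 61, 62, 10, 130, 20]:
    process(t)
```

Event 10 lands back in the [0, 60) window because its window is still within the lateness allowance when it arrives, while event 20 is dropped once the watermark has advanced past that window's close plus the allowance.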

Posted 2 months ago

2.0 - 6.0 years

3 - 7 Lacs

Gurugram

Work from Office

We are looking for a PySpark Developer who loves solving complex problems across a full spectrum of technologies. You will help ensure our technological infrastructure operates seamlessly in support of our business objectives. Responsibilities Develop and maintain data pipelines implementing ETL processes. Take responsibility for Hadoop development and implementation. Work closely with a data science team implementing data analytic pipelines. Help define data governance policies and support data versioning processes. Maintain security and data privacy, working closely with the Data Protection Officer internally. Analyse a vast number of data stores and uncover insights. Skillset Required Ability...

Posted 2 months ago

4.0 - 9.0 years

4 - 9 Lacs

Gurugram

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What you'll do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solutions on the Databrick...

Posted 2 months ago

6.0 - 9.0 years

9 - 13 Lacs

Gurugram

Work from Office

Experience : 6+ years as Azure Data Engineer including at least 1 E2E Implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, ETL/ELT processes...

Posted 2 months ago

3.0 - 7.0 years

12 - 15 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

We are looking for an experienced Data Engineer/BI Developer with strong hands-on expertise in Microsoft Fabric technologies, including OneLake, Lakehouse, Data Lake, Warehouse, and Real-Time Analytics, along with proven skills in Power BI, Azure Synapse Analytics, and Azure Data Factory (ADF). The ideal candidate should also possess working knowledge of DevOps practices for data engineering and deployment automation. Key Responsibilities: Design and implement scalable data solutions using Microsoft Fabric components: OneLake, Data Lake, Lakehouse, Warehouse, and Real-Time Analytics Build and manage end-to-end data pipelines integrating structured and unstructured data from multiple sources....

Posted 2 months ago

3.0 - 6.0 years

5 - 8 Lacs

Gurugram

Work from Office

About the job : - As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. - You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. - This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges. What You'll Do : - Design and develop data processing pipelines and analytics solutions using Databricks. - Architect scalable and efficient data models and storage solu...

Posted 2 months ago

3.0 - 6.0 years

12 - 16 Lacs

Thiruvananthapuram

Work from Office

AWS Cloud Services (Glue, Lambda, Athena, Lakehouse) AWS CDK for Infrastructure-as-Code (IaC) with TypeScript Data pipeline development & orchestration using AWS Glue Strong programming skills in Python, PySpark, Spark SQL, TypeScript Required candidate profile: 3 to 5 years' client-facing and team leadership experience. Candidates have to work with UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.

Posted 2 months ago

7.0 - 10.0 years

9 - 12 Lacs

Hyderabad

Hybrid

Responsibilities of the Candidate: - Be responsible for the design and development of big data solutions. Partner with domain experts, product managers, analysts, and data scientists to develop big data pipelines in Hadoop - Be responsible for moving all legacy workloads to a cloud platform - Work with data scientists to build client pipelines using heterogeneous sources and provide engineering services for PySpark data science applications - Ensure automation through CI/CD across platforms both in cloud and on-premises - Define needs around maintainability, testability, performance, security, quality, and usability for the data platform - Drive implementation, consistent patterns, reusable...

Posted 2 months ago

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer at GoKwik, you will play a crucial role in collaborating with product managers, data scientists, business intelligence teams, and software development engineers to develop and implement data-driven strategies throughout the organization. Your responsibilities will revolve around identifying and executing process enhancements, data model and architecture creation, pipeline development, and data application deployment. Your focus will be on continuously improving data optimization processes, ensuring data quality and security, and creating new data models and pipelines as necessary. You will strive for high performance, operational excellence, accuracy, and reliability withi...

Posted 2 months ago

7.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Bengaluru

Hybrid

Job Role: Backend and Data Pipeline Engineer Location: Hyderabad/Bangalore (Hybrid) Job Type: Full-time ** Only Immediate Joiners ** Job Summary: The Team: We're investing in technology to develop new products that help our customers drive their growth and transformation agenda. These include new data integration, advanced analytics, and modern applications that address new customer needs and are highly visible and strategic within the organization. Do you love building products on platforms at scale while leveraging cutting-edge technology? Do you want to deliver innovative solutions to complex problems? If so, be part of our mighty team of engineers and play a key role in driving our busines...

Posted 2 months ago

9.0 - 12.0 years

7 - 11 Lacs

Hyderabad

Work from Office

Primarily looking for a candidate with strong expertise in data-related skills, including: - SQL & Database Management: Deep knowledge of relational databases (PostgreSQL), cloud-hosted data platforms (AWS, Azure, GCP), and data warehouses like Snowflake. - ETL/ELT Tools: Experience with SnapLogic, StreamSets, or DBT for building and maintaining data pipelines; extensive experience with ETL tools and data pipelines. - Data Modeling & Optimization: Strong understanding of data modeling, OLAP systems, query optimization, and performance tuning. - Cloud & Security: Familiarity with cloud platforms and SQL security techniques (e.g., data encryption, TDE). - Data Warehousing: Experience managing...

Posted 2 months ago

4.0 - 8.0 years

0 - 0 Lacs

Pune

Hybrid

So, what's the role all about? Within Actimize, the AI and Analytics Team is developing the next-generation advanced analytical cloud platform that will harness the power of data to provide maximum accuracy for our clients' Financial Crime programs. As part of the PaaS/SaaS development group, you will be responsible for developing this platform for Actimize cloud-based solutions and working with cutting-edge cloud technologies. How will you make an impact? NICE Actimize is the largest and broadest provider of financial crime, risk, and compliance solutions for regional and global financial institutions, and has been consistently ranked as number one in the space. At NICE Actimize, we recognize th...

Posted 2 months ago

5.0 - 10.0 years

7 - 13 Lacs

Pune, Chennai, Bengaluru

Work from Office

Role & Responsibilities: Develop, test, and deploy robust dashboards and reports in Power BI using SAP HANA and Snowflake datasets. Basic Qualifications: Excellent verbal and written communication skills. 5+ years of experience working with Power BI on SAP HANA and Snowflake datasets. 5+ years of hands-on experience in developing moderate to complex ETL data pipelines is a plus. 5+ years of hands-on experience with the ability to resolve complex SQL query performance issues. 5+ years of ETL Python development experience; experience parallelizing pipelines a plus. Demonstrated ability to troubleshoot complex query, pipeline, and data quality issues.
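The "parallelizing pipelines" experience this listing mentions typically means fanning independent extract/transform steps out to a pool of workers. A minimal sketch with Python's concurrent.futures, using a stand-in transform; the batch data and function are hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

def transform(batch):
    """Stand-in for a per-batch transform (e.g., an extract + cleanup step)."""
    return [row * 2 for row in batch]

# Independent batches: no shared state, so they can run concurrently.
batches = [[1, 2], [3, 4], [5, 6]]

# pool.map runs batches in parallel but preserves input order in the results.
with ThreadPoolExecutor(max_workers=3) as pool:
    results = list(pool.map(transform, batches))

# results == [[2, 4], [6, 8], [10, 12]]
```

For CPU-bound transforms, ProcessPoolExecutor (same interface) avoids the GIL; threads suit I/O-bound steps such as database or API calls.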

Posted 2 months ago

8.0 - 13.0 years

18 - 20 Lacs

Noida

Remote

Job Title: Cloud Data Architect Location: 100% Remote Time: Overlap with US CST hours Duration: 6+ Months Job Description: Seeking candidates with at least 5-7 years' experience. Strong data architect who has previously done both data engineering and data architecture. Strong SQL and Snowflake experience required. Manufacturing industry experience is a must-have. Snowflake / SQL Architect: Architect and manage scalable data solutions using Snowflake and advanced SQL, optimizing performance for analytics and reporting. Design and implement data pipelines, data warehouses, and data lakes, ensuring efficient data ingestion and transformation. Develop best practices for data security, access control, and compliance within clou...

Posted 2 months ago
