757 Data Pipelines Jobs - Page 27

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

6.0 - 10.0 years

12 - 20 Lacs

Pune, Delhi / NCR, Mumbai (All Areas)

Hybrid

Role & responsibilities (6+ years of experience required). Job Description: Enterprise Business Technology is on a mission to support and create enterprise software for our organization. We're a highly collaborative team that interlocks with corporate functions such as Finance and Product to deliver value through innovative technology solutions. Each day, thousands of people rely on Enlyte's technology and services to help their customers during challenging life events. We're looking for a remote Senior Data Analytics Engineer for our Corporate Analytics team. Opportunity: technical lead for our corporate analytics practice using dbt, Dagster, Snowflake, Power BI, SQL, and Python. Responsibilit...
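
The stack this listing names (Dagster, Snowflake, SQL, Python) lends itself to asset-based orchestration. Below is a minimal, hypothetical sketch of a Dagster asset that materializes a small Snowflake aggregate; the account, credentials, warehouse, and table names are illustrative placeholders, not details from the posting. In a dbt-centric setup, the same aggregate would more likely live in a dbt model that Dagster simply schedules.

```python
# Hypothetical sketch: a Dagster asset that pulls a summary from Snowflake.
# Account, credentials, warehouse, and table names are placeholders.
import os

import snowflake.connector
from dagster import asset


@asset
def monthly_revenue_summary():
    """Materialize a small aggregate from a Snowflake warehouse."""
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="ANALYTICS_WH",
    )
    try:
        cur = conn.cursor()
        cur.execute(
            "SELECT region, SUM(amount) AS revenue "
            "FROM finance.orders GROUP BY region"
        )
        return cur.fetchall()  # handed to Dagster's IO manager for storage
    finally:
        conn.close()
```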

Posted 3 months ago

4.0 - 6.0 years

6 - 8 Lacs

Bengaluru, Bellandur

Hybrid

Hiring an AWS Data Engineer for a 6-month hybrid contractual role based in Bellandur, Bengaluru. The ideal candidate will have 4-6 years of experience in data engineering, with strong expertise in AWS services (S3, EC2, RDS, Lambda, EKS), PostgreSQL, Redis, Apache Iceberg, and Graph/Vector Databases. Proficiency in Python or Golang is essential. Responsibilities include designing and optimizing data pipelines on AWS, managing structured and in-memory data, implementing advanced analytics with vector/graph databases, and collaborating with cross-functional teams. Prior experience with CI/CD and containerization (Docker/Kubernetes) is a plus.
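
As a rough illustration of the S3-centered pipeline work described above, the sketch below reads a raw CSV from S3 with boto3, drops incomplete rows, and writes a cleaned copy back; the bucket, object keys, and column names are invented for the example.

```python
# Illustrative only: read a raw CSV from S3, drop incomplete rows, write a cleaned copy back.
# Bucket, object keys, and column names are hypothetical.
import csv
import io

import boto3


def clean_orders(bucket: str = "example-data-lake") -> str:
    s3 = boto3.client("s3")
    body = s3.get_object(Bucket=bucket, Key="raw/orders.csv")["Body"].read().decode("utf-8")

    rows = [r for r in csv.DictReader(io.StringIO(body)) if r.get("amount")]
    if not rows:
        raise ValueError("no usable rows in raw/orders.csv")

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(rows)

    s3.put_object(Bucket=bucket, Key="clean/orders.csv", Body=out.getvalue().encode("utf-8"))
    return f"s3://{bucket}/clean/orders.csv"
```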

Posted 3 months ago

3.0 - 7.0 years

3 - 7 Lacs

Gurgaon, Haryana, India

On-site

This position requires a proven track record of transforming processes and driving customer value and cost savings, with experience running end-to-end analytics for large-scale organizations. Design, build, and maintain scalable data pipelines to support analytics, reporting, and advanced modeling needs. Collaborate with consultants, analysts, and clients to understand data requirements and translate them into effective data solutions. Ensure data accuracy, quality, and integrity through validation, cleansing, and transformation processes. Develop and optimize data models, ETL workflows, and database architectures across cloud and on-premises environments. Support data-driven decision-making by ...

Posted 3 months ago

5.0 - 10.0 years

12 - 20 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Responsibilities: Build and manage data infrastructure on AWS, including S3, Glue, Lambda, OpenSearch, Athena, and CloudWatch, using IaC tools like Terraform. Design and implement scalable ETL pipelines with integrated validation and monitoring. Set up data quality frameworks using tools like Great Expectations, integrated with PostgreSQL or AWS Glue jobs. Implement automated validation checks at key points in the data flow: post-ingest, post-transform, and pre-load. Build centralized logging and alerting pipelines (e.g., using CloudWatch Logs, Fluent Bit, SNS, Filebeat, Logstash, or third-party tools). Define CI/CD processes for deploying and testing data pipelines (e.g., using Jenkins,...
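
The validation checkpoints this posting describes (post-ingest, post-transform, pre-load) can be sketched without any particular framework. The snippet below shows the idea with pandas and invented column names and thresholds; in practice each call could be replaced by a Great Expectations suite as the listing suggests.

```python
# Hedged sketch of stage-by-stage validation checks on a pandas DataFrame.
# Column names and thresholds are invented for illustration.
import pandas as pd


def validate(df: pd.DataFrame, stage: str) -> None:
    """Raise if a basic data-quality expectation fails at the given stage."""
    problems = []
    if df.empty:
        problems.append("dataframe is empty")
    if "order_id" in df.columns and df["order_id"].isna().any():
        problems.append("null order_id values")
    if "amount" in df.columns and (df["amount"] < 0).any():
        problems.append("negative amounts")
    if problems:
        raise ValueError(f"{stage} validation failed: {problems}")


# Example: run the same checks post-ingest, post-transform, and pre-load.
raw = pd.DataFrame({"order_id": [1, 2], "amount": [10.0, 5.5]})
validate(raw, "post-ingest")
transformed = raw.assign(amount_usd=raw["amount"] * 1.0)
validate(transformed, "post-transform")
validate(transformed, "pre-load")
```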

Posted 3 months ago

5.0 - 10.0 years

10 - 15 Lacs

Pune

Work from Office

We are looking for a highly skilled and experienced Data Engineer with over 5 years of experience to join our growing data team. The ideal candidate will be proficient in Databricks, Python, PySpark, and Azure, and have hands-on experience with Delta Live Tables. In this role, you will be responsible for developing, maintaining, and optimizing data pipelines and architectures to support advanced analytics and business intelligence initiatives. You will collaborate with cross-functional teams to build robust data infrastructure and enable data-driven decision-making. Key Responsibilities: Design, develop, and manage scalable and efficient data pipelines using PySpark and Databricks. Build an...
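
For the PySpark-and-Delta work outlined above, a minimal sketch is shown below: it reads raw CSVs, casts and filters a column, and writes a Delta table. Paths and column names are placeholders, and on Databricks the same logic would typically sit in a notebook or a Delta Live Tables pipeline rather than a standalone script.

```python
# Illustrative PySpark job: raw CSV -> cleaned Delta table.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_to_delta").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/raw/orders/")

cleaned = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount").isNotNull())
       .withColumn("ingest_date", F.current_date())
)

# Writing Delta format requires the delta-spark package or a Databricks runtime.
cleaned.write.format("delta").mode("overwrite").save("/mnt/curated/orders_delta/")
```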

Posted 3 months ago

3.0 - 8.0 years

3 - 8 Lacs

Mumbai, Maharashtra, India

On-site

Responsibilities: Design and architect enterprise-scale data platforms, integrating diverse data sources and tools. Develop real-time and batch data pipelines to support analytics and machine learning. Define and enforce data governance strategies to ensure security, integrity, and compliance, and optimize data pipelines for high performance, scalability, and cost efficiency in cloud environments. Implement solutions for real-time streaming data (Kafka, AWS Kinesis, Apache Flink) and adopt DevOps/DataOps best practices. Required Skills: Strong experience in designing scalable, distributed data systems and programming (Python, Scala, Java), with expertise in Apache Spark, Hadoop, Flink...
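
The real-time streaming responsibilities mentioned here often start with a Kafka produce/consume loop. A hedged sketch using the kafka-python client is below; the broker address, topic name, and message shape are assumptions for illustration.

```python
# Minimal sketch of a real-time ingestion step with Apache Kafka (kafka-python client).
# Broker address, topic name, and message shape are illustrative assumptions.
import json

from kafka import KafkaConsumer, KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("clickstream", {"user_id": 42, "page": "/pricing"})
producer.flush()

consumer = KafkaConsumer(
    "clickstream",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # a downstream transform/load step would go here
    break
```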

Posted 3 months ago

2.0 - 6.0 years

2 - 6 Lacs

Bengaluru, Karnataka, India

On-site

Key Deliverables: Lead end-to-end development of scalable ML models and data solutions. Implement MLOps workflows for versioning, deployment, and monitoring. Design and conduct A/B testing and statistical analysis for insight generation. Optimize large-scale data pipelines and ensure model performance in production. Role Responsibilities: Collaborate with cross-functional teams to deliver high-impact AI projects. Apply deep learning, reinforcement learning, or ensemble methods as needed. Utilize cloud platforms and container tools for scalable model deployment. Translate business problems into data-driven solutions with measurable outcomes.
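
For the A/B testing and statistical analysis called out above, a small illustrative example is shown below: it compares conversion counts from two experiment arms with a chi-square test. The counts are made up for the example.

```python
# Hedged example: compare conversion rates of two experiment arms with a chi-square test.
# The counts below are invented for illustration.
from scipy.stats import chi2_contingency

# [converted, not converted] for control (A) and treatment (B)
table = [
    [120, 880],   # A: 12.0% conversion
    [150, 850],   # B: 15.0% conversion
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.2f}, p={p_value:.4f}")
if p_value < 0.05:
    print("Difference between arms is statistically significant at the 5% level.")
```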

Posted 3 months ago

6.0 - 8.0 years

6 - 8 Lacs

Gurgaon, Haryana, India

On-site

Design, develop, and maintain robust and scalable data pipelines using Python and SQL. Analyze and understand source systems and data flows to support accurate data ingestion. Ensure data quality, consistency, and governance across various systems and platforms. Optimize existing pipelines and queries for improved performance and scalability. Role Requirements and Qualifications: Strong proficiency in Python and SQL for data processing, scripting, and analytics. Proven experience in building and maintaining production-level data pipelines. Familiarity with Azure cloud services such as Azure Data Factory, Blob Storage, and SQL Database. Experience working with Databricks for big data processi...

Posted 3 months ago

5.0 - 7.0 years

5 - 7 Lacs

Gurgaon, Haryana, India

On-site

Maintain, upgrade, and evolve data pipeline architectures to ensure optimal performance and scalability. Orchestrate the integration of new data sources into existing pipelines for further processing and analysis. Keep documentation up to date for pipelines and data feeds to facilitate smooth operations and collaboration within the team. Collaborate with cross-functional teams to understand data requirements and optimize pipeline performance accordingly. Troubleshoot and resolve any issues related to pipeline architecture and data processing. Role Requirements and Qualifications: Experience with cloud platforms for deployment and management of data pipelines. Familiarity with AWS / Azure for...

Posted 3 months ago

4.0 - 9.0 years

14 - 22 Lacs

Pune

Work from Office

Responsibilities: * Design, develop, test, and maintain scalable Python applications using Scrapy, Selenium, and Requests. * Implement anti-bot systems and data pipeline solutions with Airflow and Kafka. Share your CV at recruitment@fortitudecareer.com. Flexi working; work from home.
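
As a hedged illustration of the Scrapy side of this role, the spider below follows the standard Scrapy tutorial pattern against the public quotes.toscrape.com practice site; the selectors and fields belong to that site, not to this employer's actual targets.

```python
# Minimal Scrapy spider against the public practice site quotes.toscrape.com.
import scrapy


class QuotesSpider(scrapy.Spider):
    name = "quotes"
    start_urls = ["https://quotes.toscrape.com/"]

    def parse(self, response):
        # Extract one item per quote block on the page.
        for quote in response.css("div.quote"):
            yield {
                "text": quote.css("span.text::text").get(),
                "author": quote.css("small.author::text").get(),
            }
        # Follow pagination if present.
        next_page = response.css("li.next a::attr(href)").get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```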

Posted 3 months ago

5.0 - 8.0 years

5 - 8 Lacs

Hyderabad, Telangana, India

On-site

We are looking for an experienced Azure Data Engineer with strong expertise in Azure Databricks to join our data engineering team. Key Responsibilities: Design and build robust data pipelines and ETL/ELT workflows primarily using Azure Databricks and Azure Data Factory . Ingest, clean, transform, and process large datasets from a wide array of diverse sources, encompassing both structured and unstructured data. Implement Delta Lake solutions to enhance data reliability and performance, and optimize Spark jobs for efficiency and reliability. Integrate Azure Databricks seamlessly with other critical Azure services, including Azure Data Lake Storage, Azure Synapse Analytics, and Azure Event Hub...

Posted 3 months ago

4.0 - 5.0 years

4 - 5 Lacs

Bengaluru, Karnataka, India

On-site

Must have 3+ years of IT experience, with at least 1 year of relevant experience in Snowflake. In-depth understanding of data warehousing, ETL concepts, and data modeling principles. Experience working with Snowflake functions; hands-on experience with Snowflake utilities, stage and file-upload features, Time Travel, and Fail-safe. Should know Snowflake architecture. Experience in SQL is a must. Expertise in engineering platform components such as data pipelines, data orchestration, data quality, and data governance analytics. Hands-on experience implementing large-scale data intelligence solutions around a Snowflake data warehouse. Experience in a scripting language such as Python or Scala is a must. Good experience on st...
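
The Time Travel and stage features this listing mentions are plain Snowflake SQL and can be exercised from Python. A minimal sketch with the Snowflake connector follows; the account, credentials, database, and table name are placeholders.

```python
# Hedged sketch: query a table as of 15 minutes ago using Snowflake Time Travel.
# Account, credentials, database, and table name are illustrative placeholders.
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
try:
    cur = conn.cursor()
    # AT(OFFSET => ...) rewinds the table state; -900 seconds = 15 minutes ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -900)")
    print("row count 15 minutes ago:", cur.fetchone()[0])
finally:
    conn.close()
```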

Posted 3 months ago

4.0 - 5.0 years

4 - 5 Lacs

Hyderabad, Telangana, India

On-site

Must have 3+ years of IT experience, with at least 1 year of relevant experience in Snowflake. In-depth understanding of data warehousing, ETL concepts, and data modeling principles. Experience working with Snowflake functions; hands-on experience with Snowflake utilities, stage and file-upload features, Time Travel, and Fail-safe. Should know Snowflake architecture. Experience in SQL is a must. Expertise in engineering platform components such as data pipelines, data orchestration, data quality, and data governance analytics. Hands-on experience implementing large-scale data intelligence solutions around a Snowflake data warehouse. Experience in a scripting language such as Python or Scala is a must. Good experience on st...

Posted 3 months ago

3.0 - 6.0 years

5 - 8 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Work from Office

We are seeking an experienced Azure Data Engineer with 3-6 years of experience for a 6-month remote contract. The candidate will be responsible for developing and supporting IT solutions using technologies like Azure Data Factory, Azure Databricks, Azure Synapse, Python, PySpark, Teradata, and Snowflake. The role involves designing ETL pipelines, developing Databricks notebooks, handling CI/CD pipelines via Azure DevOps, and working on data warehouse modeling and integration. Strong skills in SQL, data lake storage, and deployment/monitoring are required. Prior experience with Power BI and DP-203 certification is a plus. Location: Remote - Bengaluru, Hyderabad, Delhi / NCR, Chennai, Pune, Kolkata, Ahm...

Posted 3 months ago

7.0 - 10.0 years

1 - 3 Lacs

Pune

Work from Office

Job Description: Migration Projects: Lead the migration of legacy applications from ASP.NET 4.7 to ASP.NET Core, .NET Framework 4.7 to .NET Core 8, and EF 6 to EF Core. Good to have: VB6.0, Crystal Reports, Classic ASP, Git and Jira Full Stack Development: Design, develop, and maintain both front-end and back-end components using .NET technologies. Blazor Development: Utilize Blazor for building interactive web UIs. Technical Leadership: Mentor and guide a team of developers, ensuring best practices and high standards of code quality. System Architecture: Collaborate with architects and other stakeholders to design scalable and robust system architectures. Code Reviews: Conduct code reviews ...

Posted 3 months ago

5.0 - 10.0 years

2 - 12 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities: Collaborate with several stakeholders to deeply understand the needs of data practitioners and deliver at scale. Lead data engineers to define, build, and maintain the data platform. Work on building a data lake in Azure Fabric, processing data from multiple sources. Migrate the existing data store from Azure Synapse to Azure Fabric. Implement data governance and access control. Drive development effort end-to-end for on-time delivery of high-quality solutions that conform to requirements, conform to the architectural vision, and comply with all applicable standards. Present technical solutions, capabilities, considerations, and features in business terms. Effectively communicate status, is...

Posted 3 months ago

2.0 - 3.0 years

8 - 10 Lacs

Bengaluru

Hybrid

Role & responsibilities: • Design, develop, and maintain Tableau dashboards and reports • Collaborate with business stakeholders to gather and understand requirements and translate them into effective visualizations that provide actionable insights • Create wireframes and beta dashboards with a focus on user experience, correctness, and visibility • Optimize Tableau dashboards for performance and usability • Develop and maintain documentation related to Tableau solutions Preferred candidate profile (must-have skills and requirements): • 2-3 years of experience developing, publishing, maintaining, and managing Tableau dashboards • Working knowledge of Tableau admi...

Posted 3 months ago

5.0 - 10.0 years

20 - 25 Lacs

Bengaluru

Work from Office

The Platform Data Engineer will be responsible for designing and implementing robust data platform architectures, integrating diverse data technologies, and ensuring scalability, reliability, performance, and security across the platform. The role involves setting up and managing infrastructure for data pipelines, storage, and processing, developing internal tools to enhance platform usability, implementing monitoring and observability, collaborating with software engineering teams for seamless integration, and driving capacity planning and cost optimization initiatives.

Posted 3 months ago

7.0 - 15.0 years

18 - 30 Lacs

Bengaluru, Karnataka, India

On-site

Java Backend (MSB) Engineer with Azure and Kafka. Pan-India, contract-to-hire with Cognizant, hybrid mode (3 days in office). A minimum of 6.6 years of experience is required. Design, develop, and maintain robust backend systems using Java. Implement microservices architecture (MSB) principles. Build scalable and reliable data pipelines with Kafka. Deploy and manage applications on the Azure cloud platform. Collaborate with cross-functional teams to deliver high-quality software.

Posted 3 months ago

5.0 - 10.0 years

15 - 27 Lacs

Bengaluru

Work from Office

Develop digital reservoir modeling tools and Petrel plugins. Integrate geological and geophysical data, apply ML and data engineering, and support forecasting through advanced cloud-based workflows. Required candidate profile: Earth scientist with experience in Petrel, Python, and Ocean plugin development. Strong background in reservoir modeling, digital workflows, and cloud-based tools (Azure, Power BI).

Posted 3 months ago

10.0 - 15.0 years

40 - 65 Lacs

Bengaluru

Work from Office

Design and lead scalable data architectures, cloud solutions, and analytics platforms using Azure. Drive data governance, pipeline optimization, and team leadership to enable business-aligned data strategies in the Oil & Gas sector. Required candidate profile: Experienced data architect or leader with 10-15+ years in Azure, big data, and solution design. Strong in stakeholder management, data governance, and Oil & Gas analytics.

Posted 3 months ago

10.0 - 12.0 years

7 - 10 Lacs

Bengaluru

Work from Office

Job Description: Design, develop, and maintain Tableau dashboards and reports • Collaborate with business stakeholders to gather and understand requirements and translate them into effective visualizations that provide actionable insights • Create wireframes and beta dashboards with a focus on user experience, correctness, and visibility • Optimize Tableau dashboards for performance and usability • Develop and maintain documentation related to Tableau solutions Requirements: 2-3 years of experience developing, publishing, maintaining, and managing Tableau dashboards • Working knowledge of Tableau administration/architecture • Creating wireframes and beta dashboards with a focus ...

Posted 3 months ago

5.0 - 7.0 years

0 Lacs

Hyderabad / Secunderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology ...

Posted 3 months ago

0.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About VOIS: VO IS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group's partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, VO IS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutio...

Posted 3 months ago

6.0 - 8.0 years

8 - 10 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Senior Data Engineer (Remote, 6-Month Contract): Databricks, ADF, and PySpark. We are hiring a Senior Data Engineer for a 6-month remote contract position. The ideal candidate is highly skilled in building scalable data pipelines and working within the Azure cloud ecosystem, especially Databricks, ADF, and PySpark. You'll work closely with cross-functional teams to deliver enterprise-level data engineering solutions. Key Responsibilities: Build scalable ETL pipelines and implement robust data solutions in Azure. Manage and orchestrate workflows using ADF, Databricks, ADLS Gen2, and Key Vaults. Design and maintain secure and efficient data lake architecture. Work with stakeholders to gather da...

Posted 3 months ago

