5.0 - 8.0 years
9 - 13 Lacs
Noida
Remote
About The Opportunity: Join a pioneering consulting firm in the Data Analytics and Cloud Solutions sector, where transformative data architectures empower global enterprises. We specialize in leveraging cutting-edge Snowflake technologies and innovative cloud solutions to drive real-time insights and business intelligence. This remote role, based in India, offers the opportunity to work on high-impact projects while collaborating with a diverse team of experts.
Role & Responsibilities:
- Design, implement, and optimize scalable data warehousing solutions using Snowflake Cortex.
- Develop robust ETL pipelines that ensure data quality, reliability, and efficient integration across platforms....
Posted 2 weeks ago
5.0 - 8.0 years
9 - 13 Lacs
Hyderabad
Remote
About The Opportunity: Join a pioneering consulting firm in the Data Analytics and Cloud Solutions sector, where transformative data architectures empower global enterprises. We specialize in leveraging cutting-edge Snowflake technologies and innovative cloud solutions to drive real-time insights and business intelligence. This remote role, based in India, offers the opportunity to work on high-impact projects while collaborating with a diverse team of experts.
Role & Responsibilities:
- Design, implement, and optimize scalable data warehousing solutions using Snowflake Cortex.
- Develop robust ETL pipelines that ensure data quality, reliability, and efficient integration across platforms....
Posted 2 weeks ago
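The two Snowflake listings above centre on warehousing and ETL built around Snowflake Cortex. Purely as an illustrative sketch (not taken from the postings), the snippet below uses the snowflake-connector-python package to bulk-load a staged file and then call a Cortex SQL function; the account, credentials, stage, table, and model names are placeholders, and Cortex function/model availability depends on the Snowflake account and region.

```python
# Hypothetical sketch: load staged data into Snowflake and call a Cortex function.
# Connection parameters, stage/table names and the Cortex model are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # placeholder account identifier
    user="etl_user",             # placeholder credentials
    password="***",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Bulk-load previously staged CSV files into a raw table (standard COPY INTO).
    cur.execute("""
        COPY INTO RAW.ORDERS
        FROM @RAW.ORDERS_STAGE
        FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)
    """)
    # Example Cortex call; model availability varies by region/account.
    cur.execute("""
        SELECT SNOWFLAKE.CORTEX.COMPLETE(
            'mistral-large',
            'Summarize the key order trends for the last quarter.'
        )
    """)
    print(cur.fetchone()[0])
finally:
    conn.close()
```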
6.0 - 10.0 years
15 - 30 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Job Description: Lead Data Engineer
Role Overview: You will own the design, development, and optimization of large-scale data pipelines and platforms. This role needs someone who can combine strong engineering fundamentals with hands-on expertise in Python, distributed data processing (Spark), cloud architecture (AWS), and modern container/orchestration stacks (Docker, Kubernetes). You will guide the data engineering strategy, enforce best practices, and lead complex initiatives end-to-end.
Key Responsibilities:
- Architect, build, and maintain scalable batch and streaming data pipelines using Spark (PySpark preferred).
- Develop robust, modular, production-grade Python services for ingestion, tran...
Posted 2 weeks ago
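To give a concrete flavour of the PySpark batch work the Lead Data Engineer listing above describes, here is a minimal, hypothetical sketch; the input path, column names, and output location are invented for illustration and are not part of the posting.

```python
# Minimal, hypothetical PySpark batch job: read raw JSON events, clean and
# aggregate them, and write partitioned Parquet. Paths and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events_rollup").getOrCreate()

events = (
    spark.read.json("s3a://example-raw-bucket/events/2024-01-01/")  # placeholder path
    .where(F.col("event_type").isNotNull())
    .withColumn("event_date", F.to_date("event_ts"))
)

daily_counts = (
    events.groupBy("event_date", "event_type")
    .agg(F.count("*").alias("event_count"),
         F.countDistinct("user_id").alias("unique_users"))
)

(daily_counts.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3a://example-curated-bucket/daily_event_counts/"))  # placeholder path

spark.stop()
```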
10.0 - 12.0 years
4 - 8 Lacs
Kolkata
Work from Office
We are seeking a highly skilled and motivated Senior MarkIT EDM Data Engineer to join our team. The ideal candidate will be a self-starter who can work independently and collaborate effectively with both internal and external stakeholders. This role requires a deep understanding of data engineering principles, strong technical skills, and the ability to drive projects to successful completion.
Key Responsibilities:
- Design, develop, and maintain data solutions using MarkIT EDM.
- Collaborate with internal teams and external stakeholders to gather requirements and deliver data solutions.
- Ensure data quality, integrity, and security across all data processes.
- Optimize data workflows and ...
Posted 2 weeks ago
10.0 - 12.0 years
4 - 8 Lacs
Ahmedabad
Work from Office
We are seeking a highly skilled and motivated Senior MarkIT EDM Data Engineer to join our team. The ideal candidate will be a self-starter who can work independently and collaborate effectively with both internal and external stakeholders. This role requires a deep understanding of data engineering principles, strong technical skills, and the ability to drive projects to successful completion.
Key Responsibilities:
- Design, develop, and maintain data solutions using MarkIT EDM.
- Collaborate with internal teams and external stakeholders to gather requirements and deliver data solutions.
- Ensure data quality, integrity, and security across all data processes.
- Optimize data workflows and ...
Posted 2 weeks ago
10.0 - 12.0 years
4 - 8 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled and motivated Senior MarkIT EDM Data Engineer to join our team. The ideal candidate will be a self-starter who can work independently and collaborate effectively with both internal and external stakeholders. This role requires a deep understanding of data engineering principles, strong technical skills, and the ability to drive projects to successful completion.
Key Responsibilities:
- Design, develop, and maintain data solutions using MarkIT EDM.
- Collaborate with internal teams and external stakeholders to gather requirements and deliver data solutions.
- Ensure data quality, integrity, and security across all data processes.
- Optimize data workflows and ...
Posted 2 weeks ago
10.0 - 12.0 years
4 - 8 Lacs
Noida
Work from Office
We are seeking a highly skilled and motivated Senior MarkIT EDM Data Engineer to join our team. The ideal candidate will be a self-starter who can work independently and collaborate effectively with both internal and external stakeholders. This role requires a deep understanding of data engineering principles, strong technical skills, and the ability to drive projects to successful completion.
Key Responsibilities:
- Design, develop, and maintain data solutions using MarkIT EDM.
- Collaborate with internal teams and external stakeholders to gather requirements and deliver data solutions.
- Ensure data quality, integrity, and security across all data processes.
- Optimize data workflows and ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Chennai
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Hyderabad
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Noida
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Kolkata
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Gurugram
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Pune
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Mumbai
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Remote
Description: Senior Data Engineer (Spark & Lakehouse)
Location: Remote, India (Preferred: Bangalore/Pune)
Experience: 6+ Years
Domain: Data Engineering / Big Data
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies ...
Posted 2 weeks ago
6.0 - 8.0 years
6 - 10 Lacs
Ahmedabad
Remote
About the Role: We are seeking a Senior Data Engineer to drive the development of our next-generation Data Lakehouse architecture. You will be responsible for designing, building, and optimizing massive-scale, low-latency data pipelines that support real-time analytics and Machine Learning applications.
Key Responsibilities:
- Design and build highly optimized, production-grade ETL/ELT pipelines using Apache Spark (PySpark/Scala) to process petabytes of data.
- Architect and manage the Data Lakehouse using open-source technologies like Delta Lake or Apache Hudi for ACID transactions and data quality.
- Integrate and process real-time data streams using technologies such as Apache Kafka or Ki...
Posted 2 weeks ago
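The Lakehouse listings above repeatedly mention Delta Lake (or Apache Hudi) for ACID upserts. As an illustrative sketch only, assuming the open-source delta-spark package is installed and the table paths and column names are invented, the snippet below shows the kind of idempotent MERGE upsert such a pipeline typically performs.

```python
# Hypothetical Delta Lake upsert: merge a batch of updated customer records into
# an existing Delta table. Assumes delta-spark is installed; paths are placeholders.
from delta import configure_spark_with_delta_pip
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

builder = (
    SparkSession.builder.appName("customer_upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
)
spark = configure_spark_with_delta_pip(builder).getOrCreate()

updates = spark.read.parquet("/data/incoming/customers/")          # placeholder input
target = DeltaTable.forPath(spark, "/lakehouse/silver/customers")  # placeholder table

# Upsert: update existing customers, insert new ones, in a single ACID transaction.
(target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```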
5.0 - 7.0 years
3 - 7 Lacs
Hyderabad
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Chennai
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Gurugram
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Noida
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Kolkata
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Mumbai
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Ahmedabad
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Pune
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
5.0 - 7.0 years
3 - 7 Lacs
Bengaluru
Remote
Role & Responsibilities:
- Design, build, and maintain scalable batch and streaming ETL pipelines on Databricks using Delta Lake and Delta Live Tables (DLT).
- Develop and optimize Spark/PySpark jobs for performance, cost-efficiency, and reliability; tune cluster sizing and autoscaling policies.
- Implement data quality, observability, lineage and monitoring (alerts, dashboards, job health) to ensure production SLAs.
- Collaborate with Data Engineers, Data Scientists and Architects to define schemas, partitioning strategies and data models that support analytics and ML use cases.
- Build CI/CD and release automation for Databricks assets (notebooks, jobs, Delta tables); manage Git-based sou...
Posted 2 weeks ago
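The Databricks listings above call out Delta Live Tables (DLT) pipelines. As a non-authoritative sketch, assuming deployment inside a Databricks DLT pipeline (the `dlt` module and the `spark` session are only available there) and that the source path and column names are placeholders, a minimal bronze/silver pair of table definitions might look like this:

```python
# Hypothetical Delta Live Tables definitions: a streaming bronze ingest plus a
# cleaned silver table with a data-quality expectation. Runs only inside a
# Databricks DLT pipeline; the source path and columns are placeholders.
import dlt
from pyspark.sql import functions as F


@dlt.table(comment="Raw order events ingested incrementally with Auto Loader.")
def orders_bronze():
    return (
        spark.readStream.format("cloudFiles")           # Databricks Auto Loader
        .option("cloudFiles.format", "json")
        .load("/Volumes/raw/orders/")                   # placeholder source path
    )


@dlt.table(comment="Cleaned orders with basic quality checks applied.")
@dlt.expect_or_drop("valid_amount", "amount > 0")       # drop rows failing the check
def orders_silver():
    return (
        dlt.read_stream("orders_bronze")
        .withColumn("order_date", F.to_date("order_ts"))
        .select("order_id", "customer_id", "amount", "order_date")
    )
```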