6.0 - 11.0 years
8 - 12 Lacs
Chennai
Work from Office
Skills: Azure/AWS, Synapse, Fabric, PySpark, Databricks, ADF, Medallion Architecture, Lakehouse, Data Warehousing. Experience: 6+ years. Locations: Chennai, Bangalore, Pune, Coimbatore. Work from Office.
Posted 3 months ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your primary focus will be on developing and optimizing data pipelines, including Fabric Notebooks, Dataflows Gen2, and Lakehouse architecture for both batch and real-time ingestion and transformation. You will collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric, ensuring data solutions are performant, reusable, and aligned with business needs and compliance standards. Monitoring and improving the performance of data pipelines a...
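For context on the batch ingestion into a governed Lakehouse that this posting describes, below is a minimal, illustrative sketch of a Fabric-style notebook cell. The landing path, column, and table names are hypothetical, and it assumes a Spark session such as the one a Fabric notebook provides.

```python
# Minimal sketch of batch ingestion into a Lakehouse Delta table (hypothetical
# paths and table names; assumes a Spark session like the one a Fabric
# notebook provides).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.read.option("header", True)
    .csv("Files/landing/orders/*.csv")               # hypothetical landing folder
    .withColumn("ingested_at", F.current_timestamp())
)

# Append into a bronze Delta table; Dataflows Gen2 or further notebooks would
# refine it into silver/gold layers.
raw.write.format("delta").mode("append").saveAsTable("bronze_orders")
```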
Posted 3 months ago
4.0 - 6.0 years
12 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Data Engineer (Microsoft Fabric & Lakehouse): PySpark, Data Lakehouse architectures, cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (Salesforce, Workday), REST/OpenAPI-based APIs, data governance, lineage, RBAC principles, SQL.
Posted 3 months ago
5.0 - 15.0 years
0 Lacs
Maharashtra
On-site
At Derevo we empower companies and people, unlocking the value of data within organizations. With more than 15 years of experience, we design end-to-end data and AI solutions, from integration into modern architectures to the implementation of intelligent models in key business processes. We are looking for your talent as a Data Engineer (MS Fabric)! It is important that you live in Mexico or Colombia. As a Data Engineer at Derevo, your mission will be key to creating and implementing modern, high-quality data architectures, driving analytical solutions based on Big Data technologies. You will design, maintain, and optimize parallel multiprocessing systems, applying the best...
Posted 3 months ago
5.0 - 6.0 years
12 - 16 Lacs
Thiruvananthapuram
Remote
Build and manage infrastructure for data storage, processing, and analysis. Experience in AWS cloud services (Glue, Lambda, Athena, Lakehouse) and AWS CDK for Infrastructure-as-Code (IaC) with TypeScript. Skills in Python, PySpark, Spark SQL, and TypeScript. Required candidate profile: 5 to 6 years of data pipeline development and orchestration using AWS Glue, plus leadership experience. UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.
Posted 3 months ago
3.0 - 6.0 years
12 - 16 Lacs
Thiruvananthapuram
Work from Office
AWS cloud services (Glue, Lambda, Athena, Lakehouse); AWS CDK for Infrastructure-as-Code (IaC) with TypeScript; data pipeline development and orchestration using AWS Glue; strong programming skills in Python, PySpark, Spark SQL, and TypeScript. Required candidate profile: 3 to 5 years of experience, with client-facing and team leadership experience. Candidates will work with UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.
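As an illustration of the Glue-based pipeline development these two postings mention, here is a minimal sketch of a Glue ETL job script. It assumes the AWS Glue runtime (where the awsglue library is available); the bucket names and the partition column are hypothetical, and real jobs would be deployed via AWS CDK as the postings describe.

```python
# Minimal AWS Glue ETL job sketch (runs inside the Glue runtime; bucket names
# and the event_date partition column are hypothetical).
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw JSON events from S3 and write partitioned Parquet that Athena can query.
events = glue_context.spark_session.read.json("s3://example-raw-bucket/events/")
(
    events.write.mode("append")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/events/")
)

job.commit()
```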
Posted 3 months ago
9.0 - 14.0 years
25 - 40 Lacs
Chennai
Work from Office
Role & responsibilities: We are seeking a Data Modeller with 12+ years of progressive experience in information technology, including a minimum of 4 years in data migration projects to the cloud (refactor, replatform, etc.) and 2 years of exposure to GCP. Preferred candidate profile: In-depth knowledge of Data Warehousing/Lakehouse architectures, Master Data Management, Data Quality Management, Data Integration, and Data Warehouse architecture. Work with the business intelligence team to gather requirements for the database design and model. Understand the current on-premise DB model and refactor it for Google Cloud for better performance. Knowledge of ER modeling, big data, enterprise data, and physica...
Posted 3 months ago
8.0 - 13.0 years
8 - 17 Lacs
Chennai
Remote
MS Fabric (Data Lake, OneLake, Lakehouse, Warehouse, Real-Time Analytics) and integration with Power BI, Synapse, and Azure Data Factory. DevOps knowledge; team leading experience.
Posted 4 months ago
8.0 - 13.0 years
8 - 17 Lacs
Chennai
Remote
MS Fabric (Data Lake, OneLake, Lakehouse, Warehouse, Real-Time Analytics) and integration with Power BI, Synapse, and Azure Data Factory. DevOps knowledge; team leading experience.
Posted 4 months ago
8.0 - 13.0 years
6 - 11 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
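As an illustration of the Iceberg-on-S3 work these postings describe (partitioned tables with catalog-managed metadata), the sketch below creates and loads a partitioned Iceberg table with PySpark. The catalog name, warehouse path, and table schema are hypothetical, and it assumes the Iceberg Spark runtime package is on the classpath.

```python
# Minimal sketch: create and append to a partitioned Iceberg table on S3
# (hypothetical catalog, paths, and schema; requires the Iceberg Spark runtime).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

spark.sql("CREATE NAMESPACE IF NOT EXISTS lake.sales")
spark.sql("""
    CREATE TABLE IF NOT EXISTS lake.sales.orders (
        order_id BIGINT, customer_id BIGINT, amount DOUBLE, order_date DATE
    ) USING iceberg
    PARTITIONED BY (days(order_date))
""")

# Load a staged Parquet extract (e.g. output of a converted HiveQL job) into the table.
staged = spark.read.parquet("s3://example-bucket/staging/orders/")
staged.writeTo("lake.sales.orders").append()
```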
Posted 4 months ago
8.0 - 13.0 years
6 - 11 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
Posted 4 months ago
8.0 - 13.0 years
6 - 11 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities: Develop ingestion pipelines (batch & stream) to move data to S3. Convert HiveQL to SparkSQL/PySpark. Orchestrate workflows using MWAA (Airflow). Build and manage Iceberg tables with proper partitioning and metadata. Perform job validation and implement unit testing. Required Skills: 3-5 years of data engineering experience, with strong AWS expertise. Proficient in EMR (Spark), S3, PySpark, and SQL. Familiar with Cloudera/HDFS and legacy Hadoop pipelines. Knowledge of data lake/lakehouse architectures is a plus.
Posted 4 months ago
8.0 - 12.0 years
20 - 25 Lacs
Chennai
Remote
Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch). Primary language: Python; strong skills in Spark SQL.
Posted 4 months ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Work from Office
Dear Candidate, We are excited to share an opportunity at Avigna.AI for the position of Data Engineer. We're looking for professionals with strong data engineering experience who can contribute to building scalable, intelligent data solutions and have a passion for solving complex problems. Position Details: Role: Data Engineer. Location: Pune, Baner (Work from Office). Experience: 7+ years. Working Days: Monday to Friday (9:00 AM to 6:00 PM). Education: Bachelor's or Master's in Computer Science, Engineering, Mathematics, or a related field. Company Website: www.avigna.ai. LinkedIn: Avigna.AI. Key Responsibilities: Design and develop robust data pipelines for large-scale data ingestion, transformation,...
Posted 4 months ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Hands-on experience in test automation tools such as Playwright and Protractor; must have TypeScript and JavaScript knowledge. Experience in Playwright is a must. Lead testing efforts, mentor the team, and ensure quality assurance. Candidate should have good interpersonal and communication skills along with good testing knowledge. Domain: Banking; skills: TypeScript.
Posted 4 months ago
3.0 - 7.0 years
2 - 11 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Databricks, Python, PySpark, B4HANA, SQL hands-on experience, Lakehouse knowledge, CI/CD. Tasks: ingest data from a different internal source system via a Kafka connector (to be built by another team) into bronze, clean the data, and implement data quality checks (among others, reconciliation and business rules). Code business rules in an efficient and effective way with good coding principles that other developers in the team can easily understand and build upon. Make data available at a regular frequency without human intervention for a consumption layer, according to business requirements and with 99% availability and trustworthiness. Drive functional and technical discussions independently with stakeholder...
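A minimal sketch of the bronze ingestion and quality-check pattern this posting describes is shown below. The broker, topic, checkpoint path, and table names are all hypothetical, and it assumes a Databricks-style environment where Delta Lake and the Spark Kafka source are available; the actual connector is, per the posting, built by another team.

```python
# Minimal sketch: ingest Kafka messages into a bronze Delta table and run a simple
# business-rule check (hypothetical broker, topic, paths, and table names).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

bronze_stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "source.orders")                # hypothetical topic
    .load()
    .select(
        F.col("value").cast("string").alias("payload"),
        F.col("timestamp").alias("event_time"),
    )
)

# Land raw payloads in bronze on a schedule, without human intervention.
query = (
    bronze_stream.writeStream
    .option("checkpointLocation", "/mnt/checkpoints/bronze_orders")
    .trigger(availableNow=True)
    .toTable("bronze.orders")
)
query.awaitTermination()

# Example data quality check (reconciliation/business rule): no null payloads landed.
null_count = spark.table("bronze.orders").filter(F.col("payload").isNull()).count()
assert null_count == 0, f"Data quality check failed: {null_count} null payloads"
```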
Posted 4 months ago
8.0 - 13.0 years
25 - 40 Lacs
Chennai
Work from Office
Architect & build scalable systems: Design and implement a petabyte-scale lakehouse architecture to unify data lakes and warehouses. Real-time data engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink. Required candidate profile: Data engineering experience with large-scale systems; expert proficiency in Java for data-intensive applications; hands-on experience with lakehouse architectures, stream processing, and event streaming.
Posted 4 months ago
4.0 - 6.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm greetings from SP Staffing! Role: AWS Data Engineer. Experience required: 4 to 6 yrs. Work location: Bangalore/Pune/Hyderabad/Chennai. Required skills: PySpark, AWS Glue. Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 4 months ago
8.0 - 13.0 years
18 - 33 Lacs
Bengaluru
Hybrid
Warm greetings from SP Staffing! Role: AWS Data Engineer. Experience required: 8 to 15 yrs. Work location: Bangalore. Required skills: technical knowledge of data engineering solutions and practices; implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, and Athena; proficiency in Python and Spark, with a focus on ETL data processing and data engineering practices. Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 4 months ago
6.0 - 11.0 years
25 - 30 Lacs
Bengaluru
Hybrid
Mandatory skills: data engineering, AWS Athena, AWS Glue, Redshift, Data Lake, Lakehouse, Python, SQL Server. Must-have experience: 6+ years of hands-on data engineering experience; expertise with AWS services (S3, Redshift, EMR, Glue, Kinesis, DynamoDB); building batch and real-time data pipelines; Python and SQL coding for data processing and analysis; data modeling experience using cloud-based data platforms like Redshift, Snowflake, and Databricks; designing and developing ETL frameworks. Nice-to-have experience: ETL development using tools like Informatica, Talend, and Fivetran; creating reusable data sources and dashboards for self-service analytics; experience using Databricks for Spark workloads or Snowflake. Working...
Posted 5 months ago
4.0 - 8.0 years
5 - 12 Lacs
Bengaluru
Work from Office
If interested, apply here: https://forms.gle/sBcZaUXpkttdrTtH9. Key responsibilities: Work with Product Owners and various stakeholders throughout the organization to identify opportunities for leveraging company data to drive business solutions, and design the scale-out architecture for the data platform to meet the requirements of the proposed solution. Mine and analyze data from company databases to drive optimization and improvement of product development, marketing techniques, and business strategies. Play an active role in leading team meetings and workshops with clients. Help the Data Engineering team produce high-quality code that allows us to put solutions into production. Create and own th...
Posted 5 months ago
11.0 - 19.0 years
20 - 35 Lacs
Faridabad
Remote
We are seeking an experienced and highly skilled Senior Data Engineer to drive data-driven decision-making and innovation. In this role, you will leverage your expertise in advanced analytics, machine learning, and big data technologies to solve complex business challenges. You will be responsible for designing predictive models, building scalable data pipelines, and uncovering actionable insights from structured and unstructured datasets. Collaborating with cross-functional teams, your work will empower strategic decision-making and foster a data-driven culture across the organization. Role & responsibilities: 1. Position Overview: We are seeking an experienced and highly skilled Senior Data ...
Posted Date not available
8.0 - 13.0 years
45 - 50 Lacs
Hyderabad, Pune, Delhi / NCR
Hybrid
Design and develop Snowflake integration with a cloud platform, along with platform enhancements, integrations, and performance optimisation. Work on data ingestion, transformation, cataloguing, and lineage tracking; develop and architect ETL workflows. Required candidate profile: 5 years in developing and scaling data platforms centered around Snowflake, with Azure. Understanding of modern data architectures such as Data Mesh, Lakehouse, and ELT. Familiarity with DCAM.
Posted Date not available