8.0 - 10.0 years
8 - 10 Lacs
Noida, Uttar Pradesh, India
On-site
We are seeking a proactive Data Engineer - Microsoft Fabric to lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. You will develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, ensure data integrity, and collaborate with stakeholders to translate business needs into actionable data solutions.

Roles & Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills Required:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling.
- Expertise in designing and implementing scalable, efficient data pipelines using Data Factory in Fabric, PySpark notebooks, Spark SQL, and Python.
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
- Experience ingesting data from SAP systems (SAP ECC/S4HANA/SAP BW) is a plus.
- The ability to develop dashboards or reports using tools like Power BI is a plus.

Qualification: Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent practical experience.
Posted 6 days ago
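The ingest-transform-validate loop this role describes can be sketched in plain Python. This is a minimal, hypothetical illustration of the pattern, not Fabric or PySpark code; the function names (`extract`, `transform`, `quality_check`) and the sample records are invented for the example.

```python
# Hypothetical sketch of the ETL/ELT pattern described in the posting:
# extract raw records, transform them, and enforce a basic data-quality
# rule before loading. In a real pipeline, extract() would read from a
# source system (e.g. a Lakehouse table) rather than return literals.

def extract():
    # Stand-in for reading raw, string-typed source records.
    return [
        {"order_id": "1", "amount": "120.50", "region": "north"},
        {"order_id": "2", "amount": "75.00", "region": "SOUTH"},
    ]

def transform(rows):
    # Cast types and normalize casing, as a typical transform step would.
    return [
        {"order_id": int(r["order_id"]),
         "amount": float(r["amount"]),
         "region": r["region"].lower()}
        for r in rows
    ]

def quality_check(rows):
    # Simple integrity rule: every order must have a positive amount.
    return all(r["amount"] > 0 for r in rows)

rows = transform(extract())
assert quality_check(rows)
print(rows[0])  # {'order_id': 1, 'amount': 120.5, 'region': 'north'}
```

In Fabric the same shape would typically be expressed as a Data Factory pipeline or a PySpark notebook, with the quality check enforced before writing to the destination table.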
6.0 - 9.0 years
6 - 9 Lacs
Noida, Uttar Pradesh, India
On-site
Posted 6 days ago
6.0 - 9.0 years
6 - 9 Lacs
Gurgaon, Haryana, India
On-site
Posted 6 days ago
10.0 - 15.0 years
12 - 17 Lacs
Chennai
Work from Office
Job Purpose: We are looking for a Senior Data Engineer with extensive experience in developing ETL processes using PySpark Notebooks and Microsoft Fabric, and in supporting existing legacy SQL Server environments. The ideal candidate will have a strong background in Spark-based development, high proficiency in SQL, and strong communication skills, and will be comfortable working independently, collaborating within a team, or leading other developers when required.

Requirements:
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
- Experience creating DAGs, implementing activities, and running Apache Airflow.
- Familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps.

Key Responsibilities:
- Design, develop, and maintain ETL notebook orchestration pipelines using PySpark and Microsoft Fabric.
- Work with Apache Delta Lake tables, Change Data Feed (CDF), Lakehouses, and custom libraries.
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements and deliver efficient data solutions.
- Migrate and integrate data from legacy SQL Server environments into modern data platforms.
- Optimize data pipelines and workflows for scalability, efficiency, and reliability.
- Provide technical leadership and mentorship to junior developers and other team members.
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability; debug code, break it down into testable components, identify issues, and resolve them.
- Develop, maintain, and enforce data engineering best practices, coding standards, and documentation.
- Conduct code reviews and provide constructive feedback to improve team productivity and code quality.
- Support data-driven decision-making by ensuring data integrity, availability, and consistency across platforms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 10+ years of experience in data engineering, with a strong focus on ETL development using PySpark or other Spark-based tools.
- Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling.
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is a plus.
- Strong knowledge of data warehousing concepts, ETL frameworks, and big data processing.
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is an advantage.
- Experience working with both structured and unstructured data sources.
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
Posted 2 months ago
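The Change Data Feed (CDF) work mentioned in the senior role amounts to consuming a stream of insert/update/delete change records and applying them to a keyed target table. The sketch below is a framework-free, hypothetical illustration of that idea in plain Python; the `_change_type` values mirror Delta Lake CDF conventions, but this is not the Delta or PySpark API.

```python
# Hypothetical sketch of consuming a CDF-style change feed: each change
# record carries a _change_type, and we apply it to a target table
# modeled here as a dict keyed by `key`.

def apply_changes(target, changes, key="id"):
    """Apply insert/update/delete change records to a dict keyed by `key`."""
    for change in changes:
        row = {k: v for k, v in change.items() if k != "_change_type"}
        if change["_change_type"] in ("insert", "update_postimage"):
            target[row[key]] = row          # upsert the new image of the row
        elif change["_change_type"] == "delete":
            target.pop(row[key], None)      # drop the deleted row
    return target

table = {1: {"id": 1, "status": "open"}}
feed = [
    {"_change_type": "update_postimage", "id": 1, "status": "closed"},
    {"_change_type": "insert", "id": 2, "status": "open"},
    {"_change_type": "delete", "id": 1, "status": "closed"},
]
apply_changes(table, feed)
print(sorted(table))  # [2]
```

In a real Fabric Lakehouse this apply step would usually be a Spark `MERGE` into a Delta table, with the feed read from the table's change data feed rather than a literal list.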