Jobs
Interviews

319 Microsoft Fabric Jobs - Page 9

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Ahmedabad, Gujarat

On-site

You are a Senior Data Engineer with expertise in constructing scalable data pipelines using Microsoft Fabric. Your primary responsibilities will involve developing and managing data pipelines through Microsoft Fabric Data Factory and OneLake. You will design and build ingestion and transformation pipelines for both structured and unstructured data, and establish frameworks for metadata tagging, version control, and batch tracking to ensure the security, quality, and compliance of data pipelines. Additionally, you will contribute to CI/CD integration, observability, and documentation. Collaboration with data architects and analysts will be essential to align effectively with business requirements.

To qualify for this role, you should have at least 6 years of experience in data engineering, including a minimum of 2 years of hands-on experience with Microsoft Fabric or Azure Data services. Proficiency in tools like Azure Data Factory, Fabric, Databricks, or Synapse is required, along with strong SQL and data processing skills (such as PySpark and Python). Previous experience with data cataloging, lineage, and governance frameworks will be beneficial for this position.
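The metadata tagging and batch tracking framework this posting describes can be sketched in a few lines of Python. This is a minimal illustration only; every name here (`BatchRecord`, `track_batch`, the tag keys) is hypothetical, not part of any specific employer's stack:

```python
import uuid
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class BatchRecord:
    """Metadata captured for one pipeline batch (hypothetical schema)."""
    source: str
    batch_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    started_at: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())
    tags: dict = field(default_factory=dict)
    row_count: int = 0
    status: str = "running"

def track_batch(source: str, rows: list, tags: dict) -> BatchRecord:
    """Tag and track one ingestion batch; a real pipeline would persist this record."""
    record = BatchRecord(source=source, tags=tags)
    record.row_count = len(rows)
    record.status = "succeeded"
    return record

batch = track_batch("sales_orders", rows=[{"id": 1}, {"id": 2}],
                    tags={"domain": "sales", "pii": "false"})
print(batch.status, batch.row_count)  # succeeded 2
```

In practice each `BatchRecord` would be written to a control table so downstream steps can audit lineage and rerun failed batches.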

Posted 2 months ago

Apply

7.0 - 11.0 years

0 Lacs

Haryana

On-site

As a Data Engineer at Srijan, a Material company, you will play a crucial role in designing and developing scalable data pipelines within Microsoft Fabric. Your primary responsibilities will include optimizing data pipelines, collaborating with cross-functional teams, and ensuring documentation and knowledge sharing. You will work closely with the Data Architecture team to implement scalable and governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform.

Your expertise in Microsoft Fabric will be used to build robust pipelines using both batch and real-time processing techniques, integrating with Azure Data Factory for seamless data movement. Continuous monitoring, enhancement, and optimization of Fabric pipelines, notebooks, and lakehouse artifacts will be essential to ensure performance, reliability, and cost-efficiency. You will collaborate with analysts, BI developers, and data scientists to deliver high-quality datasets and enable self-service analytics via Power BI datasets connected to Fabric Lakehouses. Maintaining up-to-date documentation for all data pipelines, semantic models, and data products, and sharing Fabric best practices with junior team members, will be an integral part of your role. Your expertise in SQL, data modeling, and cloud architecture design will be crucial in designing modern data platforms using Microsoft Fabric, OneLake, and Synapse.

To excel in this role, you should have at least 7 years of experience in the Azure ecosystem, with relevant experience in Microsoft Fabric, Data Engineering, and Data Pipelines components. Proficiency in Azure Data Factory, advanced data engineering skills, and strong collaboration and communication abilities are also required. Additionally, knowledge of Azure Databricks, Power BI integration, and DevOps practices, and familiarity with OneLake, Delta Lake, and Lakehouse architecture, will be advantageous.

Join our awesome tribe at Srijan and leverage your expertise in Microsoft Fabric to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services.

Posted 2 months ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

Are you passionate about building scalable BI solutions and leading innovation with Microsoft Fabric? AmplifAI is looking for a Power BI Architect to lead our analytics strategy, mentor a growing team, and drive enterprise-wide reporting transformation.

As a Power BI Architect at AmplifAI, you will play a crucial role in defining scalable data models, pipelines, and reporting structures using OneLake, Direct Lake, and Dataflows Gen2. You will lead the architecture and migration from Power BI Pro/Premium to Microsoft Fabric, integrating structured and semi-structured data for unified analysis. Additionally, you will manage and mentor a team of Power BI Analysts, evangelize best practices across semantic modeling, performance tuning, and data governance, and drive governance and CI/CD using GitHub-based workflows.

The ideal candidate will have 8+ years of experience in Power BI and enterprise analytics, 5+ years of SQL expertise, and 3+ years in a leadership role. Proven experience with Microsoft Fabric, hands-on experience with GitHub workflows and version control, and strong communication, critical thinking, and problem-solving skills are essential for success in this position.

At AmplifAI, you will have the opportunity to work on cutting-edge enterprise AI & BI solutions, be part of a diverse, inclusive, and globally distributed team, and shape the future of analytics in CX and performance management. If you are ready to lead data-driven transformation and make a significant impact, apply now to join AmplifAI as a Power BI Architect!

Posted 2 months ago

Apply

5.0 - 10.0 years

6 - 16 Lacs

Vadodara

Work from Office

We are seeking an experienced Senior Data Engineer with a minimum of 5 years of hands-on experience to join our dynamic data team. The ideal candidate will have strong expertise in Microsoft Fabric, demonstrate readiness to adopt cutting-edge tools like SAP Datasphere, and possess foundational AI knowledge to guide our data engineering initiatives.

Key Roles and Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL/ELT processes using Microsoft Fabric tools such as Azure Data Factory (ADF) and Power BI.
- Work on large-scale data processing and analytics using PySpark.
- Evaluate and implement new data engineering tools like SAP Datasphere through training or self-learning.
- Support business intelligence, analytics, and AI/ML initiatives by building robust data architectures.
- Apply AI techniques to automate workflows and collaborate with data scientists on machine learning projects.
- Mentor junior data engineers and lead data-related projects across departments.
- Coordinate with business teams, vendors, and technology partners for smooth project delivery.
- Create dashboards and reports using tools like Power BI or Tableau, ensuring data accuracy and accessibility.
- Support self-service analytics across business units and maintain consistency in all visualizations.

Experience & Technical Skills:
- 5+ years of professional experience in data engineering, with expertise in Microsoft Fabric components.
- Strong proficiency in PySpark for large-scale data processing and distributed computing (MANDATORY).
- Extensive experience with Azure Data Factory (ADF) for orchestrating complex data workflows (MANDATORY).
- Proficiency in SQL and Python for data processing and pipeline development.
- Strong understanding of cloud data platforms, preferably the Azure ecosystem.
- Experience in data modelling, data warehousing, and modern data architecture patterns.

Interested candidates can share their updated profiles at itcv@alembic.co.in.
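The ETL/ELT responsibilities listed above follow a common extract-transform-load shape. A minimal, dependency-free Python sketch of that pattern (plain functions standing in for PySpark/Data Factory stages; all record fields and source names are made up for illustration):

```python
def extract():
    # Stand-in for reading from a source system or OneLake; hypothetical records.
    return [
        {"order_id": 1, "amount": "120.50", "region": "west"},
        {"order_id": 2, "amount": "80.00", "region": "east"},
        {"order_id": 3, "amount": None, "region": "west"},  # bad record
    ]

def transform(rows):
    # Cast types and drop records that fail a basic quality check.
    clean = []
    for row in rows:
        if row["amount"] is None:
            continue
        clean.append({**row, "amount": float(row["amount"])})
    return clean

def load(rows):
    # Stand-in for a warehouse write; here, aggregate revenue per region.
    totals = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + row["amount"]
    return totals

print(load(transform(extract())))  # {'west': 120.5, 'east': 80.0}
```

In a real Fabric pipeline each stage would be a notebook activity or Dataflow rather than a local function, but the ingestion/transformation/loading split is the same.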

Posted 2 months ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Noida

Work from Office

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Gurugram

Remote

Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience in ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience in Agile methodologies and project management.

Posted 2 months ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Noida

Work from Office

Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience in ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience in Agile methodologies and project management.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer at our company, you will be responsible for handling ETL processes using PySpark, SQL, Microsoft Fabric, and other relevant technologies. Your primary role will involve collaborating with clients and stakeholders to understand data requirements, designing efficient data models, and optimizing existing data pipelines for performance and scalability. Ensuring data quality and integrity throughout the data pipeline will be a key aspect of your responsibilities. Additionally, you will be expected to document technical designs, processes, and procedures while staying updated on emerging technologies and best practices in data engineering. Building CI/CD pipelines using GitHub will also be part of your tasks.

To qualify for this role, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with at least 3 years of experience in data engineering or a similar role. A strong understanding of ETL concepts and best practices is essential, as well as proficiency in Azure Synapse, Microsoft Fabric, and other data processing technologies. Experience with cloud-based data platforms such as Azure or AWS, knowledge of data warehousing concepts and methodologies, and proficiency in Python, PySpark, and SQL for data manipulation and scripting are also required.

Nice-to-have qualifications include experience with data lake concepts, familiarity with data visualization tools such as Power BI or Tableau, and certifications in relevant technologies like Microsoft Certified: Azure Data Engineer Associate.

In addition to a challenging and rewarding work environment, we offer company benefits including group medical insurance, cab facility, meals/snacks, and a continuous learning program. Stratacent is a Global IT Consulting and Services firm with headquarters in Jersey City, NJ, global delivery centers in Pune and Gurugram, and offices in the USA, London, Canada, and South Africa. Specializing in Financial Services, Insurance, Healthcare, and Life Sciences, we assist our customers in their transformation journey with services focusing on Information Security, Cloud Services, Data and AI, Automation, Application Development, and IT Operations. Learn more about us at http://stratacent.com.

Posted 2 months ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data. Architect, implement, and optimize scalable data solutions.

Required Candidate Profile: Work closely with data scientists, analysts, and business stakeholders to understand requirements and deliver actionable insights. Partner with cloud architects and DevOps teams.

Posted 2 months ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Pune

Work from Office

About the job:

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Ahmedabad

Work from Office

About the job:

Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Data Engineer at our company, you will be responsible for handling ETL processes using PySpark, SQL, Microsoft Fabric, and other relevant technologies. You will collaborate with clients and stakeholders to understand data requirements and devise efficient data models and solutions. Additionally, optimizing and tuning existing data pipelines for enhanced performance and scalability will be a crucial part of your role. Ensuring data quality and integrity throughout the data pipeline and documenting technical designs, processes, and procedures will also be among your responsibilities. It is essential to stay updated on emerging technologies and best practices in data engineering and to contribute to building CI/CD pipelines using GitHub.

To qualify for this role, you should hold a Bachelor's degree in computer science, engineering, or a related field, along with a minimum of 3 years of experience in data engineering or a similar role. A strong understanding of ETL concepts and best practices is required, as well as proficiency in Azure Synapse, Microsoft Fabric, and other data processing technologies. Experience with cloud-based data platforms such as Azure or AWS, knowledge of data warehousing concepts and methodologies, and proficiency in Python, PySpark, and SQL for data manipulation and scripting are also essential.

Desirable qualifications include experience with data lake concepts, familiarity with data visualization tools like Power BI or Tableau, and certifications in relevant technologies such as Microsoft Certified: Azure Data Engineer Associate.

Our company offers various benefits including group medical insurance, cab facility, meals/snacks, and a continuous learning program. Stratacent is a Global IT Consulting and Services firm with headquarters in Jersey City, NJ, global delivery centers in Pune and Gurugram, and offices in the USA, London, Canada, and South Africa. Specializing in Financial Services, Insurance, Healthcare, and Life Sciences, we assist our customers in their transformation journey by providing services in Information Security, Cloud Services, Data and AI, Automation, Application Development, and IT Operations. For more information, visit our website at http://stratacent.com.

Posted 2 months ago

Apply

3.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

Are you passionate about building scalable BI solutions and leading innovation with Microsoft Fabric? AmplifAI is looking for a Power BI Architect to lead our analytics strategy, mentor a growing team, and drive enterprise-wide reporting transformation. The position is based in Hyderabad with work hours from 9 AM to 6 PM EST (US time).

As a Power BI Architect at AmplifAI, you will lead the architecture and migration from Power BI Pro/Premium to Microsoft Fabric. You will be responsible for defining scalable data models, pipelines, and reporting structures using OneLake, Direct Lake, and Dataflows Gen2. Additionally, you will manage and mentor a team of Power BI Analysts and build engaging dashboards for platform insights, contact center KPIs, auto QA, and sentiment analysis. Integrating structured and semi-structured data for unified analysis, driving governance and CI/CD using GitHub-based workflows, and evangelizing best practices across semantic modeling, performance tuning, and data governance are key responsibilities.

The ideal candidate will have 8+ years of experience in Power BI and enterprise analytics, 5+ years of SQL expertise, and at least 3 years in a leadership role. Proven experience with Microsoft Fabric, hands-on experience with GitHub workflows and version control, and strong communication, critical thinking, and problem-solving skills are essential.

At AmplifAI, you will have the opportunity to work on cutting-edge enterprise AI & BI solutions, be part of a diverse, inclusive, and globally distributed team, and contribute to shaping the future of analytics in CX and performance management. If you are ready to lead data-driven transformation at AmplifAI, apply now!

Posted 2 months ago

Apply

8.0 - 10.0 years

10 - 20 Lacs

Pune

Remote

Job Summary: We are seeking an experienced Azure Data Governance Specialist to design, implement, and manage data governance frameworks and infrastructure across Azure-based platforms. The ideal candidate will ensure enterprise data is high-quality, secure, compliant, and aligned with business and regulatory requirements. This role combines deep technical expertise in Azure with a strong understanding of data governance principles, MDM, and data quality management.

Key Responsibilities:
- Data Governance & Compliance: Design and enforce data governance policies, standards, and frameworks aligned with enterprise objectives and compliance requirements (e.g., GDPR, HIPAA).
- Master Data Management (MDM): Implement and manage MDM strategies and solutions within the Azure ecosystem to ensure consistency, accuracy, and accountability of key business data.
- Azure Data Architecture: Develop and maintain scalable data architecture on Azure (e.g., Azure Data Lake, Synapse, Purview, Alation, Anomalo) to support governance needs.
- Tooling & Automation: Deploy and manage Azure-native data governance tools such as Azure Purview, Microsoft Fabric, and Data Factory to classify, catalog, and monitor data assets, including third-party tools like Alation.
- Data Quality (DQ): Lead and contribute to Data Quality forums, establish DQ metrics, and integrate DQ checks and dashboards within Azure platforms.
- Security & Access Management: Collaborate with security teams to implement data security measures, role-based access controls, and data encryption in accordance with Azure best practices.
- Technical Leadership: Guide teams in best practices for designing data pipelines, metadata management, and lineage tracking with Azure tooling.
- Continuous Improvement: Drive improvements in data management processes and tooling to enhance governance efficiency and compliance posture.
- Mentorship & Collaboration: Provide technical mentorship to data engineers and analysts, promoting data stewardship and governance awareness across the organization.

Qualifications:
- Education: Bachelor's degree in Computer Science, Information Systems, or a related field.
- Experience: 8+ years of experience in data infrastructure and governance, with 3+ years focused on Azure data services and tools.
- Technical Skills: Proficiency with data governance tools (Alation, Purview, Synapse, Data Factory, Azure SQL, etc.); strong understanding of data modeling (conceptual, logical, and physical models); experience with programming languages such as Python, C#, or Java; in-depth knowledge of SQL and metadata management.
- Leadership: Proven experience leading or influencing cross-functional teams in data governance and architecture initiatives.
- Certifications (preferred): Azure Data Engineer Associate, Azure Solutions Architect Expert, or Azure Purview-related certifications.
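The DQ metrics this role would establish are often simple ratios such as completeness and uniqueness per column. A hedged plain-Python sketch of those two checks (the `customers` rows and column names are hypothetical; a real implementation would run against warehouse tables and feed a dashboard):

```python
def completeness(rows, column):
    """Fraction of rows with a non-null value in `column`."""
    if not rows:
        return 1.0
    return sum(1 for r in rows if r.get(column) is not None) / len(rows)

def uniqueness(rows, column):
    """Fraction of non-null values in `column` that are distinct."""
    values = [r[column] for r in rows if r.get(column) is not None]
    if not values:
        return 1.0
    return len(set(values)) / len(values)

customers = [
    {"customer_id": 1, "email": "a@x.com"},
    {"customer_id": 2, "email": None},        # missing value
    {"customer_id": 3, "email": "a@x.com"},   # duplicate value
]
print(round(completeness(customers, "email"), 2))  # 0.67
print(uniqueness(customers, "email"))              # 0.5
```

Thresholds on metrics like these (e.g., completeness must stay above 0.95) are what typically gets reviewed in the DQ forums the posting mentions.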

Posted 2 months ago

Apply

10.0 - 18.0 years

2 - 3 Lacs

Hyderabad

Work from Office

Experience needed: 12-18 years
Type: Full-Time
Mode: WFO
Shift: General Shift (IST)
Location: Hyderabad
NP: Immediate joiner to 30 days

Job Summary: We are looking for an experienced and visionary Data Architect - Azure Data & Analytics to lead the design and delivery of scalable, secure, and modern data platform solutions leveraging Microsoft Azure and Microsoft Fabric. This role requires deep technical expertise in the Azure Data & Analytics ecosystem, strong experience in designing cloud-native architectures, and a strategic mindset to modernize enterprise data platforms.

Key Responsibilities:
- Architect and design modern data platform solutions on Microsoft Azure, including ingestion, transformation, storage, and visualization layers.
- Lead implementation and integration of Microsoft Fabric, including OneLake, Direct Lake mode, and Fabric workloads (Data Engineering, Data Factory, Real-Time Analytics, Power BI).
- Define enterprise-level data architecture, including data lakehouse patterns, delta lakes, data marts, and semantic models.
- Collaborate with business stakeholders, data engineers, and BI teams to translate business needs into scalable cloud data solutions.
- Design solutions using Azure-native services such as Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage (Gen2), Azure SQL, and Azure Event Hubs.
- Establish best practices for data security, governance, DevOps, CI/CD pipelines, and cost optimization.
- Guide implementation teams on architectural decisions and technical best practices across the data lifecycle.
- Develop reference architectures and reusable frameworks for accelerating data platform implementations.
- Stay updated on Microsoft's data platform roadmap and proactively identify opportunities to enhance data strategy.
- Assist in developing RFPs, architecture assessments, and solution proposals.

Required Skills & Qualifications:
- 12-18 years of proven experience, including designing and implementing cloud-based modern data platforms on Microsoft Azure.
- Deep knowledge and understanding of Microsoft Fabric architecture, including Data Factory, Data Engineering, Synapse Real-Time Analytics, and Power BI integration.
- Expertise in Azure Data Services: Azure Synapse, Data Factory, Azure SQL, ADLS Gen2, Azure Functions, Azure Purview, Event Hubs, etc.
- Experience with data warehousing, lakehouse architectures, ETL/ELT, and data modeling.
- Experience in data governance, security, role-based access (Microsoft Entra/Azure AD), and compliance frameworks.
- Strong leadership and communication skills to influence both technical and non-technical stakeholders.
- Familiarity with DevOps and infrastructure-as-code (e.g., ARM templates, Bicep, Terraform) is a plus.

Preferred Qualifications:
- Microsoft Certified: Azure Solutions Architect Expert, Azure Data Engineer Associate, or Microsoft Fabric certification.
- Experience with real-time data streaming, IoT, or machine learning pipelines in Azure.
- Familiarity with multi-cloud data strategies or hybrid deployments is an advantage.

Posted 2 months ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Ahmedabad

Work from Office

Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation in projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience in ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience in implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience in Agile methodologies and project management.

Posted 2 months ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Work from Office

About the job:

Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Pune

Remote

Responsibilities:
- Design, develop, and optimize data pipelines and ETL/ELT workflows using Microsoft Fabric, Azure Data Factory, and Azure Synapse Analytics.
- Implement Lakehouse and Warehouse architectures within Microsoft Fabric, supporting medallion (bronze-silver-gold) data layers.
- Collaborate with business and analytics teams to build scalable, reliable data models (star/snowflake) using Azure SQL, Power BI, and DAX.
- Utilize Azure Analysis Services, Power BI semantic models, and Microsoft Fabric Dataflows for analytics delivery.
- Apply strong hands-on Python skills for data transformation and processing.
- Apply CI/CD best practices and manage code through Git version control.
- Ensure data security, lineage, and quality using data governance best practices and Microsoft Purview (if applicable).
- Troubleshoot and improve the performance of existing data pipelines and models.
- Participate in code reviews, testing, and deployment activities.
- Communicate effectively with stakeholders across geographies and time zones.

Required Skills:
- Strong knowledge of Azure Synapse Analytics, Azure Data Factory, Azure SQL, and Azure Analysis Services.
- Proficiency in Power BI and DAX for data visualization and analytics modeling.
- Strong Python skills for scripting and data manipulation.
- Experience with dimensional modeling, star/snowflake schemas, and Kimball methodologies.
- Familiarity with CI/CD pipelines, DevOps, and Git-based versioning.
- Understanding of data governance, data cataloging, and quality management practices.
- Excellent verbal and written communication skills.
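The medallion (bronze-silver-gold) layering this listing asks for can be illustrated without any Fabric dependency. The sketch below uses plain Python, with lists of dicts standing in for Lakehouse tables; table contents and column names are hypothetical, and in practice each step would be a PySpark transformation writing a Delta table.

```python
# Conceptual medallion-architecture sketch: raw (bronze) records are
# cleansed into silver, then aggregated into a gold business view.

bronze = [  # raw ingested rows, duplicates and bad values included
    {"order_id": "A1", "amount": "100.0", "region": "west"},
    {"order_id": "A1", "amount": "100.0", "region": "west"},   # duplicate
    {"order_id": "B2", "amount": "250.5", "region": "east"},
    {"order_id": "C3", "amount": None, "region": "east"},      # invalid amount
]

def to_silver(rows):
    """Deduplicate on order_id, drop invalid amounts, cast types."""
    seen, out = set(), []
    for r in rows:
        if r["order_id"] in seen or r["amount"] is None:
            continue
        seen.add(r["order_id"])
        out.append({**r, "amount": float(r["amount"])})
    return out

def to_gold(rows):
    """Aggregate cleansed rows into revenue per region."""
    gold = {}
    for r in rows:
        gold[r["region"]] = gold.get(r["region"], 0.0) + r["amount"]
    return gold

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'west': 100.0, 'east': 250.5}
```

The design point the layers encode: bronze is an immutable landing zone, silver enforces quality rules once, and gold serves modeled, query-ready aggregates to Power BI.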

Posted 2 months ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Bengaluru

Hybrid

Company: Connectiq.ai (https://connectiq.ai/)
Client: Aptean (https://www.aptean.com)
Work Location: Rajajinagar, Bangalore
Job Requisition: Data Engineer

Key Responsibilities:
- Design and development: Architect, implement, and optimize scalable data solutions. Develop and maintain data pipelines, ETL/ELT processes, and workflows to ensure the seamless integration and transformation of data across Microsoft Fabric, data lakes, and Databricks.
- Data management: Manage and maintain data lakes, data warehouses, and real-time analytics systems. Ensure high data quality, integrity, and security across the organization.
- Performance optimization: Monitor and enhance system performance, troubleshoot issues, and implement optimizations as needed. Leverage Microsoft Fabric's advanced analytics and AI capabilities for innovative data solutions.
- Best practices and leadership: Lead and mentor junior engineers to foster a culture of technical excellence. Stay updated with industry trends and best practices, especially in the Microsoft ecosystem.

Posted 2 months ago

Apply

4.0 - 6.0 years

4 - 9 Lacs

Pune

Remote

Azure Data Engineer

The Data Engineer builds and maintains data pipelines and infrastructure within Microsoft Fabric, enabling a seamless migration from Oracle/Informatica. This offshore role requires deep expertise in data engineering techniques to support enterprise data needs. The successful candidate will excel at creating scalable data solutions.

Responsibilities:
- Develop and maintain data pipelines for Microsoft Fabric, handling ETL processes migrated from Oracle/Informatica.
- Ensure seamless data flow, integrity, and performance on the new platform.
- Collaborate with the offshore Data Modeler and onsite Data Modernization Architect to align with modernization goals.
- Optimize code and queries for performance using tools like PySpark and SQL.
- Conduct unit testing and debugging to ensure robust pipeline functionality.
- Report technical progress and issues to the offshore Project Manager.

Skills:
- Bachelor's degree in computer science, data engineering, or a related field.
- 4+ years of data engineering experience with PySpark, Python, and SQL.
- Strong knowledge of Microsoft Fabric, Azure services (e.g., Data Lake, Synapse), and ETL processes.
- Experience with code versioning (e.g., Git) and optimization techniques.
- Ability to refactor legacy code and write unit tests for reliability.
- Problem-solving skills with a focus on scalability and performance.
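This listing pairs refactoring legacy code with unit testing. A common pattern when migrating Informatica mappings is to isolate each transformation as a pure function so it can be asserted against small in-memory rows before it runs at scale. Below is a minimal sketch under that assumption; the function and column names are hypothetical, and in Fabric the same function body would typically run inside a PySpark notebook.

```python
# Hypothetical pure transformation extracted from a legacy Informatica
# mapping, rewritten as a plain function so it is unit-testable in isolation.

def normalize_customer(row):
    """Trim whitespace, uppercase the country code, default a missing status."""
    return {
        "customer_id": row["customer_id"],
        "name": row["name"].strip(),
        "country": row["country"].strip().upper(),
        "status": row.get("status") or "UNKNOWN",
    }

# Unit-test-style check on an in-memory row, the kind a CI job runs
# before the pipeline is promoted to the new platform.
raw = {"customer_id": 7, "name": "  Asha Rao ", "country": " in ", "status": None}
clean = normalize_customer(raw)
print(clean)
```

Because the function touches no Spark or database APIs, the same tests run instantly in CI, and the migrated pipeline only has to wire the function into its row-level transformation step.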

Posted 2 months ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Chennai

Work from Office

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply

8.0 - 10.0 years

9 - 13 Lacs

Mumbai

Work from Office

Company Overview:

Zorba Consulting India is a leading consultancy firm focused on delivering innovative solutions and strategies to enhance business performance. With a commitment to excellence, we prioritize collaboration, integrity, and customer-centric values in our operations. Our mission is to empower organizations by transforming data into actionable insights and enabling data-driven decision-making. We are dedicated to fostering a culture of continuous improvement and supporting our team members' professional development.

Role Responsibilities:
- Design and implement data pipelines using MS Fabric.
- Develop data models to support business intelligence and analytics.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Collaborate with cross-functional teams to gather and define data requirements.
- Ensure data quality and integrity in all data processes.
- Implement best practices for data management, storage, and processing.
- Conduct performance tuning for data storage and retrieval for enhanced efficiency.
- Generate and maintain documentation for data architecture and data flow.
- Participate in troubleshooting data-related issues and implement solutions.
- Monitor and optimize cloud-based solutions for scalability and resource efficiency.
- Evaluate emerging technologies and tools for potential incorporation into projects.
- Assist in designing data governance frameworks and policies.
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Stay updated with industry trends and best practices in data engineering.

Qualifications:
- 8+ years of experience in data engineering roles.
- Strong expertise in MS Fabric and related technologies.
- Proficiency in SQL and relational database management systems.
- Experience with data warehousing solutions and data modeling.
- Hands-on experience with ETL tools and processes.
- Knowledge of cloud computing platforms (Azure, AWS, GCP).
- Familiarity with Python or similar programming languages.
- Ability to communicate complex concepts clearly to non-technical stakeholders.
- Experience implementing data quality measures and data governance.
- Strong problem-solving skills and attention to detail.
- Ability to work independently in a remote environment.
- Experience with data visualization tools is a plus.
- Excellent analytical and organizational skills.
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience with Agile methodologies and project management.

Posted 2 months ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Bengaluru

Work from Office

About the job:

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Mumbai

Work from Office

About the job:

Role: Microsoft Fabric Data Engineer

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply

6.0 - 10.0 years

9 - 13 Lacs

Kolkata

Work from Office

About the job:

Role: Microsoft Fabric Data Engineer

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable, efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: the ability to develop dashboards or reports using tools like Power BI.

Coding fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 months ago

Apply