7.0 - 11.0 years
0 Lacs
Haryana
On-site
As a Lead Data Engineer at Srijan Technologies PVT LTD, you will play a crucial role in designing and developing scalable data pipelines within Microsoft Fabric. Your responsibilities will include:

- Designing and Developing Data Pipelines: Develop and optimize scalable data pipelines within Microsoft Fabric using Fabric-based notebooks, Dataflows Gen2, Data Pipelines, and Lakehouse architecture. Build robust pipelines for batch and real-time processing. Integrate with Azure Data Factory or Fabric-native orchestration for seamless data movement.
- Microsoft Fabric Architecture: Implement scalable, governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Ensure alignment with business needs while promoting performance, security, and cost-efficiency.
- Data Pipeline Optimization: Continuously monitor, enhance, and optimize Fabric pipelines, notebooks, and lakehouse artifacts for performance, reliability, and cost. Implement best practices for managing large-scale datasets and transformations.
- Collaboration with Cross-functional Teams: Work closely with analysts, BI developers, and data scientists to gather requirements and deliver high-quality datasets. Enable self-service analytics via certified and reusable Power BI datasets connected to Fabric Lakehouses.
- Documentation and Knowledge Sharing: Maintain clear documentation for all data pipelines, semantic models, and data products. Share knowledge of Fabric best practices and mentor junior team members.
- Microsoft Fabric Platform Expertise: Utilize your expertise in Microsoft Fabric, including Lakehouses, Notebooks, Data Pipelines, and Direct Lake, to build scalable solutions integrated with Business Intelligence layers and other Microsoft data services.
Required Skills and Qualifications:

- Microsoft Fabric / Azure Ecosystem Experience: 7+ years working with the Azure ecosystem, with relevant experience in Microsoft Fabric, including the Lakehouse, OneLake, Data Engineering, and Data Pipelines components.
- Proficiency in Azure Data Factory and/or Dataflows Gen2 within Fabric.
- Advanced Data Engineering Skills: Extensive experience in data ingestion, transformation, and ELT/ETL pipeline design.
- Cloud Architecture Design: Experience designing modern data platforms using Microsoft Fabric, OneLake, and Synapse or equivalent.
- Strong SQL and Data Modeling: Expertise in SQL and data modeling for data integration, reporting, and analytics.
- Collaboration and Communication: Ability to work across business and technical teams.
- Cost Optimization: Experience tuning pipelines and cloud resources for a cost-performance balance.

Preferred Skills:

- Deep understanding of Azure and the Microsoft Fabric ecosystem, including Power BI integration, Direct Lake, and Fabric-native security and governance.
- Familiarity with OneLake, Delta Lake, and Lakehouse architecture.
- Experience using Power BI with Fabric Lakehouses and DirectQuery/Direct Lake mode for enterprise reporting.
- Working knowledge of PySpark, strong SQL, and Python scripting within Fabric or Databricks notebooks.
- Understanding of Microsoft Purview, Unity Catalog, or Fabric-native governance tools.
- Experience with DevOps practices for Fabric or Power BI.
- Knowledge of Azure Databricks for building and optimizing Spark-based pipelines and Delta Lake models.
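For readers new to the ELT pipeline design this posting asks for, the load-before-transform idea can be sketched in plain Python. This is a hedged illustration only: a Fabric notebook would use Spark and Delta tables, and the record fields, table names, and function names here are invented for the example.

```python
# Minimal ELT sketch: extract raw records, load them unchanged into a
# "raw" zone, then transform afterwards -- the load-before-transform
# order is what distinguishes ELT from classic ETL.

def extract():
    # Stand-in for reading from a source system (API, files, database).
    return [
        {"order_id": 1, "amount": "250.00", "region": "north"},
        {"order_id": 2, "amount": "100.50", "region": "SOUTH"},
    ]

def load_raw(records, raw_zone):
    # Land the data untouched so it can be re-transformed later
    # without re-reading the source.
    raw_zone.extend(records)

def transform(raw_zone):
    # Build a typed, conformed output from the raw zone.
    return [
        {"order_id": r["order_id"],
         "amount": float(r["amount"]),
         "region": r["region"].lower()}
        for r in raw_zone
    ]

raw_zone = []
load_raw(extract(), raw_zone)
curated = transform(raw_zone)
print(curated[1]["region"])  # south
```

Keeping the raw zone intact is the design choice that matters: when transformation logic changes, the curated layer can be rebuilt from raw data without another pass over the source system.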
Posted 1 day ago
8.0 - 13.0 years
20 - 35 Lacs
Pune
Hybrid
Job Description:

JOB SUMMARY
We are seeking an experienced Microsoft Fabric architect who brings technical expertise and architectural instincts to lead the design, development, and scaling of our secured, enterprise-grade data ecosystem. This is not a traditional BI/Data Engineering position: we are looking for deep hands-on expertise in Fabric administration, CI/CD integration, and security/governance configuration in production environments.

ESSENTIAL DUTIES
- Provide technical leadership on design and architectural decisions, data platform evolution, and vendor/tool selection.
- Leverage expertise in the data Lakehouse on Microsoft Fabric, including optimal use of OneLake, Dataflows Gen2, Pipelines, and Synapse Data Engineering.
- Build and maintain scalable data pipelines to ingest, transform, and curate data from a variety of structured and semi-structured sources.
- Implement and enforce data modeling standards, including medallion architecture, Delta Lake, and dimensional modeling best practices.
- Collaborate with analysts and business users to deliver well-structured, trusted datasets for self-service reporting and analysis in Power BI.
- Establish data engineering practices that ensure reliability, performance, governance, and security.
- Monitor and tune workloads within the Microsoft Fabric platform to ensure cost-effective and efficient operations.

EDUCATION / CERTIFICATION REQUIREMENTS
A bachelor's degree in computer science, data science, or a related field is required. A minimum of 3 years of experience in data engineering, with at least 2 years in a cloud-native or modern data platform environment, is required. Prior experience in a public accounting, financial, or other professional services environment is preferred.

SUCCESSFUL CHARACTERISTICS / SKILLS
- Extensive, hands-on expertise with Microsoft Fabric, including Dataflows Gen2, Pipelines, Synapse Data Engineering, Notebooks, and OneLake.
- Proven experience designing Lakehouse or data warehouse architecture, including data ingestion frameworks, staging layers, and semantic models.
- Strong SQL and T-SQL skills and familiarity with Power Query (M) and Delta Lake formats.
- Understanding of data governance, data security, lineage, and metadata management practices.
- Ability to lead technical decisions and set standards in the absence of a dedicated Data Architect.
- Strong communication skills with the ability to collaborate across technical and non-technical teams.
- Results driven; high integrity; ability to influence, negotiate, and build relationships; superior communication skills; comfortable making complex decisions and leading teams through complex challenges.
- Self-disciplined enough to work in a virtual, agile, globally sourced team.
- Strategic, out-of-the-box thinker with problem-solving experience to assess, analyze, troubleshoot, and resolve issues.
- Excellent analytical skills, extraordinary attention to detail, and the ability to present recommendations to business teams based on trends, patterns, and modern best practices.
- Experience with Power BI datasets and semantic modeling is an asset.
- Familiarity with Microsoft Purview or similar governance tools is an asset.
- Working knowledge of Python, PySpark, or KQL is an asset.
- Experience with and passion for technology, providing an exceptional experience both internally for our employees and externally for clients and prospects.
- Strong ownership, a bias to action, and the know-how to succeed in ambiguity.
- Ability to deliver value consistently by motivating teams towards achieving goals.

To apply, share your resume at sachin.patil@newvision-software.com along with the following details:
- Total Experience:
- Relevant Experience:
- Current CTC:
- Expected CTC:
- Notice Period / Serving (LWD):
- Any Offer in Hand (LPA):
- Current Location:
- Preferred Location:
- Education:
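The medallion standard this posting names (bronze raw, silver cleaned, gold aggregated) can be sketched in plain Python. This is an illustration of the layering idea only, with invented row shapes; a real Fabric implementation would store each layer as Delta tables in a Lakehouse and transform with Spark.

```python
# Medallion architecture sketch: bronze keeps data as-ingested,
# silver deduplicates and types it, gold aggregates it for reporting.

bronze = [  # raw ingested events, duplicates and bad rows included
    {"id": 1, "customer": "acme", "sales": "100"},
    {"id": 1, "customer": "acme", "sales": "100"},   # duplicate row
    {"id": 2, "customer": "globex", "sales": "bad"}, # unparseable value
    {"id": 3, "customer": "acme", "sales": "50"},
]

def to_silver(rows):
    # Deduplicate on id and drop rows whose sales value is not numeric.
    seen, out = set(), []
    for r in rows:
        if r["id"] in seen or not r["sales"].isdigit():
            continue
        seen.add(r["id"])
        out.append({**r, "sales": int(r["sales"])})
    return out

def to_gold(rows):
    # Aggregate cleaned rows into a per-customer sales summary.
    totals = {}
    for r in rows:
        totals[r["customer"]] = totals.get(r["customer"], 0) + r["sales"]
    return totals

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)  # {'acme': 150}
```

The point of the layering is that each stage is rebuildable from the one below it: if the cleaning rules change, silver and gold can be regenerated from bronze without re-ingesting anything.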
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your role will involve working on both batch and real-time ingestion and transformation, integrating with Azure Data Factory for smooth data flow, and collaborating with data architects to implement governed Lakehouse models in Microsoft Fabric.

You will be expected to monitor and optimize the performance of data pipelines and notebooks in Microsoft Fabric, applying tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery. Collaboration with cross-functional teams, including BI developers, analysts, and data scientists, is essential to gather requirements and build high-quality datasets. Additionally, you will need to document pipeline logic, lakehouse architecture, and semantic layers clearly, following development standards and contributing to internal best practices for Microsoft Fabric-based solutions.

To excel in this role, you should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric, Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python applied to data integration and analytical workloads. Experience in optimizing pipelines and managing compute resources for cost-effective data processing in Azure/Fabric is also crucial.
Preferred skills for this role include experience in the Microsoft Fabric ecosystem; familiarity with OneLake, Delta Lake, and Lakehouse principles; expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks; and an understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, as well as knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be considered a plus.

If you are passionate about developing efficient data solutions in a collaborative environment and have a strong background in data engineering within the Azure ecosystem, this role as a Senior Data Engineer at Srijan Technologies PVT LTD could be the perfect fit for you. Apply now to be a part of a dynamic team driving innovation in data architecture and analytics.
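One common tuning strategy behind the cost-effective processing these postings repeatedly mention is incremental (watermark-based) loading: each run processes only rows newer than the last successful run, instead of the full history. A minimal stdlib Python sketch of the idea, with invented row shapes (a Fabric pipeline would persist the watermark and read from a real source):

```python
# Watermark-based incremental load: remember the highest modified
# timestamp already processed and pick up only newer rows each run.

def incremental_load(source_rows, watermark):
    # ISO-8601 timestamps compare correctly as strings.
    new_rows = [r for r in source_rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in new_rows),
                        default=watermark)
    return new_rows, new_watermark

source = [
    {"id": 1, "modified": "2024-01-01T00:00:00"},
    {"id": 2, "modified": "2024-01-02T00:00:00"},
    {"id": 3, "modified": "2024-01-03T00:00:00"},
]

# First run: everything after the initial watermark is new.
batch1, wm = incremental_load(source, "2024-01-01T00:00:00")
print(len(batch1))  # 2 -- rows 2 and 3

# Second run: nothing newer has arrived, so nothing is reprocessed.
batch2, wm = incremental_load(source, wm)
print(len(batch2))  # 0
```

Because compute in cloud pipelines is billed per run, skipping already-processed rows is usually the single largest lever for the cost-performance balance these roles ask about.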
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
Haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your primary focus will be on developing and optimizing data pipelines, including Fabric Notebooks, Dataflows Gen2, and Lakehouse architecture for both batch and real-time ingestion and transformation. You will collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric, ensuring data solutions are performant, reusable, and aligned with business needs and compliance standards.

Monitoring and improving the performance of data pipelines and notebooks in Microsoft Fabric will be a key aspect of your role. You will apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains. Working closely with BI developers, analysts, and data scientists, you will gather requirements and build high-quality datasets to support self-service BI initiatives. Additionally, documenting pipeline logic, lakehouse architecture, and semantic layers clearly will be essential.

Your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric will be crucial in delivering reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services. You should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric components such as Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required. A strong command of SQL, PySpark, and Python, and experience in optimising pipelines for cost-effective data processing in Azure/Fabric, are necessary.
Preferred skills include experience in the Microsoft Fabric ecosystem; familiarity with OneLake, Delta Lake, and Lakehouse principles; expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks; and an understanding of Microsoft Purview or Unity Catalog. Exposure to DevOps practices for Fabric and Power BI, and knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be advantageous.
Posted 1 month ago
7.0 - 11.0 years
0 Lacs
Haryana
On-site
As a Data Engineer at Srijan, a Material company, you will play a crucial role in designing and developing scalable data pipelines within Microsoft Fabric. Your primary responsibilities will include optimizing data pipelines, collaborating with cross-functional teams, and ensuring documentation and knowledge sharing. You will work closely with the Data Architecture team to implement scalable and governed data architectures within OneLake and Microsoft Fabric's unified compute and storage platform. Your expertise in Microsoft Fabric will be utilized to build robust pipelines using both batch and real-time processing techniques, integrating with Azure Data Factory for seamless data movement.

Continuous monitoring, enhancement, and optimization of Fabric pipelines, notebooks, and lakehouse artifacts will be essential to ensure performance, reliability, and cost-efficiency. You will collaborate with analysts, BI developers, and data scientists to deliver high-quality datasets and enable self-service analytics via Power BI datasets connected to Fabric Lakehouses. Maintaining up-to-date documentation for all data pipelines, semantic models, and data products, as well as sharing knowledge of Fabric best practices with junior team members, will be an integral part of your role. Your expertise in SQL, data modeling, and cloud architecture design will be crucial in designing modern data platforms using Microsoft Fabric, OneLake, and Synapse.

To excel in this role, you should have at least 7+ years of experience in the Azure ecosystem, with relevant experience in Microsoft Fabric, Data Engineering, and Data Pipelines components. Proficiency in Azure Data Factory, advanced data engineering skills, and strong collaboration and communication abilities are also required. Additionally, knowledge of Azure Databricks, Power BI integration, DevOps practices, and familiarity with OneLake, Delta Lake, and Lakehouse architecture will be advantageous.
Join our awesome tribe at Srijan and leverage your expertise in Microsoft Fabric to build scalable solutions integrated with Business Intelligence layers, Azure Synapse, and other Microsoft data services.
Posted 2 months ago