Project Role:
Cloud Platform Engineer
Project Role Description:
Designs, builds, tests, and deploys cloud application solutions that integrate cloud and non-cloud infrastructure. Deploys infrastructure and platform environments and creates proofs of architecture to test viability, security, and performance.
Must-have skills:
Microsoft Fabric
Good-to-have skills:
NA
Minimum 5 year(s) of experience is required.
Educational Qualification:
15 years of full-time education

We are seeking a skilled Microsoft Fabric Data Engineer to design, build, optimize, and maintain modern data solutions using Microsoft Fabric. The ideal candidate will have strong experience with data engineering, analytics workloads, cloud-based data platforms, and end-to-end data pipeline development. A minimum of 6 years of experience as a Microsoft Fabric Data Engineer is required.

Key Responsibilities

1. Data Architecture & Modeling
- Design and implement scalable data architectures using Microsoft Fabric components such as Lakehouse, Data Warehouse, OneLake, and KQL Databases.
- Create and optimize star schemas, data marts, semantic models, and medallion architectures (a minimal sketch follows this list).
- Manage and enforce data governance, security, and access control within Fabric workspaces.
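For illustration, a minimal sketch of a medallion-style gold-layer step, assuming a Fabric notebook (where a SparkSession named `spark` is preconfigured); the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # a no-op inside a Fabric notebook

# Shape a hypothetical silver table into a star-schema fact table in gold.
spark.sql("""
    CREATE OR REPLACE TABLE fact_orders AS
    SELECT
        order_id,                                   -- degenerate dimension
        customer_id,                                -- key into a dim_customer table
        CAST(order_date AS DATE) AS order_date_key,
        amount
    FROM orders_silver
""")
```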
2. ETL/ELT Pipeline Development
- Develop, orchestrate, and maintain data ingestion and transformation pipelines using Data Factory, Fabric Pipelines, and Dataflows Gen2 (see the sketch after this list).
- Build automated workflows for batch, streaming, or event-driven ingestion.
- Optimize pipeline performance and ensure reliability, scalability, and fault tolerance.
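As a sketch of a simple batch ingestion step into a bronze Delta table, assuming a Lakehouse with a hypothetical `Files/landing/orders/` folder; names are illustrative only:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read landed CSVs from a hypothetical landing folder.
raw = (
    spark.read
    .option("header", "true")
    .csv("Files/landing/orders/")
)

# Stamp each row with its ingestion time for downstream auditing.
bronze = raw.withColumn("_ingested_at", F.current_timestamp())

# Append into the bronze layer; silver/gold transformations refine it later.
bronze.write.format("delta").mode("append").saveAsTable("orders_bronze")
```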
3. Data Integration & Processing
- Work with structured and unstructured data from various enterprise systems, APIs, and external sources.
- Utilize Apache Spark within Fabric Notebooks for large-scale data processing.
- Implement Delta Lake best practices such as Z-ordering, OPTIMIZE, and VACUUM (a minimal sketch follows this list).
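A minimal sketch of this kind of routine Delta Lake maintenance from a notebook; the table name and Z-order column are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows on a commonly filtered column,
# so queries filtering on customer_id can skip unrelated files.
spark.sql("OPTIMIZE sales_silver ZORDER BY (customer_id)")

# Remove data files no longer referenced by the Delta log, honoring the
# default 7-day retention window so recent time travel keeps working.
spark.sql("VACUUM sales_silver")
```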
4. Analytics & Reporting Enablement
- Partner with BI analysts to create and optimize Power BI semantic models and Direct Lake mode datasets.
- Publish high-quality, certified data assets for business consumption.
- Ensure data quality, accuracy, and consistency across analytic layers.

5. Monitoring, Optimization & Operations
- Monitor Fabric workloads, storage utilization, capacity models, and performance.
- Implement logging, alerting, and automated testing for pipelines (see the sketch after this list).
- Perform cost optimization for compute workloads and OneLake storage.
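A minimal sketch of notebook-level logging paired with a row-count check that an orchestrating pipeline could turn into an alert; the table name is hypothetical:

```python
import logging

from pyspark.sql import SparkSession

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("orders_pipeline")

spark = SparkSession.builder.getOrCreate()

# A basic data-quality gate: an empty bronze table means ingestion failed.
rows = spark.table("orders_bronze").count()
log.info("orders_bronze row count: %d", rows)

if rows == 0:
    # Raising fails the notebook run so the orchestrator can surface an alert.
    raise RuntimeError("orders_bronze is empty; check the ingestion step")
```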
6. Collaboration & Stakeholder Engagement
- Work closely with data analysts, data scientists, and business stakeholders to understand data needs.
- Translate business requirements into scalable data solutions.
- Document workflows, architectures, and best practices.

________________________________________

Required Skills & Qualifications
- Bachelor’s degree in Computer Science, Information Systems, Engineering, or a related field.
- Hands-on experience with Microsoft Fabric (Lakehouse, Data Factory, Pipelines, OneLake, Notebooks, Power BI).
- Strong proficiency with SQL, Python, Spark, and Delta Lake.
- Experience with Azure services (Azure Data Lake, Azure Synapse, Azure Data Factory, AAD).
- Solid understanding of ETL/ELT methodologies, data modeling, and data warehousing concepts.
- Knowledge of version control (Git) and CI/CD workflows.
- Excellent analytical, problem-solving, and communication skills.

________________________________________

Preferred Qualifications
- Microsoft Fabric certification (e.g., Fabric Analytics Engineer or Fabric Data Engineer).
- Experience with MLOps or DataOps practices.
- Familiarity with DevOps tools (Azure DevOps, GitHub Actions).
- Experience with streaming technologies (Event Hubs, Kafka, Fabric Real-Time Analytics).