Cloud Data Architect

10 years

0 Lacs

Bengaluru, Karnataka

Posted: 1 month ago | Platform: Indeed

Work Mode

Work from Office

Job Type

Full Time

Job Description

Job Information

Date Opened: 05/14/2025
Job Type: Full time
Industry: Technology
State/Province: Karnataka
Zip/Postal Code: 560038
City: Bangalore
Country: India

About Us

At Innover, we endeavor to see our clients become connected, insight-driven businesses. Our integrated Digital Experiences, Data & Insights, and Digital Operations studios help clients embrace digital transformation and deliver outstanding experiences across the entire customer lifecycle. Our connected studios work in tandem to reimagine the convergence of innovation, technology, people, and business agility to deliver impressive returns on investment. We help organizations capitalize on current trends and game-changing technologies, molding them into future-ready enterprises. Each of our studios represents a deep pocket of expertise and delivers on the promise of data-driven, connected enterprises.

Job Summary

We are seeking a highly experienced and driven Cloud Data Architect to architect and lead the development of an enterprise-grade data platform using Microsoft Fabric. This role covers the full data lifecycle, from ingestion and transformation to analytics, governance, and deployment, integrating Azure-native technologies such as Data Factory, Synapse Pipelines, Delta Lake, Spark, and Power BI within the Microsoft Fabric ecosystem.

Key Responsibilities

- Design and implement scalable, secure, and modular Lakehouse architectures in OneLake using Delta Lake and Fabric-native services.
- Architect medallion-layered data models (Bronze, Silver, Gold) that support batch and real-time analytics.
- Support data modeling for analytics, including star/snowflake schemas and semantic models for Power BI.
- Design and build robust data pipelines using Azure Data Factory, Synapse Pipelines, and Spark Notebooks (via Synapse or Fabric).
- Implement complex ETL/ELT logic, including joins, aggregations, slowly changing dimensions (SCDs), window functions, and schema evolution.
- Integrate a broad range of data sources (structured, semi-structured, and unstructured) from SQL, REST APIs, Event Hubs, Blob Storage, SharePoint, and more.
- Enable real-time streaming and event-driven processing with Azure Event Hubs, Kafka, or IoT Hub.
- Establish automated data quality checks, validation rules, and anomaly detection using Azure Data Quality, custom Python scripts, or Data Activator.
- Implement robust data lineage tracking, auditing, and error-handling mechanisms across the pipeline lifecycle.
- Enforce enterprise data governance using Microsoft Purview, including metadata management, data masking, RBAC, Managed Identities, and Key Vault integration.
- Enable low-latency BI through Direct Lake mode and semantic models in Power BI.
- Develop KQL-based Real-Time Analytics solutions within Fabric for instant insights and monitoring.
- Establish data security architecture, including RBAC, data masking, encryption, and integration with Key Vault and Entra ID (formerly Azure AD).
- Set up CI/CD pipelines using Azure DevOps, GitHub Actions, and infrastructure-as-code tools such as ARM, Bicep, or Terraform for deploying data infrastructure and Fabric artifacts.
- Promote version control, environment promotion, and release-management strategies in Fabric workspaces.
- Collaborate closely with BI developers and application teams to support integrated, analytics-ready data delivery.
- Define and maintain enterprise-wide data architecture standards, principles, and best practices.
- Mentor junior engineers, lead code reviews, establish engineering best practices, and maintain high standards of documentation and automation.

Qualifications

- 10+ years of experience in data engineering, with at least 2 years in technical lead or architect roles.
- Proven expertise in Azure Data Factory, Synapse Analytics, Delta Lake, Azure Data Lake, and Power BI.
- Strong proficiency in Python, Spark, and SQL.
- Deep experience with data integration (REST APIs, SharePoint, Blob Storage), event-driven pipelines (Event Hubs, Kafka), and streaming ingestion.
- Hands-on experience with data quality frameworks, data validation logic, and automated anomaly detection.
- Knowledge of Fabric workspace lifecycle management, including CI/CD deployment strategies.
- Strong grasp of data warehousing, data lakehouse patterns, and governance frameworks (Purview, RBAC, data masking).
- Familiarity with real-time analytics using KQL and Data Activator within Fabric.
- Excellent leadership, communication, and cross-functional collaboration skills.

Good to Have

- Microsoft certifications such as Azure Solutions Architect, Azure Data Engineer Associate, or Microsoft Fabric certifications.
- Industry-specific experience in finance, healthcare, manufacturing, or retail.
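To give candidates a concrete sense of the SCD handling listed under Key Responsibilities, here is a minimal pure-Python sketch of Type 2 slowly changing dimension logic (expire the current row when a tracked attribute changes, append a new current row). In a Fabric/Delta Lake pipeline this would typically be a Spark `MERGE` instead; all field and function names here are illustrative, not from the posting.

```python
from datetime import date

def scd2_merge(dimension, incoming, key, tracked, today=None):
    """SCD Type 2: close out the current version of a row when a tracked
    attribute changes, and append the incoming row as the new current version."""
    today = today or date.today().isoformat()
    result = list(dimension)
    current = {r[key]: r for r in result if r["is_current"]}
    for row in incoming:
        existing = current.get(row[key])
        if existing is None:
            # Brand-new key: insert as the current version.
            result.append({**row, "valid_from": today,
                           "valid_to": None, "is_current": True})
        elif any(existing[c] != row[c] for c in tracked):
            # Tracked attribute changed: expire the old version, open a new one.
            existing["valid_to"] = today
            existing["is_current"] = False
            result.append({**row, "valid_from": today,
                           "valid_to": None, "is_current": True})
        # Unchanged rows are left untouched, preserving history.
    return result
```

The same compare-expire-append pattern maps directly onto Delta Lake's merge semantics in the Silver layer of a medallion architecture.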
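The window functions called out in the ETL/ELT responsibilities can likewise be sketched without Spark. This hypothetical helper mimics `SUM(value) OVER (PARTITION BY ... ORDER BY ...)`, i.e. a per-partition running total; in production this would be a Spark `Window` spec or SQL window clause, and the column names are made up for illustration.

```python
from collections import defaultdict

def running_total(rows, partition_key, order_key, value_key):
    """Pure-Python analogue of the SQL window function
    SUM(value_key) OVER (PARTITION BY partition_key ORDER BY order_key)."""
    out = []
    totals = defaultdict(float)
    # Sort by (partition, order) so each partition's rows arrive in order.
    for row in sorted(rows, key=lambda r: (r[partition_key], r[order_key])):
        totals[row[partition_key]] += row[value_key]
        out.append({**row, "running_total": totals[row[partition_key]]})
    return out
```

The sort-then-accumulate structure is exactly what a distributed engine performs per partition after shuffling rows by the partition key.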
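Finally, the "custom Python scripts" route for automated data quality checks and anomaly detection might look like the following minimal sketch: row-level validation rules (required fields, numeric ranges) plus a simple z-score outlier flag. The rule shapes and thresholds are assumptions for illustration; a real platform would wire these into Data Activator or a quality framework.

```python
from statistics import mean, stdev

def validate(rows, required, numeric_range):
    """Row-level data quality checks: required fields must be present and
    numeric values must fall within an expected (lo, hi) range."""
    failures = []
    for i, row in enumerate(rows):
        for col in required:
            if row.get(col) in (None, ""):
                failures.append((i, col, "missing"))
        for col, (lo, hi) in numeric_range.items():
            v = row.get(col)
            if v is not None and not (lo <= v <= hi):
                failures.append((i, col, "out_of_range"))
    return failures

def zscore_anomalies(values, threshold=3.0):
    """Flag indexes whose value deviates from the mean by more than
    `threshold` sample standard deviations."""
    m, s = mean(values), stdev(values)
    if s == 0:
        return []
    return [i for i, v in enumerate(values) if abs(v - m) / s > threshold]
```

Checks like these typically run between the Bronze and Silver layers, with failures routed to an audit table rather than silently dropped.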
