We are seeking a highly skilled Senior Data Engineer with in-depth expertise in Azure Stack to join our expanding team. As a Senior Data Engineer, you will be instrumental in designing and developing robust data pipelines, data lakes, and cloud-based solutions to support our business intelligence and analytics initiatives. You will work in a hybrid cloud environment to manage data services, ensuring high scalability, security, and performance for data-driven decision-making.

Key Responsibilities:
- Data Engineering: Design and implement scalable data pipelines, ETL workflows, and processing frameworks within Azure Stack and hybrid cloud environments.
- Cloud Architecture: Leverage Azure Stack, Azure Data Factory, Azure Synapse Analytics, and other Azure cloud services to integrate, manage, and optimize enterprise data solutions.
- Data Integration & Storage: Build and optimize data lakes, data warehouses, and cloud-based storage systems on Azure, integrating structured and unstructured data from multiple sources.
- Performance Optimization: Continuously improve data pipelines for performance, scalability, and cost efficiency; implement automation and monitoring solutions to ensure smooth operation.
- Cross-Functional Collaboration: Work closely with product managers, data scientists, and business intelligence teams to implement data models, data visualizations, and analytics solutions.
- Mentorship: Provide mentorship and technical leadership to junior engineers, helping them improve their skills in cloud data technologies, coding standards, and engineering best practices.
- Cloud Migration: Assist in migrating on-premises data infrastructure to Azure Stack, working closely with infrastructure teams to ensure a seamless transition.
Required Skills & Qualifications:
- Experience: 3-8 years of experience in data engineering, including at least 2 years working specifically with Azure Stack and Azure cloud services in a hybrid environment.
- Azure Stack Expertise: Proficient in Azure Stack Hub, Azure Data Factory, Azure SQL Database, Azure Synapse Analytics, Azure Databricks, and Azure Blob Storage.
- Programming Skills: Proficiency in SQL, Python, or similar scripting languages for building data pipelines and processing large datasets.
- ETL Tools: Experience building and optimizing ETL workflows using Azure Data Factory, Apache Airflow, or similar tools.
- Data Integration: Experience with data integration tools across both structured and unstructured data sources, and with technologies such as Kafka, Spark, or Hadoop.
- Data Warehousing & Storage: Strong understanding of data warehousing concepts (e.g., star/snowflake schemas) and expertise with Azure Data Lake and SQL/NoSQL storage solutions.
- Big Data Technologies: Familiarity with big data technologies such as Apache Hadoop, Apache Spark, Databricks, and Kafka is a plus.
- Soft Skills: Excellent analytical and problem-solving abilities; strong communication skills for articulating technical details to non-technical stakeholders; ability to work collaboratively in a distributed team environment; self-motivated, proactive, and able to work with minimal supervision; strong attention to detail, ensuring data quality and security.