
Posted: 18 hours ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

Job Summary

We are seeking a skilled and proactive Data Engineer with 3–4 years of hands-on experience building robust data solutions on the Azure platform. The ideal candidate will have expertise in Azure Data Factory, Azure Data Lake, Azure Databricks, SQL, and Python; additional experience with Power BI or Tableau is considered a plus. You will be instrumental in developing and maintaining scalable data pipelines and ensuring efficient data integration and transformation to drive business intelligence and analytics initiatives.

Key Responsibilities

• Design, build, and manage data pipelines and data integration workflows using tools such as Azure Data Factory and Azure Databricks.
• Develop scalable ETL/ELT processes to ingest and transform structured and unstructured data from various internal and external sources.
• Implement data warehousing solutions using Azure Synapse Analytics, Delta Lake, or other suitable Azure storage layers.
• Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver data models that support advanced analytics and reporting.
• Optimize data workflows for performance and cost efficiency in a cloud-first environment.
• Manage data in Azure Data Lake Storage Gen2, including partitioning, schema management, and access control.
• Ensure data quality, security, and compliance with organizational and industry standards.
• Monitor and troubleshoot data pipeline issues; implement logging, alerting, and recovery mechanisms.
• Maintain comprehensive documentation of pipelines, data flow diagrams, and design specifications.
• Contribute to the development and enforcement of data governance and metadata management practices.

Required Skills and Qualifications

• 3–4 years of data engineering experience, preferably in a cloud environment.
• Strong proficiency in:
 • Python for data manipulation and pipeline orchestration
 • SQL for querying and transforming large datasets
 • Azure Data Factory, Azure Data Lake Storage Gen2, Azure Databricks, and Azure Synapse Analytics
• Experience building Delta Lake tables, working with Apache Spark, and optimizing performance within Databricks notebooks or jobs.
• Understanding of DevOps for data using tools such as Azure DevOps, Git, and CI/CD pipelines for deploying data workflows.
• Familiarity with Power BI or Tableau for data validation or collaboration with reporting teams.
• Experience handling semi-structured data (JSON, Parquet, CSV, XML).
• Knowledge of data governance, security (RBAC, ACLs), and compliance best practices.

Preferred/Bonus Skills

• Working knowledge of Microsoft Purview for data cataloging and governance.
• Exposure to event-driven architecture using Event Hubs, Service Bus, or Azure Functions.
• Experience with streaming data pipelines using Azure Stream Analytics or Structured Streaming in Databricks.
• Familiarity with cost monitoring and performance tuning in cloud environments.

Experience: 3 to 4 years
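To give a concrete sense of the pipeline work described above, here is a minimal PySpark sketch of an ETL step of the kind a Databricks-based Data Engineer might write: it reads semi-structured JSON from Azure Data Lake Storage Gen2 and writes a partitioned Delta Lake table. All paths, container names, and column names are hypothetical, and it assumes a Databricks cluster where the `spark` session and the Delta/ADLS connectors are already configured.

# Illustrative sketch only; paths and columns are hypothetical.
# Assumes a Databricks environment where `spark` is preconfigured with
# Delta Lake support and access to the ADLS Gen2 account.
from pyspark.sql import functions as F

raw_path = "abfss://raw@examplelake.dfs.core.windows.net/events/"          # hypothetical landing zone
delta_path = "abfss://curated@examplelake.dfs.core.windows.net/events/"    # hypothetical curated zone

# Ingest semi-structured JSON from the data lake.
events = spark.read.json(raw_path)

# Light transformation: derive a partition column and drop duplicate events.
curated = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .dropDuplicates(["event_id"])
)

# Write a partitioned Delta Lake table for downstream Synapse / Power BI use.
(
    curated.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .save(delta_path)
)

In practice a step like this would typically be wrapped in a Databricks job or notebook activity and orchestrated from Azure Data Factory, with logging and alerting around it as noted in the responsibilities above.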

