Posted: 4 days ago
Company: Trantor
Employment type: Full Time
Experience: 5–7 years
Location: Remote (India)
We are seeking a Data Engineer to design, implement, and optimize cloud-based data pipelines using Microsoft Azure services, including Azure Data Factory (ADF), Azure Synapse Analytics, and Azure Data Lake Storage (ADLS).

Responsibilities:
Develop and maintain ETL/ELT pipelines in Azure Data Factory to ingest, transform, and load data from diverse sources (databases, APIs, flat files); a minimal sketch of triggering such a pipeline programmatically follows this list.
Design and manage data storage solutions using Azure Blob Storage and ADLS Gen2, ensuring proper partitioning, compression, and lifecycle policies for performance and cost efficiency.
Build and optimize data models and analytical queries in Azure Synapse Analytics, collaborating with data architects to support reporting and BI needs.
Ensure data quality, consistency, and reliability through validation, reconciliation, auditing, and monitoring frameworks.
Collaborate with data architects, BI developers, and business teams to define architecture, integration patterns, and performance tuning strategies.
Implement data security best practices, including encryption and role-based access control (RBAC).
Create and maintain documentation of data workflows, pipelines, and architecture to support knowledge transfer, compliance, and audits.
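For illustration only, here is a minimal sketch of triggering an ADF pipeline run from Python and polling its status, using the azure-identity and azure-mgmt-datafactory SDKs. The subscription ID, resource group, factory, pipeline name, and parameter are placeholders, not details from this posting.

```python
# Minimal sketch: trigger an Azure Data Factory pipeline run and poll its status.
# Assumes azure-identity and azure-mgmt-datafactory are installed and that the
# placeholder names below are replaced with real resources.
import time

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "rg-data-platform"     # hypothetical resource group
FACTORY_NAME = "adf-ingestion"          # hypothetical data factory
PIPELINE_NAME = "pl_ingest_sales"       # hypothetical pipeline

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a run, optionally passing pipeline parameters.
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME,
    parameters={"load_date": "2024-01-01"},  # hypothetical parameter
)

# Poll until the run leaves the Queued/InProgress states.
while True:
    status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id).status
    if status not in ("Queued", "InProgress"):
        break
    time.sleep(15)

print(f"Pipeline run {run.run_id} finished with status: {status}")
```

In practice such a run would usually be fired by an ADF trigger or an orchestration schedule; programmatic invocation like this is most useful for ad hoc reprocessing and testing.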
Requirements:

5+ years of hands-on experience in data engineering with a strong focus on Azure Data Factory, Azure Synapse Analytics, and ADLS Gen2.
Strong expertise in SQL, performance tuning, and query optimization for large-scale datasets.
Experience designing and managing data pipelines for structured and semi-structured data (CSV, JSON, Parquet, etc.).
Proficiency in data modeling (star schema, snowflake, normalized models) for analytics and BI use cases.
Practical knowledge of data validation, reconciliation frameworks, and pipeline monitoring to ensure data reliability; a minimal reconciliation sketch follows this list.
Solid understanding of data security best practices (encryption, RBAC, compliance standards like GDPR).
Strong collaboration skills, with the ability to work closely with architects, BI teams, and business stakeholders.
Excellent skills in documentation and process standardization.
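As a hedged illustration of the reconciliation checks referenced above, the sketch below compares row counts between a source table and its Synapse target. The connection strings and table names are hypothetical, and pyodbc is just one possible driver choice.

```python
# Minimal sketch: row-count reconciliation between a source system and a
# Synapse (Azure SQL) target. Connection strings and table names are
# hypothetical placeholders; pyodbc is one of several possible drivers.
import pyodbc

SOURCE_DSN = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=src;DATABASE=sales;..."
TARGET_DSN = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=synapse;DATABASE=dw;..."

def row_count(dsn: str, table: str) -> int:
    """Return the row count of a table; assumes the table name is trusted."""
    with pyodbc.connect(dsn) as conn:
        return conn.execute(f"SELECT COUNT(*) FROM {table}").fetchone()[0]

source_rows = row_count(SOURCE_DSN, "dbo.orders")       # hypothetical source table
target_rows = row_count(TARGET_DSN, "dbo.fact_orders")  # hypothetical target table

# A real framework would also reconcile checksums, key coverage, and null rates;
# this check only flags gross load discrepancies.
if source_rows != target_rows:
    raise AssertionError(
        f"Row-count mismatch: source={source_rows}, target={target_rows}"
    )
print(f"Reconciled: {source_rows} rows in both source and target")
```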
Nice to have:

Experience with Python or Scala scripting to automate ETL and data quality checks.
Exposure to Power BI or other BI tools (Tableau, Qlik) for understanding downstream analytics requirements.
Familiarity with CI/CD pipelines for data projects using Azure DevOps or Git-based workflows.
Knowledge of big data frameworks (Databricks, Spark) for large-scale transformations.
Hands-on experience with metadata management, data lineage tools, or governance frameworks.
Exposure to cloud cost optimization practices in Azure environments.
Understanding of API-based ingestion and event-driven architectures (Kafka, Azure Event Hubs); a minimal Event Hubs publishing sketch follows below.
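For the event-driven side, here is a minimal sketch of publishing a batch of JSON events to Azure Event Hubs with the azure-eventhub package. The connection string, hub name, and event payloads are placeholders, not details from this posting.

```python
# Minimal sketch: publish a small batch of JSON events to Azure Event Hubs.
# Assumes the azure-eventhub package; the connection string and hub name
# below are placeholders.
import json

from azure.eventhub import EventData, EventHubProducerClient

CONNECTION_STR = "<event-hubs-connection-string>"  # placeholder
EVENT_HUB_NAME = "orders"                          # hypothetical event hub

producer = EventHubProducerClient.from_connection_string(
    CONNECTION_STR, eventhub_name=EVENT_HUB_NAME
)

with producer:
    batch = producer.create_batch()
    for order_id in (1, 2, 3):  # hypothetical events
        batch.add(EventData(json.dumps({"order_id": order_id, "status": "new"})))
    producer.send_batch(batch)  # a downstream consumer (e.g. ADF or Spark) ingests these
```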