Posted: 5 days ago
Hybrid | Full Time
We are seeking an experienced Data Engineer (8-12 years) to design, build, and optimize scalable data platforms and pipelines. This role is critical for establishing robust data architecture, ensuring high data quality, and enabling advanced analytics. You will collaborate with architects, data scientists, and business stakeholders to deliver secure, efficient, and future-ready data solutions.
Data Architecture & Modeling
Design and implement data models (conceptual, logical, physical) for structured and semi-structured data.
Contribute to data architecture decisions, ensuring scalability, performance, and compliance.
Define and enforce data governance, lineage, and quality frameworks.
Data Engineering
Develop and maintain ETL/ELT pipelines using PySpark, Python, SQL, and Azure Data Factory.
Optimize data storage and retrieval in ADLS, Delta Lake, and Azure SQL for performance and cost efficiency.
Implement data partitioning, indexing, and caching strategies for large-scale datasets.
Test Automation & Data Quality
Build automated testing frameworks for data pipelines to validate transformations, schema changes, and data integrity.
Implement unit tests, integration tests, and regression tests for data workflows.
Establish data profiling, validation, and cleansing processes to maintain high-quality datasets.
Monitor and troubleshoot data pipelines for reliability and performance using automated alerts.
Cloud & Platform Expertise
Build and manage data solutions on Azure Cloud, leveraging Databricks for big data processing.
Integrate data from multiple sources including SAP HANA, SAP BW, and external systems into cloud platforms.
Work with Snowflake for advanced analytics and cross-platform data sharing.
DevOps & CI/CD
Implement CI/CD pipelines using Azure DevOps and GitHub for automated deployments.
Ensure version control and collaborative development practices across teams.
Agile Delivery
Participate in Agile ceremonies, contributing to sprint planning, backlog refinement, and continuous improvement.
Collaborate with cross-functional teams including data scientists, analysts, and business stakeholders.
AI/ML & MLOps (Nice to Have)
Collaborate with data scientists to prepare and deliver high-quality datasets for machine learning models.
Implement feature engineering pipelines and manage model training and deployment workflows on Databricks.
Support MLOps practices including model versioning, monitoring, and automated retraining.
Optimize ML workflows for scalability and cost efficiency in Azure Databricks.
Primary: SQL, PySpark, Python, Databricks, Azure Cloud, Azure Data Factory, ADLS, Delta Lake, Azure SQL
Secondary (Preferred): SAP HANA, SAP BW, SAP ABAP, Snowflake
Tools: GitHub, Azure DevOps, CI/CD
Methodology: Agile
Healthcare & Pharmacy experience strongly preferred.
Knowledge of pricing models in healthcare and pharmacy domains is a plus.
Thanks & Regards
Harini - Executive Recruitment
Wilco Source