Posted: 2 days ago
Work from Office
Full Time
- Design, develop, and operate scalable and maintainable data pipelines in the Azure Databricks environment
- Develop all technical artefacts as code, implemented in professional IDEs, with full version control and CI/CD automation
- Work within a global delivery footprint, providing cross-functional data engineering support across Manufacturing & Engineering domains
- Collaborate with business stakeholders, functional IT partners, product owners, architects, ML/AI engineers, and Power BI developers
Main Tasks:
- Design scalable batch and streaming pipelines in Azure Databricks using PySpark and/or Scala (see the streaming sketch after this list)
- Implement ingestion from structured and semi-structured sources (e.g., SAP, APIs, flat files)
- Implement use-case-driven dimensional models (star/snowflake schema) tailored to Manufacturing & Engineering (M&E) and Quality needs
- Ensure compatibility with reporting tools (e.g., Power BI) via curated data marts and semantic models
- Implement enterprise-level data warehouse models (domain-driven 3NF models) for M&E and Quality data, in close alignment with data engineers for other business domains
- Develop and apply master data management strategies (e.g., Slowly Changing Dimensions; see the SCD Type 2 sketch after this list)
- Develop automated data validation tests using established testing frameworks
- Monitor pipeline health, identify anomalies, and implement quality thresholds
- Develop and structure pipelines using modular, reusable code in a professional IDE
- Apply test-driven development (TDD) principles with automated unit, integration, and validation tests (see the pytest sketch after this list)
- Work closely with Product Owners to refine user stories and define acceptance criteria
- Translate business requirements into data contracts and technical specifications
- Document pipeline logic, data contracts, and technical decisions in Markdown or in docs auto-generated from code
- Align designs with governance and metadata standards (e.g., Unity Catalog)
- Profile and tune data transformation performance
- Reduce job execution times and optimize cluster resource usage (see the tuning sketch after this list)
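For the batch/streaming pipeline work above, a minimal PySpark Structured Streaming sketch is shown below. The source path, schema, checkpoint location, and table name are hypothetical placeholders, not specifics from this role.

```python
# Minimal sketch of a streaming ingestion pipeline in PySpark.
# All paths, the schema, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Read a stream of semi-structured JSON files as they land (e.g., API extracts).
raw = (
    spark.readStream
    .format("json")
    .schema("order_id STRING, qty INT, event_ts TIMESTAMP")  # streaming reads need an explicit schema
    .load("/mnt/landing/orders/")
)

# Light transformation: derive a processing date for downstream partitioning.
curated = raw.withColumn("load_date", F.to_date(F.col("event_ts")))

# Write incrementally to a Delta table; the checkpoint enables exactly-once progress tracking.
query = (
    curated.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/orders/")
    .outputMode("append")
    .trigger(availableNow=True)  # drain available data, then stop (batch-style run)
    .toTable("bronze.orders")
)
query.awaitTermination()
```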
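The Slowly Changing Dimensions task could, for instance, be handled with a Delta Lake MERGE. The sketch below shows a simplified SCD Type 2 flow; the supplier dimension and all table and column names are assumptions for illustration.

```python
# Simplified SCD Type 2 upsert with Delta Lake; names are illustrative only.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

dim = DeltaTable.forName(spark, "silver.dim_supplier")
updates = spark.table("bronze.supplier_staging")

# Step 1: close out current rows whose tracked attributes changed.
(
    dim.alias("d")
    .merge(
        updates.alias("u"),
        "d.supplier_id = u.supplier_id AND d.is_current = true",
    )
    .whenMatchedUpdate(
        condition="d.name <> u.name OR d.city <> u.city",
        set={"is_current": "false", "end_date": "current_date()"},
    )
    .execute()
)

# Step 2: insert new versions for changed suppliers and brand-new suppliers.
new_rows = (
    updates.join(
        spark.table("silver.dim_supplier").filter("is_current = true"),
        "supplier_id",
        "left_anti",  # keep only suppliers with no current row left
    )
    .withColumn("is_current", F.lit(True))
    .withColumn("start_date", F.current_date())
    .withColumn("end_date", F.lit(None).cast("date"))
)
new_rows.write.format("delta").mode("append").saveAsTable("silver.dim_supplier")
```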
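For the TDD point, a minimal pytest sketch against a local SparkSession follows. The transformation under test (`add_load_date`) is a hypothetical example, not a function from this role's codebase.

```python
# TDD-style unit test for a small PySpark transformation, using pytest.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F


def add_load_date(df):
    """Transformation under test: derive load_date from event_ts."""
    return df.withColumn("load_date", F.to_date(F.col("event_ts")))


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps tests fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_add_load_date(spark):
    df = spark.createDataFrame(
        [("A1", "2024-05-01 10:00:00")], ["order_id", "event_ts"]
    ).withColumn("event_ts", F.to_timestamp("event_ts"))

    result = add_load_date(df).select("load_date").first()[0]

    assert str(result) == "2024-05-01"
```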
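For the performance-tuning tasks, a short sketch of common Spark levers (adaptive query execution, broadcast joins, partitioned writes). Configuration choices and table names are illustrative assumptions, not prescribed settings.

```python
# Illustrative Spark tuning levers; values would be set per workload.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Let adaptive query execution coalesce shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

fact = spark.table("silver.fact_production")
dim = spark.table("silver.dim_machine")

# Broadcast the small dimension to avoid shuffling the large fact table.
joined = fact.join(F.broadcast(dim), "machine_id")

# Inspect the physical plan to confirm the broadcast join was chosen.
joined.explain()

# Partition the output by date so downstream reads can prune files.
(
    joined.write.format("delta")
    .mode("overwrite")
    .partitionBy("load_date")
    .saveAsTable("gold.production_by_machine")
)
```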
Qualifications
Degree in Computer Science, Data Engineering, Information Systems, or related discipline.
Certifications in software development and data engineering (e.g., Databricks DE Associate, Azure Data Engineer, or relevant DevOps certifications).
3-6 years of hands-on experience in data engineering roles in enterprise environments. Demonstrated experience building production-grade codebases in IDEs, with test coverage and version control.
Proven experience implementing complex data pipelines and contributing to full-lifecycle data projects, from development through deployment.
Experience in at least one business domain, such as Manufacturing & Engineering (M&E) and Quality, or a comparable field.
Experience working in international teams across multiple time zones and cultures, preferably with teams in India, Germany, and the Philippines.
Continental