Posted: 3 days ago
Work from Office
Full Time
Job Duties: Wyndham Group of Hotels is advancing its modern data platform on Databricks (AWS) and is seeking a senior offshore data engineering lead to build high-performance, production-grade data pipelines. The role is not initially migration-focused but will evolve in that direction. The current mandate is to ingest semi-structured JSON data dropped via SFTP to S3, transform and validate it in Databricks, and deliver curated datasets into Amazon Redshift.
This is a player-coach role, ideal for a technically hands-on leader capable of delivering pipelines while mentoring junior engineers and ensuring best practices across the offshore team.
Initial Scope and Responsibilities:
Develop high-performance PySpark pipelines in Databricks, orchestrated using Databricks Workflows.
Ingest JSON files from S3, flatten and transform them, and push curated outputs to Amazon Redshift.
Implement data quality and auditing frameworks, including schema validation, record counts, and exception tracking.
Leverage and extend established CI/CD processes using Databricks Asset Bundles (DAB), Git, and release pipelines.
Monitor pipeline performance and apply best practices to ensure cost-effective Databricks usage.
Serve as the offshore lead, participating in design discussions, daily standups, and technical governance with client teams.
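As a rough illustration of the flatten-and-transform step described above (not part of the role description itself), nested JSON records can be normalized into flat column names before loading. The record fields and separator here are hypothetical examples:

```python
def flatten_json(record, parent_key="", sep="_"):
    """Recursively flatten nested dicts into a single-level dict,
    joining nested keys with `sep` (e.g. guest.name -> guest_name)."""
    items = {}
    for key, value in record.items():
        new_key = f"{parent_key}{sep}{key}" if parent_key else key
        if isinstance(value, dict):
            # Recurse into nested objects, carrying the prefixed key
            items.update(flatten_json(value, new_key, sep))
        else:
            items[new_key] = value
    return items
```

In a Databricks pipeline the same effect is typically achieved with PySpark column expressions (e.g. selecting `guest.name` as `guest_name`); the pure-Python version above just makes the naming strategy explicit.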
Long-Term Responsibilities:
Lead technical planning and execution for ETL migration from Informatica, Glue, and Lambda into Databricks.
Drive evolution of modular code patterns, pipeline standardization, and DQ reusability across multiple data domains.
Guide junior engineers and ensure high delivery velocity with sustained code quality.
Collaborate with client architects to align platform capabilities with business and analytics needs.
Minimum Skills Required:
Core Technical:
10+ years in data engineering, with 3+ years of recent hands-on Databricks (AWS) experience.
Proficiency in PySpark, Delta Lake, Databricks Workflows, and Structured Streaming (where required).
Strong experience working with semi-structured data (JSON), including flattening and normalization strategies.
Experience writing performant pipelines that push processed data to Amazon Redshift.
Deep understanding of Databricks Asset Bundles (DAB) and how they support CI/CD, environment promotion, and modular code packaging.
Familiarity with CI/CD processes in Databricks including Git integration, testing, and environment management.
Strong grasp of data quality, validation, audit logging, and exception handling strategies.
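A minimal sketch of the data-quality pattern listed above (schema validation, record counts, exception tracking); the required fields are hypothetical placeholders:

```python
# Hypothetical required schema for an incoming batch
REQUIRED_FIELDS = {"booking_id", "check_in", "check_out"}

def validate_batch(records):
    """Split a batch into valid rows and exceptions, and produce
    an audit summary of input/valid/exception record counts."""
    valid, exceptions = [], []
    for rec in records:
        missing = REQUIRED_FIELDS - rec.keys()
        if missing:
            # Track the failing record alongside the reason it failed
            exceptions.append({"record": rec, "missing": sorted(missing)})
        else:
            valid.append(rec)
    audit = {
        "input_count": len(records),
        "valid_count": len(valid),
        "exception_count": len(exceptions),
    }
    return valid, exceptions, audit
```

In practice the audit dictionary would be written to an audit log table and the exceptions routed to a quarantine location rather than silently dropped.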
Soft Skills & Leadership:
Proven experience as an offshore team lead in a global delivery model.
Ability to mentor, conduct code reviews, and enforce engineering standards.
Strong verbal and written communication skills to engage directly with client-side architects and product owners.
Comfortable working in Agile/Scrum delivery models.
Nice to Have:
Familiarity with Informatica, AWS Glue, or Lambda-based ETL pipelines.
Understanding of cost-performance trade-offs in cloud-native data processing.
Exposure to the hospitality industry or high-volume transactional data pipelines.
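For reference, a minimal Databricks Asset Bundle configuration of the kind referenced under Core Technical might look like the sketch below; the bundle name, workspace hosts, and notebook path are illustrative placeholders, not part of the posting:

```yaml
# Hypothetical databricks.yml sketch for CI/CD and environment promotion
bundle:
  name: hotel-data-pipelines

targets:
  dev:
    mode: development
    workspace:
      host: https://example-dev.cloud.databricks.com
  prod:
    mode: production
    workspace:
      host: https://example-prod.cloud.databricks.com

resources:
  jobs:
    ingest_reservations:
      name: ingest-reservations
      tasks:
        - task_key: flatten_and_load
          notebook_task:
            notebook_path: ./notebooks/flatten_and_load.py
```

Deploying with `databricks bundle deploy -t dev` versus `-t prod` is how the same packaged code is promoted across environments.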
NTT DATA, Inc.