Data Engineer - Technical Lead
Location: Gurgaon (Hybrid/On-site)
Department: Data Engineering
Reports To: Project Manager / Client Stakeholders
Type: Full-Time
About the Client
The client is a leading data and AI/ML solutions provider, partnering with organizations across India and Australia to drive business transformation through data-driven insights. With a decade-long track record and partnerships with technology leaders such as AWS, Snowflake, Google Cloud Platform (GCP), and Databricks, the client delivers custom solutions that help enterprises reach higher data maturity and better business outcomes.
Role Overview
As a Technical Lead - Data Engineer, you will play a pivotal role in designing, developing, and leading complex data projects on Google Cloud Platform and other modern data stacks. You will partner with cross-functional teams, drive architectural decisions, and ensure the delivery of scalable, high-performance data solutions aligned with business goals.
Key Responsibilities
- Lead the design, development, and implementation of robust data pipelines, data warehouses, and cloud-based architectures.
- Collaborate with business and technical teams to identify problems, define methodologies, and deliver end-to-end data solutions.
- Own project modules, ensuring complete accountability for scope, design, and delivery.
- Develop technical roadmaps and architectural vision for data projects, making critical decisions on technology selection, design patterns, and implementation.
- Implement and optimize data governance frameworks on GCP.
- Integrate GCP data services (BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, and generative AI offerings) with platforms such as Snowflake.
- Write efficient, production-grade Python and SQL, and build pipelines with ETL and orchestration tools.
- Deploy containerized solutions on Google Kubernetes Engine (GKE) for scalable workloads.
- Apply expertise in PySpark (batch and real-time), Kafka, and advanced data querying in high-volume, distributed data environments (see the streaming sketch after this list).
- Monitor, optimize, and troubleshoot system performance, ensuring parallelism, concurrency, and resilience.
- Reduce job run times and resource utilization through architectural optimization.
- Develop and optimize data warehouses, including schema design and data modeling.
- Mentor team members while staying hands-on as an individual contributor, and ensure successful project delivery.
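To illustrate the kind of streaming pipeline work described above, here is a minimal, hedged PySpark Structured Streaming sketch that consumes JSON events from Kafka and aggregates them in near real time. The broker address, topic name, and event schema are illustrative assumptions rather than details of any client system, and running it requires the spark-sql-kafka connector package.

```python
# A minimal sketch, not a client implementation: consume JSON events from a
# hypothetical Kafka topic and count actions in 5-minute windows.
# Requires the spark-sql-kafka-0-10 connector on the Spark classpath.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, from_json, window
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("event-pipeline-sketch").getOrCreate()

# Hypothetical event schema, assumed for illustration.
event_schema = StructType([
    StructField("user_id", StringType()),
    StructField("action", StringType()),
    StructField("event_time", TimestampType()),
])

# Broker address and topic name are placeholders.
raw = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "localhost:9092")
    .option("subscribe", "events")
    .load()
)

# Parse the JSON payload, then count actions per 5-minute window,
# tolerating up to 10 minutes of late-arriving data.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)
counts = (
    events.withWatermark("event_time", "10 minutes")
    .groupBy(window(col("event_time"), "5 minutes"), col("action"))
    .agg(count("*").alias("n"))
)

# Console sink for demonstration only; a production job would typically
# write to BigQuery or Parquet on GCS instead.
query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```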
Required Skills & Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Extensive hands-on experience with GCP data services (BigQuery, Dataflow, Dataproc, Cloud Composer, Vertex AI Studio, and generative AI offerings).
- Proven experience with Snowflake integration and data governance on GCP.
- Strong programming skills in Python and SQL, plus hands-on experience with ETL and orchestration tools (a minimal orchestration sketch follows this list).
- Proficiency in PySpark (batch and real-time), Kafka, and data querying tools.
- Experience deploying containerized workloads on Google Kubernetes Engine (GKE).
- Demonstrated ability to work with large, distributed datasets, optimizing for performance and scalability.
- Excellent communication skills for effective collaboration with internal teams and client stakeholders.
- Strong documentation skills, including the ability to articulate design and business objectives.
- Ability to balance short-term deliverables with long-term technical sustainability.
- Experience with AWS, Databricks, and other cloud data platforms.
- Prior leadership experience in data engineering teams.
- Exposure to AI/ML solution delivery in enterprise settings.
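As a rough illustration of the orchestration experience listed above, the following is a minimal Airflow DAG of the kind Cloud Composer (GCP's managed Airflow) schedules. The DAG id, task names, and helper functions are hypothetical placeholders, and the `schedule` parameter assumes Airflow 2.4 or newer.

```python
# A minimal sketch, assuming Airflow 2.4+: a daily two-step pipeline with an
# extract -> transform dependency. All names are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull data from a source system.
    print("extracting...")


def transform():
    # Placeholder: clean and model the extracted data.
    print("transforming...")


with DAG(
    dag_id="daily_pipeline_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run the transform only after the extract succeeds.
    extract_task >> transform_task
```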
Why Join
- Opportunity to lead high-impact data projects for a reputed client in a fast-growing data consulting environment.
- Work with cutting-edge technologies and global enterprise clients.
- Collaborative, innovative, and growth-oriented culture.