Posted: 2 weeks ago
On-site | Full Time
## About the Opportunity
We are a high-growth enterprise in the Cloud Data & Analytics (SaaS) sector, delivering scalable analytics, reporting, and real-time insights to global customers. The team builds robust data platforms that enable product analytics, BI, ML, and operational reporting across high-throughput data flows.
Location: Mumbai, Maharashtra, India — On-site role.
## Role & Responsibilities
- Design, build, and maintain end-to-end data pipelines for batch and streaming use cases using Python, Spark, and Airflow to deliver reliable, low-latency data to downstream systems (a minimal orchestration sketch follows this list).
- Develop and optimise ETL/ELT jobs and Spark applications to improve throughput, reduce cost, and meet SLAs for data freshness.
- Implement robust streaming solutions using Kafka (or equivalent) and ensure exactly-once semantics, backpressure handling, and schema evolution support.
- Define and enforce data modelling, partitioning, and storage strategies across data lake and warehouse (e.g., S3 + Snowflake) to enable performant analytics and ML training.
- Build observability, alerting, and automated recovery for pipelines using CI/CD practices, logging, metrics, and runbook-driven incident response.
- Collaborate with Data Scientists, Product Analytics, and Engineering teams to translate requirements into production-grade data services and mentor junior engineers on best practices.
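By way of illustration, here is a minimal sketch of the kind of batch orchestration the first responsibility describes, assuming Airflow 2.x. The DAG name, file paths, bucket names, and spark-submit target are hypothetical placeholders, not details of this team's actual stack.

```python
# Minimal Airflow 2.x DAG sketch: run a Spark batch transform over the day's
# raw events, then load the curated output into the warehouse. All task names,
# paths, and job scripts below are illustrative placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_events_pipeline",   # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",       # daily batch cadence
    catchup=False,
) as dag:
    # Spark transform over the execution date's partition (placeholder job/paths).
    transform = BashOperator(
        task_id="spark_transform",
        bash_command=(
            "spark-submit --master yarn jobs/transform_events.py "
            "--date {{ ds }} --input s3://raw-bucket/events/ "
            "--output s3://curated-bucket/events/"
        ),
    )

    # Load the curated partition into the warehouse (placeholder command).
    load = BashOperator(
        task_id="load_to_warehouse",
        bash_command="python jobs/load_snowflake.py --date {{ ds }}",
    )

    transform >> load
```

In day-to-day use such a DAG would also carry retries, SLAs, and alerting hooks, which ties into the observability and incident-response responsibilities listed above.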
## Skills & Qualifications
*Must-Have*
- Proven production experience building data pipelines with Python
- Strong SQL skills for analytics, data modelling, and performance tuning
- Experience with Apache Spark for large-scale batch/stream processing
- Hands-on experience with Apache Airflow for orchestration
- Experience implementing streaming architectures with Apache Kafka
- Practical knowledge of AWS cloud data services (S3, EMR, Glue, or similar)
*Preferred*
- Experience with Snowflake or other cloud data warehouses
- Familiarity with dbt for transformation and data testing
- Experience with containerisation/automation (Kubernetes, CI/CD pipelines)
## Benefits & Culture Highlights
- Collaborative, engineering-led culture with strong emphasis on code quality, testing, and observability.
- On-site Mumbai team environment that accelerates cross-functional collaboration and career growth.
- Opportunities for upskilling (conferences, certifications) and working on large-scale, high-impact data products.
We are looking for a hands-on Senior Data Engineer who thrives in building scalable data platforms, enjoys problem-solving across the full data lifecycle, and is eager to drive technical excellence in an on-site Mumbai team.
Job Type: Full-time
Pay: ₹1,500,000.00 - ₹1,800,000.00 per year
Work Location: In person
Employer: FullThrottle Labs Pvt Ltd
Speak with the employer: +91 9008078505