Role: Senior Data Platform Engineer
Department: Data Platform
Location: Bengaluru [Hybrid]
Experience: 5+ YOE
About the Role
We are looking for an experienced Senior Data Platform Engineer to architect, build, and scale our data platform using Databricks or an equivalent framework, and to own the development of our reporting and analytics modules. You will play a critical role in building robust pipelines, ensuring data reliability and governance, and enabling analytics and ML workflows as we build the next phase of our reporting and analytics modules.
Key Responsibilities
- Design, build, and maintain scalable, performant data pipelines (batch & streaming) on Databricks using Spark, Delta Lake, structured streaming, or equivalent frameworks.
- Implement and manage data architecture (ingestion, storage, transformations, modeling) according to best practices, e.g. bronze/silver/gold medallion layering (see the sketch after this list).
- Optimize data pipelines for performance, cost, and reliability; manage cluster configurations, job scheduling, resource utilization.
- Collaborate with data science, product, and engineering teams: provide data access, enable self-serve analytics/ML, ensure scalability & reliability for growing workloads.
- Provide technical leadership: participate in design reviews, mentor junior/mid-level data engineers, and evangelize data best practices across teams.
- Troubleshoot production issues, monitor data platform health, proactively identify and resolve bottlenecks or failures.
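For illustration, the bronze/silver/gold layering mentioned above might look like the following minimal PySpark sketch. This is not our actual codebase: it assumes a Spark environment with Delta Lake configured, and the paths, column names, and table layout are hypothetical.

```python
# Illustrative only: a minimal bronze -> silver batch transform on Delta Lake.
# Paths and schema are hypothetical examples, not this role's real tables.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("bronze_to_silver").getOrCreate()

# Read raw events landed in the bronze layer.
bronze = spark.read.format("delta").load("/lakehouse/bronze/events")

# Cleanse and conform: deduplicate, cast types, keep valid rows.
silver = (
    bronze
    .dropDuplicates(["event_id"])
    .withColumn("event_ts", F.to_timestamp("event_ts"))
    .filter(F.col("event_id").isNotNull())
)

# Write the curated table to the silver layer for downstream analytics.
silver.write.format("delta").mode("overwrite").save("/lakehouse/silver/events")
```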
What We're Looking For (Requirements)
- 4-6+ years of experience in data engineering, data platform, or similar roles.
- Strong hands-on expertise with Databricks: Spark (PySpark / Scala), Delta Lake, job scheduling/workflows, data transformations.
- Proficiency in Python, SQL, and strong understanding of distributed data processing, storage formats, metadata management.
- Experience building/maintaining large-scale data pipelines (batch + streaming).
- Familiarity with cloud platforms (e.g. AWS, Azure, GCP): storage (S3/ADLS), compute, networking, permissions, and cloud-native data architecture.
- Understanding of data governance, security, access controls, and data compliance considerations.
- Ability to write clean, maintainable, well-documented code; follow best practices around testing and version control.
Preferred / Nice-to-Have:
- Experience with data modeling (star schema, snowflake, dimensional modeling), data warehousing or lakehouse design.
- Experience with Django and Django REST Framework, or similar API development frameworks.
- Familiarity with data quality frameworks, observability, monitoring, lineage tracking.
- Exposure to ML/AI pipelines, feature stores, real-time analytics, or streaming-first architectures.
- Good communication skills, ability to work with cross-functional teams, mentor others, and handle high-ownership tasks.
What You Can Expect:
- Ownership and autonomy: You will own and drive execution end-to-end.
- Culture: We value speed and expect you to move fast and stay updated with the latest technologies. We give significant ownership with no micro-management. People grow extremely fast at ClickPost when they focus on impact and take initiative. Our team thrives on transparency.
- Compensation: Competitive salary plus performance bonus.
Benefits (for full-time roles):
- Health insurance
- Generous vacation policy
- Learning and development budget
- Team events and company offsites
- Company laptop and devices
- Maternity and paternity benefits
Inside Look: What Our Engineering Team Has Built
- Scaling Smart: How We Slashed Cloud Costs with eBPF
- Shipping Smarter: Building an AI NDR Agent from Scratch
- Startup at Scale: Our Real-Time Product Monitoring Stack