Experience: 5+ years
Salary: Confidential (based on experience)
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time, Permanent Position
(*Note: This is a requirement for one of Uplers' clients - 1digitalstack.ai)

What do you need for this opportunity?

Must-have skills: Java, Python, Spark, Kafka, Flink, Apache Beam, Trino, SQL

1digitalstack.ai is looking for:
Role: Senior Data Engineer
Experience: 5-7 Years
Location: Remote (India)
About 1DigitalStack.ai
1DigitalStack.ai combines AI and deep eCommerce data to help global brands grow faster on online marketplaces. Our platforms deliver advanced analytics, actionable intelligence, and media automation — enabling brands to optimize visibility, efficiency, and sales performance at scale.

We partner with India’s top consumer companies — Unilever, Marico, Coca-Cola, Tata Consumer, Dabur, and Unicharm — across 125+ marketplaces globally.

Backed by leading venture investors and powered by a 220+ member team, we’re in our $5–10M growth journey, scaling rapidly across categories and geographies to redefine how brands win on digital shelves.

🔗 Check out more at www.1digitalstack.ai
About Role
This is a high-impact, hands-on engineering role owning the core data systems that power our analytics, AI, and automation stack. You’ll work closely with the CTO and Engineering Leads and independently manage large, high-throughput data pipelines that process millions of events.
Responsibilities:
- Build and maintain high-throughput, real-time data pipelines using Kafka/Pulsar with Spark,
Flink, and distributed compute engines.
- Design fault-tolerant systems with zero-data-loss principles — checkpointing, replay logic,
DLQs, deduplication, and back-pressure handling.
- Implement data observability — quality checks, SLA alerts, anomaly detection, lineage, and
metadata insights.
- Design and manage Iceberg-based lakehouse tables (Polaris/Gravitino catalogs, schema
evolution, compaction).
- Build fast OLAP layers using ClickHouse / StarRocks.
- Model data across bronze → silver → gold layers for downstream teams.
- Migrate and modernize legacy pipelines into scalable, distributed workflows.
- Orchestrate ETL workloads using Airflow, DBT, Dagster, SQLMesh.
- Optimize SQL transformations and distributed execution across Trino/Spark.
- Ensure strict security and governance across all data layers — access control, encryption,
auditability.
- Collaborate with backend, analytics, and platform teams for seamless data delivery.
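To illustrate the fault-tolerance patterns named above (deduplication and dead-letter queues), here is a minimal, hypothetical sketch — not 1DigitalStack's actual pipeline code. A real deployment would consume from Kafka/Pulsar with checkpointed offsets; this in-memory version only shows the shape of the logic:

```python
# Hypothetical sketch of dedup + dead-letter handling, not production code.
from dataclasses import dataclass, field

@dataclass
class PipelineResult:
    processed: list = field(default_factory=list)
    dead_letters: list = field(default_factory=list)

def run_pipeline(events, transform, seen_ids=None):
    """Consume events, skipping duplicates by 'id' and routing
    failures to a dead-letter queue instead of dropping them."""
    seen = set(seen_ids or [])
    result = PipelineResult()
    for event in events:
        key = event.get("id")
        if key in seen:
            # Deduplication: at-least-once delivery may replay events.
            continue
        seen.add(key)
        try:
            result.processed.append(transform(event))
        except Exception as exc:
            # Zero-data-loss principle: keep the failing event for later replay.
            result.dead_letters.append({"event": event, "error": str(exc)})
    return result
```

For example, replayed duplicates are skipped and a malformed event lands in `dead_letters` with its error, rather than being silently lost.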
Requirements
Core Technical Skills
- Extremely strong SQL — window functions, query planning, optimization.
- High comfort working with distributed & parallel workloads.
- Hands-on experience with several of these technologies: Apache Spark, Apache Flink,
Trino, Apache Kafka, Apache Pulsar, Apache Beam
- Advanced experience in Python (preferred) or Java (strong fundamentals).
- Strong understanding of Parquet, Apache Iceberg, and Iceberg REST catalogs (Polaris /
Gravitino).
- Experience with OLAP databases — ClickHouse / StarRocks.
- Experience with semantic layers — Cube.js or similar.
- Strong experience building pipelines with Airflow, DBT, Dagster, SQLMesh.
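As a small example of the window-function SQL the first bullet refers to, the snippet below computes a running total per brand over assumed sample data, using Python's built-in sqlite3 module (SQLite 3.25+ supports window functions) rather than any company schema:

```python
# Illustrative window-function query against an in-memory SQLite table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (brand TEXT, day INTEGER, revenue INTEGER);
    INSERT INTO sales VALUES
        ('A', 1, 100), ('A', 2, 150), ('A', 3, 120),
        ('B', 1, 200), ('B', 2, 180);
""")
# SUM(...) OVER (PARTITION BY ... ORDER BY ...) computes a per-brand
# running total without collapsing rows the way GROUP BY would.
rows = conn.execute("""
    SELECT brand, day, revenue,
           SUM(revenue) OVER (PARTITION BY brand ORDER BY day) AS running_total
    FROM sales
    ORDER BY brand, day
""").fetchall()
```

The same pattern (partitioned running aggregates, ranks, lags) carries over directly to Trino and Spark SQL.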
Foundational Strengths
- Solid understanding of data structures & algorithms — sorting, searching, memory models.
- Strong grasp of OLTP vs OLAP, indexing, query execution, and storage formats.
- Ability to debug distributed systems end-to-end (compute, storage, network, orchestration).
- Familiarity with cloud environments, containerization (Docker), and monitoring.
- Experience with large-scale data — high throughput, billions of rows, large parallel workloads.
- Awareness of cost optimization in compute & storage.
Good to Have
- Experience with emerging stream processors — RisingWave, Arroyo.
- Kubernetes, Terraform, or cloud-native big-data stacks.
Mindset
- Strong ownership — takes systems from design → build → monitor.
- Self-driven, independent, and comfortable making technical decisions.
- High attention to reliability, data accuracy, and operational excellence.
- Naturally grows into broader technical responsibility as the platform scales.
Why 1DS is a great choice
- High-trust, no-politics culture — we value communication, ownership, and accountability
- Collaborative, ego-free team — building together is in our DNA
- Learning-first environment — mentorship, peer reviews, and exposure to real business impact
- Modern stack + autonomy — your voice shapes how we build
- VC-funded & scaling fast — 250+ strong, building from India for the world
How to apply for this opportunity?
- Step 1: Click on Apply and register or log in on our portal.
- Step 2: Complete the screening form and upload your updated resume.
- Step 3: Increase your chances of getting shortlisted and meet the client for the interview!
About Uplers:
Our goal is to make hiring reliable, simple, and fast. Our role is to help all our talents find and apply for relevant contractual onsite opportunities and progress in their careers. We will support you with any grievances or challenges you may face during the engagement.

(Note: There are many more opportunities on the portal apart from this one. Depending on the assessments you clear, you can apply for them as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!