Software Engineer - Data

5 - 10 years

7 - 12 Lacs

Posted: 1 hour ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

The CoinDCX Journey: Building the Future of Finance

At CoinDCX, our mission is clear: to make crypto and blockchain accessible to every Indian and enable them to participate in the future of finance.
As India's first crypto unicorn, valued at $2.45B, we are reshaping the financial ecosystem by building safe, transparent, and scalable products that power adoption at scale. We believe that change starts together. It begins with bold ideas, relentless execution, and people who want to build what's next. If you're driven by purpose and thrive in environments where your work defines the next chapter of an industry, you'll feel right at home here.

About the Role

We are hiring an SDE-1 Data Engineer (Individual Contributor) to execute high-quality data engineering work across ingestion pipelines, data quality, monitoring, curated datasets, and heavy third-party/vendor integrations. This role is pure execution: writing code, fixing issues, adding validations, and ensuring reliable, timely data delivery. You will work hands-on with Spark, Databricks, Python, Kafka, AWS (S3/EC2/Lambda), and internal CDC and ingestion frameworks.

What You'll Do

1. Build & Enhance Data Pipelines (Internal + External Ingestion)
- Develop ingestion pipelines for internal data (CDC, service DBs).
- Build and maintain ingestion from external vendors and third parties, including custody providers, trading partners (TPE), banking partners, and external APIs (REST-based integrations).
- Handle pagination, rate limits, incremental loads, retries, and backoffs.
- Implement Spark-based transformations on Databricks.

2. Implement Data Quality Checks
- Add schema validations, field-level checks, and null/boundary checks.
- Maintain 99% data quality for assigned datasets.
- Quickly identify and fix data mismatches caused by source/vendor changes.

3. Monitoring, Alerts & Observability
- Configure alerts for freshness, latency, data quality, and pipeline failures.
- Add logs and metrics to improve troubleshooting.
- Ensure a low MTTR for pipeline incidents.

4. Code Quality & Delivery
- Maintain 80%+ test coverage and 0 PR hygiene rejections.
- Execute tasks with 95%+ on-time delivery.
- Provide crisp updates with minimal follow-ups.
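The pagination, rate-limit, retry, and backoff handling described above can be sketched as follows. This is a minimal illustration only: the cursor-based page shape and the injected `get_page` client are assumptions, not any specific vendor's API.

```python
import time
from typing import Callable, Iterator, Optional

def fetch_all_pages(
    get_page: Callable[[Optional[str]], dict],
    max_retries: int = 3,
    base_delay: float = 0.5,
) -> Iterator[dict]:
    """Iterate over a cursor-paginated API, retrying each page request
    with exponential backoff.

    `get_page(cursor)` is a hypothetical client call that returns
    {"items": [...], "next_cursor": str | None}; a None cursor means
    the first page, and a None next_cursor ends the iteration.
    """
    cursor: Optional[str] = None
    while True:
        for attempt in range(max_retries):
            try:
                page = get_page(cursor)
                break
            except Exception:
                if attempt == max_retries - 1:
                    raise  # exhausted retries: surface the failure
                # exponential backoff: base_delay, 2x, 4x, ...
                time.sleep(base_delay * (2 ** attempt))
        yield from page["items"]
        cursor = page.get("next_cursor")
        if cursor is None:
            return
```

In a real vendor integration, `get_page` would wrap an authenticated HTTP GET and the retry would typically be restricted to transient errors (timeouts, HTTP 429/5xx) so that permanent failures fail fast.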
You'll Excel in This Role If You Have

Must-Have
- Python + SQL proficiency
- Basic to intermediate Spark/PySpark knowledge
- Experience with ETL/ingestion pipelines
- Understanding of APIs (GET/POST, tokens, pagination)
- Strong debugging skills
- Fast execution and high ownership

Good-to-Have
- Databricks experience
- Kafka basics
- Financial/transaction data exposure
- AWS fundamentals
- Experience integrating with third-party APIs

You'll Know You're Winning When
- 3-5 pipelines are owned independently with 99.9% uptime
- External vendor integrations are functioning reliably

Collaboration, speed, and trust come alive when teams share the same space. With this belief, we operate as a work-from-office organisation. This role is based out of our Bengaluru office, where energy, alignment, and innovation move in real time.
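The field-level, null, and boundary checks the role calls for can be sketched in plain Python as below. The column names and rules are illustrative assumptions; in a Spark/Databricks pipeline the same predicates would usually be expressed as DataFrame filters or expectations.

```python
from typing import Any, Callable, Dict, List, Tuple

# Hypothetical rule set: column name -> predicate every value must satisfy.
RULES: Dict[str, Callable[[Any], bool]] = {
    "order_id": lambda v: isinstance(v, str) and len(v) > 0,       # non-null key
    "amount":   lambda v: isinstance(v, (int, float)) and v >= 0,  # boundary check
    "currency": lambda v: v in {"INR", "USDT", "BTC"},             # domain check
}

def validate_rows(rows: List[dict]) -> Tuple[List[dict], List[dict]]:
    """Split rows into (valid, errors) by applying every field-level rule.

    Each error record names the row index and the checks it failed, so
    mismatches caused by source/vendor changes can be triaged quickly.
    """
    valid, errors = [], []
    for i, row in enumerate(rows):
        failed = [col for col in RULES if not RULES[col](row.get(col))]
        if failed:
            errors.append({"row": i, "failed_checks": failed})
        else:
            valid.append(row)
    return valid, errors
```

Routing the error records to an alerting sink (rather than silently dropping rows) is what makes a quality target like 99% measurable per dataset.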

CoinDCX

Cryptocurrency

Mumbai
