Posted: 6 days ago
Work from Office | Full Time
Job Title: Data Engineer
Location: Gurugram (WFO)
Experience: 4-6 years
Department: Engineering / Data & Analytics

About Aramya
At Aramya, we're redefining fashion for India's underserved Gen X/Y women, offering size-inclusive, comfortable, and stylish ethnic wear at affordable prices. Launched in 2024, we've already achieved 40 Cr in revenue in our first year, driven by a unique blend of data-driven design, in-house manufacturing, and a proprietary supply chain. Today, with an ARR of 100 Cr, we're scaling rapidly with ambitious growth plans for the future. Our vision is bold: to build the most loved fashion and lifestyle brands across the world while empowering individuals to express themselves effortlessly. Backed by marquee investors like Accel and Z47, we're on a mission to make high-quality ethnic wear accessible to every woman. We've built a community of loyal customers who love our weekly design launches, impeccable quality, and value-for-money offerings. With a fast-moving team driven by creativity, technology, and customer obsession, Aramya is more than a fashion brand: it's a movement to celebrate every woman's unique journey.

Role Overview
We're looking for a results-driven Data Engineer to play a key role in building and scaling our data infrastructure. This individual will own our end-to-end data pipelines, backend services for analytics, and infrastructure automation, powering real-time decision-making across our business. This is a high-impact role for someone passionate about data architecture, cloud engineering, and creating a foundation for scalable insights in a fast-paced D2C environment.

Key Responsibilities
- Design, build, and manage scalable ETL/ELT pipelines using tools like Apache Airflow, Databricks, or Spark.
- Own and optimize data lakes and data warehouses on AWS Redshift (or Snowflake/BigQuery).
- Develop robust and scalable backend APIs using Python (FastAPI/Django/Flask) or Node.js.
- Integrate third-party data sources (APIs, SFTP, flat files) and ensure data validation and consistency.
- Ensure high availability, observability, and fault tolerance of data systems via logging, monitoring, and alerting.
- Collaborate with analysts, product managers, and business stakeholders to gather requirements and define data contracts.
- Implement Infrastructure-as-Code using tools like Terraform or AWS CDK to automate data workflows and provisioning.

Must-Have Skills
- Proficiency in SQL and data modeling for both OLTP and OLAP systems.
- Strong Python skills, with demonstrated experience in both backend and data engineering use cases.
- Hands-on experience with Databricks, Apache Spark, and AWS Redshift.
- Experience with Airflow, dbt, or other workflow orchestration tools.
- Working knowledge of REST APIs, backend architectures, and microservices.
- Familiarity with Docker, Git, and CI/CD pipelines.
- Experience working on AWS (S3, Lambda, ECS/Fargate, CloudWatch, etc.).

Nice-to-Have Skills
- Experience with streaming platforms like Kafka, Flink, or Kinesis.
- Exposure to Snowflake, BigQuery, or Delta Lake.
- Understanding of data governance and PII-handling best practices.
- Experience with GraphQL, gRPC, or event-driven architectures.
- Familiarity with data observability tools like Monte Carlo, Great Expectations, or Datafold.
- Prior experience in D2C, e-commerce, or high-growth startup environments.

Qualifications
- Bachelor's degree in Computer Science, Data Engineering, or a related technical discipline.
- 4-6 years of experience in data engineering roles with strong backend and cloud integration exposure.