Posted: 1 week ago
Platform: On-site
Full Time
About Traya Health:
Traya is an Indian direct-to-consumer hair care brand offering holistic treatment for consumers dealing with hair loss. The company provides personalised consultations to determine the root cause of an individual's hair fall, along with a range of hair care products curated from a combination of Ayurveda, Allopathy, and Nutrition. Traya's secret lies in the power of diagnosis: our platform analyses the patient's hair and health history to identify the root cause of hair fall, and delivers customised hair kits right to their doorstep. A strong adherence system, built on medically trained hair coaches and proprietary tech, guides customers through their hair growth journey and helps them stay on track. Traya was founded by Saloni Anand, a techie turned marketer, and Altaf Saiyed, a Stanford Business School alumnus.

Our Vision:
Traya was created with a global vision: to create awareness around hair loss and de-stigmatise it, while empathising with customers about its emotional and psychological impact, and, most importantly, to combine three different sciences (Ayurveda, Allopathy, and Nutrition) into the perfect holistic solution for hair loss patients.

Role Overview:
As a Senior Data Engineer, you will architect, build, and maintain the data infrastructure that powers critical business decisions. You will work closely with data scientists, analysts, and product teams to design and implement scalable solutions for data processing, storage, and retrieval. Your work will directly impact our ability to leverage data for business intelligence, machine learning initiatives, and customer insights.

Key Responsibilities:
● Design, build, and maintain our end-to-end data infrastructure on AWS and GCP cloud platforms
● Develop and optimize ETL/ELT pipelines to process large volumes of data from multiple sources (see the sketch after this list)
● Build and support data pipelines for reporting, analytics, and machine learning applications
● Implement and manage streaming data solutions using Kafka and other technologies
● Design and optimize database schemas and data models in ClickHouse and other databases
● Develop and maintain data workflows using Apache Airflow and similar orchestration tools
● Write efficient, maintainable, and scalable code using PySpark and other data processing frameworks
● Collaborate with data scientists to implement ML infrastructure for model training and deployment
● Ensure data quality, reliability, and security across all data platforms
● Monitor data pipelines and implement proactive alerting systems
● Troubleshoot and resolve data infrastructure issues
● Document data flows, architectures, and processes
● Mentor junior data engineers and contribute to establishing best practices
● Stay current with industry trends and emerging technologies in data engineering
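To illustrate the kind of pipeline work these responsibilities describe, here is a minimal sketch of a daily ETL flow: an Airflow DAG submitting a PySpark aggregation job. It assumes Airflow 2.4+ with the apache-spark provider installed; the DAG id, file paths, bucket names, and connection id are hypothetical placeholders, not Traya's actual setup.

    # dags/daily_orders_etl.py -- illustrative Airflow DAG (hypothetical names throughout)
    from datetime import datetime

    from airflow import DAG
    from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

    with DAG(
        dag_id="daily_orders_etl",      # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule="@daily",              # Airflow 2.4+; older 2.x uses schedule_interval
        catchup=False,
    ) as dag:
        # Submit the PySpark job below through the cluster's spark-submit.
        transform = SparkSubmitOperator(
            task_id="transform_orders",
            application="jobs/transform_orders.py",  # hypothetical script path
            conn_id="spark_default",
        )

    # jobs/transform_orders.py -- the PySpark job the DAG submits (illustrative only)
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("transform_orders").getOrCreate()

    # Read raw order events from object storage, aggregate daily revenue per SKU,
    # and write the result back as Parquet for downstream analytics.
    orders = spark.read.json("s3a://raw-bucket/orders/")  # hypothetical source bucket
    daily = (
        orders
        .withColumn("order_date", F.to_date("created_at"))
        .groupBy("order_date", "sku")
        .agg(F.sum("amount").alias("revenue"))
    )
    daily.write.mode("overwrite").parquet("s3a://curated-bucket/daily_revenue/")

In a production setup the same pattern extends naturally: Kafka topics feed the raw layer, data-quality checks run as additional DAG tasks, and the curated output loads into ClickHouse or BigQuery for analytics.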
Qualifications:
Required
● Bachelor's degree in Computer Science, Engineering, or related technical field (Master's preferred)
● 5+ years of experience in data engineering roles
● Strong expertise in AWS and/or GCP cloud platforms and services
● Proficiency in building data pipelines using modern ETL/ELT tools and frameworks
● Experience with stream processing technologies such as Kafka
● Hands-on experience with ClickHouse or similar analytical databases
● Strong programming skills in Python and experience with PySpark
● Experience with workflow orchestration tools like Apache Airflow
● Solid understanding of data modeling, data warehousing concepts, and dimensional modeling
● Knowledge of SQL and NoSQL databases
● Strong problem-solving skills and attention to detail
● Excellent communication skills and ability to work in cross-functional teams

Preferred
● Experience in D2C, e-commerce, or retail industries
● Knowledge of data visualization tools (Tableau, Looker, Power BI)
● Experience with real-time analytics solutions
● Familiarity with CI/CD practices for data pipelines
● Experience with containerization technologies (Docker, Kubernetes)
● Understanding of data governance and compliance requirements
● Experience with MLOps or ML engineering

Technologies:
● Cloud Platforms: AWS (S3, Redshift, EMR, Lambda), GCP (BigQuery, Dataflow, Dataproc)
● Data Processing: Apache Spark, PySpark, Python, SQL
● Streaming: Apache Kafka, Kinesis
● Data Storage: ClickHouse, S3, BigQuery, PostgreSQL, MongoDB
● Orchestration: Apache Airflow
● Version Control: Git
● Containerization: Docker, Kubernetes (optional)

What We Offer:
● Competitive salary and comprehensive benefits package
● Opportunity to work with cutting-edge data technologies
● Professional development and learning opportunities
● Modern office in Mumbai with great amenities
● Collaborative and innovation-driven culture
● Opportunity to make a significant impact on company growth