Remote
Full Time
We act as an incubator for smart data products across all business areas, from sales and logistics to marketing and purchasing. This approach has allowed us to grow rapidly and launch many value-creating use cases, from the management of spare parts to the allocation of media spend.
1. End-to-End Data Product Development
- Own the full lifecycle of data solutions, from ETL/ELT pipelines to ML model deployment and performance tracking.
- Translate ambiguous business requirements into actionable technical roadmaps and deliver scalable solutions.
- Drive projects independently while collaborating with Data Engineers, MLOps engineers, and business stakeholders.
- Build, validate, and deploy ML models (classification, regression, anomaly detection, clustering) using scikit-learn pipelines.
- Apply advanced techniques (e.g., Bayesian methods, time-series forecasting, optimization) where needed.
- Ensure robust model validation, drift detection, and performance monitoring.
- Design and optimize large-scale data pipelines (batch/streaming) using PySpark, Dask, or Kafka.
- Implement cost-efficient cloud solutions on Azure with CI/CD best practices.
- Write clean, maintainable, and testable Python code following OOP principles and unit testing standards.
- Develop interactive dashboards (Streamlit, Plotly Dash, Power BI) to communicate insights.
- Deliver clear, actionable reports (EDA, model performance, data quality) for technical and non-technical audiences.
- Mentor junior team members and lead technical discussions on architecture and design.
- Advocate for agile methodologies, automation, and scalable data practices.
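To make the modeling responsibilities above concrete, here is a minimal sketch of a scikit-learn pipeline with cross-validated scoring and a hold-out check. The dataset, model choice, and parameters are purely illustrative, not taken from the posting:

```python
# Hypothetical sketch: a scikit-learn classification pipeline with
# preprocessing, cross-validated scoring, and a hold-out evaluation.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score, train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data stands in for a real business dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Bundling preprocessing and the model in one Pipeline prevents
# train/test leakage and keeps the artifact deployable as a unit.
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])

cv_scores = cross_val_score(pipe, X_train, y_train, cv=5)
pipe.fit(X_train, y_train)
print(f"CV accuracy: {cv_scores.mean():.3f}")
print(f"Hold-out accuracy: {pipe.score(X_test, y_test):.3f}")
```

The same pattern extends to the regression, anomaly-detection, and clustering work listed above by swapping the final estimator.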
2. Core Qualifications
Technical & Professional Requirements
- Experience: 8+ years in full-stack data science, covering data pipelines, ML, and deployment.
- Programming & ML:
  - Expert in Python (OOP, design patterns, unit testing).
  - Proficient in scikit-learn, pandas, PySpark, SQL.
  - Experience with TensorFlow/PyTorch (nice-to-have).
- Hands-on experience with Azure.
- Familiarity with CI/CD, Docker, Kubernetes.
- Strong feature engineering, model validation, and drift detection skills.
- Experience with real-time data (Kafka, Spark Streaming).
- Self-starter with an ownership mindset and a business-value focus.
- Excellent stakeholder management (written/verbal).
- Agile team experience with strong collaboration skills.
Bonus Skills (Preferred but Not Required)
- Graph algorithms, causal inference, or clustering.
- IoT data modeling and deployment.
- Spark ML or MLOps experience.
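The drift-detection skill called out above can be illustrated with a population stability index (PSI) check, one common approach among several; the function, thresholds, and data here are illustrative, not this team's actual tooling:

```python
# Hypothetical sketch: population stability index (PSI) for feature drift.
# A PSI above ~0.2 is conventionally read as significant drift.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare two samples of one feature using shared quantile bins."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf  # cover out-of-range values
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected)
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clipping avoids log-of-zero on empty bins.
    e_frac = np.clip(e_frac, 1e-6, None)
    a_frac = np.clip(a_frac, 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
baseline = rng.normal(0.0, 1.0, 5000)  # training-time distribution
shifted = rng.normal(0.5, 1.0, 5000)   # drifted production distribution
print(f"PSI (no drift):   {psi(baseline, baseline):.3f}")
print(f"PSI (mean shift): {psi(baseline, shifted):.3f}")
```

In production such a check would run on a schedule against fresh scoring data and alert when the index crosses a chosen threshold.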
✅ High Impact: Drive decisions with data products that directly influence business strategy.
✅ Modern Tech Stack: Work with cutting-edge tools (Azure, Spark, Kubernetes).
✅ Growth & Leadership: Mentor talent, shape best practices, and grow your technical leadership.
✅ Flexibility: Remote-friendly culture with competitive compensation.