Posted: 1 day ago | On-site | Full Time
At Asign, we are revolutionizing the art sector with our innovative digital solutions. We are a passionate and dynamic startup dedicated to enhancing the art experience through technology. Join us in creating cutting-edge products that empower artists and art enthusiasts worldwide.
The ideal candidate will be responsible for sourcing, ingesting, transforming, and storing data, and for making it accessible and reliable for data analysis, machine learning, and reporting. You will play a key role in maintaining and evolving our data architecture and in ensuring that our data flows efficiently and securely.
Responsibilities:
● Design, develop, and maintain efficient and scalable ELT data pipelines (a minimal orchestration sketch follows this list).
● Work closely with the data science and backend teams to understand data needs and transform raw inputs into structured datasets.
● Integrate multiple data sources including APIs, web pages, spreadsheets, and databases into a central warehouse.
● Monitor, test, and continuously improve data flows for reliability and performance.
● Create documentation and establish best practices for data governance, lineage, and quality.
● Collaborate with product and tech teams to plan data models that support business and AI/ML applications.
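As an illustration of the kind of pipeline this role involves, here is a minimal ELT sketch using Prefect, one of the orchestrators named in the qualifications below. The API endpoint, table names, and SQLite destination are hypothetical stand-ins for a real source and warehouse, not details from this posting.

```python
from prefect import flow, task
import requests
import sqlite3

@task(retries=2)
def extract(url: str) -> list[dict]:
    # Pull raw records from a (hypothetical) source API.
    resp = requests.get(url, timeout=30)
    resp.raise_for_status()
    return resp.json()

@task
def load(records: list[dict], db_path: str) -> None:
    # Land raw data first (the "L" in ELT); transformation happens downstream.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS raw_artworks (id TEXT PRIMARY KEY, payload TEXT)")
    con.executemany(
        "INSERT OR REPLACE INTO raw_artworks VALUES (?, ?)",
        [(str(r.get("id")), str(r)) for r in records],
    )
    con.commit()
    con.close()

@task
def transform(db_path: str) -> None:
    # In-warehouse transformation (the "T" in ELT): derive a clean table from raw payloads.
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS artwork_ids (id TEXT PRIMARY KEY)")
    con.execute("INSERT OR IGNORE INTO artwork_ids SELECT id FROM raw_artworks")
    con.commit()
    con.close()

@flow
def elt_pipeline(url: str = "https://api.example.com/artworks",  # placeholder URL
                 db_path: str = "warehouse.db"):
    records = extract(url)
    load(records, db_path)
    transform(db_path)

if __name__ == "__main__":
    elt_pipeline()
```

In production, the SQLite stand-in would be a cloud warehouse such as Snowflake, BigQuery, or Redshift, and the flow would be scheduled and monitored through the orchestrator's deployment tooling rather than run by hand.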
Qualifications:
● Bachelor's/Master's degree in Computer Science, Engineering, Statistics, or a related field.
● Minimum 5 years of hands-on experience in data engineering, data integration, and data management.
● Proficiency in Python and SQL for data manipulation and pipeline development.
● Practical knowledge of one or more workflow orchestrators (e.g., Airflow, Prefect, Dagster).
● Experience with cloud data platforms (AWS, Azure, or GCP) and cloud data warehouses (e.g., Redshift, BigQuery, Snowflake).
● Hands-on experience with data sourcing tools and frameworks (e.g., Scrapy, Beautiful Soup, Selenium, Playwright); a brief scraping sketch follows this list.
● Experience working with APIs and third-party data sources.
● Strong understanding of relational and non-relational databases (PostgreSQL, MongoDB, etc.).
● Solid understanding of data modeling, ELT/ETL best practices, and data governance principles.
● Systems knowledge and experience working with Docker.
● Familiarity with version control (Git) and CI/CD processes.
● Exposure to basic machine learning concepts and experience collaborating with data science teams.
● Experience handling large datasets and working with distributed data systems.
● Strong, creative problem-solving and debugging skills, with the ability to think critically about data engineering solutions.
● Effective communication and collaboration skills, with the ability to work independently and as part of a team in a fast-paced, dynamic environment.
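As a hedged illustration of the data-sourcing work mentioned above, here is a minimal scraping sketch using requests and Beautiful Soup. The URL, CSS selectors, and field names are hypothetical; a real source would need its own selectors, rate limiting, and a robots.txt / terms-of-service review.

```python
import requests
from bs4 import BeautifulSoup

def scrape_listings(url: str) -> list[dict]:
    # Fetch the page and parse it; all selectors below are hypothetical.
    resp = requests.get(url, headers={"User-Agent": "data-pipeline/0.1"}, timeout=30)
    resp.raise_for_status()
    soup = BeautifulSoup(resp.text, "html.parser")
    records = []
    for card in soup.select("div.artwork-card"):  # hypothetical container selector
        title = card.select_one("h2.title")
        artist = card.select_one("span.artist")
        records.append({
            "title": title.get_text(strip=True) if title else None,
            "artist": artist.get_text(strip=True) if artist else None,
        })
    return records

if __name__ == "__main__":
    for row in scrape_listings("https://example.com/artworks"):  # placeholder URL
        print(row)
```

Records produced this way would feed the extract step of an ELT flow like the one sketched after the responsibilities list.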