As a Data Engineer with 5+ years of experience in data engineering, data analysis, and/or data architecture, you will design, build, and operate the data platforms that support the business. The role calls for strong expertise in SQL and Python, along with hands-on experience with Databricks, Snowflake, and Power BI.

Key Responsibilities:
- Develop and maintain ETL/ELT pipelines using Apache Spark, Airflow, and dbt (a sketch of a typical pipeline step follows this list).
- Build and consume APIs and data services for real-time and batch processing.
- Ensure data governance, security, and compliance frameworks are adhered to.
- Perform descriptive statistics, exploratory data analysis (EDA), and inferential statistics.
- Design and implement scalable, secure, high-performance data architectures, including data lakes, warehouses, and real-time pipelines.
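To give a concrete sense of the pipeline work above, here is a minimal PySpark sketch of a daily batch step. It is illustrative only: the table columns, storage paths, and job name are hypothetical placeholders, not details taken from this posting.

```python
from pyspark.sql import SparkSession, functions as F

# Illustrative daily batch ETL step: read raw orders from a lake path,
# clean them, aggregate by day, and write a curated layer.
# All paths and column names below are hypothetical.
spark = SparkSession.builder.appName("daily_orders_etl").getOrCreate()

raw = spark.read.parquet("s3://data-lake/raw/orders/")

clean = (
    raw.dropDuplicates(["order_id"])             # de-duplicate on the business key
       .filter(F.col("amount") > 0)              # drop obviously invalid rows
       .withColumn("order_date", F.to_date("created_at"))
)

daily = clean.groupBy("order_date").agg(
    F.count("order_id").alias("orders"),
    F.sum("amount").alias("revenue"),
)

daily.write.mode("overwrite").parquet("s3://data-lake/curated/daily_orders/")
```

In practice, a step like this would be scheduled and monitored through an orchestrator such as Airflow, or expressed as a dbt model, rather than run by hand.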
Qualifications Required:
- 5+ years of experience in data engineering, data analysis, and/or data architecture.
- Strong expertise in SQL and Python.
- Experience with distributed data systems such as Hadoop and Spark.
- Experience with cloud platforms such as Snowflake, BigQuery, and Redshift.
- Understanding of APIs and how applications connect to AI engines (e.g., the Truck API platform); deep API development experience is not required.

In this role, you will not only apply your technical skills but also contribute to the design and implementation of data solutions that drive business success.

Job Types: Full-time, Permanent
Pay: ₹250,000.00 - ₹300,000.00 per month
Work Location: In person