Data Engineer - SQL, Python
Location: Chennai (Work from Office - 5 Days a Week)
Experience: 7+ Years
Employment Type: Full-time

About the role:
We are looking for a Data Engineer experienced in SQL, dbt, and data modeling, with familiarity with legacy systems. This role calls for a hands-on builder with an eye for optimization and system reliability.

Key Responsibilities:
- Develop data pipelines using SQL and dbt for data transformation and integration.
- Migrate and modernize data from legacy systems such as SQL Server, Teradata, HANA, or Hadoop.
- Collaborate with cross-functional teams to understand data needs and deliver effective solutions.
- Monitor and optimize data flow performance.
- Maintain data quality, governance, and version control across environments.

Requirements:
- 7+ years of experience in data engineering with SQL and dbt Core.
- Hands-on experience with at least one legacy platform (SQL Server, Teradata, HANA, or Hadoop).
- Strong understanding of data modeling concepts (dimensional modeling, star/snowflake schemas).
- Familiarity with ClickHouse, Superset, or Tableau is a strong plus.
- Excellent coding, debugging, and problem-solving skills.

Bonus points:
- Experience with cloud data warehouses (Snowflake, BigQuery, Redshift).

Immediate joiners are preferred and are eligible for a joining bonus.
Interested candidates, please share your updated resume at anamika@enroutecorp.in.
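For illustration only (not part of the posting): a minimal sketch of the kind of SQL-and-dbt pipeline work described above, assuming a hypothetical staging model and a simple star-schema fact table. All model, source, and column names (stg_orders, dim_customers, fct_orders, etc.) are invented for this example.

```sql
-- models/marts/fct_orders.sql
-- Hypothetical dbt model: builds an incremental star-schema fact table
-- by joining a staging model to a customer dimension.

{{ config(materialized='incremental', unique_key='order_id') }}

with orders as (
    select * from {{ ref('stg_orders') }}      -- cleaned raw orders (illustrative)
),

customers as (
    select * from {{ ref('dim_customers') }}   -- customer dimension (illustrative)
)

select
    o.order_id,
    o.order_date,
    c.customer_key,        -- surrogate key from the customer dimension
    o.order_amount
from orders o
join customers c
    on o.customer_id = c.customer_id

{% if is_incremental() %}
  -- on incremental runs, only process rows newer than what is already loaded
  where o.order_date > (select max(order_date) from {{ this }})
{% endif %}
```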