Responsibilities:
- Lead and manage an offshore team of data engineers, providing strategic guidance, mentorship, and support to ensure the successful delivery of projects and the development of team members.
- Collaborate closely with onshore stakeholders to understand project requirements, allocate resources efficiently, and ensure alignment with client expectations and project timelines.
- Drive the technical design, implementation, and optimization of data pipelines, ETL processes, and data warehouses, ensuring scalability, performance, and reliability.
- Define and enforce engineering best practices, coding standards, and data quality standards to maintain high-quality deliverables and mitigate project risks.
- Stay abreast of emerging technologies and industry trends in data engineering, and recommend tooling, process improvements, and skill development.
- Assume a data architect role as needed, leading the design and implementation of data architecture solutions, data modeling, and optimization strategies.
- Demonstrate proficiency in AWS services, including:
  - Expertise in cloud data services such as Amazon Redshift, Amazon EMR, and AWS Glue to design and implement scalable data solutions.
  - Experience with cloud infrastructure services such as Amazon EC2 and Amazon S3 to optimize data processing and storage.
  - Knowledge of cloud security best practices, IAM roles, and encryption mechanisms to ensure data privacy and compliance.
  - Proficiency in managing or implementing cloud data warehouse solutions, including data modeling, schema design, performance tuning, and optimization techniques.
- Demonstrate proficiency in modern data platforms such as Snowflake and Databricks, including:
  - A deep understanding of Snowflake's architecture, capabilities, and best practices for designing and implementing data warehouse solutions.
  - Hands-on experience with Databricks for data engineering, data processing, and machine learning tasks, leveraging Spark clusters for scalable data processing.
  - The ability to optimize Snowflake and Databricks configurations for performance, scalability, and cost-effectiveness.
- Manage the offshore team's performance, including resource allocation, performance evaluations, and professional development, to maximize team productivity and morale.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field; advanced degree preferred.
- 10+ years of experience in data engineering, with a proven track record of leadership and technical expertise in managing complex data projects.
- Proficiency in programming languages such as Python, Java, or Scala, and expertise in SQL and relational databases (e.g., PostgreSQL, MySQL).
- Strong understanding of distributed computing, cloud technologies (e.g., AWS), and big data frameworks (e.g., Hadoop, Spark).
- Experience with data architecture design, data modeling, and optimization techniques.
- Excellent communication, collaboration, and leadership skills, with the ability to effectively manage remote teams and engage with onshore stakeholders.
- Proven ability to adapt to evolving project requirements and prioritize tasks effectively in a fast-paced environment.
About the Role
We are seeking a Mid-Level Enterprise Data Warehouse (EDW) Engineer with a passion for building scalable, cloud-native data solutions. This position is ideal for individuals who excel in collaborative environments, communicate effectively, and approach challenges with a problem-solving mindset, not just those who follow instructions.

Key Responsibilities
- Design, develop, and maintain high-performance data pipelines and ETL/ELT workflows for the enterprise data warehouse.
- Work with cloud-based data warehouse platforms such as Snowflake, BigQuery, or Redshift to optimize data storage and retrieval.
- Write clean, efficient, and maintainable SQL and Python code for data transformation and automation tasks.
- Implement and manage CI/CD pipelines for data workflows using tools such as Git, Jenkins, or GitHub Actions.
- Leverage orchestration tools (e.g., Apache Airflow, dbt Cloud, Prefect) to schedule and monitor data workflows.
- Conduct detailed data analysis between current and target systems, and prepare mapping documentation.
- Collaborate with data analysts, data scientists, and business teams to generate actionable insights.
- Proactively identify and address data quality issues and performance bottlenecks.
- Contribute to data architecture decisions and establish best practices.

Required Qualifications
- 4-15 years of experience in Data Engineering or EDW development.
- Strong hands-on experience with Snowflake, BigQuery, or Redshift.
- Expertise in SQL and Python.
- Working knowledge of CI/CD tools such as Git, Jenkins, and GitHub Actions.
- Experience with workflow orchestration tools like Airflow and Prefect.
- Ability to analyze large datasets and present findings in a business context.
- Excellent communication and teamwork skills.
- A proactive, solution-oriented mindset with strong ownership and accountability.

Preferred Qualifications
- Experience in data modeling and architecture.
- Understanding of data governance, security, and compliance best practices.
- Familiarity with modern data stack tools such as dbt, Fivetran, or Looker.
- Experience with large-scale enterprise data warehouse implementations.