Job Summary:
We are looking for a Senior Data Engineer who is creative, collaborative, and adaptable to join our agile team of data scientists, engineers, and UX developers. The role focuses on building and maintaining robust data pipelines to support advanced analytics, data science, and BI solutions. As a Senior Data Engineer, you will work with internal and external data, collaborate with data scientists, and contribute to the design, development, and deployment of innovative solutions.

Key Responsibilities:
- Design, develop, test, and maintain optimal data pipeline and ETL architectures.
- Map out data systems and define and design the required integrations, ETL, BI, and AI systems and processes.
- Prepare and optimize data for predictive and prescriptive modeling.
- Collaborate with teams to integrate ERP data into the enterprise data lake, ensuring seamless data flow and quality.
- Enhance cloud data infrastructure on AWS or Azure for scalability and performance.
- Use big data tools and frameworks to optimize data acquisition and preparation.
- Build architectures that move data to and from data lakes and data warehouses for advanced analytics.
- Develop and curate data models for analytics, dashboards, and reports.
- Conduct code reviews, maintain production-level code, and implement testing approaches.
- Monitor and troubleshoot data ingestion workflows, resolving issues to maintain reliability and uptime.
- Drive innovation and implement efficient new approaches to data engineering tasks.

Required Skills and Experience:
- Bachelor's degree in Computer Science, Mathematics, Engineering, or a related field.
- 5+ years of experience working with enterprise data platforms, including building and managing data lakes.
- 3-5 years of experience designing and implementing data warehouse solutions.
- Expertise in SQL, including stored procedure development and advanced data design concepts.
- Proficiency in Spark (Python/Scala) and Spark Streaming for real-time data pipelines.
- Experience with AWS or Azure data services (e.g., AWS Glue, Azure Data Factory) and cloud data warehouses (e.g., Amazon Redshift, Snowflake).
- Familiarity with big data tools such as Apache Kafka, Apache Spark, or Apache Flink.
- Hands-on experience with orchestration tools (e.g., Apache Airflow, Prefect).
- Knowledge of version control (e.g., Git), CI/CD processes and tooling (e.g., Jenkins), and deployment automation.
- Experience integrating ERP data into data lakes is a plus.
- Experience with traditional ETL tools (e.g., Talend, Pentaho) is an advantage.
- Strong problem-solving, communication, and collaboration skills.

Why Join Us?
- Be part of a collaborative and agile team driving cutting-edge AI and data engineering solutions.
- Work on impactful projects that make a difference across industries.
- Opportunities for professional growth and continuous learning.
- Competitive salary and benefits package.