About Us:
Zeza is a vibrant team of programmers, data engineers, and data scientists passionate about innovation. We specialize in using data to drive business growth, solve complex challenges, and optimize operations. Through advanced analytics, AI, and machine learning, we transform raw data into actionable insights, helping businesses make informed decisions and achieve lasting success.

Job Description:
We are seeking a skilled Data Engineer with strong expertise in PySpark and AWS to design and build robust data pipelines, leverage AWS SageMaker for machine learning workflows, and develop BI dashboards. The ideal candidate has proven experience handling large datasets, optimizing data processing, and delivering efficient, scalable solutions.

Key Responsibilities:
· Design, develop, and maintain scalable data pipelines using PySpark and AWS data services.
· Implement efficient ETL processes to handle large, complex datasets.
· Integrate and manage AWS SageMaker workflows for machine learning models.
· Collaborate with data scientists, analysts, and stakeholders to deliver accurate and timely data solutions.
· Develop and maintain BI dashboards to support business decision-making.
· Ensure data quality, reliability, and security across all platforms.
· Optimize data storage and processing for performance and cost-efficiency.

Required Skills & Qualifications:
· Experience: 4+ years in data engineering, ETL pipeline creation, and cloud technologies.
· Technical Skills: Hands-on experience with AWS services (S3, Lambda, Glue, SageMaker, etc.).
· Strong knowledge of SQL, Python, or Spark.
· Expertise in BI tools (Power BI, Tableau, or similar).
· Understanding of data warehousing and modeling.
· Ability to work independently and meet project deadlines.