Role Description

We are looking for a skilled Data Engineer with strong experience in Python, PySpark, and cloud-based big data platforms to join our team. If you are passionate about building scalable, high-performing data systems and can join us immediately, we’d love to hear from you!

📌 Location: Remote
🕒 Availability: Immediate Joiners Preferred

Responsibilities:
Design, develop, and maintain scalable, distributed data pipelines.
Build high-availability, fault-tolerant systems for large-scale data environments.
Collaborate with cross-functional teams to build and optimize data-driven platforms.
Leverage AWS services, Snowflake, or Databricks to deliver modern data solutions.
Implement serverless architectures and Lambda-based workflows.

Requirements:
✅ 5+ years of software development experience.
✅ Strong expertise in Python development and PySpark.
✅ 3+ years of experience working in large data environments.
✅ Experience with Snowflake or Databricks.
✅ Expertise in high availability and distributed systems.
✅ Strong knowledge of data engineering principles and data-driven decisioning platforms.
✅ Hands-on experience with a variety of AWS services.
✅ Experience with serverless architectures and AWS Lambda.

Nice to Have:
Previous experience with Scala or Java.
Knowledge of event-driven architectures (Kafka or similar).
Exposure to the Financial Services / FinTech domain.

✨ Why Join Us?
Exciting projects with modern data platforms.
A collaborative and innovative work culture.
The opportunity to make an immediate impact.

📩 If you’re ready for your next challenge and can join immediately, apply now or share your resume at info@hacknotech.com.