Job Summary:
We are seeking a skilled and detail-oriented Big Data Engineer with strong expertise in Apache Spark to join our data team.

Key Responsibilities:
- Design and develop scalable data processing solutions using Apache Spark (Core, SQL, Streaming, MLlib).
- Build and optimize data pipelines and ETL processes for structured and unstructured data.
- Collaborate with data scientists, analysts, and software engineers to integrate Spark-based data products.
- Ensure data quality, integrity, and security across the data lifecycle.
- Monitor, troubleshoot, and improve the performance of Spark jobs in production.
- Integrate Spark with cloud platforms such as AWS, Azure, or GCP.

Required Skills and Qualifications:
- 3+ years of hands-on experience with Apache Spark in large-scale data environments.
- Strong proficiency in Scala, Python, or Java (with a preference for Scala).
- Experience with data storage technologies such as HDFS, Hive, HBase, and S3.
- Familiarity with SQL, Kafka, Airflow, and NoSQL databases.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes) is a plus.
- Solid understanding of distributed systems and parallel computing.
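
Sample of the Work:
To give candidates a concrete sense of the pipeline work this role involves, below is a minimal sketch of a Spark batch ETL job in Scala (our preferred language). The bucket, paths, and column names are hypothetical placeholders for illustration, not part of any specific production stack.

import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.functions._

object EventEtl {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("event-etl")
      .getOrCreate()

    // Read raw JSON events from object storage (path is a placeholder).
    val raw = spark.read.json("s3a://example-bucket/raw/events/")

    // Basic cleaning: drop malformed rows and derive a date partition column
    // (event_id and event_ts are assumed example fields).
    val cleaned = raw
      .filter(col("event_id").isNotNull)
      .withColumn("event_date", to_date(col("event_ts")))

    // Write columnar output partitioned by date for downstream analytics.
    cleaned.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-bucket/curated/events/")

    spark.stop()
  }
}

Day to day, you would build, tune, and operate jobs like this at much larger scale, making choices about partitioning, file formats, and cluster resources.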