Position Summary
We are seeking an Apache Hadoop Subject Matter Expert (SME) who will be responsible for designing, optimizing, and scaling Impala- and Spark-based data processing systems. This role requires hands-on experience with Impala and Spark architecture and core functionality, with a focus on building resilient, high-performance distributed data systems. You will collaborate with engineering teams to deliver high-throughput Impala and Spark applications and solve complex data challenges in real-time processing, big data analytics, and streaming. If you're passionate about working in fast-paced, dynamic environments and want to be at the cutting edge of data solutions, this role is for you.
We’re Looking For Someone Who Can
- Application Design: Design and optimize distributed Spark-based applications, ensuring low-latency, high-throughput performance for big data workloads.
- Troubleshooting: Provide expert-level troubleshooting for data and performance issues across Impala and Spark jobs and clusters.
- Data Processing Expertise: Work extensively with large-scale data pipelines using Impala and Spark's core components (Spark SQL, DataFrames, RDDs, Datasets, and Structured Streaming).
- Performance Tuning: Conduct deep-dive performance analysis, debugging, and optimization of Impala and Spark jobs to reduce processing time and resource consumption.
- Cluster Management: Collaborate with DevOps and infrastructure teams to manage Impala and Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.).
- Real-time Data: Design and implement real-time data processing solutions using Impala and Spark Streaming or Structured Streaming (a minimal sketch follows this list).
- Shift Flexibility: This role requires flexibility to work rotational shifts based on team coverage needs and customer demand. Candidates should be comfortable supporting operations in a 24/7 environment and willing to adjust their working hours accordingly.
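To make the real-time data item concrete, the sketch below shows a minimal PySpark Structured Streaming job that reads from Kafka and counts events per minute. It is an illustration only, not part of the role description: the broker address, topic name, schema, and checkpoint path are hypothetical placeholders, and it assumes the spark-sql-kafka connector is available on the classpath.

```python
# Minimal sketch: read a hypothetical Kafka topic with Structured Streaming,
# aggregate events per minute, and write results to the console.
# Broker, topic, schema, and checkpoint path are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = (SparkSession.builder
         .appName("events-per-minute-sketch")
         .getOrCreate())

schema = StructType([
    StructField("event_type", StringType()),
    StructField("event_time", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
          .option("subscribe", "events")                      # placeholder topic
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

counts = (events
          .withWatermark("event_time", "10 minutes")
          .groupBy(window(col("event_time"), "1 minute"), col("event_type"))
          .count())

query = (counts.writeStream
         .outputMode("update")
         .format("console")
         .option("checkpointLocation", "/tmp/checkpoints/events")  # placeholder path
         .start())
query.awaitTermination()
```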
What Makes You The Right Fit For This Position
- Expert in Impala and Apache Spark: In-depth knowledge of Impala and Spark architecture, execution models, and their components (Spark Core, Spark SQL, Spark Streaming, etc.).
- Data Engineering Practices: Solid understanding of ETL pipelines, data partitioning, shuffling, and serialization techniques to optimize Spark jobs.
- Big Data Ecosystem: Knowledge of related big data technologies such as Hadoop, Hive, Kafka, HDFS, and YARN.
- Performance Tuning and Debugging: Demonstrated ability to tune Spark jobs, optimize query execution, and troubleshoot performance bottlenecks (a configuration sketch follows this list).
- Experience with Cloud Platforms: Hands-on experience in running Spark clusters on cloud platforms such as AWS, Azure, or GCP.
- Containerization & Orchestration: Experience with containerized Spark environments using Docker and Kubernetes is a plus.
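As context for the performance-tuning and data-engineering items above, here is a minimal sketch of Spark session settings that commonly affect shuffle behavior and serialization. The values shown are illustrative assumptions, not recommendations for any particular workload or cluster.

```python
# Sketch: Spark session settings often adjusted when tuning shuffling and
# serialization. Values are illustrative, not prescriptive.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("tuning-sketch")
         # Shuffle partition count is sized to data volume and cluster resources.
         .config("spark.sql.shuffle.partitions", "200")
         # Kryo is typically faster and more compact than Java serialization.
         .config("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
         # Adaptive query execution coalesces shuffle partitions and mitigates skew.
         .config("spark.sql.adaptive.enabled", "true")
         .getOrCreate())

# Repartitioning by a join or aggregation key can reduce downstream shuffle skew.
# df = df.repartition("customer_id")  # column name is a hypothetical placeholder
```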
Good To Have
- Certification in Apache Spark or related big data technologies.
- Experience working with Acceldata's data observability platform or similar tools for monitoring Spark jobs.
- Demonstrated experience with scripting languages like Bash, PowerShell, and Python.
- Familiarity with concepts related to application, server, and network security management.
- Certifications from leading cloud providers (AWS, Azure, GCP) and expertise in Kubernetes are significant advantages.