Job Role: Data Engineer
Experience: 5+ Years
Location: Bangalore

Requirements:
- Deep Hadoop ecosystem knowledge: Spark, HDFS, Hive, Iceberg.
- Batch and streaming ETL pipeline design for scalability and reliability (see the batch ETL sketch below).
- Python and Unix proficiency; materialized views; data structures and algorithms.
- Data warehouse security, compliance, and operational excellence.
- Automated pipeline design and optimization; collaboration with platform teams.
- Monitoring and logging: Prometheus, Grafana, Jenkins/GitHub Actions.
- Testing frameworks (JUnit/pytest) and code cleanliness (see the pytest sketch below).

Skills: Big data solutions, Hadoop data warehouse, analytics infrastructure; building scalable pipelines, system optimization, architectural reviews. Spark, HDFS, Hive, Iceberg, Kafka, Flink, Python, Unix, SQL.

Emphasis:
- Moderate focus on system performance and incident response in Hadoop architecture.
- Data pipeline and warehouse documentation.
- Collaboration with data and platform engineers; contributing to architecture.
- Prometheus/Grafana, Jenkins, GitHub Actions, and testing frameworks.

Mandatory Skills: Hadoop, HDFS, Hive, Iceberg, Kafka, Flink, Python, Unix, SQL, testing frameworks (JUnit/pytest), Jenkins.
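To make the batch ETL requirement concrete, here is a minimal PySpark sketch of the kind of job the role describes: read raw events from HDFS, apply basic cleaning for reliability, and append to an Iceberg table. The catalog name (analytics), HDFS path, column names, and table names are hypothetical placeholders, and the snippet assumes Spark 3+ with the Iceberg Spark runtime on the classpath; it is illustrative, not a prescribed design.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("daily-events-etl")
    # Assumption: an Iceberg catalog named "analytics" backed by the Hive metastore.
    .config("spark.sql.catalog.analytics", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.analytics.type", "hive")
    .getOrCreate()
)

# Hypothetical raw-data location on HDFS.
raw = spark.read.parquet("hdfs:///data/raw/events/dt=2024-01-01")

cleaned = (
    raw.dropDuplicates(["event_id"])                     # dedupe for reliable reruns
       .filter(F.col("event_ts").isNotNull())            # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))  # partition-friendly column
)

# Append into an Iceberg table via the DataFrameWriterV2 API (Spark 3+).
cleaned.writeTo("analytics.db.events").append()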
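And in the spirit of the pytest requirement, a minimal unit-test sketch: transformation logic factored into a plain Python function so it can be tested without a cluster. The function name (dedupe_events) and the sample records are illustrative assumptions, not taken from the posting.

# Keep the first occurrence of each event_id, dropping rows without one.
def dedupe_events(records):
    seen = set()
    out = []
    for rec in records:
        event_id = rec.get("event_id")
        if event_id is None or event_id in seen:
            continue
        seen.add(event_id)
        out.append(rec)
    return out

# pytest discovers and runs functions named test_*.
def test_dedupe_events_drops_duplicates_and_nulls():
    records = [
        {"event_id": "a", "value": 1},
        {"event_id": "a", "value": 2},   # duplicate, should be dropped
        {"event_id": None, "value": 3},  # malformed, should be dropped
        {"event_id": "b", "value": 4},
    ]
    result = dedupe_events(records)
    assert [r["event_id"] for r in result] == ["a", "b"]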