
14 Cloudera Hadoop Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 10.0 years

4 - 8 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

Source: Naukri

Site Reliability Engineer

Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems (see the sketch below)
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, DataMesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action
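To illustrate the "self-healing systems" responsibility above, here is a minimal sketch of an automated health-check-and-restart loop. The health endpoint, systemd unit name, and failure threshold are hypothetical placeholders, not details from the posting.

```python
#!/usr/bin/env python3
"""Minimal self-healing sketch: restart a service when its health probe fails.

Illustrative only; the URL, systemd unit, and threshold are placeholders.
"""
import subprocess
import urllib.error
import urllib.request

HEALTH_URL = "http://localhost:8080/health"  # hypothetical health endpoint
SERVICE = "nifi"                             # hypothetical systemd unit
MAX_FAILURES = 3                             # probes before we intervene


def healthy(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers HTTP 200 within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False


def main() -> None:
    failures = sum(1 for _ in range(MAX_FAILURES) if not healthy(HEALTH_URL))
    if failures == MAX_FAILURES:
        # Every probe failed: restart the service and emit an alertable line.
        subprocess.run(["systemctl", "restart", SERVICE], check=True)
        print(f"ALERT: restarted {SERVICE} after {failures} failed health checks")


if __name__ == "__main__":
    main()
```

In practice a cron entry or systemd timer would run this every minute, with the output shipped to the ELK stack mentioned above.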

Posted 2 weeks ago

Apply

8.0 - 13.0 years

22 - 37 Lacs

Pune

Hybrid

Source: Naukri

Role & Responsibilities
Role: Hadoop Admin + Automation
Experience: 8+ years
Grade: AVP
Location: Pune
Mandatory skills: Hadoop administration; automation (shell scripting or any programming language such as Java or Python; see the sketch below); Cloudera / AWS / Azure / GCP
Good to have: DevOps tools
The primary focus will be on candidates with Hadoop admin and automation experience.
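As a flavour of the Hadoop admin automation this role asks for, below is a minimal sketch that shells out to the standard `hdfs dfsadmin -report` command and flags dead DataNodes. It assumes the `hdfs` CLI is on PATH with sufficient privileges; the "Dead datanodes (N)" line format varies across Hadoop releases, so the regular expression is an assumption to adapt.

```python
#!/usr/bin/env python3
"""Sketch of a Hadoop admin check: count DataNodes reported dead.

Assumes the `hdfs` CLI is available; the report wording varies by version.
"""
import re
import subprocess


def dead_datanodes() -> int:
    """Run `hdfs dfsadmin -report` and parse the dead-node count."""
    report = subprocess.run(
        ["hdfs", "dfsadmin", "-report"],
        capture_output=True, text=True, check=True,
    ).stdout
    match = re.search(r"Dead datanodes \((\d+)\)", report)
    return int(match.group(1)) if match else 0


if __name__ == "__main__":
    dead = dead_datanodes()
    if dead:
        print(f"WARNING: {dead} dead DataNode(s); investigate before any rebalance")
    else:
        print("All DataNodes reported live")
```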

Posted 2 weeks ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Pune, Maharashtra, India

On-site

Source: Foundit

Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning (see the sketch below)
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
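Routine maintenance like the item flagged above is typically scheduled through Airflow, which this posting lists. Below is a minimal DAG sketch; the DAG id, schedule, and cleanup path are hypothetical, and it assumes Airflow 2.x (where BashOperator lives in airflow.operators.bash).

```python
"""Minimal Airflow DAG sketch for a recurring maintenance task.

Illustrative only: DAG id, schedule, and the HDFS path are placeholders.
"""
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="hdfs_tmp_cleanup",          # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    # Purge a scratch directory older than a week (path is a placeholder).
    purge_old_tmp = BashOperator(
        task_id="purge_old_tmp",
        bash_command=(
            "hdfs dfs -rm -r -skipTrash "
            "/tmp/etl_scratch/$(date -d '-7 days' +%Y%m%d) || true"
        ),
    )
```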

Posted 3 weeks ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Gurgaon / Gurugram, Haryana, India

On-site

Source: Foundit

Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)

Posted 3 weeks ago

Apply

4.0 - 9.0 years

4 - 9 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Source: Foundit

Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)

Posted 3 weeks ago

Apply

4.0 - 9.0 years

5 - 8 Lacs

Gurugram

Work from Office

Source: Naukri

Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki (see the sketch below)
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
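Monitoring proficiency of the kind listed above often means pulling SLO numbers straight from Prometheus. The sketch below runs an instant query against the Prometheus HTTP API (`/api/v1/query`); the server URL, PromQL expression, and SLO target are placeholders.

```python
"""Sketch: evaluate an availability SLO via the Prometheus HTTP API.

The Prometheus URL, PromQL query, and SLO target are placeholders.
"""
import json
import urllib.parse
import urllib.request

PROM_URL = "http://prometheus.example.internal:9090"  # placeholder server
QUERY = 'avg_over_time(up{job="nifi"}[30d])'          # placeholder SLI
SLO_TARGET = 0.999


def instant_query(expr: str) -> float:
    """Run an instant query and return the first sample's value."""
    url = f"{PROM_URL}/api/v1/query?" + urllib.parse.urlencode({"query": expr})
    with urllib.request.urlopen(url, timeout=10) as resp:
        body = json.load(resp)
    # Each result's "value" is a [timestamp, value-as-string] pair.
    return float(body["data"]["result"][0]["value"][1])


if __name__ == "__main__":
    sli = instant_query(QUERY)
    verdict = "OK" if sli >= SLO_TARGET else "BREACH"
    print(f"30-day availability {sli:.4%} vs target {SLO_TARGET:.1%}: {verdict}")
```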

Posted 3 weeks ago

Apply

5 - 6 years

7 - 8 Lacs

Gurugram

Work from Office

Source: Naukri

Site Reliability Engineer

Job Description / Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger (see the sketch below)
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, DataMesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action
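Hands-on Kafka work in a role like this often starts with checking consumer-group lag. Below is a minimal sketch using the third-party kafka-python package; the broker address, consumer group, and topic are placeholders, not details from the posting.

```python
"""Sketch: report per-partition consumer lag with kafka-python.

Assumes `pip install kafka-python`; broker, group, and topic are placeholders.
"""
from kafka import KafkaConsumer, TopicPartition

BOOTSTRAP = "broker.example.internal:9092"  # placeholder broker
GROUP_ID = "etl-ingest"                     # placeholder consumer group
TOPIC = "cdr-events"                        # placeholder topic

consumer = KafkaConsumer(
    bootstrap_servers=BOOTSTRAP,
    group_id=GROUP_ID,
    enable_auto_commit=False,
)
partitions = [TopicPartition(TOPIC, p) for p in consumer.partitions_for_topic(TOPIC)]
end_offsets = consumer.end_offsets(partitions)  # latest offset per partition

for tp in partitions:
    committed = consumer.committed(tp) or 0     # last committed offset, if any
    print(f"partition {tp.partition}: lag={end_offsets[tp] - committed}")

consumer.close()
```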

Posted 1 month ago

Apply

5 - 6 years

7 - 8 Lacs

Gurugram

Work from Office

Source: Naukri

Site Reliability Engineer

Job Description / Requirements: We are seeking a proactive and technically strong Site Reliability Engineer (SRE) to ensure the stability, performance, and scalability of our Data Engineering Platform. You will work on cutting-edge technologies including Cloudera Hadoop, Spark, Airflow, NiFi, and Kubernetes, ensuring high availability and driving automation to support massive-scale data workloads, especially in the telecom domain.

Key Responsibilities
• Ensure platform uptime and application health as per SLOs/KPIs
• Monitor infrastructure and applications using ELK, Prometheus, Zabbix, etc.
• Debug and resolve complex production issues, performing root cause analysis
• Automate routine tasks and implement self-healing systems
• Design and maintain dashboards, alerts, and operational playbooks
• Participate in incident management, problem resolution, and RCA documentation
• Own and update SOPs for repeatable processes
• Collaborate with L3 and product teams for deeper issue resolution
• Support and guide the L1 operations team
• Conduct periodic system maintenance and performance tuning
• Respond to user data requests and ensure timely resolution
• Address and mitigate security vulnerabilities and compliance issues

Technical Skillset
• Hands-on with Spark, Hive, Cloudera Hadoop, Kafka, Ranger
• Strong Linux fundamentals and scripting (Python, Shell)
• Experience with Apache NiFi, Airflow, YARN, and ZooKeeper
• Proficient in monitoring and observability tools: ELK Stack, Prometheus, Loki
• Working knowledge of Kubernetes, Docker, and Jenkins CI/CD pipelines
• Strong SQL skills (Oracle/Exadata preferred)
• Familiarity with DataHub, DataMesh, and security best practices is a plus
• Strong problem-solving and debugging mindset
• Ability to work under pressure in a fast-paced environment
• Excellent communication and collaboration skills
• Ownership, customer orientation, and a bias for action

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Bengaluru

Work from Office

Source: Naukri

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala (see the sketch below).
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.

Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
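For the Glue/EMR ETL development flagged above, here is a minimal AWS Glue PySpark job sketch: it reads raw CSV from S3 and writes partitioned Parquet. The bucket paths and the `event_date` partition column are placeholders, and the awsglue imports resolve only inside the Glue job runtime.

```python
"""Minimal AWS Glue PySpark job sketch: CSV in, partitioned Parquet out.

Paths and the `event_date` column are placeholders; runs in the Glue runtime.
"""
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job bootstrap: resolve the job name and init the job.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw CSV (placeholder path); assumes an `event_date` column exists.
events = spark.read.option("header", "true").csv("s3://example-raw/events/")

(events.write
    .mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated/events/"))

job.commit()
```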

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Chennai

Work from Office

Source: Naukri

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.

Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Mumbai

Work from Office

Source: Naukri

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.

Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 1 month ago

Apply

12 - 16 years

35 - 40 Lacs

Kolkata

Work from Office

Source: Naukri

As an AWS Data Engineer at the organization, you will play a crucial role in the design, development, and maintenance of our data infrastructure. Your work will empower data-driven decision-making and contribute to the success of our data-driven initiatives. You will design and maintain scalable data pipelines using AWS data analytics services, enabling efficient data processing and analytics.

Key Responsibilities:
- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda.
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.

Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.

Posted 1 month ago

Apply

11 - 14 years

35 - 40 Lacs

Pune, Bengaluru

Work from Office

Source: Naukri

- Highly experienced in developing ETL pipelines using AWS Glue and EMR with PySpark/Scala.
- Utilize AWS services (S3, Glue, Lambda, EMR, Step Functions) for data solutions.
- Design scalable data models for analytics and reporting.
- Implement data validation, quality, and governance practices.
- Optimize Spark jobs for cost and performance efficiency.
- Automate ETL workflows with AWS Step Functions and Lambda (see the sketch below).
- Collaborate with data scientists and analysts on data needs.
- Maintain documentation for data architecture and pipelines.
- Experience with open-source big data table formats such as Iceberg, Delta Lake, or Hudi.
- Experience provisioning AWS data analytics resources with Terraform is desirable.

Must-Have Skills:
- AWS (S3, Glue, Lambda, EMR), PySpark or Scala, SQL, ETL development.

Good-to-Have Skills:
- Snowflake, Cloudera Hadoop (HDFS, Hive, Impala), Iceberg.
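For the Step Functions automation item above, a common pattern is a small Lambda handler that starts a state-machine execution when triggered, for example by an S3 event. The state machine ARN below is a placeholder; boto3 ships in the Lambda runtime, and the execution role must allow states:StartExecution.

```python
"""Sketch: Lambda handler that starts a Step Functions ETL workflow.

The state machine ARN is a placeholder; needs states:StartExecution rights.
"""
import json

import boto3

sfn = boto3.client("stepfunctions")
STATE_MACHINE_ARN = (
    "arn:aws:states:us-east-1:123456789012:stateMachine:etl-pipeline"  # placeholder
)


def handler(event, context):
    """Forward the triggering event (e.g., an S3 notification) as workflow input."""
    response = sfn.start_execution(
        stateMachineArn=STATE_MACHINE_ARN,
        input=json.dumps(event),
    )
    return {"executionArn": response["executionArn"]}
```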

Posted 3 months ago

Apply

7 - 12 years

5 - 9 Lacs

Bengaluru

Work from Office

Source: Naukri

Business Case: Caspian is the big data cluster for NFRT, managed and hosted by the central data team. It is a critical Tier 1 platform for multiple business functions and processes operating across NFRT. Given the technology strategy and principles, data-driven design and products are a key pillar, and this position is critical to strengthening the current system and continuing to build and develop it in line with NFRT's future objectives and strategy.

As a Big Data Platform Engineer, you will be responsible for the technical delivery of our data platform's core functionality and strategic solutions. This includes the development of reusable tooling/APIs, applications, data stores, and the software stack that accelerates our relational data warehousing, big data analytics, and data management needs. You will also design and develop strategic solutions that use big data, cloud, and other modern technologies to meet our constantly changing business requirements. The role involves day-to-day management of several small development teams focused on our big data platform and data management applications, plus collaboration and coordination with multiple stakeholders, such as the Hadoop data engineering team, application teams, and the Unix Ops team, to ensure the stability of our big data platform.

Skills Required:
• Strong technical experience in Scala, Java, Python, and Spark for designing, creating, and maintaining big data applications (see the sketch below)
• Experience maintaining Cloudera Hadoop infrastructure such as HDFS, YARN, Spark, Impala, and edge nodes
• Experience developing cloud-based big data solutions on AWS or Azure
• Strong SQL skills with commensurate experience on a large database platform
• Experience with the complete SDLC process and Agile methodology
• Strong oral and written communication
• Experience with cloud data platforms like Snowflake or Databricks is an added advantage
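As a small example of the Spark development this platform role supports, here is a minimal PySpark batch job that reads a Hive table, aggregates, and writes the result back. The database, table, and app names are placeholders, and it assumes a cluster with Hive support enabled (as is typical on Cloudera with HDFS/YARN).

```python
"""Sketch of a Spark batch job: read a Hive table, aggregate, write back.

Database/table/app names are placeholders; assumes Hive support is enabled.
"""
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("caspian-daily-aggregate")  # hypothetical app name
    .enableHiveSupport()
    .getOrCreate()
)

# Count rows per day from a placeholder Hive table.
daily = (
    spark.table("analytics.trades")      # placeholder database.table
    .groupBy("trade_date")
    .agg(F.count("*").alias("trade_count"))
)

daily.write.mode("overwrite").saveAsTable("analytics.trades_daily")
spark.stop()
```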

Posted 3 months ago

Apply