
237 Cloudera Jobs - Page 4

JobPe aggregates listings for easy access, but you apply directly on each employer's job portal.

3.0 - 8.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Apache Spark
Good-to-have skills: AWS Glue
Minimum experience: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the organization's overall data strategy, ensuring that data solutions are efficient, scalable, and aligned with business objectives. You will also monitor and optimize existing data processes to enhance performance and reliability, making data accessible and actionable for stakeholders.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively in team discussions.
- Contribute solutions to work-related problems.
- Collaborate with data architects and analysts to design data models that meet business needs.
- Develop and maintain documentation for data processes and workflows to ensure clarity and compliance.

Professional & Technical Skills:
- Must-have: Proficiency in Apache Spark.
- Good-to-have: Experience with AWS Glue.
- Strong understanding of data processing frameworks and methodologies.
- Experience building and optimizing data pipelines for performance and scalability.
- Familiarity with data warehousing concepts and best practices.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Apache Spark.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
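The ETL process this listing centers on can be sketched in plain Python. Everything below, including the sample records, field names, and the in-memory SQLite target, is an illustrative assumption rather than part of the listing; a production pipeline for this role would express the same extract-transform-load steps in Apache Spark.

```python
import sqlite3

# Extract: in a real pipeline this would read from files, APIs, or a source database.
raw_rows = [
    {"id": 1, "amount": "120.50", "city": " bengaluru "},
    {"id": 2, "amount": "80.00", "city": "Hyderabad"},
    {"id": 3, "amount": "bad", "city": "Chennai"},  # malformed record
]

def transform(rows):
    """Clean and normalize records, dropping any that fail validation."""
    clean = []
    for row in rows:
        try:
            amount = float(row["amount"])
        except ValueError:
            continue  # a basic data-quality check: skip unparseable amounts
        clean.append((row["id"], amount, row["city"].strip().title()))
    return clean

def load(rows, conn):
    """Write transformed records into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS sales (id INTEGER, amount REAL, city TEXT)")
    conn.executemany("INSERT INTO sales VALUES (?, ?, ?)", rows)
    conn.commit()

conn = sqlite3.connect(":memory:")
load(transform(raw_rows), conn)
print(conn.execute("SELECT COUNT(*), SUM(amount) FROM sales").fetchone())  # (2, 200.5)
```

The malformed third record is dropped during the transform step, which is the "ensure data quality" part of the role in miniature.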

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Hyderabad

Work from Office

Immediate openings for Big Data Engineer/Developer (Pan India, Contract)
Experience: 5+ years
Skills: Big Data Engineer/Developer
Location: Pan India
Notice Period: Immediate
Employment Type: Contract
Working Mode: Hybrid
Requirements: Spark (Scala), HQL, Hive, Control-M, Jenkins, Git; technical analysis and, to some extent, business analysis (knowledge of banking products, credit cards, and their transactions).

Posted 1 month ago

Apply

5.0 - 8.0 years

4 - 8 Lacs

Telangana

Work from Office

Education: Bachelor's degree in Computer Science, Engineering, or a related field; a Master's degree is preferred.
Experience: Minimum of 4+ years of experience in data engineering or a similar role. Strong programming skills in Python and advanced SQL. Strong experience with NumPy, Pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.

Posted 1 month ago

Apply

5.0 - 10.0 years

6 - 9 Lacs

Hyderabad

Hybrid

1. Understand the NBA requirements.
2. Provide subject-matter expertise on Pega CDH from a technology perspective.
3. Participate actively in the creation and review of the conceptual design, detailed design, and estimations.
4. Implement the NBAs per the agreed requirement/solution.
5. Support end-to-end testing and provide fixes with a quick turnaround time (TAT).
6. Deployment knowledge to manage the implementation activities.
7. Experience with Pega CDH v8.8 (multi-app) or 24.1 and the retail banking domain is preferred.
8. Good communication skills.

Locations: Bangalore/Chennai/Hyderabad/Kolkata
Notice Period: Immediate
Employment Type: Contract

Posted 1 month ago

Apply

5.0 - 9.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Looking for a senior PySpark developer with 6+ years of hands-on experience.
- Build and manage large-scale data solutions using tools like PySpark, Hadoop, Hive, Python, and SQL.
- Create workflows to process data using IBM TWS.
- Use PySpark to create different reports and handle large datasets.
- Use HQL/SQL/Hive for ad-hoc data queries, generate reports, and store data in HDFS.
- Deploy code using Bitbucket, PyCharm, and TeamCity.
- Manage people, communicate with several teams, and explain problems/solutions to business teams in non-technical terms.
Primary Skill: PySpark-Hadoop-Spark (One to Three Years); Developer / Software Engineer.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai

Work from Office

Design and implement data architecture and models for Big Data solutions using MapR and Hadoop ecosystems. You will optimize data storage, ensure data scalability, and manage complex data workflows. Expertise in Big Data, Hadoop, and MapR architecture is required for this position.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Chennai

Work from Office

Design and implement Big Data solutions using Hadoop and MapR ecosystem. You will work with data processing frameworks like Hive, Pig, and MapReduce to manage and analyze large data sets. Expertise in Hadoop and MapR is required.

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Chennai

Work from Office

Design, implement, and optimize Big Data solutions using Hadoop technologies. You will work on data ingestion, processing, and storage, ensuring efficient data pipelines. Strong expertise in Hadoop, HDFS, and MapReduce is essential for this role.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Mumbai

Work from Office

Develops data processing solutions using Scala and PySpark.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 10 Lacs

Mumbai

Work from Office

Design and implement big data solutions using Hadoop ecosystem tools like MapR. Develop data models, optimize data storage, and ensure seamless integration of big data technologies into enterprise systems.

Posted 1 month ago

Apply

6.0 - 11.0 years

10 - 14 Lacs

Hyderabad, Pune, Chennai

Work from Office

Job type: Contract-to-hire.
- 10+ years of software development experience building large-scale distributed data processing systems/applications, data engineering, or large-scale internet systems.
- At least 4 years developing/leading big data solutions at enterprise scale, with at least one end-to-end implementation.
- Strong experience in programming languages: Java/J2EE/Scala.
- Good experience with Spark/Hadoop/HDFS architecture, YARN, Confluent Kafka, HBase, Hive, Impala, and NoSQL databases.
- Experience with batch processing and AutoSys job scheduling and monitoring.
- Performance analysis, troubleshooting, and resolution (including familiarity with and investigation of Cloudera/Hadoop logs).
- Work with Cloudera on open issues that would result in cluster configuration changes, and implement them as needed.
- Strong experience with databases such as SQL, Hive, Elasticsearch, HBase, etc.
- Knowledge of Hadoop security, data management, and governance.
Primary Skills: Java/Scala, ETL, Spark, Hadoop, Hive, Impala, Sqoop, HBase, Confluent Kafka, Oracle, Linux, Git, Jenkins CI/CD.

Posted 1 month ago

Apply

4.0 - 9.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Immediate job opening for Python + SQL (C2H, Pan India).
Skill: Python + SQL
Job description: Strong programming skills in Python and advanced SQL. Strong experience with NumPy, Pandas, and DataFrames. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities.

Posted 1 month ago

Apply

6.0 - 11.0 years

5 - 8 Lacs

Bengaluru

Work from Office

- Experience with cloud platforms, e.g., AWS, GCP, Azure.
- Experience with distributed technology tools: SQL, Spark, Python, PySpark, Scala.
- Performance tuning: optimize SQL and PySpark for performance.
- Airflow workflow-scheduling tool for creating data pipelines.
- GitHub source control tool; experience creating/configuring Jenkins pipelines.
- Experience with EMR/EC2, Databricks, etc.
- DWH tools, including SQL databases, Presto, and Snowflake.
- Streaming and serverless architecture.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 11 Lacs

Hyderabad

Work from Office

Immediate job openings for Big Data Engineer (Pan India, Contract)
Experience: 6+ years
Skill: Big Data Engineer
Location: Pan India
Notice Period: Immediate
Employment Type: Contract
Requirements: PySpark, Azure Databricks, experience with workflows, Unity Catalog, and managed/external data with Delta tables.

Posted 1 month ago

Apply

6.0 - 8.0 years

25 - 30 Lacs

Bengaluru

Work from Office

- 6+ years of experience in information technology, with a minimum of 3-5 years managing and administering Hadoop/Cloudera environments.
- Cloudera CDP (Cloudera Data Platform), Cloudera Manager, and related tools.
- Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Impala, etc.).
- Linux system administration, with experience in scripting languages (Python, Bash, etc.) and configuration management tools (Ansible, Puppet, etc.).
- Tools such as Kerberos, Ranger, Sentry, Docker, Kubernetes, and Jenkins.
- Cloudera Certified Administrator for Apache Hadoop (CCAH) or a similar certification.
- Cluster management, optimization, best-practice implementation, collaboration, and support.

Posted 1 month ago

Apply

7.0 - 12.0 years

11 - 15 Lacs

Gurugram

Work from Office

Project description: We are looking for an experienced Data Engineer to contribute to the design, development, and maintenance of our database systems. This role works closely with our software development and IT teams to ensure the effective implementation and management of database solutions that align with the client's business objectives.

Responsibilities: The successful candidate will be responsible for managing technology in projects and providing technical guidance/solutions for work completion:
1. Provide technical guidance/solutions.
2. Ensure process compliance in the assigned module and participate in technical discussions/reviews.
3. Prepare and submit status reports to minimize exposure and risks on the project, or to close escalations.
4. Be self-organized and focused on delivering quality software on time.

Skills (must have):
- At least 7 years of experience developing on data-specific projects.
- Working knowledge of the Kafka streaming framework (kSQL, MirrorMaker, etc.).
- Strong programming skills in at least one of these languages: Groovy/Java.
- Good knowledge of data structures, ETL design, and storage.
- Experience in streaming data environments and pipelines.
- Experience in near-real-time/streaming data pipeline development using Apache Spark, StreamSets, Apache NiFi, or similar frameworks.

Nice to have: N/A
Other languages: English (B2 Upper Intermediate)
Seniority: Senior

Posted 1 month ago

Apply

10.0 - 20.0 years

15 - 30 Lacs

Vijayawada

Work from Office

Experience Requirements: 10+ years of experience managing data and analytics projects, ideally in a leadership or project management role. Proven experience leading teams in AI/ML, data analytics, or software development environments. Strong understanding of machine learning algorithms, deep learning models, and data processing techniques. Experience working with Oracle, Cloudera, and similar tools.
Skills and Competencies: Excellent project management skills, with experience using Agile/Scrum methodologies. Strong team leadership, analytical, and problem-solving skills. Excellent communication skills, with the ability to translate technical concepts for non-technical stakeholders. Knowledge of data governance and data privacy regulations is a plus. Ability to manage multiple projects simultaneously and work in a fast-paced environment.
Preferred Qualifications: PMP certification. Certifications in data management and related fields. Previous experience in data curation, metadata, and data dictionaries is helpful.

Posted 1 month ago

Apply

5.0 - 8.0 years

15 - 25 Lacs

Chennai, Bengaluru

Work from Office

Job Profile: We are seeking an experienced Informatica BDM Developer to join our data engineering team. The ideal candidate will act as a primary point of contact for technical support, troubleshoot complex issues across big data platforms, and deliver solutions efficiently. This role requires hands-on expertise in Informatica BDM, Cloudera, Spark, and Python, along with a solid understanding of ITIL processes.

Responsibilities:
- Serve as the first point of contact for customers seeking technical assistance via phone, email, or ITIL tools.
- Diagnose and resolve medium-complexity technical issues in Informatica BDM, Cloudera, Spark, Python, and related data engineering technologies.
- Perform remote troubleshooting using diagnostic techniques and detailed questioning.
- Determine the best solution based on the issue and customer-provided details, ensuring high customer satisfaction.
- Escalate unresolved issues to higher support tiers when necessary.
- Walk customers through problem-solving processes and provide step-by-step technical help.
- Document all support activities, including issues, resolutions, and actions taken, in logs.
- Support BAU (Business As Usual), DPO, and DPS services with accurate and timely responses.
- Carry out incident management tasks, including configuration, basic-to-medium tuning, and operational support in low-risk environments.
- Create and maintain operational documentation and incident/change records.
- Mentor junior team members and assist in knowledge transfer.
- Collaborate with infrastructure teams to coordinate maintenance activities and ensure system stability.
- Apply ITIL v3 methodologies for effective incident, problem, and change management.

Candidate Profile:
- BE/B.Tech or BCA/MCA with 5+ years of experience with Informatica BDM and PowerCenter.
- Ready for a 6-month contract role in Chennai in hybrid mode; can join within 15 days.
- Proficient in Cloudera (Hadoop), Spark, and Python.
- Understanding of ETL processes, data pipelines, and big data architectures.
- Familiarity with incident management and ITIL-based support workflows.
- Excellent problem-solving skills with a proactive mindset.
- Strong verbal and written communication skills.
- Ability to handle pressure and resolve customer issues effectively.

Posted 1 month ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Mumbai

Work from Office

Role: Python Technical Expertise
- Expertise in PySpark, database migration, transformation, and integration for data warehousing.
- Strong knowledge of Apache Spark and Python programming.
- Experience developing data processing tasks using PySpark (data reading, merging, enrichment, loading).
- Familiarity with deployment tools (e.g., Airflow, Control-M) and Unix/Linux shell scripting.
- Skills in advanced data modeling and processing unstructured data.
- Hands-on experience with Jupyter Notebook, Zeppelin, and PyCharm.
- Proficient in AWS S3 filesystem operations.
- Knowledge of Hadoop, Hive, and Cloudera/Hortonworks Data Platforms.

Contributing Responsibilities:
- Extensive experience with processing frameworks (Spark 2.x/3.x), including Spark SQL and Streaming.
- Strong capabilities in RDBMS (Postgres, Oracle) and NoSQL databases.
- Familiarity with streaming platforms like Apache Kafka and Spark Streaming.
- Experience designing and executing data pipelines using ETL/ELT tools.
- In-depth knowledge of big data Hadoop, particularly HDP/CDH migration to the Cloudera CDP platform.
- Ability to optimize and troubleshoot PySpark applications for performance.

Technical & Behavioral Competencies:
- Minimum 5 years of experience with PySpark, Kubernetes, and Docker.
- Strong design knowledge of data warehousing concepts.
- Proficient in Unix/Ubuntu scripting and tuning code for large data volumes.
- Capable of translating functional requirements into technical specifications.
- Involved in testing PySpark modules and ETL mappings, ensuring client satisfaction.
- Experienced in coding, implementing, debugging, and documenting complex programs.
- Responsible for technical documentation and business-needs analysis.
- Provides technical guidance and resolves programming-related issues.
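The "data reading, merging, enrichment, loading" tasks named in this listing boil down to joining a fact dataset with reference data before it is loaded downstream. The sketch below is a hypothetical, Spark-free illustration (the datasets and the `account` join key are invented); in PySpark the same enrichment would be a `DataFrame` left join.

```python
# Enrichment via a hash join, mirroring what a PySpark left join does at small scale.
transactions = [
    {"txn_id": 1, "account": "A1", "amount": 250.0},
    {"txn_id": 2, "account": "A2", "amount": 90.0},
    {"txn_id": 3, "account": "A9", "amount": 10.0},  # no matching reference row
]
accounts = {"A1": {"segment": "retail"}, "A2": {"segment": "corporate"}}

def enrich(txns, ref):
    """Left-join transactions with account reference data; unmatched rows keep None."""
    out = []
    for t in txns:
        extra = ref.get(t["account"], {"segment": None})
        out.append({**t, **extra})
    return out

for row in enrich(transactions, accounts):
    print(row["txn_id"], row["segment"])
```

Keeping unmatched rows (with a `None` segment) rather than dropping them is the left-join semantics usually wanted in enrichment, so no transactions silently disappear.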

Posted 1 month ago

Apply

5.0 - 10.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Role Purpose: Design, test, and maintain software programs for operating systems or applications to be deployed at a client's end, and ensure they meet 100% quality assurance parameters.

Big Data Developer - Spark, Scala, PySpark (coding & scripting)
Years of experience: 5 to 12 years
Location: Bangalore
Notice period: 0 to 30 days

Key Skills:
- Proficient in Spark, Scala, and PySpark coding & scripting.
- Fluent in big data engineering development using the Hadoop/Spark ecosystem.
- Hands-on experience in big data.
- Good knowledge of the Hadoop ecosystem.
- Knowledge of cloud architecture (AWS).
- Data ingestion and integration into the data lake using Hadoop ecosystem tools such as Sqoop, Spark, Impala, Hive, Oozie, Airflow, etc.
- Fluent in the Python and/or Scala language.
- Strong communication skills.

2. Perform coding and ensure optimal software/module development:
- Determine operational feasibility by evaluating analysis, problem definition, requirements, and proposed software.
- Develop and automate processes for software validation by setting up and designing test cases/scenarios/usage cases and executing them.
- Modify software to fix errors, adapt it to new hardware, improve its performance, or upgrade interfaces.
- Analyze information to recommend and plan the installation of new systems or modification of existing systems.
- Ensure that code is error-free, with no bugs or test failures.
- Prepare reports on programming project specifications, activities, and status.
- Ensure all issues are raised per the norms defined for the project/program/account, with clear descriptions and replication patterns.
- Compile timely, comprehensive, and accurate documentation and reports as requested.
- Coordinate with the team on daily project status and progress, and document it.
- Provide feedback on usability and serviceability, trace results to quality risks, and report them to concerned stakeholders.

3. Status reporting and customer focus on an ongoing basis with respect to the project and its execution:
- Capture all requirements and clarifications from the client for better-quality work.
- Take feedback regularly to ensure smooth and on-time delivery.
- Participate in continuing education and training to remain current on best practices, learn new programming languages, and better assist other team members.
- Consult with engineering staff to evaluate software-hardware interfaces and develop specifications and performance requirements.
- Document and demonstrate solutions by developing documentation, flowcharts, layouts, diagrams, charts, code comments, and clear code.
- Document necessary details and reports formally for proper understanding of the software, from client proposal to implementation.
- Ensure good-quality interactions with the customer (e-mail content, fault report tracking, voice calls, business etiquette, etc.).
- Respond to customer requests in a timely manner, with no internal or external complaints.

Deliverables:
1. Continuous integration, deployment & monitoring of software: 100% error-free onboarding & implementation, throughput %, adherence to the schedule/release plan.
2. Quality & CSAT: on-time delivery, software management, query troubleshooting, customer experience, completion of assigned certifications for skill upgradation.
3. MIS & reporting: 100% on-time MIS & report generation.

Mandatory Skills: Python for Insights. Experience: 5-8 years.

Posted 1 month ago

Apply

8.0 - 11.0 years

45 - 50 Lacs

Noida, Kolkata, Chennai

Work from Office

Dear Candidate, We are hiring a Julia Developer to build computational and scientific applications requiring speed and mathematical accuracy. Ideal for domains like finance, engineering, or AI research.

Key Responsibilities:
- Develop applications and models using the Julia programming language.
- Optimize for performance, parallelism, and numerical accuracy.
- Integrate with Python or C++ libraries where needed.
- Collaborate with data scientists and engineers on simulations and modeling.
- Maintain well-documented and reusable codebases.

Required Skills & Qualifications:
- Proficient in Julia, with knowledge of multiple dispatch and the type system.
- Experience in numerical computing or scientific research.
- Familiarity with Plots.jl, Flux.jl, or DataFrames.jl.
- Understanding of Python, R, or MATLAB is a plus.

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa, Delivery Manager, Integra Technologies

Posted 1 month ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-have skills: Cloudera Data Platform
Good-to-have skills: NA
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing innovative solutions, and ensuring that applications are aligned with business objectives. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Facilitate knowledge-sharing sessions to enhance team capabilities.
- Monitor project progress and ensure timely delivery of application features.

Professional & Technical Skills:
- Must-have: Proficiency in Cloudera Data Platform.
- Good-to-have: Experience with data integration tools and frameworks.
- Strong understanding of application development methodologies.
- Experience with cloud-based application deployment.
- Familiarity with database management and optimization techniques.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Cloudera Data Platform.
- This position is based in Hyderabad.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5.0 - 10.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 (basic and intermediate) troubleshooting.
Must-have skills: Linux Operations
Good-to-have skills: Red Hat OS Administration
Minimum experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Infra Tech Support Practitioner, you will provide ongoing technical support and maintenance of production and development systems and software products, both remote and onsite. You will work within a defined operating model and processes, implementing technology at the operating-system level across all server and network areas.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Implement hardware/software support.
- Perform L1 and L2 (basic and intermediate) troubleshooting.
- Ensure smooth operation of production and development systems.

Professional & Technical Skills:
- Must-have: Proficiency in Linux Operations.
- Good-to-have: Experience with Red Hat OS Administration.
- Strong understanding of system administration.
- Knowledge of network protocols and configurations.
- Experience troubleshooting server and network issues.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Linux Operations.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

8.0 - 11.0 years

35 - 37 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Dear Candidate, We are hiring a Data Engineer to build and maintain data pipelines for our analytics platform. Perfect for engineers focused on data processing and scalability.

Key Responsibilities:
- Design and implement ETL processes.
- Manage data warehouses and ensure data quality.
- Collaborate with data scientists to provide necessary data.
- Optimize data workflows for performance.

Required Skills & Qualifications:
- Proficiency in SQL and Python.
- Experience with data pipeline tools like Apache Airflow.
- Familiarity with big data technologies (Spark, Hadoop).
- Bonus: Knowledge of cloud data services (AWS Redshift, Google BigQuery).

Soft Skills:
- Strong troubleshooting and problem-solving skills.
- Ability to work independently and in a team.
- Excellent communication and documentation skills.

Note: If interested, please share your updated resume and a preferred time for a discussion. If shortlisted, our HR team will contact you.
Kandi Srinivasa, Delivery Manager, Integra Technologies

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 17 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & Responsibilities
Key skills: PySpark, Cloudera Data Platform, big data Hadoop, Hive, Kafka

Responsibilities:
- Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
- Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
- Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
- Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
- Data Quality and Validation: Implement data-quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
- Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Technical Skills:
- 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.
- PySpark: Advanced proficiency, including working with RDDs, DataFrames, and optimization techniques.
- Cloudera Data Platform: Strong experience with CDP components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
- Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
- Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
- Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
- Scripting and Automation: Strong scripting skills in Linux.
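The data-quality checks and validation routines this listing describes are typically a set of per-record rules whose failures feed monitoring and alerting. The rules and sample batch below are hypothetical illustrations; on CDP the same checks would usually run as PySpark DataFrame filters or via a dedicated validation library.

```python
# Minimal data-quality gate: each rule returns the IDs of failing records.
batch = [
    {"id": 1, "event_ts": "2024-05-01T10:00:00", "value": 42},
    {"id": 2, "event_ts": None, "value": 7},                    # missing timestamp
    {"id": 3, "event_ts": "2024-05-01T10:05:00", "value": -1},  # out of range
]

rules = {
    "non_null_ts": lambda r: r["event_ts"] is not None,
    "value_in_range": lambda r: 0 <= r["value"] <= 1000,
}

def run_checks(records, rules):
    """Return {rule_name: [failing record ids]} for monitoring/alerting."""
    return {
        name: [r["id"] for r in records if not check(r)]
        for name, check in rules.items()
    }

failures = run_checks(batch, rules)
print(failures)  # {'non_null_ts': [2], 'value_in_range': [3]}
```

Returning failing IDs per rule, rather than a single pass/fail flag, makes it straightforward to route records to quarantine and to chart failure rates per check over time.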

Posted 1 month ago

Apply