
2453 Hive Jobs - Page 31

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Source: Naukri

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities:
* Build data pipelines to ingest, process, and transform data from files, streams, and databases.
* Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
* Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies.
* Develop streaming pipelines.
* Work with Hadoop/Azure ecosystem components (Apache Spark, Kafka, cloud computing) to implement scalable solutions for ever-increasing data volumes.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering.
* Minimum 4 years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
* Minimum 3 years of experience on Azure cloud data platforms.
* Experience with Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server.
* Exposure to streaming solutions and message brokers such as Kafka.
* Experience with Unix/Linux commands and basic shell scripting.

Preferred technical and professional experience:
* Certification in Azure and Databricks, or Cloudera-certified Spark developer.
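
As a flavor of the pipeline work this posting describes, here is a minimal PySpark sketch that reads raw files, applies a simple transformation, and writes a partitioned Hive table. It is an illustration only, not this employer's codebase; the paths, database, and column names are hypothetical.

```python
# Minimal ingest -> transform -> load sketch for the pipeline work described
# above. All paths, table names, and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("ingest-transactions")
    .enableHiveSupport()          # lets Spark read and write Hive tables
    .getOrCreate()
)

# Ingest: read raw CSV files landed on HDFS (or cloud object storage).
raw = spark.read.csv("/data/raw/transactions/", header=True, inferSchema=True)

# Transform: basic cleansing and enrichment.
clean = (
    raw.dropDuplicates(["txn_id"])
       .withColumn("txn_date", F.to_date("txn_ts"))
       .filter(F.col("amount") > 0)
)

# Load: write a partitioned Hive table for downstream analytics.
(clean.write
      .mode("overwrite")
      .partitionBy("txn_date")
      .saveAsTable("analytics.transactions_clean"))
```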

Posted 1 week ago

Apply

5.0 - 7.0 years

14 - 18 Lacs

Mumbai

Work from Office

Source: Naukri

Work with the broader team to build, analyze, and improve AI solutions. You will also work with our software developers in consuming different enterprise applications.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* 5-7 years of experience.
* Sound knowledge of Python, including ML-related services; proficient in Python with a focus on data analytics packages.
* Strategy: Analyse large, complex data sets and provide actionable insights to inform business decisions.
* Strategy: Design and implement data models that help identify patterns and trends.
* Collaboration: Work with data engineers to optimize and maintain data pipelines.
* Perform quantitative analyses that translate data into actionable insights and support analytical, data-driven decision-making.
* Identify and recommend process improvements to enhance the efficiency of the data platform.
* Develop and maintain data models, algorithms, and statistical models.

Preferred technical and professional experience:
* Experience with conversation analytics.
* Experience with cloud technologies.
* Experience with data exploration tools such as Tableau.

Posted 1 week ago

Apply

2.0 - 5.0 years

7 - 11 Lacs

Pune

Work from Office

Source: Naukri

Translates business needs into a data model, providing expertise on data modeling tools and techniques for designing data models for applications and related systems. Skills include logical and physical data modeling and knowledge of ERwin, MDM, and/or ETL. Data modeling is a process used to define and analyze the data requirements needed to support the business processes within the scope of an organization's information systems. The process therefore involves professional data modelers working closely with business stakeholders, as well as potential users of the information system. Three different types of data models are produced while progressing from requirements to the actual database to be used for the information system.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Translating business needs into data models for applications and related systems; logical and physical data modeling; knowledge of ERwin, MDM, and/or ETL.

Posted 1 week ago

Apply

6.0 - 11.0 years

13 - 17 Lacs

Bengaluru

Work from Office

Source: Naukri

* 6+ years of industry work experience.
* Experience extracting data from a variety of sources, and a desire to expand those skills.
* Worked on the Google Looker tool.
* Worked on BigQuery and GCP technologies.
* Strong SQL and Spark knowledge.
* Excellent data analysis skills; must be comfortable querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark.
* Knowledge of financial accounting is a bonus.
* Works independently with cross-functional teams and drives towards resolution.
* Experience with object-oriented programming in Python and its design patterns.
* Experience handling Unix systems for optimal usage to host enterprise web applications.
* GCP certifications preferred; payments industry background good to have.
* A candidate who has been part of a Google Cloud migration is an ideal fit.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* 3-5 years of experience.
* Intuitive individual with an ability to manage change and proven time management.
* Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
* Up-to-date technical knowledge from attending educational workshops and reviewing publications.

Preferred technical and professional experience:
* 6+ years of industry work experience.
* Experience extracting data from a variety of sources, and a desire to expand those skills.
* Worked on the Google Looker tool.

Posted 1 week ago

Apply

2.0 - 5.0 years

13 - 17 Lacs

Hyderabad

Work from Office

Source: Naukri

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
* Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
* Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
* Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
* Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Develop/convert databases (Hadoop to GCP) and their objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another.
* Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.); expose data as APIs.
* Participate in the modernization roadmap journey; analyze discovery and analysis outcomes; lead discovery and analysis workshops/playbacks.
* Identify application dependencies and source/target database incompatibilities.
* Analyze non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.).
* Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc.
* Lead the team to adopt the right tools for various migration and modernization methods.

Preferred technical and professional experience:
* You thrive on teamwork and have excellent verbal and written communication skills.
* Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
* Ability to communicate results to technical and non-technical audiences.

Posted 1 week ago

Apply

15.0 - 20.0 years

6 - 10 Lacs

Mumbai

Work from Office

Source: Naukri

Location: Mumbai. Experience: 15+ years in data engineering/architecture.

Role Overview: Lead the architectural design and implementation of a secure, scalable Cloudera-based data lakehouse for one of India's top public sector banks.

Key Responsibilities:
* Design end-to-end lakehouse architecture on Cloudera
* Define data ingestion, processing, storage, and consumption layers
* Guide data modeling, governance, lineage, and security best practices
* Define the migration roadmap from the existing DWH to CDP
* Lead reviews with client stakeholders and engineering teams

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Skills Required:
* Proven experience with Cloudera CDP, Spark, Hive, HDFS, and Iceberg
* Deep understanding of lakehouse patterns and data mesh principles
* Familiarity with data governance tools (e.g., Apache Atlas, Collibra)
* Banking/FSI domain knowledge highly desirable

Posted 1 week ago

Apply

6.0 - 11.0 years

13 - 17 Lacs

Gurugram

Work from Office

Source: Naukri

* 6+ years of industry work experience.
* Experience extracting data from a variety of sources, and a desire to expand those skills.
* Worked on the Google Looker tool.
* Worked on BigQuery and GCP technologies.
* Strong SQL and Spark knowledge.
* Excellent data analysis skills; must be comfortable querying and analyzing large amounts of data on Hadoop HDFS using Hive and Spark.
* Knowledge of financial accounting is a bonus.
* Works independently with cross-functional teams and drives towards resolution.
* Experience with object-oriented programming in Python and its design patterns.
* Experience handling Unix systems for optimal usage to host enterprise web applications.
* GCP certifications preferred; payments industry background good to have.
* A candidate who has been part of a Google Cloud migration is an ideal fit.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* 3-5 years of experience.
* Intuitive individual with an ability to manage change and proven time management.
* Proven interpersonal skills while contributing to team effort by accomplishing related results as needed.
* Up-to-date technical knowledge from attending educational workshops and reviewing publications.

Preferred technical and professional experience:
* 6+ years of industry work experience.
* Experience extracting data from a variety of sources, and a desire to expand those skills.
* Worked on the Google Looker tool.

Posted 1 week ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

Source: Naukri

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities:
* Build data pipelines to ingest, process, and transform data from files, streams, and databases.
* Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on AWS cloud data platforms or HDFS.
* Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies.
* Develop streaming pipelines.
* Work with Hadoop/AWS ecosystem components (Apache Spark, Kafka, cloud computing) to implement scalable solutions for ever-increasing data volumes.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Minimum 4 years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
* Minimum 3 years of experience on AWS cloud data platforms.
* Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
* Good to excellent SQL skills.
* Exposure to streaming solutions and message brokers such as Kafka.

Preferred technical and professional experience:
* Certification in AWS and Databricks, or Cloudera-certified Spark developer.

Posted 1 week ago

Apply

8.0 - 13.0 years

5 - 8 Lacs

Mumbai

Work from Office

Source: Naukri

Role Overview: Seeking an experienced Apache Airflow specialist to design and manage data orchestration pipelines for batch/streaming workflows in a Cloudera environment.

Key Responsibilities:
* Design, schedule, and monitor DAGs for ETL/ELT pipelines
* Integrate Airflow with Cloudera services and external APIs
* Implement retries, alerts, logging, and failure recovery
* Collaborate with data engineers and DevOps teams

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Skills Required:
* Experience: 3-8 years
* Expertise in Airflow 2.x, Python, and Bash
* Knowledge of CI/CD for Airflow DAGs
* Proven experience with Cloudera CDP and Spark/Hive-based data pipelines
* Integration with Kafka, REST APIs, and databases
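
To make the orchestration responsibilities concrete, below is a minimal Airflow 2.x DAG sketch showing scheduling, retries, and failure alerting. It is a generic illustration under stated assumptions: the task bodies, owner, and alert address are hypothetical placeholders, not part of the posting.

```python
# Minimal Airflow 2.x DAG: schedule, retries, email-on-failure alerting,
# and a linear task dependency. All names are hypothetical placeholders.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling data from source systems")


def transform():
    print("applying business rules")


def load():
    print("writing to the warehouse")


default_args = {
    "owner": "data-eng",
    "retries": 3,                           # automatic retries on task failure
    "retry_delay": timedelta(minutes=5),
    "email": ["data-alerts@example.com"],   # hypothetical alert address
    "email_on_failure": True,               # requires SMTP to be configured
}

with DAG(
    dag_id="etl_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    t1 >> t2 >> t3   # extract -> transform -> load
```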

Posted 1 week ago

Apply

2.0 - 5.0 years

13 - 17 Lacs

Gurugram

Work from Office

Source: Naukri

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
* Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
* Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
* Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
* Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Develop/convert databases (Hadoop to GCP) and their objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another.
* Implement a specific data replication mechanism (CDC, file data transfer, bulk data transfer, etc.); expose data as APIs.
* Participate in the modernization roadmap journey; analyze discovery and analysis outcomes; lead discovery and analysis workshops/playbacks.
* Identify application dependencies and source/target database incompatibilities.
* Analyze non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.).
* Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc.
* Lead the team to adopt the right tools for various migration and modernization methods.

Preferred technical and professional experience:
* You thrive on teamwork and have excellent verbal and written communication skills.
* Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
* Ability to communicate results to technical and non-technical audiences.

Posted 1 week ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Pune

Work from Office

Source: Naukri

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
* Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
* Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
* Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Must have 5+ years of experience in big data: Hadoop, Spark, Scala, Python, HBase, and Hive.
* Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git.
* Developed Python and PySpark programs for data analysis.
* Good working experience using Python to develop custom frameworks for generating rules (like a rules engine).
* Developed Python code to gather data from HBase and designed solutions implemented with PySpark.
* Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
* Understanding of DevOps.
* Experience building scalable end-to-end data ingestion and processing solutions.
* Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.

Posted 1 week ago

Apply

3.0 - 8.0 years

9 - 13 Lacs

Mumbai

Work from Office

Source: Naukri

Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.

Key Responsibilities:
* Build scalable batch and real-time ETL pipelines using Spark and Hive
* Integrate structured and unstructured data sources
* Perform performance tuning and code optimization
* Support orchestration and job scheduling (NiFi, Airflow)

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Experience: 3-15 years
* Proficiency in PySpark/Scala with Hive/Impala
* Experience with data partitioning, bucketing, and optimization
* Familiarity with Kafka, Iceberg, and NiFi is a must
* Knowledge of banking or financial datasets is a plus
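
As an illustration of the real-time side of such pipelines, here is a minimal Spark Structured Streaming sketch that reads a Kafka topic and appends parsed events to a table. The broker address, topic, schema, and table names are hypothetical, and the spark-sql-kafka connector is assumed to be on the classpath.

```python
# Sketch: Spark Structured Streaming from Kafka into a managed table.
# Broker, topic, schema, and table names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = (
    SparkSession.builder
    .appName("kafka-to-hive")
    .enableHiveSupport()
    .getOrCreate()
)

schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
])

# Read the Kafka topic as an unbounded streaming DataFrame.
events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "transactions")
    .load()
)

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
parsed = events.select(
    F.from_json(F.col("value").cast("string"), schema).alias("v")
).select("v.*")

# Append micro-batches to a table, checkpointing for failure recovery.
query = (
    parsed.writeStream
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/transactions")
    .toTable("staging.transactions_stream")   # requires Spark 3.1+
)
query.awaitTermination()
```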

Posted 1 week ago

Apply

7.0 - 8.0 years

15 - 16 Lacs

Pune

Work from Office

Source: Naukri

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team.

In this role, you will:
* Develop and support new feed ingestion: understand the existing framework and develop to the business rules and requirements.
* Develop and maintain changes and enhancements in Data Ingestion / Juniper, promoting and supporting them in the production environment within the stipulated timelines.
* Become familiar with the Data Ingestion / Data Refinery / Common Data Model / Compdata frameworks quickly and contribute to application development as soon as possible.
* Bring a methodical and measured approach with a keen eye for detail, and remain calm under pressure and in the face of adversity.
* Collaborate, interact, and engage with different business, technical, and subject matter experts.
* Communicate concisely, in writing and verbally; manage workload from multiple requests and balance priorities.
* Be proactive with a can-do mindset and attitude, and good documentation skills.

Requirements: To be successful in this role, you should meet the following requirements.

Experience (1 = essential, 2 = very useful, 3 = nice to have):
1. Hadoop / Hive / GCP
2. Agile / Scrum
3. LINUX

Technical skills (1 = essential, 2 = useful, 3 = nice to have):
1. Any ETL tool
1. Analytical troubleshooting
2. Hive QL
1. On-prem / cloud infrastructure knowledge

Posted 1 week ago

Apply

3.0 - 7.0 years

10 - 15 Lacs

Pune

Work from Office

Source: Naukri

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team.

In this role, you will:
* Develop and support new feed ingestion: understand the existing framework and develop to the business rules and requirements.
* Develop and maintain changes and enhancements in Data Ingestion / Juniper, promoting and supporting them in the production environment within the stipulated timelines.
* Become familiar with the Data Ingestion / Data Refinery / Common Data Model / Compdata frameworks quickly and contribute to application development as soon as possible.
* Bring a methodical and measured approach with a keen eye for detail, and remain calm under pressure and in the face of adversity.
* Collaborate, interact, and engage with different business, technical, and subject matter experts.
* Communicate concisely, in writing and verbally; manage workload from multiple requests and balance priorities.
* Be proactive with a can-do mindset and attitude, and good documentation skills.

Requirements: To be successful in this role, you should meet the following requirements.

Experience (1 = essential, 2 = very useful, 3 = nice to have):
1. Hadoop / Hive / GCP
2. Agile / Scrum
3. LINUX

Technical skills (1 = essential, 2 = useful, 3 = nice to have):
1. Any ETL tool
1. Analytical troubleshooting
2. Hive QL
1. On-prem / cloud infrastructure knowledge

Posted 1 week ago

Apply

7.0 - 8.0 years

15 - 16 Lacs

Hyderabad

Work from Office

Source: Naukri

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team.

In this role, you will:
* Develop and support new feed ingestion: understand the existing framework and develop to the business rules and requirements.
* Develop and maintain changes and enhancements in Data Ingestion / Juniper, promoting and supporting them in the production environment within the stipulated timelines.
* Become familiar with the Data Ingestion / Data Refinery / Common Data Model / Compdata frameworks quickly and contribute to application development as soon as possible.
* Bring a methodical and measured approach with a keen eye for detail, and remain calm under pressure and in the face of adversity.
* Collaborate, interact, and engage with different business, technical, and subject matter experts.
* Communicate concisely, in writing and verbally; manage workload from multiple requests and balance priorities.
* Be proactive with a can-do mindset and attitude, and good documentation skills.

Requirements: To be successful in this role, you should meet the following requirements.

Experience (1 = essential, 2 = very useful, 3 = nice to have):
1. Hadoop / Hive / GCP
2. Agile / Scrum
3. LINUX

Technical skills (1 = essential, 2 = useful, 3 = nice to have):
1. Any ETL tool
1. Analytical troubleshooting
2. Hive QL
1. On-prem / cloud infrastructure knowledge

You'll achieve more when you join HSBC.

Posted 1 week ago

Apply

2.0 - 5.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

As an Associate Software Developer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
* Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
* Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
* Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
* Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Strong experience in SQL.
* Strong experience in DBT.
* Strong experience in data warehousing concepts.
* Strong experience in AWS or another cloud; Redshift is good to have.

Preferred technical and professional experience:
* You thrive on teamwork and have excellent verbal and written communication skills.
* Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
* Ability to communicate results to technical and non-technical audiences.

Posted 1 week ago

Apply

5.0 - 7.0 years

12 - 16 Lacs

Bengaluru

Work from Office

Source: Naukri

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

* Build data pipelines to ingest, process, and transform data from files, streams, and databases.
* Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on AWS cloud data platforms or HDFS.
* Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies.
* Develop streaming pipelines.
* Work with Hadoop/AWS ecosystem components (Apache Spark, Kafka, cloud computing) to implement scalable solutions for ever-increasing data volumes.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* 5-7+ years of experience in data management (DW, DL, data platform, lakehouse) and data engineering.
* Minimum 4 years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
* Minimum 3 years of experience on AWS cloud data platforms.
* Exposure to streaming solutions and message brokers such as Kafka.
* Experience with AWS EMR, AWS Glue, Databricks, AWS Redshift, and DynamoDB.
* Good to excellent SQL skills.

Preferred technical and professional experience:
* Certification in AWS and Databricks, or Cloudera-certified Spark developer.
* AWS S3, Redshift, and EMR for data storage and distributed processing.
* AWS Lambda, AWS Step Functions, and AWS Glue to build serverless, event-driven data workflows and orchestrate ETL processes.

Posted 1 week ago

Apply

2.0 - 5.0 years

6 - 10 Lacs

Pune

Work from Office

Source: Naukri

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
* Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
* Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
* Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Design, develop, and maintain Ab Initio graphs for extracting, transforming, and loading (ETL) data from diverse sources to various target systems; implement data quality and validation processes within Ab Initio.
* Data modeling and analysis: Collaborate with data architects and business analysts to understand data requirements and translate them into effective ETL processes; analyze and model data to ensure optimal ETL design and performance.
* Ab Initio components: Utilize components such as Transform Functions, Rollup, Join, and Normalize to build scalable and efficient data integration solutions; implement best practices for reusable Ab Initio components.

Preferred technical and professional experience:
* Optimize Ab Initio graphs for performance, ensuring efficient data processing and minimal resource utilization; conduct performance tuning and troubleshooting as needed.
* Collaboration: Work closely with cross-functional teams, including data analysts, database administrators, and quality assurance, to ensure seamless integration of ETL processes; participate in design reviews and provide technical expertise to enhance overall solution quality.
* Documentation.

Posted 1 week ago

Apply

10.0 - 15.0 years

5 - 9 Lacs

Mumbai

Work from Office

Source: Naukri

Role Overview: We are hiring a Talend Data Quality Developer to design and implement robust data quality (DQ) frameworks in a Cloudera-based data lakehouse environment. The role focuses on building rule-driven validation and monitoring processes for migrated data pipelines, ensuring high levels of data trust and regulatory compliance across critical banking domains.

Key Responsibilities:
* Design and implement data quality rules using Talend DQ Studio, tailored to validate customer, account, transaction, and KYC datasets within the Cloudera lakehouse.
* Create reusable templates for profiling, validation, standardization, and exception handling.
* Integrate DQ checks within PySpark-based ingestion and transformation pipelines targeting Apache Iceberg tables.
* Ensure compatibility with Cloudera components (HDFS, Hive, Iceberg, Ranger, Atlas) and job orchestration frameworks (Airflow/Oozie).
* Perform initial and ongoing data profiling on source and target systems to detect data anomalies and drive rule definitions.
* Monitor and report DQ metrics through dashboards and exception reports.
* Work closely with data governance, architecture, and business teams to align DQ rules with enterprise definitions and regulatory requirements.
* Support lineage and metadata integration with tools like Apache Atlas or external catalogs.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Experience: 5-10 years in data management, with 3+ years in Talend Data Quality tools.
* Platforms: Cloudera Data Platform (CDP), with an understanding of the Iceberg, Hive, HDFS, and Spark ecosystems.
* Languages/tools: Talend Studio (DQ module), SQL, Python (preferred), Bash scripting.
* Data concepts: Strong grasp of data quality dimensions: completeness, consistency, accuracy, timeliness, uniqueness.
* Banking exposure: Experience with financial services data (CIF, AML, KYC, product masters) is highly preferred.
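
Since the role embeds DQ checks in PySpark pipelines, here is a minimal sketch of rule-driven checks for three of the dimensions named above (completeness, uniqueness, validity). In practice Talend DQ Studio would define and orchestrate such rules; the table, columns, and 99% threshold here are hypothetical.

```python
# Minimal rule-driven data quality checks in PySpark.
# Table, columns, domain values, and threshold are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").enableHiveSupport().getOrCreate()
df = spark.table("lake.customer")

total = df.count()

# Completeness: share of rows with a non-null mandatory key.
keyed = df.filter(F.col("customer_id").isNotNull())
non_null = keyed.count()
completeness = non_null / total if total else 0.0

# Uniqueness: the mandatory key should not repeat.
distinct_keys = keyed.select("customer_id").distinct().count()
uniqueness = distinct_keys / non_null if non_null else 0.0

# Validity: KYC status must come from an approved domain of values.
valid = df.filter(F.col("kyc_status").isin("VERIFIED", "PENDING", "REJECTED")).count()
validity = valid / total if total else 0.0

# Report each dimension against a (hypothetical) 99% pass threshold.
for dim, score in [("completeness", completeness),
                   ("uniqueness", uniqueness),
                   ("validity", validity)]:
    status = "PASS" if score >= 0.99 else "FAIL"
    print(f"{dim}: {score:.4f} {status}")
```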

Posted 1 week ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Mumbai

Work from Office

Source: Naukri

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
* Design, build, optimize, and support new and existing data models and ETL processes based on our client's business requirements.
* Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
* Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Must have 5+ years of experience in big data: Hadoop, Spark, Scala, Python, HBase, and Hive.
* Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git.
* Developed Python and PySpark programs for data analysis.
* Good working experience using Python to develop custom frameworks for generating rules (like a rules engine).
* Developed Python code to gather data from HBase and designed solutions implemented with PySpark.
* Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience:
* Understanding of DevOps.
* Experience building scalable end-to-end data ingestion and processing solutions.
* Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.

Posted 1 week ago

Apply

15.0 - 20.0 years

5 - 9 Lacs

Mumbai

Work from Office

Source: Naukri

Location: Mumbai.

Role Overview: As a Big Data Engineer, you'll design and build robust data pipelines on Cloudera using Spark (Scala/PySpark) for ingestion, transformation, and processing of high-volume data from banking systems.

Key Responsibilities:
* Build scalable batch and real-time ETL pipelines using Spark and Hive
* Integrate structured and unstructured data sources
* Perform performance tuning and code optimization
* Support orchestration and job scheduling (NiFi, Airflow)

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Experience: 3-15 years
* Proficiency in PySpark/Scala with Hive/Impala
* Experience with data partitioning, bucketing, and optimization
* Familiarity with Kafka, Iceberg, and NiFi is a must
* Knowledge of banking or financial datasets is a plus

Posted 1 week ago

Apply

5.0 - 10.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Source: Naukri

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
* Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
* Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements.
* Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
* Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks.
* The ideal candidate has Databricks, Data Lake, and Python programming skills, and experience deploying to Databricks.
* Familiarity with Azure Data Factory.

Preferred technical and professional experience:
* Good communication skills.
* 3+ years of experience with ADF, Databricks, and Data Lake.
* Ability to communicate results to technical and non-technical audiences.

Posted 1 week ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Source: Naukri

Ingest new data from relational and non-relational source database systems into our warehouse. Connect data from various sources, and integrate data from external sources into the warehouse by building facts and dimensions based on the EPM data model requirements. Automate data exchange and processing through serverless data pipelines.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Experience in data analysis and integration.
* Experience building and consuming fact and dimension tables.
* Experience automating data integration through data pipelines.
* Experience with object-oriented programming languages such as Python.
* Experience with structured data processing languages such as SQL and Spark SQL.
* Experience with REST APIs and JSON.
* Experience with IBM Cloud data processing services such as IBM Code Engine and IBM Event Streams (Apache Kafka).
* Strong understanding of data warehouse concepts and various data warehouse architectures.

Preferred technical and professional experience:
* Experience with IBM Cloud architecture and DevOps.
* Knowledge of Agile development methodologies.
* Experience building containerized applications and running them in serverless environments on the cloud, such as IBM Code Engine, Kubernetes, or Satellite.
* Experience with the IBM Cognitive Enterprise Data Platform and CodeHub.
* Experience with data integration tools such as IBM DataStage or Informatica.
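
To illustrate the fact-and-dimension building this posting describes, here is a minimal PySpark sketch that resolves a natural key against a dimension table and appends the result to a fact table. The source tables, keys, and surrogate-key column are hypothetical.

```python
# Sketch: build a fact table by joining a staging extract to a dimension.
# Tables, keys, and the surrogate-key column are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("epm-facts").enableHiveSupport().getOrCreate()

orders = spark.table("staging.orders")            # raw source extract
dim_customer = spark.table("warehouse.dim_customer")

# Resolve the natural key (customer_id) to the dimension's surrogate key,
# then keep only foreign keys and measures in the fact table.
fact_orders = (
    orders.join(dim_customer, on="customer_id", how="left")
          .select(
              F.col("customer_sk"),               # surrogate key from the dimension
              F.col("order_id"),
              F.col("order_date"),
              F.col("amount").alias("order_amount"),
          )
)

fact_orders.write.mode("append").saveAsTable("warehouse.fact_orders")
```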

Posted 1 week ago

Apply

2.0 - 3.0 years

2 - 6 Lacs

Bengaluru

Work from Office

Source: Naukri

C#, AWS, and SQL skill set required, with 2-3 years of experience; immediate joiners only.

Experience: 2-3 years.
Skills: Primary: Data Engineering. Sub-skill(s): Data Engineering. Additional: C#, Python, AWS CloudFormation, Apache Hive, SQL.

Posted 1 week ago

Apply

6.0 - 11.0 years

14 - 17 Lacs

Pune

Work from Office

Source: Naukri

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
* Manage end-to-end feature development and resolve challenges faced in implementing it.
* Learn new technologies and apply them in feature development within the time frame provided.
* Manage debugging, root cause analysis, and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise:
* Overall, more than 6 years of experience, with more than 4 years of strong hands-on experience in Python and Spark.
* Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
* Strong problem-solving skills.

Preferred technical and professional experience:
* Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 1 week ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive. (advanced; a DDL sketch follows this list)
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • What is the purpose of the Hive metastore? (basic)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • How do you handle schema evolution in Hive? (advanced)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
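
For the partitioning, bucketing, and dynamic-partitioning questions above, the sketch below shows the corresponding DDL, submitted here through PySpark's Hive support. The database, table, and column names are hypothetical.

```python
# Partitioning vs. bucketing, expressed as SQL DDL submitted through PySpark.
# Database, table, and column names are hypothetical placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hive-ddl").enableHiveSupport().getOrCreate()
spark.sql("CREATE DATABASE IF NOT EXISTS demo")

# Partitioned table: each txn_date value becomes its own directory, so a
# query filtering on txn_date reads only the matching partitions.
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.sales_partitioned (
        txn_id STRING,
        amount DOUBLE
    )
    PARTITIONED BY (txn_date DATE)
    STORED AS PARQUET
""")

# Bucketed table: rows are hashed on customer_id into a fixed number of
# files, which helps joins and sampling on that key. (Spark's DDL uses
# USING; in Hive itself the same idea is written as
# CLUSTERED BY (customer_id) INTO 32 BUCKETS STORED AS PARQUET.)
spark.sql("""
    CREATE TABLE IF NOT EXISTS demo.sales_bucketed (
        txn_id STRING,
        customer_id STRING,
        amount DOUBLE
    )
    USING PARQUET
    CLUSTERED BY (customer_id) INTO 32 BUCKETS
""")

# Dynamic partitioning: the partition value is derived from the data
# instead of being spelled out per INSERT statement.
spark.sql("SET hive.exec.dynamic.partition=true")
spark.sql("SET hive.exec.dynamic.partition.mode=nonstrict")
spark.sql("""
    INSERT INTO demo.sales_partitioned PARTITION (txn_date)
    SELECT 't1' AS txn_id, 10.5 AS amount, DATE '2024-01-01' AS txn_date
""")
```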

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
