
383 HBase Jobs - Page 10

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

6 - 11 years

15 - 25 Lacs

Chandigarh

Work from Office

Naukri logo

Job Requirement
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a Data Lake with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Strong programming and debugging skills in Python and in Scala or Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E. from a reputed institute preferred.
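Many of the listings on this page ask for HBase and NoSQL data-modelling experience. As a purely illustrative sketch (not taken from any listing; the function name, the 16-bucket salt, and the time-series key layout are all assumptions), this is the classic salted row-key pattern for HBase-style stores: a salt prefix spreads writes across regions, and an inverted timestamp makes the newest row sort first under lexicographic order.

```python
import hashlib

def salted_row_key(device_id: str, ts_ms: int, buckets: int = 16) -> str:
    """Build an HBase-style row key for a time-series table.

    The salt prefix distributes sequential writes across `buckets` regions
    (avoiding region hotspotting); the inverted timestamp puts the newest
    cell first in a forward scan.
    """
    salt = int(hashlib.md5(device_id.encode()).hexdigest(), 16) % buckets
    inverted_ts = (2**63 - 1) - ts_ms  # smaller value == newer event
    return f"{salt:02d}|{device_id}|{inverted_ts:019d}"

key = salted_row_key("sensor-42", 1_700_000_000_000)
```

Because the salt is derived from the entity id, all rows for one device stay contiguous within their bucket, so a prefix scan per device still works.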

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Rajkot

Work from Office

Pharma experience is a must.
Job Requirement
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a Data Lake with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Strong programming and debugging skills in Python and in Scala or Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Coimbatore

Work from Office

Pharma experience is a must.
Job Requirement
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a Data Lake with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Strong programming and debugging skills in Python and in Scala or Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Hyderabad

Work from Office

Job Requirement
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a Data Lake with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Strong programming and debugging skills in Python and in Scala or Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Salem

Work from Office

Azure Data Factory ETL Consultant - Pharma experience is a must.
Job Requirement
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a Data Lake with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Strong programming and debugging skills in Python and in Scala or Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Kota

Work from Office

Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a Data Lake with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Strong programming and debugging skills in Python and in Scala or Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Nasik

Work from Office

Job Requirement
Required Experience, Skills & Competencies:
- Strong hands-on experience implementing a Data Lake with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview.
- Experience with big data technologies such as Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, and Sqoop.
- Strong programming and debugging skills in Python and in Scala or Java.
- Experience building REST services is good to have.
- Experience supporting BI and Data Science teams in consuming data in a secure and governed manner.
- Good understanding and experience of CI/CD with Git and Jenkins / Azure DevOps.
- Experience setting up cloud-computing infrastructure solutions.
- Hands-on experience with, or exposure to, NoSQL databases and data modelling in Hive.
- 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP).
- B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

2 - 4 years

4 - 6 Lacs

Bengaluru

Work from Office

We are looking for a Production Support Engineer to analyze and resolve runtime issues, collaborate with developers on complex problems, and automate manual tasks through scripting. The ideal candidate has experience monitoring production environments, troubleshooting, and scripting automation for reporting and maintenance.
Key Responsibilities:
- Analyze runtime issues, diagnose problems, and implement code fixes of low to medium complexity.
- Collaborate with developers to identify and resolve more complex issues.
- Address urgent issues efficiently while adhering to customer SLAs.
- Adapt and modify installers, shell scripts, and Perl scripts, automating repetitive tasks.
- Develop automation scripts for reporting, maintenance, and anomaly detection where applicable.
- Gather and relay user feedback to the development team.
- Maintain a record of problem analysis and resolution activity in an on-call tracking system.
- Proactively monitor production and non-production environments, ensuring stability and fixing issues.
- Independently identify and resolve issues before they impact production.
- Develop and enhance smaller system components as needed.
Desired Skills & Qualifications:
- Strong SQL skills, with experience in MySQL and query writing.
- Proficiency in automation scripting using Python, Perl, or shell scripting.
- Prior experience in production support.
- Hands-on programming experience, preferably in Python.
- Strong written and oral communication skills.
- Familiarity with Storm, ZooKeeper, Kafka, Elasticsearch, Aerospike, Redis, HBase, Aesop, and MySQL.
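The anomaly-detection scripting this role describes can be as simple as a trailing-window deviation check. The sketch below is a standard-library illustration only; the function name, window size, and threshold are assumptions, not anything specified in the listing.

```python
from statistics import mean, stdev

def flag_anomalies(latencies_ms, window=5, threshold=3.0):
    """Flag indices whose value deviates more than `threshold` standard
    deviations from the trailing `window` samples -- a minimal
    production-monitoring check."""
    flagged = []
    for i in range(window, len(latencies_ms)):
        base = latencies_ms[i - window:i]
        mu, sigma = mean(base), stdev(base)
        if sigma and abs(latencies_ms[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

samples = [102, 98, 101, 99, 100, 103, 480, 101]  # one obvious latency spike
spikes = flag_anomalies(samples)
```

In practice the input would come from a metrics store or parsed logs, and a flagged index would page the on-call engineer rather than just land in a list.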

Posted 2 months ago

Apply

1 - 6 years

2 - 5 Lacs

Hyderabad

Work from Office

Sahaj Retail Limited is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.
- Liaising with coworkers and clients to elucidate the requirements for each task.
- Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
- Reformulating existing frameworks to optimize their functioning.
- Testing such structures to ensure that they are fit for use.
- Preparing raw data for manipulation by data scientists.
- Detecting and correcting errors in your work.
- Ensuring that your work remains backed up and readily accessible to relevant coworkers.
- Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 2 months ago

Apply

1 - 5 years

3 - 7 Lacs

Allahabad, Noida

Work from Office

Feather Thread Corporation is looking for a Big Data Administrator to join our dynamic team and embark on a rewarding career journey.
Office Management:
- Oversee general office operations, including maintenance of office supplies, equipment, and facilities.
- Manage incoming and outgoing correspondence, including mail, email, and phone calls.
- Coordinate meetings, appointments, and travel arrangements for staff members as needed.
Administrative Support:
- Provide administrative support to management and staff, including scheduling meetings, preparing documents, and organizing files.
- Assist with the preparation of reports, presentations, and other materials for internal and external stakeholders.
- Maintain accurate records and databases, ensuring data integrity and confidentiality.
Communication and Coordination:
- Serve as a point of contact for internal and external stakeholders, including clients, vendors, and partners.
- Facilitate communication between departments and team members, ensuring timely and effective information flow.
- Coordinate logistics for company events, meetings, and conferences.
Documentation and Compliance:
- Assist with the development and implementation of company policies, procedures, and guidelines.
- Maintain compliance with regulatory requirements and industry standards.
- Ensure proper documentation and record-keeping practices are followed.
Project Support:
- Provide support to project teams by assisting with project coordination, documentation, and tracking of tasks and deadlines.
- Collaborate with team members to ensure project deliverables are met on time and within budget.

Posted 2 months ago

Apply

8 - 10 years

10 - 12 Lacs

Chennai

Work from Office

As a Staff Data Scientist at FourKites, you will collaborate with a team of data scientists and engineers with diverse technical expertise to develop cutting-edge AI-driven solutions. Your primary focus will be leveraging advanced analytics and machine learning to tackle complex challenges in supply chain visibility, driving innovation in predictive modeling, optimization, and real-time decision-making. In this role, you will have the opportunity to design, implement, and deploy scalable AI-powered products, pioneering industry-first solutions that address critical customer pain points in supply chain operations. Utilizing billions of historical data points from the FourKites Big Data platform, spanning multiple geographies and transportation modes, you will be responsible for building robust, scalable, and production-ready machine learning models that directly impact the global logistics ecosystem.
What you'll be doing:
- Be a technical thought leader in collaboration with engineering and product leadership, helping to set the strategy and standards for developing machine learning and data-driven products at FourKites.
- Develop predictive models to optimize supply chain operations and route planning.
- Design and implement machine learning solutions for real-time logistics decision-making.
- Analyze large datasets from multiple sources to identify patterns, inefficiencies, and opportunities for improvement.
- Collaborate with cross-functional teams (Engineering, Product, and Business) to integrate data-driven solutions into business processes.
- Lead technical strategy and mentor junior data scientists, setting best practices for data science methodologies and deployment.
- Build and optimize algorithms for ETAs, shipment tracking, vessel scheduling, and milestone inference.
- Develop data pipelines and feature engineering strategies in collaboration with data engineering teams.
- Communicate insights and recommendations to senior leadership, translating complex data findings into actionable business strategies.
- Stay updated on the latest trends in AI/ML, optimization, and data science applications in logistics.
Who you are:
- 8+ years of experience in data science, machine learning, or applied analytics, preferably in logistics, supply chain, or related industries.
- Strong expertise in machine learning, deep learning, statistical modeling, artificial intelligence, and optimization techniques.
- Proficiency in Python, Java, SQL, and distributed computing frameworks (Spark).
- Experience with ML frameworks such as TensorFlow, PyTorch, scikit-learn, and XGBoost.
- Hands-on knowledge of different databases (Postgres, MongoDB, HBase, Prometheus).
- Solid understanding of operations research and optimization algorithms (e.g., linear programming, reinforcement learning).
- Hands-on experience with large-scale data pipelines (KStreams, Delta Lake, Redshift, Athena) and cloud platforms (AWS, Azure).
- Ability to translate business problems into structured and scalable data science solutions.
- Strong problem-solving skills and the ability to work with ambiguous requirements.
- Excellent communication skills, with the ability to influence stakeholders and present data-driven insights.
Preferred Qualifications:
- Master's or PhD degree in Computer Science, Artificial Intelligence, Machine Learning, or a related technical field.
- Expertise in developing products based on artificial intelligence (including deep learning algorithms) and machine learning in NLP.
- Experience building, deploying, and maintaining machine learning models in production environments.
- Experience working with geospatial data.

Posted 2 months ago

Apply

8 - 13 years

30 - 35 Lacs

Bengaluru

Work from Office

Responsibilities: Power Programmer is an important initiative within Global Delivery to develop a team of Full Stack Developers who will work on complex engineering projects, platforms, and marketplaces for our clients using emerging technologies. They will be ahead of the technology curve and will be constantly enabled and trained to be polyglots. They are go-getters with a drive to solve end-customer challenges, and will spend most of their time designing and coding.
- End-to-end contribution to technology-oriented development projects.
- Providing solutions with minimum system requirements, in Agile mode.
- Collaborate with Power Programmers, the open source community, and tech user groups.
- Custom development of new platforms and solutions.
- Work on large-scale digital platforms and marketplaces.
- Work on complex engineering projects using cloud-native architecture.
- Work with innovative Fortune 500 companies on cutting-edge technologies.
- Co-create and develop new products and platforms for our clients.
- Contribute to open source and continuously upskill in the latest technology areas.
- Incubate tech user groups.
Technical and Professional Requirements: Big Data - Spark, Scala, Hive, Kafka
Preferred Skills: Technology->Big Data->Big Data - ALL; Technology->Big Data->HBase; Technology->Big Data->Oozie; Technology->Big Data->Sqoop; Technology->Functional Programming->Scala; Technology->Big Data - Data Processing->Spark->Spark Streaming
Educational Requirements: Bachelor of Engineering
Service Line: Strategic Technology Group
* Location of posting is subject to business requirements

Posted 2 months ago

Apply

5 - 9 years

7 - 11 Lacs

Pune, Hyderabad, Noida

Work from Office

Skills: Big Data, Scala & Spark, Cloud (AWS or Azure)
Big Data Technologies - Lead Data Engineer (8+ years of overall experience in Data Engineering across Enterprise Data Platforms, Data Hub, Lambda Architecture & Cloud technologies)
Primary: Spark (including streaming), Scala, Hadoop, HBase, Kafka, Delta (preferably CDP), SQL; knowledge/experience of Azure Cloud
Secondary / good to have: Elasticsearch, Databricks, ADF, R, Python
Location: Noida, Hyderabad, Pune
Experience: 5.5 to 8 years in Spark and Scala

Posted 2 months ago

Apply

6 - 8 years

8 - 10 Lacs

Mysore

Work from Office

As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools, and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.
Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis, and fixing of issues reported in the Content Management back-end software system.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- More than 6 years of overall experience, with 4+ years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write, and debug applications in Python and PySpark.
- Strong problem-solving skills.
Preferred technical and professional experience:
- Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 2 months ago

Apply

4 - 7 years

6 - 8 Lacs

Chennai

Work from Office

Can you say "Yes, I have!" to the following?
- Good understanding of distributed system architecture, data lake design, and best practices
- Working knowledge of cloud-based deployments in AWS, Azure, or GCP
- Coding proficiency in at least one programming language (Scala, Python, Java)
- Experience in Airflow is preferred
- Experience in data warehousing and relational database architectures (Oracle, SQL, DB2, Teradata)
- Expertise in Big Data storage and processing platforms (Hadoop, Spark, Hive, HBase)
Skills:
- Problem solver, fast learner, energetic and enthusiastic
- Self-motivated and highly professional, with the ability to lead and take ownership and responsibility
- Adaptable and flexible to business demands
Can you say "Yes, I will!" to the following?
- Lead analytical projects and deliver value to customers
- Coordinate individual teams to fulfil client requirements and manage deliverables
- Communicate and present complex concepts to business audiences
- Manage and strategize business from an analytics point of view
- Travel to client locations when necessary
- Design algorithms for product development and build analytics-based products

Posted 2 months ago

Apply

5 - 10 years

8 - 14 Lacs

Kota

Work from Office

- PhD or MS in Computer Science, Computational Linguistics, or Artificial Intelligence with a heavy focus on NLP/text mining, and 5 years of relevant industry experience.
- Creativity, resourcefulness, and a collaborative spirit.
- Knowledge and working experience in one or more of the following areas: Natural Language Processing, clustering and classification of text, question answering, text mining, information retrieval, distributional semantics, knowledge engineering, search ranking, and recommendation.
- Deep experience with text-wrangling and pre-processing skills such as document parsing and cleanup, vectorization, tokenization, language modeling, phrase detection, etc.
- Proficient programming skills in a high-level language (e.g. Python, R, Java, Scala).
- Comfortable with rapid prototyping practices.
- Comfortable with developing clean, production-ready code.
- Comfortable with pre-processing unstructured or semi-structured data.
- Experience with statistical data analysis, experimental design, and hypothesis validation.
Project-based experience with some of the following tools:
- Natural Language Processing (e.g. spaCy, NLTK, OpenNLP, or similar)
- Applied Machine Learning (e.g. scikit-learn, SparkML, H2O, or similar)
- Information retrieval and search engines (e.g. Elasticsearch/ELK, Solr/Lucene)
- Distributed computing platforms such as Spark, Hadoop (Hive, HBase, Pig), GraphLab
- Databases (traditional and NoSQL)
- Proficiency in traditional machine learning models such as LDA/topic modeling, graphical models, etc.
- Familiarity with deep learning architectures and frameworks such as PyTorch, TensorFlow, and Keras.
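The tokenization and vectorization skills this listing names are normally handled by libraries like spaCy or scikit-learn, but the underlying idea reduces to a few lines of standard-library Python. All names below are illustrative, not from any listing.

```python
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    """Lowercase word tokenizer -- a crude stand-in for a spaCy/NLTK pipeline."""
    return re.findall(r"[a-z]+", text.lower())

def term_frequencies(docs: list[str]) -> list[Counter]:
    """One bag-of-words Counter per document: the simplest text vectorization,
    before any IDF weighting or embedding."""
    return [Counter(tokenize(d)) for d in docs]

docs = ["Text mining and NLP.", "NLP powers question answering."]
vectors = term_frequencies(docs)
```

Real pipelines add lemmatization, stop-word removal, and TF-IDF or learned embeddings on top of exactly this bag-of-words skeleton.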

Posted 2 months ago

Apply

5 - 10 years

8 - 14 Lacs

Kolkata

Work from Office

- PhD or MS in Computer Science, Computational Linguistics, or Artificial Intelligence with a heavy focus on NLP/text mining, and 5 years of relevant industry experience.
- Creativity, resourcefulness, and a collaborative spirit.
- Knowledge and working experience in one or more of the following areas: Natural Language Processing, clustering and classification of text, question answering, text mining, information retrieval, distributional semantics, knowledge engineering, search ranking, and recommendation.
- Deep experience with text-wrangling and pre-processing skills such as document parsing and cleanup, vectorization, tokenization, language modeling, phrase detection, etc.
- Proficient programming skills in a high-level language (e.g. Python, R, Java, Scala).
- Comfortable with rapid prototyping practices.
- Comfortable with developing clean, production-ready code.
- Comfortable with pre-processing unstructured or semi-structured data.
- Experience with statistical data analysis, experimental design, and hypothesis validation.
Project-based experience with some of the following tools:
- Natural Language Processing (e.g. spaCy, NLTK, OpenNLP, or similar)
- Applied Machine Learning (e.g. scikit-learn, SparkML, H2O, or similar)
- Information retrieval and search engines (e.g. Elasticsearch/ELK, Solr/Lucene)
- Distributed computing platforms such as Spark, Hadoop (Hive, HBase, Pig), GraphLab
- Databases (traditional and NoSQL)
- Proficiency in traditional machine learning models such as LDA/topic modeling, graphical models, etc.
- Familiarity with deep learning architectures and frameworks such as PyTorch, TensorFlow, and Keras.

Posted 2 months ago

Apply

2 - 6 years

12 - 16 Lacs

Kochi

Work from Office

Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Developed PySpark code for AWS Glue jobs and for EMR.
- Worked on scalable distributed data systems using the Hadoop ecosystem in AWS EMR and the MapR distribution.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (like a rules engine).
- Developed Hadoop streaming jobs using Python for integrating Python-API-supported applications.
- Developed Python code to gather data from HBase and designed the solution for implementation using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and utilized Hive context objects to perform read/write operations.
- Rewrote some Hive queries in Spark SQL to reduce the overall batch time.
Preferred technical and professional experience:
- Understanding of DevOps.
- Experience in building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
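The "custom framework for generating rules (like a rules engine)" mentioned above can be sketched in a few lines of plain Python. This is an illustrative toy under assumed names (`Rule`, `apply_rules`, the two sample rules), not the actual framework the listing refers to.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    name: str
    predicate: Callable[[dict], bool]   # condition evaluated against a record
    action: Callable[[dict], dict]      # transformation applied when it fires

def apply_rules(record: dict, rules: list[Rule]) -> dict:
    """Run every matching rule over a record, in declaration order --
    the core loop of a data-cleansing rules engine."""
    for rule in rules:
        if rule.predicate(record):
            record = rule.action(record)
    return record

rules = [
    Rule("null_country", lambda r: not r.get("country"),
         lambda r: {**r, "country": "UNKNOWN"}),
    Rule("cap_amount", lambda r: r.get("amount", 0) > 10_000,
         lambda r: {**r, "amount": 10_000}),
]
cleaned = apply_rules({"amount": 25_000}, rules)
```

In a Spark job the same idea typically becomes a `map` over each partition's records, with the rule set loaded from configuration rather than hard-coded.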

Posted 2 months ago

Apply

4 - 9 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- 6-7+ years of total experience in data management (DW, DL, data platform, lakehouse) and data engineering skills.
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB.
- Good to excellent SQL skills.
Preferred technical and professional experience:
- Certification in AWS and Databricks, or Cloudera Spark certified developers.
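The ingest/process/transform pipeline pattern that this and the following listings describe can be sketched with standard-library generators standing in for Spark stages. Everything here (stage names, the CSV sample, the drop-empty rule) is an illustrative assumption, not code from any listing.

```python
import csv
import io

def extract(raw_csv: str):
    """Ingest: stream records out of a CSV source, one dict per row."""
    yield from csv.DictReader(io.StringIO(raw_csv))

def transform(rows):
    """Process: normalise fields and drop records with a missing amount."""
    for row in rows:
        if row["amount"].strip():
            yield {"city": row["city"].title(), "amount": float(row["amount"])}

def load(rows):
    """Sink: collect into the target structure (a list here; a table in practice)."""
    return list(rows)

raw = "city,amount\npune,120.5\nkochi,\nPUNE,80.0\n"
result = load(transform(extract(raw)))
```

Because each stage is lazy, records flow through one at a time, which is the same streaming-friendly shape a PySpark or Kafka pipeline gives you at cluster scale.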

Posted 2 months ago

Apply

4 - 9 years

12 - 16 Lacs

Pune

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers such as Kafka.
Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developers.

Posted 2 months ago

Apply

4 - 9 years

12 - 16 Lacs

Pune

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers such as Kafka.
Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developers.

Posted 2 months ago

Apply

4 - 9 years

12 - 16 Lacs

Pune

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.
Responsibilities:
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, and Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS.
- Experienced in developing efficient software code for multiple use cases built on the platform, leveraging the Spark framework with Python or Scala and big data technologies.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing.
Required education: Bachelor's Degree
Preferred education: Master's Degree
Required technical and professional expertise:
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on Azure.
- Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, and SQL Server DB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers such as Kafka.
Preferred technical and professional experience:
- Certification in Azure and Databricks, or Cloudera Spark certified developers.

Posted 2 months ago

Apply

2 - 5 years

14 - 17 Lacs

Hyderabad

Work from Office


As an Application Developer, you will lead IBM into the future by translating system requirements into the design and development of customized systems in an agile environment. The success of IBM is in your hands as you transform vital business needs into code and drive innovation. Your work will power IBM and its clients globally, collaborating and integrating code into enterprise systems. You will have access to the latest education, tools and technology, and a limitless career path with the world's technology leader. Come to IBM and make a global impact.

Responsibilities:
- Manage end-to-end feature development and resolve challenges faced in implementing it.
- Learn new technologies and apply them in feature development within the time frame provided.
- Manage debugging, root cause analysis and fixing of issues reported on the Content Management back-end software system.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- More than 6 years of overall experience, including 4+ years of strong hands-on experience in Python and Spark.
- Strong technical ability to understand, design, write and debug applications in Python and PySpark.
- Strong problem-solving skills.

Preferred technical and professional experience:
- Good to have: hands-on experience with cloud technology (AWS/GCP/Azure).

Posted 2 months ago

Apply

5 - 10 years

14 - 17 Lacs

Kochi

Work from Office


As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs.

Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 5+ years of experience in big data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python, including developing a custom framework for generating rules (like a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and HiveContext objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java and Scala.
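The "custom framework for generating rules (like a rules engine)" this listing mentions can be sketched minimally as predicate/action pairs applied to records. This is a hedged illustration of the general pattern, not the listing's actual framework; the rule names and fields are hypothetical, and a production version would typically load rules from configuration.

```python
# Minimal rules-engine sketch: each rule pairs a predicate (does this
# record match?) with an action (how to rewrite it). Rule names and
# record fields are hypothetical.

class Rule:
    def __init__(self, name, predicate, action):
        self.name = name
        self.predicate = predicate  # record -> bool
        self.action = action        # record -> new record

def apply_rules(record, rules):
    """Apply every matching rule's action to the record, in order."""
    for rule in rules:
        if rule.predicate(record):
            record = rule.action(record)
    return record

rules = [
    Rule("uppercase_country",
         lambda r: "country" in r,
         lambda r: {**r, "country": r["country"].upper()}),
    Rule("flag_large_amount",
         lambda r: r.get("amount", 0) > 1000,
         lambda r: {**r, "flagged": True}),
]

print(apply_rules({"country": "in", "amount": 2500}, rules))
# {'country': 'IN', 'amount': 2500, 'flagged': True}
```

In a Spark pipeline the same idea usually appears as `apply_rules` mapped over a DataFrame or RDD, so rules stay data-driven while execution scales out.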

Posted 2 months ago

Apply

2 - 5 years

14 - 17 Lacs

Pune

Work from Office


Your primary responsibilities include:
- Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements.
- Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization.
- Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise:
- Must have 3-5 years of experience in big data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Developed Python and PySpark programs for data analysis.
- Good working experience with Python, including developing a custom framework for generating rules (like a rules engine).
- Developed Python code to gather data from HBase and designed solutions implemented using PySpark.
- Used Apache Spark DataFrames/RDDs to apply business transformations and HiveContext objects to perform read/write operations.

Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java and Scala.

Posted 2 months ago

Apply

Exploring HBase Jobs in India

HBase is a distributed, scalable NoSQL database commonly used in big data applications. As the demand for big data solutions continues to grow, so does the demand for professionals with HBase skills in India. Job seekers looking to explore opportunities in this field can find a variety of roles across industries and sectors.
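What makes HBase different from a traditional RDBMS is its data model: a sparse, versioned map of row key to column family to qualifier to timestamped values. The toy sketch below illustrates that logical model only; it is not the real client API (real access goes through the HBase Java client, or e.g. happybase from Python), and the table and field names are invented.

```python
# Toy model of HBase's logical layout:
# row key -> column family -> qualifier -> {timestamp: value}.
# Column families are fixed at table creation; qualifiers are free-form.

class ToyHTable:
    def __init__(self, column_families):
        self.families = set(column_families)
        self.rows = {}

    def put(self, row, family, qualifier, value, ts):
        if family not in self.families:
            raise KeyError(f"unknown column family: {family}")
        fam = self.rows.setdefault(row, {}).setdefault(family, {})
        fam.setdefault(qualifier, {})[ts] = value  # versions kept by timestamp

    def get(self, row, family, qualifier):
        """Return the newest version, like HBase's default Get."""
        versions = self.rows[row][family][qualifier]
        return versions[max(versions)]

t = ToyHTable(["info"])
t.put("user1", "info", "city", "Pune", ts=1)
t.put("user1", "info", "city", "Bangalore", ts=2)
print(t.get("user1", "info", "city"))  # Bangalore
```

Two properties fall out of this model that interviewers often probe: rows are sparse (absent qualifiers cost nothing), and every cell is versioned by timestamp rather than updated in place.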

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi-NCR

These cities are known for their strong presence in the IT industry and are actively hiring professionals with HBase skills.

Average Salary Range

The salary range for HBase professionals in India can vary based on experience and location. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.

Career Path

In the HBase domain, a typical career progression may look like: - Junior HBase Developer - HBase Developer - Senior HBase Developer - HBase Architect - HBase Administrator - HBase Consultant - HBase Team Lead

Related Skills

In addition to HBase expertise, professionals in this field are often expected to have knowledge of: - Apache Hadoop - Apache Spark - Data Modeling - Java programming - Database design - Linux/Unix

Interview Questions

  • What is HBase and how does it differ from traditional RDBMS? (basic)
  • Explain the architecture of HBase. (medium)
  • How does data replication work in HBase? (medium)
  • What is the role of HMaster in HBase? (basic)
  • How can you improve the performance of HBase? (medium)
  • What are the different types of filters in HBase? (medium)
  • Explain the concept of HBase coprocessors. (advanced)
  • How does compaction work in HBase? (medium)
  • What is the purpose of the WAL in HBase? (basic)
  • Can you explain the difference between HBase and Cassandra? (medium)
  • What is the role of ZooKeeper in HBase? (basic)
  • How does data retrieval work in HBase? (medium)
  • What is a region server in HBase? (basic)
  • Explain the concept of bloom filters in HBase. (medium)
  • How does HBase ensure data consistency? (medium)
  • What is the significance of column families in HBase? (basic)
  • How do you handle schema changes in HBase? (medium)
  • Explain the concept of cell-level security in HBase. (advanced)
  • What are the different modes of data loading in HBase? (medium)
  • How does HBase handle data storage internally? (medium)
  • What is the purpose of the HFile in HBase? (basic)
  • How can you monitor the performance of HBase? (medium)
  • What is the role of the MemStore in HBase? (basic)
  • How does HBase handle data distribution and load balancing? (medium)
  • Explain the process of data deletion in HBase. (medium)
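Several of the questions above (the WAL, the MemStore, HFiles, and how data is stored internally) concern HBase's write path. The sketch below is a heavily simplified, in-memory illustration of that flow under the stated assumptions; real HBase adds regions, compaction, block encoding, and crash recovery from the WAL.

```python
# Simplified sketch of HBase's write path: every put is appended to the
# write-ahead log (WAL) for durability, buffered in the MemStore, and
# flushed to an immutable, sorted HFile once the MemStore grows too large.

class ToyRegionWritePath:
    def __init__(self, flush_threshold=3):
        self.wal = []          # append-only log, replayed after a crash
        self.memstore = {}     # in-memory buffer of recent writes
        self.hfiles = []       # immutable flushed files, newest last
        self.flush_threshold = flush_threshold

    def put(self, key, value):
        self.wal.append((key, value))   # 1. durability first
        self.memstore[key] = value      # 2. buffer in memory
        if len(self.memstore) >= self.flush_threshold:
            self.flush()

    def flush(self):
        # 3. persist a sorted, immutable HFile and clear the MemStore
        self.hfiles.append(dict(sorted(self.memstore.items())))
        self.memstore = {}

w = ToyRegionWritePath(flush_threshold=2)
w.put("r1", "a")
w.put("r2", "b")   # MemStore hits the threshold -> flush
w.put("r3", "c")
print(len(w.hfiles), w.memstore)  # 1 {'r3': 'c'}
```

This also hints at why compaction exists (many small flushed HFiles must eventually be merged) and why deletes are written as tombstone markers rather than in-place removals.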

Closing Remark

As you prepare for HBase job opportunities in India, make sure to brush up on your technical skills, practice coding exercises, and be ready to showcase your expertise in interviews. With the right preparation and confidence, you can land a rewarding career in the exciting field of HBase. Good luck!
