5 - 10 years
7 - 12 Lacs
Kochi
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on the client's business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need it.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise:
- 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Use of Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.
Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
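For illustration, here is a minimal PySpark sketch of the HBase-to-Hive workflow this listing describes: scan records from HBase, build a DataFrame, apply a business transformation, and persist to Hive. The happybase client, host, table, and column names are assumptions for the example, not details taken from the posting.

```python
# Minimal sketch: scan rows from HBase with the happybase client, turn them
# into a Spark DataFrame, apply a business transformation, and persist to Hive.
# Host, table, and column names are illustrative assumptions.
import happybase
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder.appName("hbase-to-hive").enableHiveSupport().getOrCreate()
)

# Scan a (hypothetical) HBase table and flatten each row into a plain dict.
connection = happybase.Connection("hbase-host.example.com")
rows = [
    {"row_key": key.decode(), "amount": float(data[b"cf:amount"])}
    for key, data in connection.table("transactions").scan()
]

# Build a DataFrame, apply a simple transformation, and write to Hive.
df = spark.createDataFrame(rows)
converted = df.withColumn("amount_usd", F.col("amount") / 83.0)  # example rate
converted.write.mode("overwrite").saveAsTable("analytics.transactions_usd")
```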
Posted 3 months ago
3 - 5 years
4 - 7 Lacs
Bengaluru
Work from Office
Responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address the client's needs. Your primary responsibilities include:
- Design, build, optimize, and support new and existing data models and ETL processes based on the client's business requirements.
- Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
- Coordinate data access and security so that data scientists and analysts can easily access data whenever they need it.
Required education: Bachelor's Degree. Preferred education: Master's Degree.
Required technical and professional expertise:
- 3-5 years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive.
- Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git.
- Experience developing Python and PySpark programs for data analysis.
- Good working experience with Python to develop a custom framework for generating rules (similar to a rules engine).
- Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark.
- Use of Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.
Preferred technical and professional experience:
- Understanding of DevOps.
- Experience building scalable end-to-end data ingestion and processing solutions.
- Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
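As a rough illustration of the "custom framework for generating rules" mentioned above, here is a small PySpark sketch in which each rule is a named column expression applied to a DataFrame. Rule names, columns, and sample data are assumptions for the example only.

```python
# Minimal sketch of a rules-engine-style framework: each rule is a named
# PySpark column expression; apply_rules adds one flag column per rule and an
# overall pass/fail column. Rule names, columns, and data are assumptions.
from pyspark.sql import SparkSession, DataFrame, functions as F

spark = SparkSession.builder.appName("rules-framework").getOrCreate()

RULES = {
    "non_negative_amount": F.col("amount") >= 0,
    "known_country": F.col("country").isin("IN", "US", "GB"),
}

def apply_rules(df: DataFrame, rules: dict) -> DataFrame:
    """Add one boolean column per rule plus an overall passed_all flag."""
    for name, predicate in rules.items():
        df = df.withColumn(name, predicate)
    return df.withColumn("passed_all", F.expr(" AND ".join(rules)))

orders = spark.createDataFrame(
    [(1, 250.0, "IN"), (2, -10.0, "US")], ["id", "amount", "country"]
)
apply_rules(orders, RULES).show()
```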
Posted 3 months ago
2 - 6 years
9 - 13 Lacs
Pune, Mumbai, Gurgaon
Work from Office
- Manage ETL pipelines, data engineering operations, and cloud infrastructure.
- Experience in configuring data exchange and transfer methods.
- Experience in orchestrating ETL pipelines with multiple tasks, triggers, and dependencies.
- Strong proficiency with Python and Apache Spark; intermediate or better proficiency with SQL; experience with AWS S3 and EC2, and Databricks.
- Ability to communicate efficiently and translate ideas with technical stakeholders in IT and Data Science.
- Passionate about designing data infrastructure and eager to contribute ideas to help build robust data platforms.
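To make the orchestration requirement concrete, here is a minimal Python sketch of running ETL tasks in dependency order with the standard-library graphlib module; in practice a scheduler such as Airflow or Databricks Jobs would own this. Task names and bodies are placeholders.

```python
# Minimal sketch of dependency-aware orchestration: run each ETL task only
# after its upstream dependencies have finished, using the standard-library
# graphlib module (Python 3.9+). Task names and bodies are placeholders; a
# production pipeline would typically use Airflow or Databricks Jobs.
from graphlib import TopologicalSorter

def extract():   print("pull raw files from S3")
def transform(): print("clean and join with Spark")
def load():      print("write curated tables to the warehouse")

TASKS = {"extract": extract, "transform": transform, "load": load}
DEPENDENCIES = {"transform": {"extract"}, "load": {"transform"}}

# static_order() yields tasks in an order that respects the dependencies.
for task_name in TopologicalSorter(DEPENDENCIES).static_order():
    TASKS[task_name]()
```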
Posted 3 months ago
3 - 7 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Microsoft Azure Modern Data Platform, Apache Spark
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As an Application Lead for Custom Software Engineering, you will be responsible for designing, building, and configuring applications using Microsoft Azure Databricks. Your typical day will involve leading the effort to deliver high-quality solutions, collaborating with cross-functional teams, and ensuring timely delivery of projects.
Roles & Responsibilities:
- Lead the effort to design, build, and configure applications using Microsoft Azure Databricks.
- Act as the primary point of contact for the project, collaborating with cross-functional teams to ensure timely delivery of high-quality solutions.
- Utilize your expertise in the Scala programming language, Apache Spark, and the Microsoft Azure Modern Data Platform to develop and implement efficient, scalable solutions.
- Ensure adherence to best practices and standards for software development, including code reviews, testing, and documentation.
- Build performance-oriented Scala code, optimized for Databricks/Spark execution.
- Provide peer support to other team members on Azure Databricks/Spark/Scala best practices.
- Improve the performance of the calculation engine.
- Develop proofs of concept using new technologies.
- Develop new applications to meet regulatory commitments (e.g., FRTB).
Professional & Technical Skills:
- Proficiency in the Scala programming language.
- Experience with Apache Spark and the Microsoft Azure Modern Data Platform.
- Strong understanding of software development best practices and standards.
- Experience designing, building, and configuring applications using Microsoft Azure Databricks.
- Experience with data processing and analysis using big data technologies.
- Excellent problem-solving and analytical skills.
Qualification: 15 years of full-time education
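A brief sketch of the kind of Spark performance levers referenced above (broadcast joins, caching, shuffle-partition tuning), shown in PySpark for brevity even though the role itself calls for Scala; paths, table names, and columns are assumptions.

```python
# Minimal sketch of common Spark tuning techniques: broadcast the small
# dimension table, cache a result that is reused, and size shuffle partitions.
# Paths, table names, and columns are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("calc-engine-tuning").getOrCreate()
spark.conf.set("spark.sql.shuffle.partitions", "200")  # match cluster size

trades = spark.read.parquet("/mnt/data/trades")  # large fact table (assumed path)
desks = spark.read.parquet("/mnt/data/desks")    # small dimension table (assumed path)

# Broadcasting the small table avoids shuffling the large one; caching the
# joined result pays off when several downstream aggregations reuse it.
enriched = trades.join(broadcast(desks), "desk_id").cache()
enriched.groupBy("desk_id").agg(F.sum("notional").alias("total_notional")).show()
```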
Posted 3 months ago
3 - 8 years
10 - 14 Lacs
Pune
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: Engineering graduate, preferably in Computer Science; 15 years of full-time education
Summary: As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications using PySpark. Your typical day will involve collaborating with cross-functional teams, developing and deploying PySpark applications, and acting as the primary point of contact for the project.
Roles & Responsibilities:
- Lead the effort to design, build, and configure PySpark applications, collaborating with cross-functional teams to ensure project success.
- Develop and deploy PySpark applications, ensuring adherence to best practices and standards.
- Act as the primary point of contact for the project, communicating effectively with stakeholders and providing regular updates on project progress.
- Provide technical guidance and mentorship to junior team members, ensuring their continued growth and development.
- Stay updated with the latest advancements in PySpark and related technologies, integrating innovative approaches for sustained competitive advantage.
Professional & Technical Skills:
- Must-have skills: strong experience in PySpark.
- Good-to-have skills: experience with Hadoop, Hive, and other big data technologies.
- Solid understanding of software development principles and best practices.
- Experience with Agile development methodologies.
- Strong problem-solving and analytical skills.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory office attendance (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).
Qualification: Engineering graduate, preferably in Computer Science; 15 years of full-time education
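For context, a minimal sketch of what a deployable PySpark application entry point might look like when submitted via spark-submit; paths, argument names, and the aggregation are illustrative assumptions, not requirements from the posting.

```python
# Minimal sketch of a spark-submit-friendly PySpark application entry point.
# Input/output paths, argument names, and the aggregation are assumptions.
import argparse
from pyspark.sql import SparkSession, functions as F

def run(input_path: str, output_path: str) -> None:
    spark = SparkSession.builder.appName("orders-etl").getOrCreate()
    orders = spark.read.json(input_path)                  # raw input (assumed JSON)
    summary = orders.groupBy("region").agg(F.count("*").alias("order_count"))
    summary.write.mode("overwrite").parquet(output_path)  # curated output

if __name__ == "__main__":
    parser = argparse.ArgumentParser(description="Example PySpark ETL job")
    parser.add_argument("--input", required=True)
    parser.add_argument("--output", required=True)
    args = parser.parse_args()
    run(args.input, args.output)
```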
Posted 3 months ago
5 - 9 years
10 - 14 Lacs
Bengaluru
Work from Office
Project Role: Application Lead
Project Role Description: Lead the effort to design, build, and configure applications, acting as the primary point of contact.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: BE
Summary: As a Databricks Unified Data Analytics Platform Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with the Databricks Unified Data Analytics Platform, collaborating with cross-functional teams, and ensuring the successful delivery of applications.
Roles & Responsibilities:
- Lead the design, development, and deployment of applications using the Databricks Unified Data Analytics Platform.
- Act as the primary point of contact for all application-related activities, collaborating with cross-functional teams to ensure successful delivery.
- Ensure the quality and integrity of applications through rigorous testing and debugging.
- Provide technical guidance and mentorship to junior team members, fostering a culture of continuous learning and improvement.
Professional & Technical Skills:
- Must-have skills: expertise in the Databricks Unified Data Analytics Platform.
- Good-to-have skills: experience with other big data technologies such as Hadoop, Spark, and Kafka.
- Strong understanding of software engineering principles and best practices.
- Experience with Agile development methodologies and tools such as JIRA and Confluence.
- Proficiency in programming languages such as Python, Java, or Scala.
Additional Information:
- The candidate should have a minimum of 5 years of experience with the Databricks Unified Data Analytics Platform.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions.
- This position is based at our Chennai office.
Qualifications: BE
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Bengaluru
Work from Office
Project Role: Advanced Application Engineer
Project Role Description: Utilize modular architectures, next-generation integration techniques, and a cloud-first, mobile-first mindset to provide vision to application development teams. Work with an Agile mindset to create value across projects of multiple scopes and scales.
Must-have skills: Data Engineering
Good-to-have skills: Python (programming language), Apache Spark, Scala, Hadoop, Kafka
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: As an Advanced Application Engineer, you will utilize modular architectures, next-generation integration techniques, and a cloud-first, mobile-first mindset to provide vision to application development teams. You will work with an Agile mindset to create value across projects of multiple scopes and scales. Your typical day will involve collaborating with teams, making team decisions, engaging with multiple teams, and providing solutions to problems for your immediate team and across multiple teams. You will also contribute to key decisions and provide guidance to your team.
Roles & Responsibilities:
- Expected to be a subject matter expert (SME).
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Contribute creatively to team discussions and brainstorming sessions.
- Identify and implement process improvements to enhance team efficiency.
- Mentor and guide junior professionals in the team.
Professional & Technical Skills:
- Must-have skills: proficiency in data engineering, Python, Apache Spark, and Scala.
- Good-to-have skills: experience with Python, Apache Spark, and Scala.
- Strong understanding of data engineering principles and best practices.
- Experience in designing and implementing data pipelines and ETL processes.
- Proficient in working with big data technologies such as Hadoop and Spark.
- Familiarity with cloud platforms and services such as AWS or Azure.
- Knowledge of database systems and SQL.
- Experience with data warehousing and data modeling.
Additional Information:
- The candidate should have a minimum of 5 years of experience in data engineering.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.
Qualifications: 15 years of full-time education
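To illustrate the ingestion stack listed above (Kafka source, Spark processing, cloud storage sink), a minimal Spark Structured Streaming sketch follows; the broker address, topic, and paths are assumptions, and the Kafka connector package must be available on the cluster.

```python
# Minimal sketch of a streaming ingestion pipeline: read events from Kafka,
# keep the raw payload, and land it as Parquet for downstream batch jobs.
# Broker, topic, and storage paths are assumptions; the spark-sql-kafka
# connector package must be on the cluster classpath.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker.example.com:9092")
    .option("subscribe", "clickstream")
    .load()
    .select(F.col("value").cast("string").alias("payload"), F.col("timestamp"))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3a://datalake/raw/clickstream/")
    .option("checkpointLocation", "s3a://datalake/checkpoints/clickstream/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```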
Posted 3 months ago
5 - 10 years
7 - 12 Lacs
Hyderabad
Work from Office
Project Role: Software Development Engineer
Project Role Description: Analyze, design, code, and test multiple components of application code across one or more clients. Perform maintenance, enhancements, and/or development work.
Must-have skills: PySpark
Good-to-have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: Engineering graduate, preferably in Computer Science; 15 years of full-time education
Summary: As a Software Development Engineer, you will be responsible for analyzing, designing, coding, and testing multiple components of application code using PySpark. Your typical day will involve performing maintenance, enhancements, and/or development work for one or more clients in Hyderabad.
Roles & Responsibilities:
- Design, develop, and maintain PySpark applications for one or more clients.
- Analyze and troubleshoot complex issues in PySpark applications and provide solutions.
- Collaborate with cross-functional teams to ensure timely delivery of high-quality software solutions.
- Participate in code reviews and ensure adherence to coding standards and best practices.
- Stay updated with the latest advancements in PySpark and related technologies.
Professional & Technical Skills:
- Must-have skills: strong experience in PySpark.
- Good-to-have skills: experience with big data technologies such as Hadoop, Hive, and HBase.
- Experience in designing and developing distributed systems using PySpark.
- Strong understanding of data structures, algorithms, and software design principles.
- Experience working with SQL and NoSQL databases.
- Experience working with version control systems such as Git.
Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful software solutions.
- This position is based at our Bangalore, Hyderabad, Chennai, and Pune offices. Mandatory office attendance (RTO) for 2-3 days, working in 2 shifts (Shift A: 10:00 am to 8:00 pm IST; Shift B: 12:30 pm to 10:30 pm IST).
Qualifications: Engineering graduate, preferably in Computer Science; 15 years of full-time education
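Since the listing stresses code reviews and coding standards, here is a small sketch of a testable PySpark transformation with a pytest-style check; the function name, columns, and sample data are illustrative assumptions.

```python
# Minimal sketch of a testable PySpark transformation plus a pytest-style
# check, in the spirit of the code-review and coding-standards duties above.
# Function name, columns, and sample data are illustrative assumptions.
from pyspark.sql import SparkSession, DataFrame, functions as F
from pyspark.sql.window import Window

def dedupe_latest(df: DataFrame) -> DataFrame:
    """Keep only the most recent record per customer_id."""
    w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
    return df.withColumn("rn", F.row_number().over(w)).filter("rn = 1").drop("rn")

def test_dedupe_latest():
    spark = SparkSession.builder.master("local[1]").appName("tests").getOrCreate()
    df = spark.createDataFrame(
        [(1, "2024-01-01"), (1, "2024-02-01"), (2, "2024-01-15")],
        ["customer_id", "updated_at"],
    )
    assert dedupe_latest(df).count() == 2  # one row per customer remains
```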
Posted 3 months ago
HBase is a distributed, scalable, and NoSQL database that is commonly used in big data applications. As the demand for big data solutions continues to grow, so does the demand for professionals with HBase skills in India. Job seekers looking to explore opportunities in this field can find a variety of roles across different industries and sectors.
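For readers new to HBase, here is a minimal sketch of day-to-day access from Python using the happybase client (write a row, read it back, scan a column family); the host, table, and column names are assumptions, and the example requires a running HBase Thrift server.

```python
# Minimal sketch of basic HBase operations from Python via the happybase
# client: put a row, get it back, and scan a column family. Host, table, and
# column names are assumptions; a running HBase Thrift server is required.
import happybase

connection = happybase.Connection("hbase-host.example.com")
table = connection.table("user_profiles")

# HBase stores bytes, addressed by row key and column family:qualifier.
table.put(b"user:1001", {b"info:name": b"Asha", b"info:city": b"Kochi"})

print(table.row(b"user:1001"))                    # single-row get
for key, data in table.scan(columns=[b"info"]):   # scan one column family
    print(key, data)
```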
Major IT hubs such as Bengaluru, Pune, Hyderabad, Mumbai, and Kochi are known for their strong presence in the IT industry and are actively hiring professionals with HBase skills.
The salary range for HBase professionals in India can vary based on experience and location. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
In the HBase domain, a typical career progression may look like:
- Junior HBase Developer
- HBase Developer
- Senior HBase Developer
- HBase Architect
- HBase Administrator
- HBase Consultant
- HBase Team Lead
In addition to HBase expertise, professionals in this field are often expected to have knowledge of:
- Apache Hadoop
- Apache Spark
- Data modeling
- Java programming
- Database design
- Linux/Unix
As you prepare for HBase job opportunities in India, make sure to brush up on your technical skills, practice coding exercises, and be ready to showcase your expertise in interviews. With the right preparation and confidence, you can land a rewarding career in the exciting field of HBase. Good luck!