
546 HBase Jobs

Set up a Job Alert
JobPe aggregates results for easy access, but you apply directly on the employer's job portal.

2.0 - 8.0 years

0 Lacs

Haryana

On-site

You will be part of Maruti Suzuki's Analytics Centre of Excellence (ACE) CoE team as a Data Scientist. Your responsibilities will include designing and implementing workflows of Linear and Logistic Regression and Ensemble Models (Random Forest, Boosting) using R/Python. You should have demonstrable competency in Probability and Statistics, with the ability to use ideas of Data Distributions, Hypothesis Testing, and other Statistical Tests. Experience in handling outliers, denoising data, and managing the impact of pandemic-like situations will be crucial. Additionally, you will be expected to perform Exploratory Data Analysis (EDA) of raw data, conduct feature engineering where applicable, and showcase competency in Data Visualization using the Python/R Data Science Stack.

Leveraging cloud platforms for training and deploying large-scale solutions, as well as training and evaluating ML models using various machine learning and deep learning algorithms, will be part of your role. You will also need to retrain and maintain model accuracy in deployment and package and deploy large-scale models on on-premise systems using multiple approaches, including Docker. Taking complete ownership of the assigned project, working in Agile environments, and being well-versed with project tracking tools like JIRA or equivalent will be expected.

Your competencies should include knowledge of cloud platforms (AWS, Azure, and GCP), exposure to NoSQL databases (MongoDB, Cassandra, Cosmos DB, HBase), and forecasting experience in products like SAP, Oracle, Power BI, Qlik, etc. Proficiency in Excel (Power Pivot, Power Query, Macros, Charts), experience with large datasets and distributed computing (Hive/Hadoop/Spark), and transfer learning using state-of-the-art models in different spaces such as vision, NLP, and speech will be beneficial. Integration with external services and cloud APIs, as well as working with data annotation approaches and tools for text, images, and videos, will also be part of your responsibilities. The ideal candidate should have a minimum of 2 years and a maximum of 8 years of work experience, along with a Bachelor of Technology (B.Tech) or equivalent educational qualification.

Posted 1 day ago

Apply

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be working as an Informatica BDM professional at PibyThree Consulting Pvt. Ltd. in Pune, Maharashtra. PibyThree is a global cloud consulting and services provider focusing on Cloud Transformation, Cloud FinOps, IT Automation, Application Modernization, and Data & Analytics. The company's goal is to help businesses succeed by leveraging technology for automation and increased productivity.

Your responsibilities and required skills include:
- A minimum of 4+ years of development and design experience in Informatica Big Data Management
- Excellent SQL skills
- Hands-on work with HDFS, HiveQL, Informatica BDM, Spark, HBase, Impala, and other big data technologies
- Designing and developing BDM mappings in Hive mode for large volumes of INSERT/UPDATE
- Creating complex ETL mappings using various transformations such as Source Qualifier, Sorter, Aggregator, Expression, Joiner, Dynamic Lookup, Lookups, Filters, Sequence, Router, and Update Strategy
- The ability to debug Informatica and utilize tools like Sqoop and Kafka

This is a full-time position that requires you to work in person during day shifts. The preferred education qualification is a Bachelor's degree, and the preferred experience includes a total of 4 years of work experience with 2 years specifically in Informatica BDM.

Posted 1 day ago

Apply

5.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5-12 years of experience in Big Data and related technologies, with expertise in distributed computing principles. Your skills should include an expert-level understanding of Apache Spark and hands-on programming with Python. Proficiency in Hadoop v2, MapReduce, HDFS, and Sqoop is required. Experience in building stream-processing systems using technologies like Apache Storm or Spark Streaming, as well as working with messaging systems such as Kafka or RabbitMQ, will be beneficial. A good understanding of Big Data querying tools like Hive and Impala, along with integration of data from multiple sources including RDBMS, ERP, and files, is necessary. You should possess knowledge of SQL queries, joins, stored procedures, and relational schemas. Experience with NoSQL databases like HBase, Cassandra, and MongoDB, along with ETL techniques and frameworks, is expected. Performance tuning of Spark jobs and familiarity with native cloud data services like AWS or Azure Databricks is essential. The role requires the ability to lead a team efficiently, design and implement Big Data solutions, and work as a practitioner of Agile methodology. This position falls under the Data Engineer category and may also suit ML/AI engineers, data scientists, and software engineers.

Posted 1 day ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

You will be responsible for delivering on critical business priorities while evolving the platform towards its strategic vision. This includes working closely with product managers, end users, and business analysts to understand business objectives and key success measures. You will partner with technology teams to design and architect solutions aligned with Barclays Architecture, Data, and Security standards, utilizing best-of-breed technology. Developing and testing resilient, scalable, and reusable services and APIs/data pipelines using the latest frameworks and libraries, while adhering to development standards, is also part of your role. Implementing automated build, test, and deployment pipelines utilizing the latest DevOps tools will be a key responsibility, along with proactively creating and managing relevant application documentation. Participating in Scrum ceremonies and conducting sprint demos for stakeholders, as well as leading and managing application health and platform stability by reviewing technical debt, operational risks, and vulnerabilities, are also part of your duties.

Workassist is an online recruitment and employment solution provider in India. They connect job seekers with relevant profiles to employers across different industries and experience levels. Their e-recruitment technology allows them to adapt quickly to the new normal, assisting job seekers in finding the best opportunities and employers in finding the best talent worldwide. They work with over 10,000 recruiters from sectors such as Banking & Finance, Consulting, Sales & Marketing, HR, IT, Operations, and Legal to help them recruit great emerging talent. If you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. Workassist is waiting for you!

Posted 1 day ago

Apply

10.0 - 14.0 years

0 Lacs

Karnataka

On-site

As a senior-level Data Engineer with Machine Learning Analyst capabilities, you will play a crucial role in leading the architecture, development, and management of scalable data solutions. Your expertise in data architecture, big data pipeline development, and data quality enhancement will be key in processing large-scale datasets and supporting machine learning workflows.

Your key responsibilities will include designing, developing, and maintaining end-to-end data pipelines for ingestion, transformation, and delivery across various business systems. You will ensure robust data quality, data lineage, data reconciliation, and governance practices. Additionally, you will architect and manage data warehouse and big data solutions supporting both structured and unstructured data. Optimizing and automating ETL/ELT processes for high-volume data environments will be essential, with a focus on processing 5B+ records. Collaborating with data scientists and analysts to support machine learning workflows and implementing streamlined DaaS workflows will also be part of your role.

To succeed in this position, you must have at least 10 years of experience in data engineering, including data architecture and pipeline development. Your proven experience with Spark and Hadoop clusters for processing large-scale datasets, along with a strong understanding of ETL frameworks, data quality processes, and automation best practices, will be critical. Experience in data ingestion, lineage, governance, and reconciliation, as well as a solid understanding of data warehouse design principles and data modeling, are must-have skills. Expertise in automated data processing, especially for DaaS platforms, is essential.

Desirable skills for this role include experience with Apache HBase, Apache NiFi, and other Big Data tools; knowledge of distributed computing principles and real-time data streaming; familiarity with machine learning pipelines and supporting data structures; and exposure to data cataloging and metadata management tools. Proficiency in Python, Scala, or Java for data engineering tasks is also beneficial.

In addition to technical skills, soft skills such as a strong analytical and problem-solving mindset, excellent communication skills for collaboration across technical and business teams, and the ability to work independently, manage multiple priorities, and lead data initiatives are required. If you are excited about this opportunity and possess the necessary skills and experience, we look forward to receiving your application.

Posted 1 day ago

Apply

10.0 - 18.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

You should possess a B.Tech degree in computer science, engineering, or a related field of study, or have 12+ years of related work experience. Additionally, you should have at least 7 years of design and implementation experience with large-scale data-centric distributed applications. It is essential to have professional experience in architecting and operating cloud-based solutions, with a good understanding of core disciplines such as compute, networking, storage, security, and databases. A strong grasp of data engineering concepts like storage, governance, cataloging, data quality, and data modeling is required, as is familiarity with architecture patterns like data lake, data lakehouse, and data mesh. You should have a good understanding of Data Warehousing concepts and hands-on experience with tools like Hive, Redshift, Snowflake, and Teradata. Experience in migrating or transforming legacy customer solutions to the cloud is highly valued. Moreover, experience working with services like AWS EMR, Glue, DMS, Kinesis, RDS, Redshift, DynamoDB, DocumentDB, SNS, SQS, Lambda, EKS, and DataZone is necessary. A thorough understanding of Big Data ecosystem technologies such as Hadoop, Spark, Hive, and HBase, along with other relevant tools and technologies, is expected. Knowledge of designing analytical solutions using AWS cognitive services like Textract, Comprehend, Rekognition, and SageMaker is advantageous. You should also have experience with modern development workflows like Git, continuous integration/continuous deployment pipelines, static code analysis tooling, and infrastructure-as-code. Proficiency in a programming or scripting language like Python, Java, or Scala is required. An AWS Professional/Specialty certification or relevant cloud expertise is a plus.

In this role, you will be responsible for driving innovation within the Data Engineering domain by designing reusable and reliable accelerators, blueprints, and libraries. You should be capable of leading a technology team, fostering an innovative mindset, and enabling fast-paced deliveries. Adapting to new technologies, learning quickly, and managing high ambiguity are essential skills for this position. You will collaborate with business stakeholders, participate in various architectural, design, and status calls, and demonstrate good presentation skills when interacting with executives, IT management, and developers. Furthermore, you will drive technology/software sales or pre-sales consulting discussions, ensure end-to-end ownership of tasks, and maintain high-quality software development with complete documentation and traceability. Fulfilling organizational responsibilities, sharing knowledge and experience with other teams/groups, conducting technical training sessions, and producing whitepapers, case studies, and blogs are also part of this role.

The ideal candidate should have 10 to 18 years of experience and can reference this job with the number 12895.

Posted 2 days ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

You should have proficiency in Core Java and object-oriented design. Additionally, you should possess knowledge and experience in developing data-centric, web-based applications using various technologies including JSF, JSP, Java, JavaScript, Node.js, AJAX, HTML, CSS, the Titan/JanusGraph graph database, Elasticsearch, and Tomcat/JBoss. Experience in building REST APIs and web services, along with working knowledge of Agile software development, is required. You should also have experience with automated testing using JUnit and code versioning tools like SVN/Git. An understanding of design patterns and the ability to build easily configurable, deployable, and secure solutions are essential.

As part of your responsibilities, you will plan product iterations, release iterations on schedule, write reusable and efficient code, and implement low-latency, high-availability, and high-performance applications. You will also be responsible for implementing security and data protection, providing analysis of problems, recommending solutions, and participating in system design, development, testing, debugging, documentation, and support. Furthermore, you should be able to translate complex functional and technical requirements into detailed designs.

Desired skills for this role include 1-5 years of experience in Core Java, JSF, JSP, or Python, as well as experience in ETL and Big Data/Hadoop. Being highly tech-savvy with hands-on experience in building products from scratch is preferred. Familiarity with databases like Oracle, PostgreSQL, Cassandra, HBase, and MongoDB is beneficial. You should be analytical, algorithmic, and logic-driven, with in-depth knowledge of technology and development processes. Experience in product development in an agile environment and familiarity with API development using Node.js are advantageous.

In terms of technical skills, you should be proficient in Core Java, JavaScript, Sigma.js, D3.js, Node.js, JSON, AJAX, CSS, HTML, Elasticsearch, Titan/JanusGraph, Cassandra, HBase, Apache Tomcat, JBoss, JUnit, and version control tools like SVN/Git. The educational qualification required for this position is a B.E/B.Tech/MCA/M.Sc./B.Sc degree, and the ideal candidate should have 3-5 years of relevant experience.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

The Engineer Intmd Analyst is an intermediate-level position responsible for a variety of engineering activities, including the design, acquisition, and development of hardware, software, and network infrastructure in coordination with the Technology team. The overall objective of this role is to ensure quality standards are being met within existing and planned frameworks.

Responsibilities:
- Provide assistance with a product or product component development within the technology domain
- Conduct product evaluations with vendors and recommend product customization for integration with systems
- Assist with training activities, mentor junior team members, and ensure teams' adherence to all control and compliance initiatives
- Assist with application prototyping and recommend solutions around implementation
- Provide third-line support to identify the root cause of issues and react to systems and application outages or networking issues
- Support projects and provide project status updates to the project manager or Sr. Engineer
- Partner with development teams to identify engineering requirements and assist with defining application/system requirements and processes
- Create installation documentation and training materials, and deliver technical training to support the organization
- Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency

Qualifications:
- 5-8 years of relevant experience in an Engineering role
- Experience working in Financial Services or a large, complex, and/or global environment
- Involvement in DevOps activities (SRE/LSE, auto deployment, self-healing) and application support

Tech Stack (basic): Java/Python, Unix, Oracle

Essential Skills:
- IT experience working with one of HBase, HDFS, Kafka, Neo4j, Akka, Spark, Storm, and GemFire
- IT support experience working in Unix, cloud, and Windows environments
- Experience supporting databases like MongoDB, Oracle, Sybase, MS SQL, and DB2
- Supported applications deployed in WebSphere, WebLogic, IIS, and Tomcat
- Familiarity with Autosys and its setup
- Understanding of client-server architecture (clustered and non-clustered)
- Basic networking knowledge (load balancers, network protocols)
- Working knowledge of Lightweight Directory Access Protocol (LDAP) and Single Sign-On concepts
- ServiceNow expertise
- Experience working in a multiple-application support model is preferred

Other Essential Attributes:
- Consistently demonstrates clear and concise written and verbal communication
- Comprehensive knowledge of design metrics, analytics tools, benchmarking activities, and related reporting to identify best practices
- Demonstrated analytic/diagnostic skills
- Ability to work in a matrix environment and partner with virtual teams
- Ability to work independently, prioritize, and take ownership of various parts of a project or initiative
- Ability to work under pressure and manage tight deadlines or unexpected changes in expectations or requirements
- Proven track record of operational process change and improvement

Education: Bachelor's degree/University degree or equivalent experience

Job Family Group: Technology
Job Family: Systems & Engineering
Time Type: Full time

Most Relevant Skills: Please see the requirements listed above.

Posted 2 days ago

Apply

2.0 - 9.0 years

0 Lacs

Karnataka

On-site

We are seeking Data Architects, Senior Data Architects, and Principal Data Architects to join our team. In this role, you will be involved in a combination of hands-on contribution, customer engagement, and technical team management. As a Data Architect, your responsibilities will include designing, architecting, deploying, and maintaining solutions on the MS Azure platform using various Cloud & Big Data technologies. You will manage the full life cycle of Data Lake / Big Data solutions, from requirement gathering and analysis to platform selection, architecture design, and deployment. It will be your responsibility to implement scalable solutions on the cloud and to collaborate with a team of business domain experts, data scientists, and application developers to develop Big Data solutions. Moreover, you will be expected to explore and learn new technologies for creative problem solving and to mentor a team of Data Engineers.

The ideal candidate should possess strong hands-on experience in implementing Data Lakes with technologies such as Azure Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hubs & Stream Analytics, Cosmos DB, and Purview. Additionally, experience with big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase, MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc., is required. Proficiency in programming and debugging in Python and Scala/Java is essential, with experience in building REST services considered beneficial. Candidates should also have experience in supporting BI and Data Science teams in consuming data in a secure and governed manner, along with a good understanding of CI/CD with Git and Jenkins / Azure DevOps. Experience in setting up cloud-computing infrastructure solutions, hands-on experience with or exposure to NoSQL databases, and data modeling in Hive are all highly valued. Applicants should have a minimum of 9 years of technical experience, with at least 5 years on MS Azure and 2 years on Hadoop (CDH/HDP).

Posted 2 days ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As an experienced professional, you will be responsible for implementing and supporting Hadoop platform-based applications to efficiently store, retrieve, and process terabytes of data. Your expertise will contribute to the seamless functionality of the system. To excel in this role, you should possess 4+ years of core Java or 2+ years of Python experience. Additionally, at least 1+ years of working experience with the Hadoop stack, including HDFS, MapReduce, HBase, and other related technologies, is desired. A solid background of 1+ years in database management with MySQL or equivalent systems will be an added advantage. If you meet the qualifications and are enthusiastic about this opportunity, we encourage you to share your latest CV with us at data@scalein.com or reach out via our contact page. Your contribution will be pivotal in driving the success of our data management initiatives.

Posted 2 days ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

The ideal candidate for this position should possess a Bachelor's/Master's/PhD degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Econometrics, Applied Mathematics, Operations Research, or a related technical field. You should have a minimum of 7 years of relevant work experience in a similar role, particularly as a Data Scientist or Statistician, developing predictive analytics solutions for various business challenges.

Your functional competencies should include advanced knowledge of statistical techniques, machine learning algorithms, data mining, and text mining. You must have a strong programming background and expertise in building models using languages such as SAS, Python, or R. Additionally, you should excel in storytelling and articulation, with the ability to translate analytical results into clear, concise, and persuasive insights for both technical and non-technical audiences. Experience in working with large datasets, both structured and unstructured, is crucial, along with the capability to comprehend business problems and devise optimal data strategies. Your responsibilities will encompass providing solutions for tasks like Customer Segmentation & Targeting, Propensity Modeling, Churn Modeling, Lifetime Value Estimation, Forecasting, Recommender Systems, Modeling Response to Incentives, Marketing Mix Optimization, and Price Optimization. It would be advantageous if you have experience working with big data platforms such as Hadoop, Hive, HBase, Spark, etc.

The client you will be working with is a rapidly growing VC-backed on-demand startup that aims to revolutionize the food delivery industry. The team values talent, ambition, smartness, passion, versatility, focus, hyper-productivity, and creativity. The primary focus is on ensuring an exceptional customer experience through superfast deliveries facilitated by a smartphone-equipped delivery fleet and custom-built routing algorithms. The company operates in eight cities across India and has secured substantial funding to support its expansion. If you meet the qualifications and are excited about the opportunity to contribute to this innovative venture, please share your updated profile at poc@mquestpro.com.

Posted 2 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

You should have 5+ years of experience with expertise in Data Engineering, including hands-on design and development of big data platforms. The ideal candidate will have a deep understanding of modern data processing technology stacks such as Spark, HBase, Hive, and other Hadoop ecosystem technologies, with a focus on development using Scala. Additionally, you should possess a deep understanding of streaming data architectures and technologies for real-time and low-latency data processing. Experience with agile development methods, including core values, guiding principles, and key agile practices, is required. Understanding of the theory and application of Continuous Integration/Delivery is a plus. Familiarity with NoSQL technologies, including column-family, graph, document, and key-value data storage technologies, is desirable. A passion for software craftsmanship is essential for this role. Experience in the financial industry would be beneficial.

Posted 3 days ago

Apply

2.0 - 6.0 years

8 - 12 Lacs

Gurugram

Work from Office

At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. Join Team Amex and let's lead the way together.

From building next-generation apps and microservices in Kotlin to using AI to help protect our franchise and customers from fraud, you could be doing entrepreneurial work that brings our iconic, global brand into the future. As part of our tech team, we could work together to bring ground-breaking and diverse ideas to life that power our digital systems, services, products, and platforms. If you love to work with APIs, contribute to open source, or use the latest technologies, we'll support you with an open environment and learning culture.

Function Description: American Express is looking for energetic, successful, and highly skilled Engineers to help shape our technology and product roadmap. Our Software Engineers not only understand how technology works, but how that technology intersects with the people who count on it every day. Today, innovative ideas, insight, and new points of view are at the core of how we create a more powerful, personal, and fulfilling experience for our customers and colleagues, with batch/real-time analytical solutions using ground-breaking technologies to deliver innovative solutions across multiple business units. This Engineering role is based in our Global Risk and Compliance Technology organization and will have a keen focus on platform modernization, bringing to life the latest technology stacks to support the ongoing needs of the business as well as compliance with global regulatory requirements.

Qualifications: Support the Compliance and Operations Risk data delivery team in India to lead and assist in the design and development of applications. You will be responsible for specific functional areas within the team; this involves project management and taking business specifications. You should be able to independently run projects/tasks delegated to you.

Technology Skills:
- Bachelor's degree in Engineering or Computer Science or equivalent
- 2 to 5 years of experience is required
- GCP Professional Data Engineer certification
- Expertise in the Google BigQuery tool for data warehousing needs
- Experience with Big Data (Spark Core and Hive) preferred
- Familiarity with GCP offerings; experience building data pipelines on GCP a plus
- Knowledge of Hadoop architecture (Hadoop, MapReduce, HBase); UNIX shell scripting experience is good to have
- Creative problem solving (innovative)

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: competitive base salaries; bonus incentives; support for financial well-being and retirement; comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location); a flexible working model with hybrid, onsite, or virtual arrangements depending on role and business need; generous paid parental leave policies (depending on your location); free access to global on-site wellness centers staffed with nurses and doctors (depending on location); free and confidential counseling support through our Healthy Minds program; and career development and training opportunities.

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.

Posted 3 days ago

Apply

2.0 - 7.0 years

30 - 35 Lacs

Bengaluru

Work from Office

Senior Data Engineer. Date: 27 Jul 2025. Location: Bangalore, IN. Company: kmartaustr

A place you can belong: We celebrate the rich diversity of the communities in which we operate and are committed to creating inclusive and safe environments where all our team members can contribute and succeed. We believe that all team members should feel valued, respected, and safe irrespective of your gender, ethnicity, indigeneity, religious beliefs, education, age, disability, family responsibilities, sexual orientation, and gender identity, and we encourage applications from all candidates.

Job Description:
- 5-7 years of experience as a Data Engineer
- 3+ years with AWS services like IAM, API Gateway, EC2, and S3
- 2+ years of experience creating and deploying containers on Kubernetes
- 2+ years of experience with CI/CD pipelines like Jenkins and GitHub
- 2+ years of experience with Snowflake data warehousing
- 5-7 years with the ETL/ELT paradigm
- 5-7 years with Big Data technologies like Spark and Kafka
- Strong skills in Python, Java, or Scala

Posted 3 days ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY's Advisory Services is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional and technical capabilities and product knowledge. The financial services practice at EY offers integrated advisory services to financial institutions and other capital markets participants. Within EY's Advisory Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way, we help create a compelling business case for embedding the right analytical practice at the heart of clients' decision-making.

We're looking for Senior and Manager Big Data Experts with expertise in the Financial Services domain and hands-on experience with the Big Data ecosystem: expertise in data engineering, including design and development of big data platforms; a deep understanding of modern data processing technology stacks such as Spark, HBase, and other Hadoop ecosystem technologies (development using Scala is a plus); a deep understanding of streaming data architectures and technologies for real-time and low-latency data processing; experience with agile development methods, including core values, guiding principles, and key agile practices; understanding of the theory and application of Continuous Integration/Delivery; and experience with NoSQL technologies along with a passion for software craftsmanship. Experience in the financial industry is a plus.

Nice-to-have skills include understanding and familiarity with all Hadoop ecosystem components; Hadoop administration fundamentals; experience working with NoSQL data stores like HBase, Cassandra, and MongoDB as well as HDFS, Hive, and Impala; schedulers like Airflow and NiFi; and experience with Hadoop clustering and auto-scaling. You will develop standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis, and define and develop client-specific best practices around data management within a Hadoop environment on the Azure cloud.

To qualify for the role, you must have a BE/BTech/MCA/MBA degree, a minimum of 3 years of hands-on experience in one or more relevant areas, and a total of 6-10 years of industry experience. Ideally, you'll also have experience in the Banking and Capital Markets domains. Skills and attributes for success include using an issue-based approach to deliver growth, market, and portfolio strategy engagements for corporates; strong communication, presentation, and team-building skills; experience producing high-quality reports, papers, and presentations; and experience executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.

You will join a team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment, with an opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals in the only integrated global transaction business worldwide, and opportunities to work with EY Advisory practices globally with leading businesses across a range of industries. Working at EY offers inspiring and meaningful projects, education and coaching alongside practical experience for personal development, support, coaching, and feedback from engaging colleagues, opportunities to develop new skills and progress your career, and the freedom and flexibility to handle your role in a way that's right for you.

EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.

Posted 4 days ago

Apply

2.0 - 6.0 years

0 Lacs

Kochi, Kerala

On-site

You will be responsible for big data development and support for production-deployed applications, analyzing business and functional requirements for completeness, and developing code with minimum supervision. Working collaboratively with team members, you will ensure accurate and timely communication and delivery of assigned tasks to guarantee the end products' performance upon release to production. Handling software defects or issues within production timelines and SLAs is a key aspect of the role. Your responsibilities will include authoring test cases within a defined testing strategy, participating in test strategy development for Configuration and Custom reports, creating test data, assisting in code-merge peer reviews, reporting status and progress to stakeholders, and providing risk assessment throughout development cycles. You should have a strong understanding of the system and big data strategies/approaches adopted by IQVIA, stay updated on software applications development industry knowledge, and be open to production support roles within the project.

To excel in this role, you should have 5-8 years of overall experience, with at least 2-3 years in Big Data; proficiency in Big Data technologies such as HDFS, Hive, Pig, Sqoop, HBase, and Oozie; strong experience in SQL queries and Airflow; familiarity with PSql, CI/CD, Jenkins, and UNIX commands; excellent communication and comprehension skills; good confidence; and proven analytical, logical, and problem-solving techniques. Experience with Spark application development and with ETL and ELT tools is preferred. Fine-tuned analytical skills, attention to detail, and the ability to work effectively with colleagues from diverse backgrounds are essential.

The minimum educational requirement for this position is a Bachelor's Degree in Information Technology or a related field, along with 5-8 years of development experience or an equivalent combination of education, training, and experience. IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence, facilitating the acceleration of innovative medical treatments' development and commercialization to enhance patient outcomes and population health worldwide. To learn more, visit https://jobs.iqvia.com.

Posted 4 days ago

Apply

4.0 - 6.0 years

20 - 25 Lacs

Pune

Work from Office

Job Description: Job Title: Java Developer, Associate. Location: Pune, India.

Role Description: We are looking for a Java Developer to produce scalable software solutions on distributed systems like Hadoop using the Spark framework. You'll be part of a cross-functional team that's responsible for the full software development life cycle, from conception to deployment. As a Developer, you should be comfortable with back-end coding, development frameworks, third-party libraries, and the Spark APIs required for application development on a distributed platform like Hadoop. You should also be a team player with a knack for visual design and utility. Familiarity with Agile methodologies will be an added advantage.

What we'll offer you: As part of our flexible scheme, here are just some of the benefits that you'll enjoy: best-in-class leave policy, gender-neutral parental leaves, 100% reimbursement under the child care assistance benefit (gender-neutral), flexible working arrangements, sponsorship for industry-relevant certifications and education, an Employee Assistance Program for you and your family members, comprehensive hospitalization insurance for you and your dependents, accident and term life insurance, and complementary health screening for those aged 35 and above.

Your key responsibilities: Work with development teams and product managers to ideate software solutions. Design client-side and server-side architecture. Build features and applications capable of running on distributed platforms and/or the cloud. Develop and manage well-functioning applications that support a microservices architecture. Test software to ensure responsiveness and efficiency. Troubleshoot, debug, and upgrade software. Create security and data protection settings. Write technical and design documentation. Write effective APIs (REST & SOAP).

Your skills and experience: Proven experience as a Java Developer or in a similar role as an individual contributor or development lead. Familiarity with common stacks. Strong knowledge and working experience of Core Java, Spring Boot, REST APIs, the Spark API, etc. is a must. Knowledge of JUnit, Mockito, or another testing framework is a must. Experience with databases (e.g., Oracle, PostgreSQL). Familiarity with developing on a distributed application platform like Hadoop with Spark. Excellent communication and teamwork skills, organizational skills, and an analytical mind. Degree in Computer Science, Statistics, or a relevant field. Experience working in Agile. Good to have: knowledge of JavaScript frameworks (e.g., Angular, React, and Node.js) and UI/UX design; knowledge of Python would be a big plus; knowledge of NoSQL databases like HBase and MongoDB.

Experience: 4-6 years of prior working experience in a global banking/insurance/financial organization.

How we'll support you: training and development to help you excel in your career, coaching and support from experts in your team, a culture of continuous learning to aid progression, and a range of flexible benefits that you can tailor to suit your needs. https://www.db.com/company/company.htm

Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair, and inclusive work environment.

Posted 4 days ago

Apply

4.0 - 7.0 years

6 - 9 Lacs

Bengaluru

Work from Office

Job Summary: Synechron is seeking a motivated and experienced Big Data Engineer to design, develop, and implement scalable big data solutions. The ideal candidate will possess strong hands-on experience with Hadoop, Spark, and NoSQL databases, enabling the organization to ingest, process, and analyze vast data sets efficiently. This role contributes directly to the organization's data-driven initiatives by creating reliable data pipelines and collaborating with cross-functional teams to deliver insights that support strategic decision-making and operational excellence.

Purpose: To build and maintain optimized big data architectures that support real-time and batch data processing, enabling analytics, reporting, and machine learning efforts. Value: By ensuring high-performance and scalable data platforms, this role accelerates data insights, enhances business agility, and ensures data integrity and security.

Software Requirements. Required skills: deep expertise in Hadoop ecosystem components, including the Hadoop Distributed File System (HDFS), Spark (batch and streaming), and related tools; practical experience with NoSQL databases such as Cassandra, MongoDB, and HBase; experience with data ingestion tools like Spark Streaming and Apache Flume; strong programming skills in Java, Scala, or Python; familiarity with DevOps tools such as Git, Jenkins, and Docker, and container orchestration with OpenShift or Kubernetes; and working knowledge of cloud platforms like AWS and Azure for deploying and managing data solutions. Preferred skills: knowledge of additional data ingestion and processing tools, and experience with data cataloging or governance frameworks.

Overall Responsibilities: Design, develop, and optimize large-scale data pipelines and data lakes using Spark, Hadoop, and related tools. Implement data ingestion, transformation, and storage solutions to meet business and analytic needs. Collaborate with data scientists, analysts, and cross-functional teams to translate requirements into technical architectures. Monitor daily data operations, troubleshoot issues, and improve system performance and scalability. Automate deployment and maintenance workflows utilizing DevOps practices and tools. Ensure data security, privacy, and compliance standards are upheld across all systems. Stay updated with emerging big data technologies to incorporate innovative solutions.

Strategic objectives: Enable scalable, reliable, and efficient data processing platforms to support analytics and AI initiatives. Improve data quality, accessibility, and timeliness for organizational decision-making. Drive automation and continuous improvement in data infrastructure. Performance outcomes: High reliability and performance of data pipelines with minimal downtime. Increased data ingestion and processing efficiency. Strong collaboration across teams leading to successful project outcomes.

Technical Skills (by category). Programming languages: essential - Java, Scala, or Python for developing data pipelines and processing scripts; preferred - knowledge of additional languages such as R or SQL scripting for data manipulation. Databases and data management: experience with Hadoop HDFS, HBase, Cassandra, MongoDB, and similar NoSQL data stores; familiarity with data modeling, ETL workflows, and data warehousing strategies. Cloud technologies: practical experience deploying and managing big data solutions on AWS (e.g., EMR, S3) and Azure; knowledge of cloud security practices and resource management. Frameworks and libraries: extensive use of Hadoop, Spark (structured and streaming), and related libraries; familiarity with serialization formats like Parquet, Avro, or ORC. Development tools and methodologies: proficiency with Git, Jenkins, Docker, and OpenShift/Kubernetes for versioning, CI/CD, and containerization; experience working within Agile/Scrum environments. Security and data governance: comprehension of data security protocols, access controls, and compliance regulations.

Experience Requirements: 4 to 7 years of hands-on experience in Big Data engineering or related roles. Demonstrable experience designing and maintaining large-scale data pipelines, data lakes, and data warehouses. Proven aptitude for using Spark, Hadoop, and NoSQL databases effectively in production environments. Prior experience in the financial services, healthcare, retail, or telecommunications sectors is a plus. Ability to lead technical initiatives and collaborate with multidisciplinary teams.

Day-to-Day Activities: Develop and optimize data ingestion, processing, and storage workflows. Collaborate with data scientists and analysts to architect solutions aligned with business needs. Build, test, and deploy scalable data pipelines ensuring high performance and reliability. Monitor system health, diagnose issues, and implement improvements for data systems. Conduct code reviews and knowledge-sharing sessions within the team. Participate in sprint planning, daily stand-ups, and project reviews to ensure timely delivery. Stay current with evolving big data tools and best practices.

Qualifications: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. Relevant certifications in big data technologies or cloud platforms are a plus. Demonstrable experience leading end-to-end data pipeline solutions.

Professional Competencies: Strong analytical, troubleshooting, and problem-solving skills. An effective communicator with the ability to explain complex concepts to diverse audiences. Ability to work collaboratively in a team-oriented environment. Adaptability to emerging technologies and shifting priorities. A high level of organization and attention to detail. A drive for continuous learning and process improvement.

Posted 4 days ago

Apply

10.0 - 15.0 years

11 - 16 Lacs

Bengaluru

Work from Office

Role Overview: Skyhigh Security is seeking a Principal Data Engineer to design and build scalable Big Data solutions. You'll leverage your deep expertise in Java and Big Data architecture to process massive datasets and shape our security offerings. If you have extensive experience with distributed systems and cloud platforms and a passion for data quality, apply now to join our innovative team and make a global impact in cybersecurity!

Our Engineering team is driving the future of cloud security, developing one of the world's largest, most resilient cloud-native data platforms. At Skyhigh Security, we're enabling enterprises to protect their data with deep intelligence and dynamic enforcement across hybrid and multi-cloud environments. As we continue to grow, we're looking for a Principal Data Engineer to help us scale our platform, integrate advanced AI/ML workflows, and lead the evolution of our secure data infrastructure.

Responsibilities: As a Principal Data Engineer, you will be responsible for: Leading the design and implementation of high-scale, cloud-native data pipelines for real-time and batch workloads. Collaborating with product managers, architects, and backend teams to translate business needs into secure and scalable data solutions. Integrating big data frameworks (like Spark, Kafka, and Flink) with cloud-native services (AWS/GCP/Azure) to support security analytics use cases. Driving CI/CD best practices, infrastructure automation, and performance tuning across distributed environments. Evaluating and piloting the use of AI/LLM technologies in data pipelines (e.g., anomaly detection, metadata enrichment, automation). Evaluating and integrating LLM-based automation and AI-enhanced observability into engineering workflows. Ensuring data security and privacy compliance. Mentoring engineers, ensuring high engineering standards, and promoting technical excellence across teams.

What We're Looking For (Minimum Qualifications): 10+ years of experience in big data architecture and engineering, including deep proficiency with the AWS cloud platform. Expertise in distributed systems and frameworks such as Apache Spark, Scala, Kafka, Flink, and Elasticsearch, with experience building production-grade data pipelines. Strong programming skills in Java for building scalable data applications. Hands-on experience with ETL tools and orchestration systems. A solid understanding of data modeling across both relational (PostgreSQL, MySQL) and NoSQL (HBase) databases, plus performance tuning.

What Will Make You Stand Out (Preferred Qualifications): Experience integrating AI/ML or LLM frameworks (e.g., LangChain, LlamaIndex) into data workflows. Experience implementing CI/CD pipelines with Kubernetes, Docker, and Terraform. Knowledge of modern data warehousing (e.g., BigQuery, Snowflake) and data governance principles (GDPR, HIPAA). A strong ability to translate business goals into technical architecture and mentor teams through delivery. Familiarity with visualization tools (Tableau, Power BI) to communicate data insights, even if not a primary responsibility.

Posted 4 days ago

Apply

8.0 - 10.0 years

30 - 32 Lacs

Hyderabad

Work from Office

Candidate Specifications: Candidates should have 9+ years of experience, including 9+ years in Python and PySpark, with strong experience in AWS and PL/SQL. Candidates should be strong in data management, covering data governance and data streaming along with data lakes and data warehouses. Candidates should also have exposure to team handling and stakeholder management. Candidates should have excellent written and verbal communication skills. Contact Person: Sheena Rakesh

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Full-Stack (Java + Angular + AWS) Tech Lead, you will be part of a dynamic team that thrives on challenges and aims to make a significant impact on the business world. You will work in a fast-paced, quality-oriented environment that encourages innovation and growth. Your role will involve utilizing your extensive skills and experience in various technologies to drive the development and maintenance of cutting-edge applications.

Your responsibilities will include investigating and understanding business requirements, addressing issues, expanding current functionality, and implementing new features. You will also be involved in task scoping, estimation, and prioritization, working closely with business analysts and subject matter experts to devise creative solutions. Collaborating with testers to create test plans, participating in development discussions, and providing technical guidance to developers will be essential aspects of your role, as will mentoring the development team on challenging tasks, conducting proofs of concept, and performing early risk assessments.

Additionally, your expertise in architectural design; hands-on development experience in Java and related technologies; familiarity with the Spring framework; experience building microservices, RESTful web services, UI basics, TypeScript, Angular, message queues, and relational and NoSQL databases; DevOps tools; the AWS cloud platform; code quality maintenance; Agile processes; and performance improvements will be key to your success in this role.

Your positive, flexible, can-do attitude, coupled with your problem-solving skills, planning and execution capabilities, impactful communication, and understanding of the application development life cycle, will be crucial in handling production outage situations and delivering quick issue fixes. Your ability to adapt quickly, learn new technologies, and enhance existing code bases based on evolving business requirements will play a significant role in driving the success of the team and the organization as a whole.

Posted 6 days ago

Apply

2.0 - 4.0 years

25 - 30 Lacs

Pune

Work from Office

Rapid7 is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey. Responsibilities include liaising with coworkers and clients to elucidate the requirements for each task; conceptualizing and generating infrastructure that allows big data to be accessed and analyzed; reformulating existing frameworks to optimize their functioning; testing such structures to ensure that they are fit for use; preparing raw data for manipulation by data scientists; detecting and correcting errors in your work; ensuring that your work remains backed up and readily accessible to relevant coworkers; and remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs.

Posted 6 days ago

Apply

6.0 - 11.0 years

22 - 27 Lacs

Pune, Bengaluru

Work from Office

Build ETL jobs using Fivetran and dbt for our internal projects and for customers that use platforms like Azure, Salesforce, and AWS technologies. Build out data lineage artifacts to ensure all current and future systems are properly documented.

Required candidate profile: strong SQL query/development proficiency; experience developing ETL routines that manipulate and transfer large volumes of data and perform quality checks; experience in the healthcare industry with PHI/PII.

Posted 6 days ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru

Work from Office

Skills desired: Strong SQL skills (multi-pyramid SQL joins); Python skills (FastAPI or Flask framework); PySpark; commitment to working overlapping hours; GCP knowledge (BigQuery, Dataproc, and Dataflow). Amex experience preferred (not mandatory). Power BI preferred (not mandatory). Keywords: Flask, PySpark, Python, SQL

Posted 1 week ago

Apply

2.0 - 7.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Mechatronics Big Data Scientist / Developer. Department: Research & Development. Reporting to: General Manager.

Key responsibilities: Selecting features and building and optimizing classifiers using machine learning techniques. Data mining using state-of-the-art methods. Extending the company's data with third-party sources of information when needed. Enhancing data collection procedures to include information that is relevant for building analytic systems. Processing, cleansing, and verifying the integrity of data used for analysis. Performing ad-hoc analysis and presenting results in a clear manner. Creating automated anomaly detection systems and constantly tracking their performance.

Behavioral competencies: A data-oriented person.

Skills: Excellent understanding of machine learning techniques and algorithms, such as k-NN, Naive Bayes, SVM, Decision Forests, etc. Experience with common data science toolkits, such as R, Weka, NumPy, MATLAB, etc., depending on our specific project requirements; excellence in at least one of NumPy or R is highly desirable. Experience with data visualisation tools, such as D3.js, ggplot, etc. Proficiency in query languages such as SQL, Hive, Pig, and NiFi. Experience with NoSQL databases such as InfluxDB, MongoDB, Cassandra, and HBase. Good applied statistics skills, such as distributions, statistical testing, regression, etc. Good scripting and programming skills: PHP, Slim, SQL, Laravel, Hadoop, HDFS, NiFi. Other professional training, if any: any certification related to Big Data.

Essential qualification: M.Tech/MS in Mechatronics/Computer or equivalent. Experience: 2 years of proficient experience working with and developing SDKs for any platform. Location: Bengaluru, Karnataka.

Bharat Fritz Werner Ltd. (BFW) is a pioneering name in machine tools, manufacturing solutions, and technological innovation.

Posted 1 week ago

Apply

Exploring HBase Jobs in India

HBase is a distributed, scalable NoSQL database commonly used in big data applications. As the demand for big data solutions continues to grow, so does the demand for professionals with HBase skills in India. Job seekers looking to explore opportunities in this field can find a variety of roles across different industries and sectors.
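To ground the terminology that recurs throughout the roles listed above, here is a minimal sketch of the HBase Java client API, the interface most of these jobs reference. It assumes an HBase 2.x cluster reachable through an hbase-site.xml on the classpath; the `jobs` table and `info` column family are illustrative names, not part of any real schema:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Get;
import org.apache.hadoop.hbase.client.Put;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.util.Bytes;

public class HBaseQuickstart {
    public static void main(String[] args) throws Exception {
        // Reads cluster settings (e.g., the ZooKeeper quorum) from hbase-site.xml
        Configuration conf = HBaseConfiguration.create();
        try (Connection conn = ConnectionFactory.createConnection(conf);
             Table table = conn.getTable(TableName.valueOf("jobs"))) { // hypothetical table
            // Write one cell: row key "job-001", column family "info", qualifier "title"
            Put put = new Put(Bytes.toBytes("job-001"));
            put.addColumn(Bytes.toBytes("info"), Bytes.toBytes("title"),
                          Bytes.toBytes("Data Engineer"));
            table.put(put);

            // Read the cell back by row key
            Result result = table.get(new Get(Bytes.toBytes("job-001")));
            byte[] value = result.getValue(Bytes.toBytes("info"), Bytes.toBytes("title"));
            System.out.println(Bytes.toString(value)); // prints: Data Engineer
        }
    }
}
```

Everything in HBase is stored as raw bytes, which is why the `Bytes` utility appears on every call; the schema fixes only table names and column families, while qualifiers can vary row by row.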

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi-NCR

These cities are known for their strong presence in the IT industry and are actively hiring professionals with HBase skills.

Average Salary Range

The salary range for HBase professionals in India can vary based on experience and location. Entry-level positions may start at around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.

Career Path

In the HBase domain, a typical career progression may look like:

  1. Junior HBase Developer
  2. HBase Developer
  3. Senior HBase Developer
  4. HBase Architect
  5. HBase Administrator
  6. HBase Consultant
  7. HBase Team Lead

Related Skills

In addition to HBase expertise, professionals in this field are often expected to have knowledge of:

  • Apache Hadoop
  • Apache Spark
  • Data Modeling
  • Java programming
  • Database design
  • Linux/Unix
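Because the native client is Java, interviews often start at the Admin API, where the column-family decisions asked about in the questions below are made. As a hedged illustration (HBase 2.x client assumed; the `jobs` table and `info` family are hypothetical, carried over from the sketch above), a table with one column family can be created like this; see also the filtered-scan sketch after the question list:

```java
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Admin;
import org.apache.hadoop.hbase.client.ColumnFamilyDescriptorBuilder;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.TableDescriptor;
import org.apache.hadoop.hbase.client.TableDescriptorBuilder;

public class CreateJobsTable {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Admin admin = conn.getAdmin()) {
            // Column families are fixed at table-design time; qualifiers inside them are not
            TableDescriptor desc = TableDescriptorBuilder
                    .newBuilder(TableName.valueOf("jobs"))              // hypothetical table
                    .setColumnFamily(ColumnFamilyDescriptorBuilder.of("info"))
                    .build();
            if (!admin.tableExists(desc.getTableName())) {
                admin.createTable(desc);
            }
        }
    }
}
```

Keeping the number of column families small (often just one or two) is the usual design advice, since each family is flushed and compacted as a separate store.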

Interview Questions

  • What is HBase and how does it differ from traditional RDBMS? (basic)
  • Explain the architecture of HBase. (medium)
  • How does data replication work in HBase? (medium)
  • What is the role of HMaster in HBase? (basic)
  • How can you improve the performance of HBase? (medium)
  • What are the different types of filters in HBase? (medium)
  • Explain the concept of HBase coprocessors. (advanced)
  • How does compaction work in HBase? (medium)
  • What is the purpose of the WAL in HBase? (basic)
  • Can you explain the difference between HBase and Cassandra? (medium)
  • What is the role of ZooKeeper in HBase? (basic)
  • How does data retrieval work in HBase? (medium)
  • What is a region server in HBase? (basic)
  • Explain the concept of bloom filters in HBase. (medium)
  • How does HBase ensure data consistency? (medium)
  • What is the significance of column families in HBase? (basic)
  • How do you handle schema changes in HBase? (medium)
  • Explain the concept of cell-level security in HBase. (advanced)
  • What are the different modes of data loading in HBase? (medium)
  • How does HBase handle data storage internally? (medium)
  • What is the purpose of the HFile in HBase? (basic)
  • How can you monitor the performance of HBase? (medium)
  • What is the role of the MemStore in HBase? (basic)
  • How does HBase handle data distribution and load balancing? (medium)
  • Explain the process of data deletion in HBase. (medium)
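Several of these questions (filters, column families, data retrieval) are easier to answer with the client API in front of you. The following sketch, assuming the same hypothetical `jobs`/`info` schema as above on an HBase 2.x cluster, runs a scan with a server-side `SingleColumnValueFilter`, so filtering happens on the region servers rather than in the client:

```java
import org.apache.hadoop.hbase.CompareOperator;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.Connection;
import org.apache.hadoop.hbase.client.ConnectionFactory;
import org.apache.hadoop.hbase.client.Result;
import org.apache.hadoop.hbase.client.ResultScanner;
import org.apache.hadoop.hbase.client.Scan;
import org.apache.hadoop.hbase.client.Table;
import org.apache.hadoop.hbase.filter.SingleColumnValueFilter;
import org.apache.hadoop.hbase.util.Bytes;

public class FilteredScan {
    public static void main(String[] args) throws Exception {
        try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
             Table table = conn.getTable(TableName.valueOf("jobs"))) { // hypothetical table
            // Keep only rows whose info:location cell equals "Bengaluru"
            SingleColumnValueFilter filter = new SingleColumnValueFilter(
                    Bytes.toBytes("info"), Bytes.toBytes("location"),
                    CompareOperator.EQUAL, Bytes.toBytes("Bengaluru"));
            filter.setFilterIfMissing(true); // drop rows that lack the column entirely

            Scan scan = new Scan();
            scan.setFilter(filter);
            try (ResultScanner scanner = table.getScanner(scan)) {
                for (Result r : scanner) {
                    System.out.println(Bytes.toString(r.getRow())); // matching row keys
                }
            }
        }
    }
}
```

Being able to explain when a filter like this still forces a full table scan (it does, unless the scan is also bounded by start/stop row keys) is exactly the kind of depth the medium-level questions above are probing for.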

Closing Remark

As you prepare for HBase job opportunities in India, make sure to brush up on your technical skills, practice coding exercises, and be ready to showcase your expertise in interviews. With the right preparation and confidence, you can land a rewarding career in the exciting field of HBase. Good luck!
