
552 HBase Jobs - Page 15

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

1.0 - 4.0 years

1 - 4 Lacs

Hubli

Work from Office

We are looking for a highly skilled and experienced Legal Officer to join our team at Equitas Small Finance Bank.

Roles and Responsibilities: Manage and oversee legal matters related to mortgages and other financial products. Provide legal support and guidance to internal stakeholders on various banking operations. Conduct legal research and analysis to ensure compliance with regulatory requirements. Develop and implement effective legal strategies to mitigate risks and protect the bank's interests. Collaborate with cross-functional teams to achieve business objectives. Ensure all legal documents and contracts are properly executed and stored.

Job Requirements: Strong knowledge of legal principles and practices applicable to the BFSI industry. Experience working with SBL or similar institutions is preferred. Excellent analytical and problem-solving skills with attention to detail. Ability to work independently and as part of a team. Strong communication and interpersonal skills. Familiarity with mortgage laws and regulations is essential.

Posted 1 month ago

Apply

6.0 - 7.0 years

14 - 17 Lacs

Hyderabad

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform. Develop streaming pipelines. Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Total 6-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and data engineering skills. Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on Azure cloud data platforms. Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server. Good to excellent SQL skills.

Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark Certified Developer.
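To make the ingest-process-transform responsibilities above concrete, here is a minimal PySpark sketch of the batch pattern this posting describes; the file path, schema, and table name are invented for illustration, not taken from the listing:

```python
# Minimal PySpark batch pipeline sketch: ingest a CSV file, transform it,
# and write the result to a Hive table. Paths and table names are
# illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("ingest-transform-demo")
    .enableHiveSupport()          # lets Spark read/write Hive tables
    .getOrCreate()
)

# Ingest: read raw records from a file source (could equally be a stream or JDBC source).
raw = spark.read.csv("/data/landing/transactions.csv", header=True, inferSchema=True)

# Transform: basic cleansing and derivation.
cleaned = (
    raw.dropna(subset=["account_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("ingest_date", F.current_date())
)

# Load: persist to a managed Hive table, partitioned by ingest date.
cleaned.write.mode("append").partitionBy("ingest_date").saveAsTable("analytics.transactions")
```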

Posted 1 month ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Hive, HBase, or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform. Develop streaming pipelines. Work with Hadoop / Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on Azure cloud data platforms. Experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience: Certification in Azure and Databricks, or Cloudera Spark Certified Developer.
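Since this listing specifically calls out streaming pipelines and Kafka, a minimal Spark Structured Streaming sketch might look like the following; the broker address, topic, event schema, and output paths are all assumptions for illustration:

```python
# Minimal Spark Structured Streaming sketch: consume JSON events from Kafka
# and append them to a Parquet sink. Broker, topic, schema, and paths are
# illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-stream-demo").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("value", DoubleType()),
])

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "events")
    .load()
    # Kafka delivers bytes; decode the value column and parse the JSON payload.
    .select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/streams/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```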

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Navi Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data (Hadoop, Spark, Scala, Python, HBase, Hive). Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (much like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented with PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
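The "gather data from HBase with Python, then process with PySpark" flow mentioned above could be sketched as follows; this assumes an HBase Thrift server and uses the happybase client, with the host, table name, and column family all hypothetical:

```python
# Sketch of pulling rows out of HBase from Python and handing them to Spark.
# Assumes a running HBase Thrift server and a table named "metrics" with a
# column family "cf" (illustrative assumptions; happybase is one common client).
import happybase
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hbase-read-demo").getOrCreate()

connection = happybase.Connection("hbase-thrift-host", port=9090)
table = connection.table("metrics")

# Scan a column family and flatten each row into a plain dict.
rows = [
    {
        "row_key": key.decode(),
        "value": float(data[b"cf:value"].decode()),
    }
    for key, data in table.scan(columns=[b"cf:value"])
]
connection.close()

# Promote the extracted rows to a Spark DataFrame for distributed transformation.
df = spark.createDataFrame(rows)
df.groupBy().avg("value").show()
```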

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 17 Lacs

Mumbai

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data (Hadoop, Spark, Scala, Python, HBase, Hive). Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (much like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented with PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.
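The "custom framework for generating rules, much like a rules engine" mentioned above could take a shape like this minimal sketch, where declarative rules are expressed as Spark column predicates; the rule names, conditions, and sample data are invented for illustration:

```python
# Minimal rules-engine sketch: declarative rules are applied as Spark column
# expressions, collecting the names of violated rules per row. Rule names and
# conditions are invented for illustration only.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("rules-engine-demo").getOrCreate()

df = spark.createDataFrame(
    [("a1", 250.0, "IN"), ("a2", -15.0, "IN"), ("a3", 99.0, "XX")],
    ["account_id", "amount", "country"],
)

# Each rule is a (name, boolean Column expression that must hold) pair.
rules = [
    ("amount_non_negative", F.col("amount") >= 0),
    ("known_country", F.col("country").isin("IN", "US", "GB")),
]

# For each rule, emit its name when violated (null otherwise), then keep
# only the non-null entries so each row carries its list of violations.
violations = F.array(*[F.when(~cond, F.lit(name)) for name, cond in rules])
checked = df.withColumn("violations", F.filter(violations, lambda x: x.isNotNull()))
checked.show(truncate=False)
```

Keeping the rules as data rather than code is the design choice implied by "generating rules": new rules can be added to the list (or loaded from a table) without changing the engine itself.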

Posted 1 month ago

Apply

4.0 - 9.0 years

12 - 16 Lacs

Kochi

Work from Office

As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities: Build data pipelines to ingest, process, and transform data from files, streams, and databases. Process data with Spark, Python, PySpark, Scala, Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS. Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies built on the platform. Develop streaming pipelines. Work with Hadoop / AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on AWS cloud data platforms. Experience in AWS EMR / AWS Glue / Databricks, AWS Redshift, and DynamoDB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience: Certification in AWS and Databricks, or Cloudera Spark Certified Developer.
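For the AWS flavour of the same pipeline, a minimal Spark-on-EMR/Glue style sketch might read raw JSON from S3 and write partitioned Parquet back; the bucket names and the event field are placeholders:

```python
# Minimal sketch of an AWS-flavoured Spark job: read raw JSON from S3,
# aggregate, and write partitioned Parquet back to S3. Bucket names and the
# event_ts/page fields are placeholders. On EMR or Glue the s3:// scheme and
# credentials are preconfigured.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-demo").getOrCreate()

raw = spark.read.json("s3://example-landing-bucket/clicks/")

daily = (
    raw.withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "page")
       .agg(F.count("*").alias("clicks"))
)

daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3://example-curated-bucket/clicks_daily/"
)
```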

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 14 Lacs

Chennai

Work from Office

Developer leads the cloud application development/deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Strong knowledge of microservice logging, monitoring, debugging, and testing. In-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes and messaging platforms such as Kafka or IBM MQ. Good understanding of test-driven development. Familiar with Ant, Maven, or other build automation frameworks; good knowledge of basic UNIX commands.

Preferred technical and professional experience: Experience in concurrent design and multi-threading. Primary skills: Core Java, Spring Boot, Java 2/EE, Microservices, Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark. Good to have Python.

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 25 Lacs

Pune, Chennai, Bengaluru

Work from Office

Roles and Responsibilities: Design, develop, test, deploy, and maintain large-scale data processing pipelines using Scala/Spark. Collaborate with cross-functional teams to gather requirements and deliver high-quality solutions. Troubleshoot complex issues related to Hive queries, Spark jobs, and other big data technologies. Ensure scalability, performance, and reliability of big data systems on the Cloudera/Hadoop ecosystem. Stay up to date with industry trends and best practices in big data development.

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 9 Lacs

Mumbai

Work from Office

Skills: SQL and any NoSQL database like MongoDB or Elasticsearch. Data visualization tools like Kibana; expertise in visualizing big datasets and monitoring data, with familiarity implementing anomaly detection. Proficiency with deep learning frameworks. Proficiency with Python and basic machine learning libraries. Familiarity with Linux. Ability to select hardware to run an ML model with the required latency. Any one cloud technology (AWS, Google Cloud, or similar). Python with machine learning libraries (PyTorch, TensorFlow), plus Big Data tooling such as Hadoop, HBase, and Spark. Deep understanding of data structures and algorithms, and excellent problem-solving skills. Location: Andheri East, Mumbai.

Responsibilities: Understand business objectives and develop models that help achieve them, along with metrics to track their progress. Develop scalable infrastructure that automates training and deployment of ML models. Manage available resources such as hardware, data, and personnel so that deadlines are met. Produce data visualizations and support data comprehension. Brainstorm and design POCs using ML/DL/NLP solutions for new or existing enterprise problems. Work with and mentor fellow data and software engineers to build other parts of the infrastructure. Communicate your needs effectively and understand product challenges. Build the core of AI services such as decision support, vision, speech, text, NLP, and NLU. Leverage cloud technology (AWS or similar). Experiment with ML models in Python using machine learning libraries, Big Data, Hadoop, HBase, Spark, etc.

About Propellum: Propellum is a leading job automation solution that has enabled job boards across the world to scale limitlessly and distinguish themselves from the competition. Empowering leading job boards since 1998, our rock-solid technology, backed by super-efficient customer service and a team of domain experts, has been one of the defining reasons for our success.
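Since the posting pairs deep learning frameworks with anomaly detection, a toy PyTorch sketch of one common approach (an autoencoder whose reconstruction error flags anomalies) is shown below; the data is synthetic and the dimensions and threshold rule are illustrative assumptions:

```python
# Minimal PyTorch sketch of anomaly detection with an autoencoder: train on
# "normal" feature vectors, then flag inputs whose reconstruction error is
# unusually high. Data is synthetic; sizes and thresholds are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
normal = torch.randn(1024, 8)                 # stand-in for normal telemetry

model = nn.Sequential(
    nn.Linear(8, 3), nn.ReLU(),               # compress to a 3-d bottleneck
    nn.Linear(3, 8),                          # reconstruct the input
)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(normal), normal)
    loss.backward()
    opt.step()

# Score new points: reconstruction error well above the training distribution
# is treated as anomalous.
with torch.no_grad():
    errs = ((model(normal) - normal) ** 2).mean(dim=1)
    threshold = errs.mean() + 3 * errs.std()
    suspect = torch.randn(1, 8) * 5           # an obviously out-of-range point
    err = ((model(suspect) - suspect) ** 2).mean()
    print(f"anomaly: {bool(err > threshold)}")
```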

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Pune

Work from Office

Developer leads the cloud application development/deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Primary skills: Core Java, Spring Boot, Java 2/EE, Microservices, Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark; good to have Python. Strong knowledge of microservice logging, monitoring, debugging, and testing. In-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes and messaging platforms such as Kafka or IBM MQ. Good understanding of test-driven development. Familiar with Ant, Maven, or other build automation frameworks; good knowledge of basic UNIX commands. Experience in concurrent design and multi-threading.

Preferred technical and professional experience: None.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

Create Solution Outline and Macro Design describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation. Contribute to reusable components / assets / accelerators to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews and quality assurance, and act as design authority.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on the Azure Cloud Platform. Azure cloud experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience: Experience architecting complex data platforms on the Azure Cloud Platform and on-prem. Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric. Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.

Posted 1 month ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

Pune

Work from Office

As a Big Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. You'll also tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: developing, maintaining, evaluating, and testing big data solutions, plus data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Big Data development with Hadoop, Hive, Spark, and PySpark; strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of cloud platforms (AWS, Azure, etc.). Ability to use programming languages like Java, Python, and Scala to build pipelines that extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java.

Preferred technical and professional experience: Basic understanding of or experience with predictive/prescriptive modeling. You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.

Posted 1 month ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Chennai

Work from Office

Developer leads the cloud application development/deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Strong knowledge of microservice logging, monitoring, debugging, and testing. In-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes and messaging platforms such as Kafka or IBM MQ. Good understanding of test-driven development. Familiar with Ant, Maven, or other build automation frameworks; good knowledge of basic UNIX commands.

Preferred technical and professional experience: Experience in concurrent design and multi-threading. Primary skills: Core Java, Spring Boot, Java 2/EE, Microservices, Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark. Good to have Python.

Posted 1 month ago

Apply

3.0 - 7.0 years

11 - 15 Lacs

Hyderabad

Work from Office

The Manager, Software Development Engineering leads a team of technical experts in successfully executing technology projects and solutions that align with the strategy and have broad business impact. The Manager will work closely with development teams to identify and understand key features and their underlying functionality, while also partnering closely with Product Management and UX Design. They may exercise influence and govern overall end-to-end software development life cycle activities, including management of support and maintenance releases, minor functional releases, and major projects. The Manager will lead and provide technical guidance for process improvement programs while leveraging engineering best practices. In this people leadership role, Managers will recruit, train, motivate, coach, grow, and develop Software Development Engineer team members at a variety of levels through their technical expertise, providing continuous feedback to ensure employee expectations, customer needs, and product demands are met.

About the Role: Lead and manage a team of engineers, providing mentorship and fostering a collaborative environment. Design, implement, and maintain scalable data pipelines and systems to support business analytics and data science initiatives. Collaborate with cross-functional teams to understand data requirements and ensure data solutions align with business goals. Ensure data quality, integrity, and security across all data processes and systems. Drive the adoption of best practices in data engineering, including coding standards, testing, and automation. Evaluate and integrate new technologies and tools to enhance data processing and analytics capabilities. Prepare and present reports on engineering activities, metrics, and project progress to stakeholders.

About You: Proficiency in programming languages such as Python, Java, or Scala. Data engineering with APIs and any programming language. Strong understanding of APIs, forward-looking knowledge of AI/ML tools and models, and some knowledge of software architecture. Experience with cloud platforms (e.g., AWS, Google Cloud) and big data technologies (e.g., Hadoop, Spark). Experience with REST/OData APIs. Strong problem-solving skills and the ability to work in a fast-paced environment. Excellent communication and interpersonal skills. Experience with data warehousing solutions such as BigQuery or Snowflake. Familiarity with data visualization tools and techniques. Understanding of machine learning concepts and frameworks.

What's in it for you:

Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office, depending on the role) for our office-based roles, while delivering a seamless experience that is digitally and physically connected.

Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.

Industry Competitive Benefits: We offer comprehensive benefit plans including flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world-leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments. At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward.

As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here. Learn more on how to protect yourself from fraudulent job postings here. More information about Thomson Reuters can be found on thomsonreuters.com.

Posted 1 month ago

Apply

1.0 - 3.0 years

8 - 16 Lacs

Noida

Work from Office

- Proficient in Java, Spark, and Kafka for real-time processing
- Skilled in HBase for NoSQL on on-prem clusters
- Strong in data modeling for scalable NoSQL systems
- Built ETL pipelines using Spark for transformation
- Knowledge of Hadoop cluster management

Required candidate profile:
- Bachelor's in CS or a related field
- Familiar with version control systems, particularly Git
- Knowledge of AWS, Azure, or GCP
- Understanding of distributed databases, especially HBase
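As a rough sketch of the real-time path this role describes (consume events from Kafka, persist to HBase), the following assumes the kafka-python and happybase clients; the broker, topic, table, and column names are all hypothetical:

```python
# Rough sketch: consume JSON events from Kafka and persist them to an HBase
# table. Broker, topic, table, and column names are illustrative assumptions;
# kafka-python and happybase are assumed to be installed.
import json
from kafka import KafkaConsumer
import happybase

consumer = KafkaConsumer(
    "sensor-events",
    bootstrap_servers="broker1:9092",
    group_id="hbase-writer",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
)

hbase = happybase.Connection("hbase-thrift-host")
table = hbase.table("sensor_events")

for msg in consumer:
    event = msg.value
    # Row key design matters in HBase: prefixing with the sensor id and
    # suffixing with the timestamp keeps one sensor's readings in a
    # contiguous scan range.
    row_key = f"{event['sensor_id']}#{event['ts']}".encode()
    table.put(row_key, {
        b"d:reading": str(event["reading"]).encode(),
        b"d:unit": event.get("unit", "").encode(),
    })
```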

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Kochi

Work from Office

Create Solution Outline and Macro Design describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation. Contribute to reusable components / assets / accelerators to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes. Participate in delivery reviews / product reviews and quality assurance, and work as design authority.

Required education: Bachelor's Degree. Preferred education: Non-Degree Program.

Required technical and professional expertise: Experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. Experience in data engineering and architecting data platforms. Experience in architecting and implementing data platforms on the Azure Cloud Platform. Azure cloud experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience: Experience architecting complex data platforms on the Azure Cloud Platform and on-prem. Experience and exposure to implementation of Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric. Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Developer leads the cloud application development/deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Strong proficiency in Java, Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns. Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices. Strong knowledge of microservice logging, monitoring, debugging, and testing. In-depth knowledge of relational databases (e.g., MySQL). Experience with container platforms such as Docker and Kubernetes and messaging platforms such as Kafka or IBM MQ. Good understanding of test-driven development. Familiar with Ant, Maven, or other build automation frameworks; good knowledge of basic UNIX commands.

Preferred technical and professional experience: Experience in concurrent design and multi-threading. Primary skills: Core Java, Spring Boot, Java 2/EE, Microservices, Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark. Good to have Python.

Posted 1 month ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Pune

Work from Office

The Data Engineer supports, develops, and maintains a data and analytics platform to efficiently process, store, and make data available to analysts and other consumers. This role collaborates with Business and IT teams to understand requirements and best leverage technologies for agile data delivery at scale. Note: even though the role is categorized as Remote, it will follow a hybrid work model.

Key Responsibilities: Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured). Develop and operate large-scale data storage and processing solutions using cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, DynamoDB). Ensure data quality and integrity through continuous monitoring and troubleshooting. Implement data governance processes, managing metadata, access, and data retention. Develop scalable, efficient, quality data pipelines with monitoring and alert mechanisms. Design and implement physical data models and storage architectures based on best practices. Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models. Participate in testing and troubleshooting of data pipelines. Utilize agile development technologies such as DevOps, Scrum, and Kanban for continuous improvement in data-driven applications.

Qualifications, Skills, and Experience:

Must-Have: 2-3 years of experience in data engineering with expertise in Azure Databricks and Scala/Python. Hands-on experience with Spark (Scala/PySpark) and SQL. Strong understanding of Spark Streaming, Spark internals, and query optimization. Proficiency in Azure cloud services. Agile development experience. Experience in unit testing of ETL pipelines. Expertise in creating ETL pipelines integrating ML models. Knowledge of big data storage strategies (optimization and performance). Strong problem-solving skills. Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse. Exposure to agile software development methodologies. Quick learner with adaptability to new technologies.

Nice-to-Have: Understanding of the ML lifecycle. Exposure to big data open-source technologies. Experience with clustered compute cloud-based implementations. Familiarity with developing applications requiring large file movement in cloud environments. Experience in building analytical solutions. Exposure to IoT technology.

Competencies: System Requirements Engineering: translates stakeholder needs into verifiable requirements. Collaborates: builds partnerships and works collaboratively with others. Communicates Effectively: develops and delivers clear communications for various audiences. Customer Focus: builds strong customer relationships and delivers customer-centric solutions. Decision Quality: makes timely and informed decisions to drive progress. Data Extraction: performs ETL activities from various sources using appropriate tools and technologies. Programming: writes and tests computer code using industry standards, tools, and automation. Quality Assurance Metrics: applies measurement science to assess solution effectiveness. Solution Documentation: documents and communicates solutions to enable knowledge transfer. Solution Validation Testing: ensures configuration changes meet design and customer requirements. Data Quality: identifies and corrects data flaws to support governance and decision-making. Problem Solving: uses systematic analysis to identify and resolve issues effectively. Values Differences: recognizes and values diverse perspectives and cultures.

Education, Licenses, and Certifications: College, university, or equivalent degree in a relevant technical discipline, or equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Work Schedule: Work primarily with stakeholders in the US, requiring a 2-3 hour overlap during EST hours as needed.
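Given the posting's emphasis on unit testing of ETL pipelines, here is a minimal pytest sketch that tests a transform in isolation against a local SparkSession; the transform, column names, and test data are hypothetical:

```python
# Sketch of unit-testing an ETL transform in isolation with pytest and a
# local SparkSession. The dedupe_latest transform and its column names are
# hypothetical stand-ins for a real pipeline step.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.window import Window


def dedupe_latest(df):
    """Keep only the most recent record per account_id (the unit under test)."""
    w = Window.partitionBy("account_id").orderBy(F.col("updated_at").desc())
    return (
        df.withColumn("rn", F.row_number().over(w))
          .filter(F.col("rn") == 1)
          .drop("rn")
    )


@pytest.fixture(scope="module")
def spark():
    return SparkSession.builder.master("local[2]").appName("etl-tests").getOrCreate()


def test_dedupe_latest_keeps_newest_row(spark):
    df = spark.createDataFrame(
        [("a1", "2024-01-01", 10), ("a1", "2024-02-01", 20), ("a2", "2024-01-15", 5)],
        ["account_id", "updated_at", "balance"],
    )
    out = {r["account_id"]: r["balance"] for r in dedupe_latest(df).collect()}
    assert out == {"a1": 20, "a2": 5}
```

Keeping each transform as a plain function of DataFrame in, DataFrame out is what makes this style of test possible; the pipeline wiring (sources, sinks, scheduling) is then tested separately.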

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Pune

Work from Office

Please note: even though the GPP mentions Remote, this is a hybrid role.

Key Responsibilities: Implement and automate deployment of distributed systems for ingesting and transforming data from various sources (relational, event-based, unstructured). Continuously monitor and troubleshoot data quality and integrity issues. Implement data governance processes and methods for managing metadata, access, and retention for internal and external users. Develop reliable, efficient, scalable, quality data pipelines with monitoring and alert mechanisms using ETL/ELT tools or scripting languages. Develop physical data models and implement data storage architectures per design guidelines. Analyze complex data elements and systems, data flow, dependencies, and relationships to contribute to conceptual, physical, and logical data models. Participate in testing and troubleshooting of data pipelines. Develop and operate large-scale data storage and processing solutions using distributed and cloud-based platforms (e.g., Data Lakes, Hadoop, HBase, Cassandra, MongoDB, Accumulo, DynamoDB). Use agile development technologies such as DevOps, Scrum, Kanban, and continuous improvement cycles for data-driven applications.

Qualifications: College, university, or equivalent degree in a relevant technical discipline, or relevant equivalent experience, required. This position may require licensing for compliance with export controls or sanctions regulations.

Competencies: System Requirements Engineering: translate stakeholder needs into verifiable requirements and establish acceptance criteria. Collaborates: build partnerships and work collaboratively with others to meet shared objectives. Communicates Effectively: develop and deliver multi-mode communications that convey a clear understanding of the unique needs of different audiences. Customer Focus: build strong customer relationships and deliver customer-centric solutions. Decision Quality: make good and timely decisions that keep the organization moving forward. Data Extraction: perform ETL activities from various sources and transform them for consumption by downstream applications and users. Programming: create, write, and test computer code, test scripts, and build scripts using industry standards and tools. Quality Assurance Metrics: apply measurement science to assess whether a solution meets its intended outcomes. Solution Documentation: document information and solutions based on knowledge gained during product development activities. Solution Validation Testing: validate configuration item changes or solutions using best practices. Data Quality: identify, understand, and correct flaws in data to support effective information governance. Problem Solving: solve problems using systematic analysis processes and industry-standard methodologies. Values Differences: recognize the value that different perspectives and cultures bring to an organization.

Skills and Experience Needed:

Must-Have: 3-5 years of experience in data engineering with a strong background in Azure Databricks and Scala/Python. Hands-on experience with Spark (Scala/PySpark) and SQL. Experience with Spark Streaming, Spark internals, and query optimization. Proficiency in Azure cloud services. Agile development experience. Unit testing of ETL. Experience creating ETL pipelines with ML model integration. Knowledge of big data storage strategies (optimization and performance). Critical problem-solving skills. Basic understanding of data models (SQL/NoSQL), including Delta Lake or Lakehouse. Quick learner.

Nice-to-Have: Understanding of the ML lifecycle. Exposure to big data open-source technologies. Experience with Spark, Scala/Java, MapReduce, Hive, HBase, and Kafka. SQL query language proficiency. Experience with clustered compute cloud-based implementations. Familiarity with developing applications requiring large file movement in a cloud-based environment. Exposure to agile software development. Experience building analytical solutions. Exposure to IoT technology.

Work Schedule: Most of the work will be with stakeholders in the US, with a 2-3 hour overlap during EST hours on a need basis.

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 12 Lacs

Chennai

Work from Office

As an Associate Software Developer at IBM, you'll work with clients to co-create solutions to major real-world challenges by using best-practice technologies, tools, techniques, and products to translate system requirements into the design and development of customized systems. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Core Java, Spring Boot, Java 2/EE, Microservices, Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.), Spark. Good to have Python. Preferred technical and professional experience: None.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 6 Lacs

Mumbai

Work from Office

Hiring for Big Data Hadoop Developer - Mumbai.

Job Summary: We are seeking an experienced Big Data Hadoop Developer with strong expertise in the Hadoop ecosystem and proven experience managing and developing on Hadoop clusters. You will be responsible for designing, developing, optimizing, and maintaining big data solutions, as well as ensuring cluster health and performance.

Key Responsibilities: Design and develop scalable big data solutions using Hadoop ecosystem tools such as HDFS, Hive, Pig, Sqoop, and MapReduce. Administer, configure, and optimize Hadoop clusters (Cloudera, Hortonworks, or Apache). Develop and maintain ETL pipelines to ingest, process, and analyze large datasets. Implement and monitor data security, backup, and recovery strategies on Hadoop clusters. Collaborate with data engineers, data scientists, and business analysts to deliver data solutions. Perform cluster performance tuning and troubleshoot issues across Hadoop services (YARN, HDFS, Hive, etc.). Write and optimize complex HiveQL and Spark jobs. Support production deployment and post-deployment monitoring.

Required Skills and Qualifications: Bachelor's degree in Computer Science, Engineering, or a related field. 3+ years of hands-on experience with the Hadoop ecosystem. Experience in Hadoop cluster setup, administration, and troubleshooting. Strong knowledge of Hive, HDFS, Pig, Sqoop, Oozie, and YARN. Experience with Spark, Kafka, and HBase is a plus. Strong programming skills in Java, Scala, or Python. Experience with Linux shell scripting and DevOps tools (e.g., Jenkins, Git). Familiarity with cloud platforms (AWS EMR, Azure HDInsight, or GCP Dataproc) is a plus. Excellent problem-solving and communication skills.
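As a toy illustration of the "write and optimize complex HiveQL" responsibility, the sketch below submits a HiveQL query from Spark and filters on the partition column so only the relevant partitions are scanned; the table, columns, and partition scheme are hypothetical:

```python
# Toy illustration of HiveQL optimization from Spark: filtering on the
# partition column lets Hive/Spark prune partitions rather than scan the
# full table. Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("hiveql-demo")
    .enableHiveSupport()
    .getOrCreate()
)

# Partition-pruned aggregation: only the June 2024 partitions are read.
monthly = spark.sql("""
    SELECT customer_id, SUM(amount) AS total
    FROM sales.transactions          -- assumed partitioned by dt
    WHERE dt BETWEEN '2024-06-01' AND '2024-06-30'
    GROUP BY customer_id
""")
monthly.explain()   # check that PartitionFilters appear in the physical plan
monthly.write.mode("overwrite").saveAsTable("sales.monthly_totals_202406")
```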

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Pune, Gurugram

Work from Office

In one sentence: We are seeking an experienced Kafka Administrator to manage and maintain our Apache Kafka infrastructure, with a strong focus on deployments within OpenShift and Cloudera environments. The ideal candidate will have hands-on experience with Kafka clusters, container orchestration, and big data platforms, ensuring high availability, performance, and security.

What will your job look like? Install, configure, and manage Kafka clusters in production and non-production environments. Deploy and manage Kafka on OpenShift using Confluent for Kubernetes (CFK) or similar tools. Integrate Kafka with Cloudera Data Platform (CDP), including services like NiFi, HBase, and Solr. Monitor Kafka performance and implement tuning strategies for optimal throughput and latency. Implement and manage Kafka security using SASL_SSL, Kerberos, and RBAC. Perform upgrades, patching, and backup/recovery of Kafka environments. Collaborate with DevOps and development teams to support CI/CD pipelines and application integration. Troubleshoot and resolve Kafka-related issues in a timely manner. Maintain documentation and provide knowledge transfer to team members.

All you need is: 5+ years of experience as a Kafka Administrator. 2+ years of experience deploying Kafka on OpenShift or Kubernetes. Strong experience with the Cloudera ecosystem and its integration with Kafka. Proficiency in Kafka security protocols (SASL_SSL, Kerberos). Experience with monitoring tools like Prometheus, Grafana, or Confluent Control Center. Solid understanding of Linux systems and shell scripting. Familiarity with CI/CD tools (Jenkins, GitLab CI, etc.). Excellent problem-solving and communication skills.
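A small sketch of routine Kafka administration from Python, using kafka-python's admin client to create a topic with explicit retention and replication settings; the broker address and topic settings are placeholders:

```python
# Sketch of scripted Kafka administration with kafka-python's AdminClient:
# create a topic with explicit retention and replication settings. Broker
# address and topic settings are placeholders.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(
    bootstrap_servers="broker1:9092",
    client_id="kafka-admin-demo",
    # In a secured cluster this is also where SASL_SSL/Kerberos options
    # (security_protocol, sasl_mechanism, ...) would be supplied.
)

topic = NewTopic(
    name="payments.events",
    num_partitions=12,
    replication_factor=3,
    topic_configs={
        "retention.ms": str(7 * 24 * 3600 * 1000),  # keep one week of data
        "min.insync.replicas": "2",
    },
)
admin.create_topics([topic])
admin.close()
```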

Posted 1 month ago

Apply

10.0 - 15.0 years

12 - 17 Lacs

Bengaluru

Work from Office

Create Solution Outline and Macro Design describing end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, serving layer, design patterns, and platform architecture principles. Contribute to pre-sales and sales support through RfP responses, solution architecture, planning, and estimation. Contribute to reusable components / assets / accelerators to support capability development. Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies. Participate in customer PoCs to deliver the outcomes.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Candidates must have experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems. 10-15 years of experience in data engineering and architecting data platforms. 5-8 years' experience in architecting and implementing data platforms on the Azure Cloud Platform. 5-8 years' experience in architecting and implementing data platforms on-prem (Hadoop or DW appliance). Azure cloud experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow. Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred technical and professional experience: Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc. Candidates should have experience delivering both business decision support systems (reporting, analytics) and data science domains / use cases.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data (Hadoop, Spark, Scala, Python, HBase, Hive). Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (much like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented with PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.

Posted 1 month ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

Pune

Work from Office

As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that tackle the client's needs.

Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise: Must have 5+ years of experience in Big Data (Hadoop, Spark, Scala, Python, HBase, Hive). Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience using Python to develop a custom framework for generating rules (much like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented with PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala.

Posted 1 month ago

Apply