575 HDFS Jobs - Page 7

Set up a Job Alert
JobPe aggregates results for easy application access, but you apply directly on the original job portal.

12.0 - 19.0 years

15 - 25 Lacs

Bengaluru

Work from Office

What you'll do: We seek Software Engineers with experience building and scaling services in on-premises and cloud environments. As a Principal Software Engineer in the Epsilon Attribution/Forecasting Product Development team, you will design, implement, and optimize data processing solutions using Scala, Spark, and Hadoop. You will collaborate with cross-functional teams to deploy big data solutions on our on-premises and cloud infrastructure, and build, schedule and maintain workflows. You will perform data integration and transformation, troubleshoot issues, document processes, communicate technical concepts clearly, and continuously enhance our attribution and forecasting engines. Strong wri...

Posted 1 month ago

5.0 - 10.0 years

7 - 12 Lacs

Chennai

Work from Office

Role Description: We are looking for a Senior Data Warehouse Engineer with strong design skills to work with the Conversant Engineering Data Warehousing teams, which deal with data at petabyte scale. This role will help define solutions for given product and engineering initiatives, and drive those solutions to delivery by engaging effectively with team members across the globe. The person in this role will work closely with key stakeholders throughout the organization, so they must be able to communicate and keep stakeholders informed on the overall health of projects impacting Conversant Data Warehouse platforms. They should also be able to mentor junior team members. What you'll need: Lead, design...

Posted 1 month ago

5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Role Description: We are looking for a Senior Data Warehouse Engineer with strong design skills to work with the Conversant Engineering Data Warehousing teams, which deal with data at petabyte scale. This role will help define solutions for given product and engineering initiatives, and drive those solutions to delivery by engaging effectively with team members across the globe. The person in this role will work closely with key stakeholders throughout the organization, so they must be able to communicate and keep stakeholders informed on the overall health of projects impacting Conversant Data Warehouse platforms. They should also be able to mentor junior team members. What you'll need: Lead, design...

Posted 1 month ago

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

Role Description: We are looking for a Senior Data Warehouse Engineer with strong design skills to work with the Conversant Engineering Data Warehousing teams, which deal with data at petabyte scale. This role will help define solutions for given product and engineering initiatives, and drive those solutions to delivery by engaging effectively with team members across the globe. The person in this role will work closely with key stakeholders throughout the organization, so they must be able to communicate and keep stakeholders informed on the overall health of projects impacting Conversant Data Warehouse platforms. They should also be able to mentor junior team members. What you'll need: Lead, design...

Posted 1 month ago

3.0 - 8.0 years

8 - 12 Lacs

Bengaluru, Palem

Work from Office

This position is responsible for hands-on design and implementation expertise in Spark and Python (PySpark), along with other Hadoop ecosystem components such as HDFS, Hive, Hue, Impala, Zeppelin, etc. The purpose of the position includes: analysis, design and implementation of business requirements using Spark and Python; Cloudera Hadoop development around big data; solid SQL experience; development experience with PySpark and Spark SQL with good analytical and debugging skills; development work for building new solutions around Hadoop and automation of operational tasks; assisting the team and troubleshooting issues. Required skills: Python, Hadoop, SQL, shell scripting, Spark SQL.
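
A minimal PySpark sketch of the kind of Hive/Spark SQL development this posting describes; the database, table, and column names are hypothetical.

    # Read a Hive table stored on HDFS, transform it with the DataFrame API
    # and with Spark SQL, then persist the result back to Hive.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("hive-spark-sql-sketch")
        .enableHiveSupport()   # talk to the Hive metastore (Cloudera/HDFS)
        .getOrCreate()
    )

    txns = spark.table("sales_db.transactions")

    daily = (
        txns.filter(F.col("amount") > 0)
            .groupBy("region", "txn_date")
            .agg(F.sum("amount").alias("total_amount"))
    )

    # Equivalent logic expressed in Spark SQL
    txns.createOrReplaceTempView("transactions")
    daily_sql = spark.sql("""
        SELECT region, txn_date, SUM(amount) AS total_amount
        FROM transactions
        WHERE amount > 0
        GROUP BY region, txn_date
    """)

    daily.write.mode("overwrite").saveAsTable("sales_db.daily_totals")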

Posted 1 month ago

3.0 - 8.0 years

8 - 12 Lacs

Bengaluru, Palem

Work from Office

This position is responsible for hands-on design and implementation expertise in Spark and Python (PySpark), along with other Hadoop ecosystem components such as HDFS, Hive, Hue, Impala, Zeppelin, etc. The purpose of the position includes: analysis, design and implementation of business requirements using Spark and Python; Cloudera Hadoop development around big data; solid SQL experience; development experience with PySpark and Spark SQL with good analytical and debugging skills; development work for building new solutions around Hadoop and automation of operational tasks; assisting the team and troubleshooting issues. Required skills: Python, Hadoop, SQL, shell scripting, Spark SQL.

Posted 1 month ago

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Hybrid

Scala & Spark at an intermediate/advanced level is a mandate. Data structures and algorithms in Scala. Minimum of 4+ years of experience in Scala/Spark. Must have a big-data background.

Posted 1 month ago

6.0 - 11.0 years

10 - 20 Lacs

Hyderabad

Hybrid

JD: Total years of experience: 6+. Relevant years of experience: 6+. Roles and responsibilities: 6+ years of experience in Kafka, Spark, Scala and PySpark. Strong knowledge of implementing and maintaining scalable streaming and batch data solutions with high throughput and strict SLAs. Extensive experience working in production with Kafka, Spark and Kafka Streams, including best practices, observability, optimization and performance tuning. Knowledge of Python and Scala, including best practices, architecture and dependency management. Collaborate with cross-functional teams to understand data requirements and integrate data from multiple sources, ensuring data consistency and quality. Mandatory ...
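
A hedged Structured Streaming sketch of the Kafka + Spark work described above; the broker address, topic name, and event schema are hypothetical, and the Kafka source assumes the spark-sql-kafka connector is on the classpath.

    # Consume a Kafka topic, parse JSON events, and maintain windowed aggregates.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F
    from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

    spark = SparkSession.builder.appName("kafka-streaming-sketch").getOrCreate()

    event_schema = StructType([
        StructField("user_id", StringType()),
        StructField("amount", DoubleType()),
        StructField("event_time", TimestampType()),
    ])

    raw = (
        spark.readStream.format("kafka")
        .option("kafka.bootstrap.servers", "broker1:9092")
        .option("subscribe", "payments")
        .option("startingOffsets", "latest")
        .load()
    )

    events = (
        raw.select(F.from_json(F.col("value").cast("string"), event_schema).alias("e"))
           .select("e.*")
    )

    totals = (
        events.withWatermark("event_time", "10 minutes")
              .groupBy(F.window("event_time", "5 minutes"), "user_id")
              .agg(F.sum("amount").alias("total_amount"))
    )

    query = (
        totals.writeStream
        .outputMode("update")
        .format("console")   # a real job would write to Kafka, a table, or object storage
        .option("checkpointLocation", "/tmp/checkpoints/payments")
        .start()
    )
    query.awaitTermination()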

Posted 1 month ago

3.0 - 6.0 years

3 - 6 Lacs

Pune, Maharashtra, India

On-site

Implementation experience building large-scale data applications from scratch (initial stages). Programming experience in Python or Java. Good experience deploying applications on AWS and using its services. Must have experience with the Hadoop Distributed File System (HDFS) and Amazon Simple Storage Service (S3). Must have good experience with SQL. Data organization in a data lake (experience with Delta Lake or Databricks is an added advantage). Detailed understanding of data pipeline creation. Detailed experience of data ingestion. Techno-functional experience working with the technical team (data engineering / data science) and the business (functional) teams.
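
An illustrative PySpark sketch of the HDFS-to-S3 / data-lake organization this posting mentions; the paths and bucket names are hypothetical, and the S3 and Delta Lake writes assume the hadoop-aws and delta-spark packages respectively.

    # Ingest raw CSVs from HDFS, clean them, and lay them out as a partitioned lake on S3.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("s3-lake-sketch").getOrCreate()

    raw = spark.read.option("header", "true").csv("hdfs:///data/raw/orders/")

    clean = (
        raw.withColumn("order_date", F.to_date("order_date"))
           .dropDuplicates(["order_id"])
    )

    # Partitioned Parquet layout on S3
    clean.write.mode("overwrite").partitionBy("order_date").parquet("s3a://my-lake/curated/orders/")

    # The same data as a Delta table, if Delta Lake is available
    clean.write.format("delta").mode("overwrite").partitionBy("order_date").save("s3a://my-lake/delta/orders/")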

Posted 1 month ago

3.0 - 6.0 years

3 - 6 Lacs

Gurgaon, Haryana, India

On-site

Implementation experience building large-scale data applications from scratch (initial stages). Programming experience in Python or Java. Good experience deploying applications on AWS and using its services. Must have experience with the Hadoop Distributed File System (HDFS) and Amazon Simple Storage Service (S3). Must have good experience with SQL. Data organization in a data lake (experience with Delta Lake or Databricks is an added advantage). Detailed understanding of data pipeline creation. Detailed experience of data ingestion. Techno-functional experience working with the technical team (data engineering / data science) and the business (functional) teams.

Posted 1 month ago

3.0 - 6.0 years

3 - 6 Lacs

Bengaluru, Karnataka, India

On-site

Implementation experience building large-scale data applications from scratch (initial stages). Programming experience in Python or Java. Good experience deploying applications on AWS and using its services. Must have experience with the Hadoop Distributed File System (HDFS) and Amazon Simple Storage Service (S3). Must have good experience with SQL. Data organization in a data lake (experience with Delta Lake or Databricks is an added advantage). Detailed understanding of data pipeline creation. Detailed experience of data ingestion. Techno-functional experience working with the technical team (data engineering / data science) and the business (functional) teams.

Posted 1 month ago

8.0 - 11.0 years

15 - 27 Lacs

Noida, Mumbai, Pune

Hybrid

About the Role: We are seeking a highly skilled Lead Data Engineer to define and drive the data migration strategy from legacy RDBMS platforms to PostgreSQL for a mission-critical billing and invoicing system. This role requires a hands-on technical leader who can design the migration architecture, oversee implementation, and ensure a seamless transition without disruption to core business functions. The ideal candidate will have deep expertise in data migration frameworks, large-scale distributed processing (Spark preferred), and experience with both open-source and AWS ecosystem tools. They will work closely with business stakeholders and customers to manage expectations, deliver with precisi...

Posted 1 month ago

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

As a StreamSets ETL Developer (AVP) at Deutsche Bank, your role involves being part of the integration team responsible for developing, testing, and maintaining robust and scalable ETL processes to support data integration initiatives. You will have a unique opportunity to contribute to the strategic future-state technology landscape for all DWS Corporate Functions globally. Your responsibilities include: - Working with clients to deliver high-quality software within an agile development lifecycle - Developing and thoroughly testing ETL solutions/pipelines - Defining and evolving architecture components, contributing to architectural decisions at a department and bank-wide level - Taking end-...

Posted 1 month ago

3.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: About Us: At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We're devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of ...

Posted 1 month ago

3.0 - 7.0 years

5 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Big Data Developer. Essential functions: Design and implement data pipelines for migration from HDFS/Hive to cloud object storage (e.g., S3, Ceph). Optimize Spark (and optionally Flink) jobs for performance and scalability in a Kubernetes environment. Ensure data consistency, schema evolution, and governance with Apache Iceberg or equivalent table formats. Support migration strategy definition by providing technical input and identifying risks. Mentor junior developers and review their code and design decisions. Collaborate with platform engineers, cloud architects, and product stakeholders to align technical implementation with project goals. Troubleshoot complex distributed system issues in da...
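
A hedged sketch of one migration step of the kind described above: copying a Hive/HDFS table into an Apache Iceberg table on S3 with Spark. The catalog name, warehouse bucket, and table names are hypothetical, and the job assumes the iceberg-spark-runtime and hadoop-aws jars are available.

    # Read a legacy Hive table and rewrite it as an Iceberg table in an S3-backed catalog.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (
        SparkSession.builder
        .appName("hive-to-iceberg-sketch")
        .enableHiveSupport()
        .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.lake.type", "hadoop")
        .config("spark.sql.catalog.lake.warehouse", "s3a://my-lake/warehouse")
        .getOrCreate()
    )

    source = spark.table("legacy_db.events")

    (
        source.writeTo("lake.analytics.events")
              .using("iceberg")
              .partitionedBy(F.col("event_date"))
              .createOrReplace()
    )

    # Basic reconciliation: row counts should match after the copy
    src_count = source.count()
    tgt_count = spark.table("lake.analytics.events").count()
    assert src_count == tgt_count, f"row count mismatch: {src_count} vs {tgt_count}"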

Posted 1 month ago

8.0 - 10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Sigmoid empowers enterprises to make smarter, data-driven decisions by blending advanced data engineering with AI consulting. We collaborate with some of the world's leading data-rich organizations across sectors such as CPG-retail, BFSI, life sciences, manufacturing, and more to solve complex business challenges. Our global team specializes in cloud data modernization, predictive analytics, generative AI, and DataOps, supported by 10+ delivery centers and innovation hubs, including a major global presence in Bengaluru and operations across the USA, Canada, UK, Netherlands, Poland, Singapore, and India. Recognized as a leader in the data and analytics space, Sigmoid is backed by Peak XV Part...

Posted 1 month ago

5.0 - 9.0 years

10 - 18 Lacs

Pune

Work from Office

Experience in big data development and data engineering. - Proficiency in Java and experience with Apache Spark. - Experience in API development and integration. - Strong understanding of data engineering principles and big data concepts. Required candidate profile: - Familiarity with big data tools such as Hadoop, HDFS, Hive, HBase, and Kafka. - Experience with SQL and NoSQL databases. - Strong communication and collaboration skills.

Posted 1 month ago

2.0 - 4.0 years

2 - 4 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Summary: We are seeking an experienced Oracle GoldenGate Specialist to join our team and lead the design, implementation, and maintenance of real-time data replication and integration solutions using Oracle GoldenGate. The ideal candidate will have a deep understanding of GoldenGate architecture, experience in high-availability environments, and strong problem-solving skills. Key Responsibilities: Design, implement, and manage Oracle GoldenGate replication across heterogeneous and homogeneous systems. Monitor and maintain the health and performance of GoldenGate replication, addressing latency and data integrity issues. Develop custom solutions for data replication, transformation, and f...

Posted 1 month ago

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

GCP Data Engineers at N Consulting Ltd. We're hiring GCP Data Engineers! Join our growing team and work on cutting-edge data engineering projects in the cloud! Work locations: Chennai / Bangalore / Pune / Hyderabad. Experience: 5+ years. Skills required: Expertise in the GCP ecosystem (BigQuery, Dataflow, Composer, Pub/Sub, Dataproc). Strong proficiency in Python, PySpark and SQL. Knowledge of HDFS, Hadoop and large-scale data migration/engineering. Responsibilities: Design, build, and optimize data pipelines on GCP. Lead and deliver end-to-end data engineering migration projects independently. Collaborate with stakeholders to build scalable and reliable cloud data solutions.
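
A small illustrative sketch of the kind of GCP pipeline step such a role involves, using PySpark with the spark-bigquery connector (for example on Dataproc); the project, dataset, table, and bucket names are hypothetical.

    # Read a BigQuery table, aggregate it, and write the result back via a GCS staging bucket.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("bigquery-pipeline-sketch").getOrCreate()

    events = (
        spark.read.format("bigquery")
        .option("table", "my-project.analytics.events")
        .load()
    )

    daily = events.groupBy("event_date").agg(F.count(F.lit(1)).alias("event_count"))

    (
        daily.write.format("bigquery")
        .option("table", "my-project.analytics.daily_event_counts")
        .option("temporaryGcsBucket", "my-staging-bucket")
        .mode("overwrite")
        .save()
    )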

Posted 1 month ago

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad

Work from Office

We're hiring GCP Data Engineers! Join our growing team and work on cutting-edge data engineering projects in the cloud! Work locations: Chennai / Bangalore / Pune / Hyderabad. Experience: 5+ years. Skills required: Expertise in the GCP ecosystem (BigQuery, Dataflow, Composer, Pub/Sub, Dataproc). Strong proficiency in Python, PySpark and SQL. Knowledge of HDFS, Hadoop and large-scale data migration/engineering. Responsibilities: Design, build, and optimize data pipelines on GCP. Lead and deliver end-to-end data engineering & migration projects independently. Collaborate with stakeholders to build scalable and reliable cloud data solutions.

Posted 1 month ago

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

Role Overview: You will be working as a Lead Spark Scala Engineer in the ICG TTS Operations Technology (OpsTech) Group to help implement the next-generation Digital Automation Platform and Imaging Workflow Technologies. Your role will involve managing development teams in the distributed systems ecosystem and being a strong team player. You are expected to have superior technical knowledge of current programming languages, technologies, and other leading-edge development tools to contribute to applications, systems analysis, and programming activities. Key Responsibilities: - Develop, test, and deploy production-grade Spark applications in Scala, ensuring optimal perf...

Posted 1 month ago

6.0 - 11.0 years

15 - 30 Lacs

Bengaluru

Work from Office

7+ years of data warehousing/engineering and software solutions design and development experience. • Experience in designing and architecting distributed data systems. • Code, test, and document new or modified data systems to create robust and scalable applications for data analytics. • Work with other big data developers to make sure that all data solutions are consistent. • Partner with the business community to understand requirements, determine training needs and deliver user training sessions. • Perform technology and product research to better define requirements, resolve important issues and improve the overall capability of the analytics technology stack. • Evaluate and provide feedback on f...

Posted 1 month ago

4.0 - 5.0 years

1 - 13 Lacs

Bengaluru

Work from Office

Responsibilities: * Design, develop & maintain data pipelines using Python, Hadoop, Spark & PostgreSQL. * Collaborate with cross-functional teams on ETL projects & Git version control. Benefits: Health insurance, Provident fund.
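
A minimal sketch of the Python/Spark/PostgreSQL pipeline work this posting describes; the connection details and table names are hypothetical, and the JDBC write assumes the PostgreSQL driver jar is on the Spark classpath.

    # Extract raw events from HDFS, compute a daily aggregate, and load it into PostgreSQL.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-postgres-sketch").getOrCreate()

    raw = spark.read.parquet("hdfs:///data/raw/events/")

    daily = (
        raw.withColumn("event_date", F.to_date("event_ts"))
           .groupBy("event_date")
           .agg(F.countDistinct("user_id").alias("active_users"))
    )

    (
        daily.write.format("jdbc")
        .option("url", "jdbc:postgresql://db-host:5432/analytics")
        .option("dbtable", "public.daily_active_users")
        .option("user", "etl_user")
        .option("password", "change-me")   # use a secrets manager in real pipelines
        .option("driver", "org.postgresql.Driver")
        .mode("append")
        .save()
    )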

Posted 1 month ago

6.0 - 11.0 years

8 - 14 Lacs

Bengaluru

Work from Office

Summary: We are seeking experienced Palantir Foundry professionals to join our team in Bangalore. The ideal candidate will have a strong background in Palantir Foundry, with a minimum of 5 years of relevant experience. This role requires a deep understanding of data integration, application development, and cloud-based deployment. The successful candidate will be responsible for troubleshooting and optimizing application performance, and will work closely with our team to deliver high-quality solutions. Responsibilities: Develop and implement data integration solutions using Palantir Foundry. Design and deploy cloud-based applications using Agile methodologies. Troubleshoot and optimize ...

Posted 1 month ago

5.0 - 8.0 years

0 - 3 Lacs

Purandhar

Hybrid

Role & responsibilities: Experience designing and deploying large-scale distributed data processing systems with technologies such as PostgreSQL or equivalent databases, SQL, Hadoop, Spark and Tableau. Proven ability to define and build architecturally sound solution designs. Demonstrated ability to rapidly build relationships with key stakeholders. Experience of automated unit testing, automated integration testing and a fully automated build and deployment process as part of DevOps tooling. Must have the ability to understand and develop the logical flow of applications at the technical code level. Strong interpersonal skills and the ability to work in a team and in global environments. Should be p...
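
As a sketch of the automated unit testing this posting asks for, one common pattern is to keep Spark transformations as pure functions and test them with pytest against a local SparkSession; the function and column names here are hypothetical.

    # Transformation plus a pytest test for it (run with: pytest test_transforms.py)
    import pytest
    from pyspark.sql import SparkSession, DataFrame
    from pyspark.sql import functions as F

    def filter_and_total(orders: DataFrame) -> DataFrame:
        """Keep positive-amount orders and total them per customer."""
        return (
            orders.filter(F.col("amount") > 0)
                  .groupBy("customer_id")
                  .agg(F.sum("amount").alias("total_amount"))
        )

    @pytest.fixture(scope="module")
    def spark():
        return SparkSession.builder.master("local[2]").appName("unit-tests").getOrCreate()

    def test_filter_and_total(spark):
        orders = spark.createDataFrame(
            [("c1", 10.0), ("c1", 5.0), ("c2", -3.0)],
            ["customer_id", "amount"],
        )
        result = {r["customer_id"]: r["total_amount"] for r in filter_and_total(orders).collect()}
        assert result == {"c1": 15.0}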

Posted 1 month ago

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
