
5 Hue Jobs

JobPe aggregates listings so they are easy to find and compare, but applications are submitted directly on the original job portal.

6.0 - 10.0 years

11 - 15 Lacs

Pune

Work from Office

Source: Naukri

We at Onix Datametica Solutions Private Limited are looking for a Big Data Lead with a passion for cloud and knowledge of on-premise and cloud data implementations in the field of Big Data and Analytics, including but not limited to Teradata, Netezza, Exadata, Oracle, Cloudera, Hortonworks and the like. Ideal candidates should have technical experience in migrations and the ability to help customers get value from Datametica's tools and accelerators.

Job Description:
- 6+ years of overall experience in developing, testing and implementing Big Data projects using Hadoop, Spark and Hive
- Hands-on experience playing a lead role in Big Data projects: implementing one or more tracks within a project, identifying and assigning tasks within the team, and providing technical guidance to team members
- Experience in setting up Hadoop services and implementing ETL/ELT pipelines, working with terabytes of data ingested and processed from varied systems
- Experience working in an onshore/offshore model, leading technical discussions with customers, mentoring and guiding teams on technology, and preparing HDD and LDD documents

Required Skills and Abilities:
- Mandatory skills: Spark, Scala/PySpark, Hadoop ecosystem including Hive, Sqoop, Impala, Oozie, Hue, Java, Python, SQL, Flume, bash (shell scripting)
- Experience implementing CI/CD pipelines and working experience with SCM tools such as Git, Bitbucket, etc.
- Hands-on experience writing data ingestion and data processing pipelines using Spark and SQL; experience implementing SCD Type 1 & 2, auditing and exception-handling mechanisms (see the sketch following this listing)
- Data warehousing project implementation with either a Java- or Scala-based Hadoop programming background
- Proficiency with development methodologies such as waterfall and agile/scrum
- Exceptional communication, organisation and time management skills
- Collaborative approach to decision-making
- Strong analytical skills
- Good to have: certifications in any of GCP, AWS, Azure or Cloudera
- Ability to work on multiple projects simultaneously, prioritising appropriately
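
A minimal PySpark sketch of the SCD Type 2 step this listing asks for, assuming hypothetical dim_customer / stg_customer tables keyed on customer_id with a tracked address column; a real pipeline would merge into the dimension rather than append to a side table.

from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("scd2-sketch")
         .enableHiveSupport()
         .getOrCreate())

current = spark.table("dim_customer")  # hypothetical dimension with is_active/start_date/end_date
stage = spark.table("stg_customer")    # hypothetical daily extract

# Active dimension rows whose tracked attribute changed in today's extract.
changed = (current.alias("c")
           .join(stage.alias("s"),
                 F.col("c.customer_id") == F.col("s.customer_id"))
           .where(F.col("c.is_active") &
                  (F.col("c.address") != F.col("s.address"))))

# Type 2: close the superseded version of each changed row...
closed = (changed.select("c.*")
          .withColumn("is_active", F.lit(False))
          .withColumn("end_date", F.current_date()))

# ...and open a new version with an unbounded validity window.
opened = (changed.select("s.*")
          .withColumn("is_active", F.lit(True))
          .withColumn("start_date", F.current_date())
          .withColumn("end_date", F.lit(None).cast("date")))

# Land both sets for a downstream merge into the dimension table.
(closed.unionByName(opened, allowMissingColumns=True)
       .write.mode("append").saveAsTable("dim_customer_updates"))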

Posted 2 weeks ago

Apply

3.0 - 5.0 years

12 - 13 Lacs

Thane, Navi Mumbai, Pune

Work from Office

Source: Naukri

We at Acxiom Technologies are hiring a PySpark Developer for our Mumbai location.

Relevant Experience: 1 to 4 years
Location: Mumbai
Mode of Work: Work from Office
Notice Period: up to 20 days

Job Description:
- Proven experience as a PySpark Developer
- Hands-on expertise with AWS Redshift (see the load sketch following this listing)
- Strong proficiency in PySpark, Spark, Python and Hive
- Solid experience with SQL
- Excellent communication skills

Benefits of working at Acxiom:
- Statutory benefits
- Paid leaves
- Phenomenal career growth
- Exposure to the banking domain

About Acxiom Technologies: Acxiom Technologies is a leading software solutions and services company that provides consulting services to global firms and has established itself as one of the most sought-after consulting organizations in the field of Data Management and Business Intelligence. Our website, https://www.acxtech.co.in/, gives a detailed overview of the company.

Interested candidates can share their resumes on 7977418669. Thank you.
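
A minimal sketch of the PySpark-to-Redshift flow this role implies, using Spark's generic JDBC writer. The endpoint, tables and credentials are placeholders, not Acxiom's configuration, and the Redshift JDBC driver is assumed to be on the Spark classpath.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("redshift-load-sketch").getOrCreate()

# Aggregate in Spark SQL first (hive_db.transactions is a placeholder table).
daily = spark.sql("""
    SELECT account_id, txn_date, SUM(amount) AS total_amount
    FROM hive_db.transactions
    GROUP BY account_id, txn_date
""")

# Then push the result down to Redshift over JDBC.
(daily.write
      .format("jdbc")
      .option("url", "jdbc:redshift://example-cluster:5439/dev")  # placeholder endpoint
      .option("dbtable", "analytics.daily_totals")                # placeholder target
      .option("user", "etl_user")    # inject credentials from a secret store in practice
      .option("password", "****")
      .option("driver", "com.amazon.redshift.jdbc42.Driver")
      .mode("append")
      .save())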

Posted 3 weeks ago

Apply

6.0 - 10.0 years

10 - 16 Lacs

Mumbai

Work from Office

Source: Naukri

Responsibilities: Design and implement Big Data solutions, complex ETL pipelines and data modernization projects.

Required Past Experience:
- 6+ years of overall experience in developing, testing and implementing Big Data projects using Hadoop, Spark, Hive and Sqoop
- Hands-on experience playing a lead role in Big Data projects: implementing one or more tracks within a project, identifying and assigning tasks within the team, and providing technical guidance to team members
- Experience in setting up Hadoop services and implementing extract-transform-load / extract-load-transform (ETL/ELT) pipelines, working with terabytes/petabytes of data ingested and processed from varied systems
- Experience working in an onshore/offshore model, leading technical discussions with customers, mentoring and guiding teams on technology, and preparing High-Level Design and Low-Level Design (HDD & LDD) documents

Required Skills and Abilities:
- Mandatory skills: Spark, Scala/PySpark, Hadoop ecosystem including Hive, Sqoop, Impala, Oozie, Hue, Java, Python, SQL, Flume, bash (shell scripting)
- Secondary skills: Apache Kafka, Storm, distributed systems, a good understanding of networking, security (platform & data) concepts, Kerberos, Kubernetes
- Understanding of data governance concepts and experience implementing metadata capture, lineage capture and a business glossary
- Experience implementing continuous integration / continuous delivery (CI/CD) pipelines and working experience with source code management (SCM) tools such as Git, Bitbucket, etc.
- Ability to assign and manage tasks for team members, provide technical guidance, and work with architects on High-Level Design, Low-Level Design and proofs of concept
- Hands-on experience writing data ingestion and data processing pipelines using Spark and SQL; experience implementing slowly changing dimension (SCD) Type 1 & 2, auditing and exception-handling mechanisms (see the auditing sketch following this listing)
- Data warehousing project implementation with either a Java- or Scala-based Hadoop programming background
- Proficiency with development methodologies such as waterfall and agile/scrum
- Exceptional communication, organization and time management skills
- Collaborative approach to decision-making and strong analytical skills
- Good to have: certifications in any of GCP, AWS, Azure or Cloudera
- Ability to work on multiple projects simultaneously, prioritizing appropriately
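
A minimal sketch of the auditing and exception-handling mechanism the posting names: wrap each pipeline stage, record row counts on success and stack traces on failure in a hypothetical etl_audit table. All table and stage names are illustrative.

import traceback
from pyspark.sql import SparkSession, functions as F

spark = (SparkSession.builder
         .appName("audit-sketch")
         .enableHiveSupport()
         .getOrCreate())

def run_stage(name, fn):
    # Run one pipeline stage and append an audit row whether it succeeds or fails.
    try:
        out = fn()
        rows = [(name, "SUCCESS", out.count(), None)]
    except Exception:
        out = None
        rows = [(name, "FAILED", 0, traceback.format_exc())]
    (spark.createDataFrame(rows, "stage string, status string, row_count long, error string")
          .withColumn("run_ts", F.current_timestamp())
          .write.mode("append").saveAsTable("etl_audit"))  # hypothetical audit table
    return out

ingested = run_stage("ingest", lambda: spark.table("raw.events"))  # placeholder source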

Posted 3 weeks ago

Apply

5 - 9 years

5 - 12 Lacs

Hyderabad

Work from Office

Source: Naukri

Skill/Competency Requirements:
- Experience in Big Data technologies and support projects
- Able to quickly analyze complex SQL queries, grasp the underlying functionality and provide root cause analysis (RCA); see the triage sketch following this listing
- Well experienced in working with Hive scripts
- Experienced in Big Data technologies such as Hive, Hue and Oozie
- Experienced in L2/L3 support
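
A minimal sketch of the query-triage step such support work usually starts with: pulling the plan for a suspect Hive/Spark SQL query via EXPLAIN before digging into data. The warehouse.clickstream table is a placeholder.

from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("rca-sketch")
         .enableHiveSupport()
         .getOrCreate())

# The query under investigation (placeholder table and filter).
slow_query = """
SELECT dt, COUNT(*) AS events
FROM warehouse.clickstream
WHERE dt >= '2024-01-01'
GROUP BY dt
"""

# EXPLAIN surfaces scans, join strategies and shuffle boundaries --
# usually the first check when a ticket reports a slow or wrong-result query.
spark.sql("EXPLAIN EXTENDED " + slow_query).show(truncate=False)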

Posted 3 months ago

Apply

6 - 11 years

0 - 3 Lacs

Bengaluru

Work from Office

Source: Naukri

SUMMARY

This is a remote position.

Job Description: EMR Admin

We are seeking an experienced EMR Admin with expertise in Big Data services such as Hive, the Hive Metastore, HBase and Hue. The ideal candidate should also possess knowledge of Terraform and Jenkins. Familiarity with Kerberos and Ansible would be an added advantage, although not mandatory. Candidates with Hadoop admin skills, proficiency in Terraform and Jenkins, and the ability to handle EMR Admin responsibilities are also encouraged to apply.

Location: Remote
Experience: 6+ years
Must-have: at least 4 years in EMR administration

Requirements:
- Proven experience in EMR administration (see the admin-task sketch following this listing)
- Proficiency in Big Data services including Hive, the Hive Metastore, HBase and Hue
- Knowledge of Terraform and Jenkins
- Familiarity with Kerberos and Ansible (preferred)
- Experience in Hadoop administration (preferred)
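
A minimal sketch, assuming boto3 (the posting names no specific toolchain), of one routine EMR admin task: listing live clusters and the Big Data applications (Hive, HBase, Hue, ...) each was launched with.

import boto3

emr = boto3.client("emr", region_name="ap-south-1")  # region is a placeholder

# List live clusters, then check which applications each carries.
clusters = emr.list_clusters(ClusterStates=["RUNNING", "WAITING"])
for summary in clusters["Clusters"]:
    detail = emr.describe_cluster(ClusterId=summary["Id"])["Cluster"]
    apps = [app["Name"] for app in detail["Applications"]]
    print(summary["Id"], summary["Name"], apps)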

Posted 3 months ago

Apply
