
Experience

6 - 10 years

Salary

10 - 16 Lacs

Posted: 3 weeks ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Responsibilities

• Design and implement Big Data solutions, complex ETL pipelines, and data modernization projects.

Required Past Experience

• 6+ years of overall experience in developing, testing, and implementing big data projects using Hadoop, Spark, Hive, and Sqoop.
• Hands-on experience in a lead role on big data projects: implementing one or more tracks within a project, identifying and assigning tasks within the team, and providing technical guidance to team members.
• Experience setting up Hadoop services and implementing extract-transform-load / extract-load-transform (ETL/ELT) pipelines, ingesting and processing terabytes to petabytes of data from varied systems.
• Experience working in an onshore/offshore model, leading technical discussions with customers, mentoring and guiding teams on technology, and preparing High-Level Design and Low-Level Design (HLD & LLD) documents.

Required Skills and Abilities

Mandatory Skills
• Spark, Scala/PySpark, and the Hadoop ecosystem including Hive, Sqoop, Impala, Oozie, Hue, Java, Python, SQL, Flume, and Bash (shell scripting).

Secondary Skills
• Apache Kafka, Storm, distributed systems; good understanding of networking and of security concepts (platform and data), Kerberos, Kubernetes.
• Understanding of data governance concepts and experience implementing metadata capture, lineage capture, and a business glossary.
• Experience implementing continuous integration / continuous delivery (CI/CD) pipelines and working with source code management (SCM) tools such as Git, Bitbucket, etc.
• Ability to assign and manage tasks for team members, provide technical guidance, and work with architects on High-Level Design, Low-Level Design (HLD & LLD), and proofs of concept.
• Hands-on experience writing data ingestion and data processing pipelines using Spark and SQL, including implementing slowly changing dimension (SCD) types 1 & 2, auditing, and exception-handling mechanisms (see the sketch below).
• Data warehousing project implementation with either a Java- or Scala-based Hadoop programming background.
• Proficiency with development methodologies such as waterfall and agile/Scrum.
• Exceptional communication, organization, and time management skills.
• Collaborative approach to decision-making and strong analytical skills.

Good to Have
• Certifications in any of GCP, AWS, Azure, or Cloudera.
• Ability to work on multiple projects simultaneously, prioritizing appropriately.
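
The posting asks for Spark/SQL processing pipelines with SCD type 1 & 2 handling. As an illustration only, here is a minimal PySpark sketch of an SCD Type 2 upsert: superseded rows are closed out (end-dated) rather than overwritten, and new versions are appended. The table, column names, and change-detection rule are assumptions made for the example, not details from the posting; a production pipeline would normally target Hive/warehouse tables (for example via a MERGE) rather than small in-memory DataFrames.

```python
# Illustrative SCD Type 2 sketch in PySpark. All names (customer_id, city,
# start_date, etc.) are hypothetical and not taken from the job posting.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Hypothetical current dimension table: one row per customer version.
dim = spark.createDataFrame(
    [(1, "Asha", "Pune",   "2023-01-01", None, True),
     (2, "Ravi", "Mumbai", "2023-01-01", None, True)],
    "customer_id INT, name STRING, city STRING, start_date STRING, end_date STRING, is_current BOOLEAN",
)

# Hypothetical incoming batch from the ingestion layer.
incoming = spark.createDataFrame(
    [(1, "Asha",  "Bengaluru"),   # changed attribute -> new version
     (3, "Meera", "Chennai")],    # unseen key -> plain insert
    "customer_id INT, name STRING, city STRING",
)

today = F.current_date().cast("string")
key, tracked = "customer_id", ["name", "city"]

current = dim.filter(F.col("is_current"))
history = dim.filter(~F.col("is_current"))

# Keys whose tracked attributes differ from the current version.
# (concat_ws ignores nulls, which is a simplification for this sketch.)
changed = (incoming.alias("i")
           .join(current.alias("c"), F.col(f"i.{key}") == F.col(f"c.{key}"))
           .where(F.concat_ws("|", *[F.col(f"i.{a}") for a in tracked])
                  != F.concat_ws("|", *[F.col(f"c.{a}") for a in tracked])))
changed_keys = changed.select(F.col(f"i.{key}").alias(key))

# Type 2: close out the superseded versions instead of overwriting them.
expired = (current.join(changed_keys, key, "left_semi")
           .withColumn("end_date", today)
           .withColumn("is_current", F.lit(False)))
untouched = current.join(changed_keys, key, "left_anti")

# New versions for changed keys plus rows for keys never seen before.
new_rows = (incoming.join(current.select(key), key, "left_anti")
            .unionByName(changed.select("i.*")))
new_versions = (new_rows
                .withColumn("start_date", today)
                .withColumn("end_date", F.lit(None).cast("string"))
                .withColumn("is_current", F.lit(True)))

result = (history.unionByName(untouched)
          .unionByName(expired)
          .unionByName(new_versions))
result.orderBy(key, "start_date").show(truncate=False)
```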


Smartavya Analytica

Data Analytics / Technology

N/A

51-200 Employees

28 Jobs

Key People

• John Doe, CEO
• Jane Smith, Chief Analytics Officer
