Posted: 3 weeks ago
Work from Office
Full Time
Responsibilities:
Design and implement Big Data solutions, complex ETL pipelines, and data modernization projects.

Required Past Experience:
- 6+ years of overall experience developing, testing, and implementing big data projects using Hadoop, Spark, Hive, and Sqoop.
- Hands-on experience playing a lead role in big data projects: implementing one or more tracks within a project, identifying and assigning tasks within the team, and providing technical guidance to team members.
- Experience setting up Hadoop services and implementing Extract-Transform-Load / Extract-Load-Transform (ETL/ELT) pipelines, ingesting and processing terabytes/petabytes of data from varied source systems.
- Experience working in an onshore/offshore model, leading technical discussions with customers, mentoring and guiding teams on technology, and preparing High-Level Design and Low-Level Design (HLD & LLD) documents.

Required Skills and Abilities:

Mandatory Skills:
- Spark, Scala/PySpark, and the Hadoop ecosystem including Hive, Sqoop, Impala, Oozie, and Hue; Java, Python, SQL, Flume, and Bash (shell scripting).

Secondary Skills:
- Apache Kafka, Storm, and distributed systems; good understanding of networking and of security (platform and data) concepts, Kerberos, and Kubernetes.
- Understanding of Data Governance concepts and experience implementing metadata capture, lineage capture, and a business glossary.
- Experience implementing Continuous Integration/Continuous Delivery (CI/CD) pipelines and working experience with source code management (SCM) tools such as Git, Bitbucket, etc.
- Ability to assign and manage tasks for team members, provide technical guidance, and work with architects on High-Level Design, Low-Level Design (HLD & LLD), and proofs of concept.
- Hands-on experience writing data ingestion and data processing pipelines using Spark and SQL; experience implementing Slowly Changing Dimension (SCD) Type 1 & 2, auditing, and exception-handling mechanisms (a sketch of an SCD Type 2 refresh follows this list).
- Data warehousing project implementation with either a Java- or Scala-based Hadoop programming background.
- Proficient with various development methodologies such as waterfall and agile/scrum.
- Exceptional communication, organization, and time management skills.
- Collaborative approach to decision-making and strong analytical skills.
- Ability to work on multiple projects simultaneously, prioritizing appropriately.

Good to Have:
- Certifications in any of GCP, AWS, Azure, or Cloudera.
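For illustration only, below is a minimal PySpark sketch of the SCD Type 2 pattern referenced in the skills list above. All table and column names (dim_customer, stg_customer_updates, customer_id, address, start_date, end_date) are assumptions made for the example, not details taken from this posting.

# SCD Type 2 sketch: close superseded dimension rows and open new versions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2-sketch").getOrCreate()

# Active dimension rows: open versions carry end_date = NULL (assumed layout).
dim = spark.table("dim_customer").where(F.col("end_date").isNull())
# Incoming records; assumed to share the dimension's business columns
# (customer_id, address, ...) without the validity dates.
updates = spark.table("stg_customer_updates")

d, u = dim.alias("d"), updates.alias("u")
load_date = F.current_date()

# Business keys whose tracked attribute changed since the last load.
changed = (d.join(u, F.col("d.customer_id") == F.col("u.customer_id"))
            .where(F.col("d.address") != F.col("u.address")))

# Close out the superseded versions...
expired = changed.select("d.*").withColumn("end_date", load_date)

# ...and open a new current version for the same business key.
new_versions = (changed.select("u.*")
                .withColumn("start_date", load_date)
                .withColumn("end_date", F.lit(None).cast("date")))

# Active rows that did not change, plus closed rows, plus new versions.
changed_keys = changed.select(F.col("d.customer_id").alias("customer_id"))
unchanged = dim.join(changed_keys, "customer_id", "left_anti")

result = unchanged.unionByName(expired).unionByName(new_versions)
result.write.mode("overwrite").saveAsTable("dim_customer_scd2")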
Smartavya Analytica
Location: Mumbai
Salary: 10.0 - 16.0 Lacs P.A.