
Big Data Developer - Spark/Hadoop

7 - 10 years

8 - 12 Lacs

Posted: 2 weeks ago | Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Type of Employment: Permanent
Requirement: Immediate or max 15 days

Big Data Developer (Hadoop/Spark/Kafka)

- This role is ideal for an experienced Big Data developer who is confident taking complete ownership of the software development life cycle, from requirement gathering to final deployment.
- The candidate will engage with stakeholders to understand use cases, translate them into functional and technical specifications (FSD and TSD), and implement scalable, efficient big data solutions.
- A key part of this role involves working across multiple projects, coordinating with QA/support engineers on test case preparation, and ensuring deliverables meet high quality standards.
- Strong analytical skills are necessary for writing and validating SQL queries, along with developing optimized code for data processing workflows.
- The ideal candidate should also be capable of writing unit tests and maintaining documentation to ensure code quality and maintainability.
- The role requires hands-on experience with the Hadoop ecosystem, particularly Spark (including Spark Streaming), Hive, Kafka, and shell scripting.
- Experience with workflow schedulers such as Airflow is a plus, and working knowledge of cloud platforms (AWS, Azure, GCP) is beneficial.
- Familiarity with Agile methodologies will help in collaborating effectively in a fast-paced team environment.
- Job scheduling and automation via shell scripts, and the ability to optimize performance and resource usage in a distributed system, are critical.
- Prior experience in performance tuning and writing production-grade code will be valued.
- The candidate must demonstrate strong communication skills to coordinate effectively with business users, developers, and testers, and to manage dependencies across teams.

Key Skills Required

Must Have: Hadoop, Spark (core and streaming), Hive, Kafka, Shell Scripting, SQL, TSD/FSD documentation.

Good to Have: Airflow, Scala, Cloud (AWS/Azure/GCP), Agile methodology.

This role is both technically challenging and rewarding, offering the opportunity to work on large-scale, real-time data processing systems in a dynamic, agile environment.
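The "job scheduling and automation via shell scripts" responsibility above can be sketched as a minimal wrapper script of the kind a cron entry would invoke. All names and paths here (the job name, log directory, and placeholder command) are illustrative assumptions, not from the posting.

```shell
#!/bin/sh
# Hypothetical wrapper for a scheduled big data batch job.
# Job name, log path, and the command it runs are illustrative assumptions.

JOB_NAME="daily_ingest"
LOG_DIR="${LOG_DIR:-/tmp/job_logs}"
LOG_FILE="$LOG_DIR/$JOB_NAME.log"
mkdir -p "$LOG_DIR"

run_job() {
    # A real deployment would invoke something like:
    #   spark-submit --master yarn daily_ingest.py
    # A placeholder command stands in so the sketch runs anywhere.
    echo "processing batch for $JOB_NAME"
}

# Run the job, capture stdout/stderr, and record success or failure
# so the outcome is visible when cron runs this unattended.
if run_job >> "$LOG_FILE" 2>&1; then
    echo "$(date '+%Y-%m-%d %H:%M:%S') $JOB_NAME succeeded" >> "$LOG_FILE"
else
    echo "$(date '+%Y-%m-%d %H:%M:%S') $JOB_NAME FAILED" >> "$LOG_FILE"
    exit 1
fi
```

Such a wrapper would typically be scheduled with a crontab line such as `0 2 * * * /path/to/run_daily_ingest.sh`, running the job nightly at 02:00 with its outcome logged for later inspection.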


Maimsd Technology

Technology / Software

Silicon Valley

50-100 Employees

222 Jobs

Key People

- Alice Johnson, CEO
- Bob Smith, CTO
