Big Data Lead

8 - 13 years

25 - 40 Lacs

Bengaluru

Posted: 2 days ago | Platform: Naukri


Skills Required

Scala, Big Data, Spark, Hive, Apache Pig, Hadoop, Hadoop Development, MapReduce, HDFS, Impala, YARN

Work Mode

Hybrid

Job Type

Full Time

Job Description

Job Title / Primary Skill: Big Data Developer (Lead/Associate Manager)
Management Level: G150
Years of Experience: 8 to 13 years
Job Location: Bangalore (Hybrid)
Must Have Skills: Big Data, Spark, Scala, SQL, Hadoop ecosystem
Educational Qualification: BE/BTech/MTech/MCA, or a bachelor's or master's degree in Computer Science

Job Overview

Overall experience of 8+ years in IT, software engineering, or a relevant discipline. Designs, develops, implements, and updates software systems in accordance with the needs of the organization. Evaluates, schedules, and resources development projects; investigates user needs; and documents, tests, and maintains computer programs.

Job Description

We look for developers with strong Scala programming skills and good knowledge of SQL.

Technical Skills:

  • Scala, Python: Scala is often used for Hadoop-based projects, while both Python and Scala are common choices for Apache Spark-based projects.
  • SQL: Knowledge of SQL (Structured Query Language) is important for querying and manipulating data.
  • Shell Script: Shell scripts are used for batch processing of data; they can also schedule jobs and are often used for deploying applications.
  • Spark Scala: Spark's Scala API allows you to write Spark applications in Scala.
  • Spark SQL: Allows you to work with structured data using SQL-like queries and the DataFrame API. You can execute SQL queries against DataFrames, enabling easy data exploration, transformation, and analysis.

The typical tasks and responsibilities of a Big Data Developer include:

1. Data Ingestion: Collecting and importing data from various sources, such as databases, logs, and APIs, into the Big Data infrastructure.
2. Data Processing: Designing data pipelines to clean, transform, and prepare raw data for analysis, often using technologies like Apache Hadoop and Apache Spark.
3. Data Storage: Selecting appropriate data storage technologies such as Hadoop Distributed File System (HDFS), Hive, Impala, or cloud-based platforms (Snowflake, Databricks).
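The "clean, transform, and prepare" step described above can be sketched in plain Scala. This is a minimal illustration using standard Scala collections, not Spark itself: Spark's Dataset API exposes the same map/filter/groupBy style combinators, so the shape of the logic carries over. The `Event` case class, field names, and input format are purely illustrative assumptions.

```scala
// Sketch of a cleaning/transformation pipeline step, written against
// plain Scala collections. Spark Datasets offer analogous combinators,
// so this logic transfers; Event and its fields are illustrative only.
case class Event(userId: String, amount: Double, country: String)

object PipelineSketch {
  // Cleaning: parse raw comma-separated lines, dropping malformed rows.
  def parse(lines: Seq[String]): Seq[Event] =
    lines.flatMap { line =>
      line.split(",").map(_.trim) match {
        case Array(id, amt, ctry) if id.nonEmpty =>
          amt.toDoubleOption.map(a => Event(id, a, ctry.toUpperCase))
        case _ => None // skip rows that don't have the expected shape
      }
    }

  // Transformation/analysis: total amount per country.
  def totalsByCountry(events: Seq[Event]): Map[String, Double] =
    events.groupBy(_.country)
      .view.mapValues(_.map(_.amount).sum)
      .toMap
}
```

In a real Spark job the same aggregation would typically be expressed either through the Dataset API or as a Spark SQL query over a registered DataFrame view.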

IQVIA

Pharmaceuticals / Biotechnology

Durham

Approximately 70,000 employees

188 Jobs

    Key People

  • Amit A. Bhasin

    Chief Financial Officer
  • Julian A. C. Stoeckel

    Executive Vice President, Corporate Strategy
