Big Data Developer

Experience: 4 years


Posted: 2 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Experience: 4+ years

Salary: Confidential (based on experience)

Expected Notice Period: 15 Days

Shift: (GMT+05:30) Asia/Kolkata (IST)

Opportunity Type: Office (Pune)

Placement Type: Full Time Permanent position (Payroll and Compliance to be managed by: Nexus Cognitive)

(*Note: This is a requirement for one of Uplers' clients - Nexus Cognitive)

What do you need for this opportunity?

Must-have skills required:

  • 4–8 years of experience in big data engineering or application modernization in enterprise settings.
  • Prior experience with Cloudera, MapR, or Hadoop ecosystems, transitioning to open-source frameworks.
  • Strong understanding of distributed data architectures and data lake design principles.
  • Core Frameworks: Apache Spark, Airflow, NiFi, Oozie, Hive, Iceberg, Kafka
  • Programming Languages: Python, PySpark, Scala, Java
  • Data Formats & Storage: Parquet, ORC, Avro, HDFS, S3
  • Orchestration & Workflow: Airflow, dbt

Nexus Cognitive is looking for:

Role Summary / Purpose

The Big Data Developer plays a key role in the modernization of the data ecosystem, supporting the migration of legacy MapR/Cloudera/Hortonworks applications to open-source frameworks compatible with NexusOne. This individual will focus on refactoring, optimizing, and validating data processing pipelines, ensuring performance, scalability, and alignment with enterprise data standards. The role requires strong technical expertise across distributed data systems, open-source frameworks, and hybrid data environments.

Core Responsibilities

  • Analyze, refactor, and modernize Spark/MapReduce/Hive/Tez jobs for execution within NexusOne’s managed Spark and Trino environments (an illustrative refactoring sketch follows this list).
  • Design, build, and optimize batch and streaming pipelines using Spark, NiFi, and Kafka.
  • Convert existing ETL jobs and DAGs from Cloudera/MAPR ecosystems to open-source equivalents.
  • Collaborate with Data Engineers and Architects to define new data ingestion and transformation patterns.
  • Tune performance across large-scale data processing workloads (partitioning, caching, resource allocation).
  • Implement data quality and validation frameworks to ensure consistency during migration.
  • Support code reviews, performance tests, and production readiness validation for migrated workloads.
  • Document conversion approaches, dependencies, and operational runbooks.
  • Partner with Wells Fargo application SMEs to ensure domain alignment and business continuity.
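
By way of illustration only, the sketch below shows the kind of refactoring and tuning work described above: a legacy Hive/MapReduce-style aggregation rewritten as a tuned Spark batch job. All table paths, column names, and configuration values are hypothetical, and the NexusOne-managed Spark settings are assumed rather than taken from any project documentation.

```python
# Illustrative sketch only - job names, paths, and settings are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_agg_modernized")          # hypothetical job name
    .config("spark.sql.shuffle.partitions", "400")   # sized per workload, not a universal default
    .config("spark.sql.adaptive.enabled", "true")    # let AQE coalesce skewed shuffles
    .getOrCreate()
)

# Read from the migrated object-store location instead of the legacy HDFS/Hive path.
orders = spark.read.parquet("s3a://datalake/raw/orders/")    # hypothetical path

# Cache because the filtered frame is reused by both aggregations below.
orders_clean = orders.filter(F.col("status") == "COMPLETE").cache()

daily = (
    orders_clean
    .groupBy("order_date", "region")
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

region_totals = orders_clean.groupBy("region").agg(F.sum("amount").alias("total_amount"))

# Partition the daily output by date so downstream Spark/Trino readers can prune.
(daily.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://datalake/curated/orders_daily/"))        # hypothetical path

region_totals.write.mode("overwrite").parquet("s3a://datalake/curated/region_totals/")
```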

Key Skills & Tools

  • Core Frameworks: Apache Spark, PySpark, Airflow, NiFi, Kafka, Hive, Iceberg, Oozie
  • Programming Languages: Python, Scala, Java
  • Data Formats & Storage: Parquet, ORC, Avro, S3, HDFS
  • Orchestration & Workflow: Airflow, dbt
  • Performance Optimization: Spark tuning, partitioning strategies, caching, YARN/K8s resource tuning
  • Testing & Validation: Great Expectations, Deequ, SQL-based QA frameworks (a small reconciliation sketch follows this list)
  • Observability & Monitoring: Datadog, Grafana, Prometheus
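
As one hedged example of the SQL-based validation work referenced above, a simple reconciliation between a legacy table and its migrated counterpart might compare row counts and key aggregates. The table names below are hypothetical, and in practice such checks would typically be expressed through Great Expectations, Deequ, or the team's own QA framework.

```python
# Illustrative sketch only - table names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("migration_reconciliation").getOrCreate()

legacy = spark.table("legacy_hive.orders")          # hypothetical legacy Hive table
migrated = spark.table("nexusone.orders_iceberg")   # hypothetical migrated Iceberg table

checks = {
    "row_count": (legacy.count(), migrated.count()),
    "sum_amount": (
        legacy.agg({"amount": "sum"}).first()[0],
        migrated.agg({"amount": "sum"}).first()[0],
    ),
}

for name, (expected, actual) in checks.items():
    status = "OK" if expected == actual else "MISMATCH"
    print(f"{name}: legacy={expected} migrated={actual} -> {status}")
```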

Ideal Background

  • 4–8 years of experience in big data engineering or application modernization in enterprise settings.
  • Prior experience with Cloudera, MAPR, or Hadoop ecosystems, transitioning to open-source frameworks.
  • Strong understanding of distributed data architectures and data lake design principles.
  • Exposure to hybrid or cloud-native environments (AWS, GCP, or Azure).
  • Familiarity with regulated environments (financial services, telecom, healthcare) is a plus.

Success Criteria

  • Successful refactoring and execution of legacy data pipelines within NexusOne environments.
  • Measurable performance improvements (execution time, cost optimization, data quality metrics).
  • Delivered migration artifacts — including conversion patterns, reusable scripts, and playbooks.
  • Positive feedback from Wells Fargo application owners on migration support and knowledge transfer.
  • Consistent adherence to coding standards, documentation, and change management practices.

How to apply for this opportunity?

  • Step 1: Click on Apply and register or log in on our portal.
  • Step 2: Complete the screening form and upload your updated resume.
  • Step 3: Increase your chances of getting shortlisted and meet the client for the interview!

About Uplers:

Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

