Data Engineer - Data Platforms

Experience: 5 years

Salary: 0 Lacs

Posted: 1 week ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Full Time

Job Description

Introduction

A career in IBM Consulting is built on long-term client relationships and close collaboration worldwide. You’ll work with leading companies across industries, helping them shape their hybrid cloud and AI journeys. With support from our strategic partners, robust IBM technology, and Red Hat, you’ll have the tools to drive meaningful change and accelerate client impact. At IBM Consulting, curiosity fuels success. You’ll be encouraged to challenge the norm, explore new ideas, and create innovative solutions that deliver real results. Our culture of growth and empathy focuses on your long-term career development while valuing your unique skills and experiences.

Your Role And Responsibilities

  • As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating source-to-target pipelines/workflows and implementing solutions that address clients' needs.
  • Your primary responsibilities include:
  • Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements (a minimal pipeline sketch follows this list).
  • Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing, data-driven organization.
  • Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.
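
For illustration only, the sketch below shows a minimal source-to-target ETL step in PySpark of the kind described above. The paths, table names, and column names are assumptions, not details of this role's actual environment.

    # Minimal sketch (for illustration only) of a source-to-target ETL step in
    # PySpark. Paths, table names, and columns are assumptions.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("source-to-target-etl").getOrCreate()

    # Extract: read raw source files (hypothetical S3 path).
    raw = spark.read.option("header", True).csv("s3://raw-bucket/orders/")

    # Transform: deduplicate, cast types, and derive a partition column
    # according to the target data model.
    orders = (
        raw.dropDuplicates(["order_id"])
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("amount", F.col("amount").cast("double"))
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0)
    )

    # Load: write the curated target, partitioned for downstream consumers.
    (orders.write
           .mode("overwrite")
           .partitionBy("order_date")
           .parquet("s3://curated-bucket/orders/"))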

Required Technical And Professional Expertise

  • Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python.
  • HBase and Hive required; good to have AWS (S3, Athena, DynamoDB, Lambda), Jenkins, and Git.
  • Experience developing Python and PySpark programs for data analysis, including building a custom framework for rule generation (similar to a rules engine).
  • Experience gathering data from HBase and designing solutions implemented in PySpark, using Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations (a minimal sketch follows this list).
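
As a rough illustration of the DataFrame/Hive workflow mentioned above, the sketch below reads a staged Hive table, applies a business transformation, and writes the result back. In current Spark versions a SparkSession with Hive support plays the role of the older HiveContext; all table and column names here are assumptions.

    # Minimal sketch (assumptions only) of applying business transformations with
    # Spark DataFrames and reading/writing Hive tables.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = (SparkSession.builder
             .appName("hive-transform")
             .enableHiveSupport()
             .getOrCreate())

    # Assumption: the HBase extract has already been staged into a Hive table;
    # a real job would pull it via an HBase-Spark connector instead.
    events = spark.table("staging.customer_events")

    # Business transformation: aggregate purchase events per customer.
    summary = (events
               .filter(F.col("event_type") == "purchase")
               .groupBy("customer_id")
               .agg(F.count("*").alias("purchase_count"),
                    F.sum("amount").alias("total_amount")))

    # Write the result back to a curated Hive table.
    summary.write.mode("overwrite").saveAsTable("curated.customer_purchase_summary")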

Preferred Technical And Professional Experience

  • Understanding of DevOps
  • Experience in building scalable end-to-end data ingestion and processing solutions
  • Experience with object-oriented and/or functional programming languages such as Python, Java, and Scala

IBM

Information Technology

Armonk
