Experience: 5 years

Posted: 1 month ago | Platform: LinkedIn

Work Mode: Remote

Job Type: Full Time

Job Description

This role is for one of Weekday's clients.

Min Experience: 5 years
Location: Remote (India)
Job Type: Full-time

Requirements

  • Proficient in:
    • Programming languages: Python, PySpark, Scala
    • Azure environment: Azure Data Factory, Databricks, Key Vault, DevOps CI/CD
    • Storage/databases: ADLS Gen2, Azure SQL DB, Delta Lake
    • Data engineering: Apache Spark, Hadoop, optimization, performance tuning, data modelling (a minimal sketch of this stack follows this list)
    • Experience working with data sources such as Kafka and MongoDB is preferred
  • Experience with automation of test cases for Big Data & ETL pipelines, and with Agile methodology
  • Basic understanding of ETL pipelines
  • A strong understanding of AI, machine learning, and data science concepts is highly beneficial
  • Strong analytical and problem-solving skills with attention to detail
  • Ability to work independently and as part of a team in a fast-paced environment
  • Excellent communication skills, able to collaborate with both technical and non-technical stakeholders
  • Experience designing and implementing scalable, optimized data architectures that follow best practices
  • Strong understanding of data warehousing concepts, data lakes, and data modeling
  • Familiarity with data governance, data quality, and privacy regulations
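
For candidates gauging fit, here is a minimal sketch of the stack named above: PySpark reading raw data from ADLS Gen2 and writing a Delta Lake table. The storage account, container, paths, and column names are illustrative assumptions, not the client's actual environment, and the Delta write assumes the delta-lake package is configured on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("raw-to-delta-sketch")
    .getOrCreate()
)

# Hypothetical ADLS Gen2 locations (abfss://<container>@<account>.dfs.core.windows.net/...)
raw_path = "abfss://raw@examplestorage.dfs.core.windows.net/orders/"
delta_path = "abfss://curated@examplestorage.dfs.core.windows.net/orders_delta/"

# Ingest raw JSON, apply a simple cleaning transformation, and load into Delta.
orders = spark.read.json(raw_path)
cleaned = (
    orders
    .dropDuplicates(["order_id"])            # one row per order
    .filter(F.col("amount") > 0)             # drop invalid amounts
    .withColumn("ingested_at", F.current_timestamp())
)

# Requires the Delta Lake libraries on the cluster (standard on Databricks).
cleaned.write.format("delta").mode("overwrite").save(delta_path)
```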

Key Responsibilities:

  • Data Pipeline Development: Design, develop, and maintain scalable and efficient data pipelines to collect, process, and store data from various sources (e.g., databases, APIs, third-party services)
  • Data Integration: Integrate and transform raw data into clean, usable formats for analytics and reporting, ensuring consistency, quality, and integrity
  • Data Warehousing: Build and optimize data warehouses to store structured and unstructured data, ensuring data is organized, reliable, and accessible
  • ETL Processes: Develop and manage ETL (Extract, Transform, Load) processes for data ingestion, cleaning, transformation, and loading into databases or data lakes
  • Performance Optimization: Monitor and optimize data pipeline performance to handle large volumes of data with low latency, ensuring reliability and scalability
  • Collaboration: Work closely with other product teams, TSO, and business stakeholders to understand data requirements and ensure that data infrastructure supports analytical needs
  • Data Quality & Security: Ensure that data systems meet security and privacy standards, and implement best practices for data governance, monitoring, and error handling
  • Automation & Monitoring: Automate data workflows and establish monitoring systems to detect and resolve data issues proactively
  • Understand the broad architecture of GEP's entire system as well as its Analytics components
  • Take full accountability for your role, your own development, and your results
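
As a rough illustration of the test-automation and monitoring expectations above, here is a hedged pytest-style sketch that unit-tests a PySpark transformation. The transform under test, `dedupe_orders`, and all data values are invented for illustration; they are not the client's code.

```python
import pytest
from pyspark.sql import SparkSession


def dedupe_orders(df):
    """Hypothetical transform under test: keep one row per order_id."""
    return df.dropDuplicates(["order_id"])


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session so the test suite runs without a cluster.
    return (
        SparkSession.builder
        .master("local[1]")
        .appName("etl-tests")
        .getOrCreate()
    )


def test_dedupe_orders_removes_duplicates(spark):
    df = spark.createDataFrame(
        [("o1", 100), ("o1", 100), ("o2", 50)],
        ["order_id", "amount"],
    )
    result = dedupe_orders(df)
    assert result.count() == 2
    assert {r.order_id for r in result.collect()} == {"o1", "o2"}
```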
