Data Engineer – Platform & Analytics Infrastructure

Experience: 5–8 years | Salary: 0 Lacs

Posted: 13 hours ago | Platform: GlassDoor


Work Mode: Remote

Job Type: Full Time

Job Description

Immediate Hiring – Walbrydge Technologies

Role Overview

Walbrydge Technologies is hiring Data Engineers to design and build the core data infrastructure that powers real-time analytics, operational intelligence, and machine learning pipelines across our enterprise ecosystem.

This role is critical to enabling high-quality, scalable, and reliable data platforms that support real-time dashboards, analytics, and future AI-driven capabilities.

Position Details

Role: Data Engineer – Platform & Analytics Infrastructure
Experience: 5–8 Years
Positions: 3
Location: Hyderabad, India (Offshore)
Employment Type: Full-Time
Start Date: Within 1–2 weeks

Interested candidates may send their resume to: info@walbrydge.com

Key Responsibilities

Data Platform & Pipeline Engineering

  • Design and implement end-to-end data pipelines supporting both batch and real-time streaming workloads.
  • Build and maintain ETL/ELT workflows using modern data engineering tools.
  • Architect and manage data lakes, data warehouses, and real-time event pipelines.

Data Ingestion & Processing

  • Develop scalable ingestion pipelines from distributed microservices, APIs, and event streams.
  • Implement real-time data processing using Kafka, Spark Streaming, Flink, or equivalent technologies.
  • Optimize pipelines for latency, throughput, and fault tolerance.

Data Quality, Governance & Optimization

  • Implement data quality checks, validation rules, lineage tracking, and governance standards.
  • Design efficient data models, partitioning strategies, and schema evolution.
  • Optimize storage and compute for performance, scalability, and cost efficiency.

Cross-Functional Collaboration

  • Work closely with ML Engineers, Data Scientists, Backend Engineers, and DevOps teams.
  • Support analytics, reporting, and ML feature pipelines with clean, reliable datasets.
  • Ensure security, access control, and compliance across all data systems.

Required Skills & Experience

Core Technical Skills

  • 5–8 years of professional experience in Data Engineering.
  • Strong proficiency in Python and SQL.
  • Hands-on experience with ETL/ELT orchestration tools such as Airflow, dbt, or similar.
  • Experience building streaming and event-driven pipelines using Kafka, Kinesis, Spark Streaming, or Flink.

Data Storage & Analytics Platforms

  • Strong experience with data warehouses such as Snowflake, Redshift, BigQuery, Azure Synapse, or Databricks.
  • Hands-on experience with data lakes such as AWS S3, Azure Data Lake Storage (ADLS), or Google Cloud Storage (GCS).

Platform & Architecture Knowledge

  • Strong understanding of data modeling, schema design, partitioning, and indexing.
  • Experience with API-based ingestion and real-time event streaming.
  • Familiarity with CI/CD pipelines, containerization, and cloud-native architectures.

Job Types: Full-time, Permanent

Benefits:

  • Flexible schedule
  • Food provided
  • Health insurance
  • Leave encashment
  • Paid sick time
  • Paid time off
  • Provident Fund
  • Work from home

Work Location: In person
