Technical Architect - Data Engineering

12 - 20 years

30 - 45 Lacs

Posted: 1 day ago | Platform: Naukri

Work Mode

Remote

Job Type

Full Time

Job Description

Job Title: Technical Architect - Data Engineering

Work Mode: Remote
Role Overview

We are seeking a Technical Architect - Data Engineering to lead the design and implementation of scalable, high-performance data solutions using Kafka, MongoDB, and Python. The role demands a strong technical leader who can architect, guide, and implement modern data pipelines, real-time streaming architectures, and enterprise data integration solutions.

You will collaborate closely with U.S.-based product owners, architects, and delivery managers, ensuring optimal overlap with the CST time zone to drive technical excellence.

Key Responsibilities

  • Architect and implement modern data pipelines for real-time and batch processing using Kafka, Python, and MongoDB (a minimal sketch follows this list).
  • Partner with onshore architects and product owners to translate analytical and integration needs into robust data engineering solutions.
  • Define data ingestion, transformation, and storage strategies, ensuring scalability and security across large datasets.
  • Establish data standards, reusable frameworks, and governance models for consistent implementation.
  • Lead the design and optimization of distributed data systems to handle high-throughput, low-latency workloads.
  • Evaluate and integrate cloud-native services, third-party APIs, and automation tools to enhance system reliability and speed.
  • Oversee data quality, lineage, and performance tuning activities across environments.
  • Collaborate with DevOps, QA, and Analytics teams for continuous integration and delivery of data solutions.
  • Mentor developers and data engineers on architecture patterns, best practices, and emerging technologies in data engineering.
  • Contribute to solution proposals, technical documentation, and estimation activities for ongoing and upcoming initiatives.
  • Ensure availability, scalability, and compliance of data systems with enterprise and customer standards.
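
As a hedged illustration of the pipeline responsibility above, here is a minimal sketch of a Kafka-to-MongoDB consumer, assuming a JSON event stream and the kafka-python and pymongo client libraries. The topic, broker, and collection names are illustrative placeholders, not details from this posting.

```python
import json

from kafka import KafkaConsumer   # pip install kafka-python
from pymongo import MongoClient   # pip install pymongo

# Consume JSON events from a hypothetical "orders" topic.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    group_id="orders-to-mongo",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

collection = MongoClient("mongodb://localhost:27017")["demo"]["orders"]

for message in consumer:
    event = message.value
    # Upsert on the event's business key (hypothetical "order_id") so
    # replayed messages do not create duplicate documents.
    collection.replace_one({"_id": event["order_id"]}, event, upsert=True)
```

The upsert keeps the consumer idempotent, which matters when Kafka redelivers messages after a rebalance or failure.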

Technical Skills Required

Core Data Engineering & Backend:

  • Strong expertise in Python for ETL, data transformation, and automation.
  • Deep understanding of Kafka (Producers, Consumers, Streams, Connect, Schema Registry) and event-driven architectures.
  • Experience designing real-time data pipelines and microservices for data movement and processing.
  • Proficiency in RESTful API development and integration using Python frameworks (Flask, FastAPI); see the endpoint sketch after this list.
  • Hands-on experience in data modelling, performance optimization, and schema design for both structured and semi-structured data.
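
A minimal sketch of the FastAPI-style integration work listed above; the Event model and route are hypothetical, not part of the role's actual API surface.

```python
from fastapi import FastAPI        # pip install fastapi uvicorn
from pydantic import BaseModel

app = FastAPI()

class Event(BaseModel):
    event_id: str
    payload: dict

@app.post("/events")
def ingest_event(event: Event) -> dict:
    # A real handler would validate and publish to Kafka; this sketch
    # simply acknowledges receipt.
    return {"status": "accepted", "event_id": event.event_id}
```

Run locally with `uvicorn module:app`; Pydantic validates the request body before the handler runs.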

Database & Storage:

  • Advanced knowledge of MongoDB (NoSQL modelling, aggregation, indexing, replication, sharding); an aggregation example follows this list.
  • Experience with relational databases (PostgreSQL / MySQL / SQL Server) for hybrid workloads.
  • Exposure to data lake architectures (S3, Delta Lake, or equivalent).
  • Strong understanding of ETL/ELT tools and data warehousing concepts.
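
A small example of the MongoDB aggregation skills named above, assuming a hypothetical orders collection with status, created_at (BSON date), and amount fields.

```python
from pymongo import MongoClient   # pip install pymongo

orders = MongoClient("mongodb://localhost:27017")["demo"]["orders"]

# Daily order counts and revenue for completed orders.
daily_totals = orders.aggregate([
    {"$match": {"status": "completed"}},
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$created_at"}},
        "orders": {"$sum": 1},
        "revenue": {"$sum": "$amount"},
    }},
    {"$sort": {"_id": 1}},
])

for day in daily_totals:
    print(day)
```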

Cloud & DevOps:

  • Experience deploying data solutions on AWS.
  • Proficiency with CI/CD pipelines, Git, Docker, Kubernetes, and infrastructure-as-code tools.
  • Familiarity with workflow orchestration tools; an example DAG follows this list.
  • Knowledge of monitoring, alerting, and logging tools for data pipelines.
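
The posting does not name an orchestration tool; as one common example, a minimal Apache Airflow DAG that runs a daily extract-then-load sequence. The task bodies are placeholders.

```python
from datetime import datetime

from airflow import DAG                                 # pip install apache-airflow
from airflow.operators.python import PythonOperator

def extract():
    print("extract step")

def load():
    print("load step")

with DAG(
    dag_id="daily_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # "schedule" requires Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task
```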

Analytics & Integration:

  • Understanding of BI integrations, including feeding curated datasets to visualization tools (Power BI, Tableau); a small export sketch follows this list.
  • Ability to support data science teams with clean, well-modelled data for ML/AI workflows.
  • Experience ensuring compliance with data security, privacy, and governance frameworks.
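
One hedged sketch of the "curated datasets" idea above: flattening MongoDB documents into a tabular extract that Power BI or Tableau can consume. Collection and field names are illustrative.

```python
import pandas as pd               # pip install pandas
from pymongo import MongoClient   # pip install pymongo

orders = MongoClient("mongodb://localhost:27017")["demo"]["orders"]

# Project only the columns the BI model needs.
rows = orders.find(
    {"status": "completed"},
    {"_id": 0, "order_id": 1, "amount": 1, "created_at": 1, "region": 1},
)
df = pd.DataFrame(list(rows))

# Persist as a flat file that BI tools can read directly.
df.to_csv("curated_orders.csv", index=False)
```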

Soft Skills

  • Excellent communication and stakeholder management skills for cross-functional and global coordination.
  • Strong leadership and mentoring abilities to guide distributed technical teams.
  • Analytical mindset with deep problem-solving and troubleshooting capabilities.
  • Proven ability to work across remote, multi-time zone teams while maintaining delivery excellence.
  • Must be able to communicate with Senior Management and C-level stakeholders.

Preferred (Nice to Have)

  • Exposure to big data frameworks.
  • Experience with Kafka Streams / KSQL and schema evolution strategies.
  • Familiarity with machine learning data pipelines and model serving.
  • Hands-on experience implementing data catalogues and metadata management tools.

Intersoft Data Labs

Information Technology

Tech City
