Senior Software Engineer

Experience

5 - 10 years

Salary

6 - 10 Lacs

Posted: 15 hours ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

At Zeta Global, the Data Connectivity POD builds data products that process large-scale data on an advanced AWS architecture with managed Apache Iceberg tables. The platform ingests hundreds of gigabytes daily, processes multi-terabyte data volumes, and supports both batch and streaming pipelines with sub-second processing speeds. This scalable, multi-tenant system ensures high data integrity and cost efficiency while enabling real-time updates and personalized customer experiences driven by advanced AI and proprietary data assets.
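A minimal PySpark sketch of what batch reads and appends against a managed Iceberg table can look like. The catalog name, warehouse path, and table names below are hypothetical, and the real setup depends on which Iceberg catalog (Glue, Hive, REST) the platform actually uses:

```python
# Hypothetical sketch: requires the iceberg-spark-runtime package on the
# Spark classpath; catalog and table names here are illustrative only.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-batch-example")
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hadoop")
    .config("spark.sql.catalog.lake.warehouse", "s3://example-bucket/warehouse")
    .getOrCreate()
)

# Batch read from a managed Iceberg table.
events = spark.table("lake.analytics.events")

# Append today's slice using the DataFrameWriterV2 API (Spark 3+);
# assumes the target table already exists.
daily = events.where("event_date = current_date()")
daily.writeTo("lake.analytics.events_daily").append()
```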

As part of the DCon POD team at Zeta, you will develop and maintain the data platforms and products that manage all inbound and outbound data flows. You will design and implement scalable frameworks for self-service data pipeline creation, empowering both technical and non-technical users to build robust pipelines efficiently. The role involves building and optimizing services for both batch and real-time streaming data processing, ensuring data availability with minimal latency. Engineers working on this product gain expertise in building petabyte-scale data lakes, data engineering, and data frameworks that operate at enterprise scale.
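To make the self-service idea concrete, here is a minimal sketch of a config-driven pipeline framework in which users compose registered steps declaratively. Every name here (steps, registry, config keys) is a hypothetical illustration, not Zeta's actual interface:

```python
# Hypothetical sketch of a self-service pipeline framework: transforms are
# registered under user-facing names, and a pipeline is just a list of names.
from typing import Callable, Dict, Iterable, List

Rows = Iterable[dict]
STEP_REGISTRY: Dict[str, Callable[[Rows], Rows]] = {}

def step(name: str):
    """Register a reusable transform under a user-facing name."""
    def decorator(fn):
        STEP_REGISTRY[name] = fn
        return fn
    return decorator

@step("drop_nulls")
def drop_nulls(rows: Rows) -> Rows:
    return (r for r in rows if all(v is not None for v in r.values()))

@step("lowercase_emails")
def lowercase_emails(rows: Rows) -> Rows:
    return ({**r, "email": r["email"].lower()} for r in rows)

def build_pipeline(config: List[str]) -> Callable[[Rows], Rows]:
    """Turn a declarative step list (e.g. parsed from YAML) into a runnable pipeline."""
    def run(rows: Rows) -> Rows:
        for name in config:
            rows = STEP_REGISTRY[name](rows)
        return rows
    return run

# A non-technical user could author just this list in a UI or YAML file:
pipeline = build_pipeline(["drop_nulls", "lowercase_emails"])
print(list(pipeline([{"email": "A@B.COM"}, {"email": None}])))  # [{'email': 'a@b.com'}]
```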

This role is for a Senior Software Engineer focused on microservices and data pipelines, driving enterprise-grade data services. We are looking for hands-on, self-driven people who are ready to explore and learn different technologies and languages to get things done. Previous experience writing high-throughput microservices, working with large-scale datasets and databases, and building ETL pipelines, data engineering solutions, and connector frameworks, combined with strong Python skills, data management expertise, and compliance-minded delivery at scale, makes you a good fit for this position.

Key Responsibilities

  • Design, implement, and operate Python-based microservices and high-throughput ETL pipelines that meet enterprise SLAs, RPO/RTO targets, and data quality standards
  • Embed observability by default through structured logging, metrics, and distributed tracing; define SLOs, error budgets, and incident response runbooks for production services
  • Conduct design and code reviews, performance and cost optimizations, and challenge fellow engineers to raise the bar on engineering excellence and operational maturity
  • Design and evolve reusable connector frameworks for inbound and outbound integrations with ad tech platforms like Facebook Ads, Google Ads, and other enterprise systems, emphasizing modularity, versioning, and backward compatibility (see the connector sketch after this list)
  • Collaborate with Product, SecOps, and Platform teams to deliver scalable, cost-efficient solutions across Kubernetes, message buses, and workflow orchestrators
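As referenced in the connector item above, a minimal sketch of one way to structure pluggable, versioned connectors so that older pipelines keep resolving the major version they were built against. All class and method names are hypothetical, not Zeta's internal interfaces:

```python
# Hypothetical connector framework sketch: connectors declare a name and a
# (major, minor) version, and are looked up by (name, major) for stability.
from abc import ABC, abstractmethod
from typing import Dict, Iterable, Tuple

class Connector(ABC):
    """Contract every inbound/outbound connector must satisfy."""
    name: str
    version: Tuple[int, int]  # bump the major version on breaking changes

    @abstractmethod
    def push(self, records: Iterable[dict]) -> int:
        """Deliver records to the destination; return the count accepted."""

_REGISTRY: Dict[Tuple[str, int], type] = {}

def register(cls):
    """Index connectors by (name, major version) so old pipelines keep working."""
    _REGISTRY[(cls.name, cls.version[0])] = cls
    return cls

@register
class FacebookAdsConnector(Connector):
    name = "facebook_ads"
    version = (2, 0)

    def push(self, records: Iterable[dict]) -> int:
        batch = list(records)
        # A real implementation would call the Facebook Marketing API here.
        return len(batch)

def get_connector(name: str, major: int) -> Connector:
    return _REGISTRY[(name, major)]()

print(get_connector("facebook_ads", 2).push([{"audience_id": "123"}]))  # 1
```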

Required Qualifications

  • 5+ years in backend/data engineering building Python services and ETL pipelines for enterprise environments with strict uptime, compliance, and performance requirements
  • Expert Python skills, including async I/O, concurrency, and service frameworks, plus strong testing strategies and interface contracts for maintainability at scale
  • Proven microservices experience with containers and orchestration (Docker, Kubernetes) and event-driven patterns using message brokers or streaming platforms
  • Robust SQL skills integrating with analytics warehouses such as Snowflake, including cost/performance tuning patterns
  • Production-grade integrations with APIs, including rate-limit handling, retries, backoff policies, and change-data-capture or incremental sync designs (a retry/backoff sketch follows this list)
  • Deep observability practice using metrics, logs, and traces, and experience implementing OpenTelemetry instrumentation and alerting workflows
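As referenced above, a small sketch of rate-limit-aware retries with exponential backoff and jitter. It assumes the third-party requests library, and the endpoint URL is hypothetical:

```python
# Hypothetical sketch: GET with exponential backoff plus jitter, honoring a
# numeric Retry-After header on HTTP 429 (the HTTP-date form is not handled).
import random
import time

import requests

def fetch_with_retries(url: str, max_attempts: int = 5, base_delay: float = 1.0) -> requests.Response:
    for attempt in range(1, max_attempts + 1):
        resp = requests.get(url, timeout=10)
        if resp.status_code == 429:
            # Respect the server's hint when present, else back off exponentially.
            delay = float(resp.headers.get("Retry-After", base_delay * 2 ** (attempt - 1)))
        elif resp.status_code >= 500:
            delay = base_delay * 2 ** (attempt - 1)
        else:
            resp.raise_for_status()  # surface non-retryable 4xx errors
            return resp
        if attempt == max_attempts:
            resp.raise_for_status()  # out of attempts: raise the retryable error
        time.sleep(delay + random.uniform(0, 0.5))  # jitter avoids thundering herds
    raise RuntimeError("unreachable")

# Example (hypothetical endpoint):
# audiences = fetch_with_retries("https://api.example.com/v1/audiences").json()
```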

Preferred Qualifications

  • Experience in leveraging Apache Spark for efficient processing and management of large-scale distributed data workloads (see the sketch after this list)
  • Strong understanding of modern Lakehouse architectures that unify data lakes and warehouses
  • Exposure to serverless data patterns (e.g., AWS Lambda/Step Functions)
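For the Spark item above, a minimal PySpark sketch of a distributed aggregation over a large columnar dataset. Bucket paths and column names are hypothetical:

```python
# Hypothetical sketch: daily per-tenant rollup over a partitioned Parquet dataset.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-rollup").getOrCreate()

events = spark.read.parquet("s3://example-bucket/events/")

rollup = (
    events
    .where(F.col("event_date") == "2024-01-01")
    .groupBy("tenant_id", "event_type")
    .agg(
        F.count("*").alias("events"),
        F.approx_count_distinct("user_id").alias("unique_users"),
    )
)

rollup.write.mode("overwrite").parquet("s3://example-bucket/rollups/2024-01-01/")
```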
