Python Engineer — Backend & Data Aggregations

Experience

4 years


Posted: 22 hours ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Contractual

Job Description

Contract duration :

Role Overview :

We’re looking for a Python Engineer (2–4 years) who’s strong in backend development and has hands-on experience implementing aggregation and data computation use cases — such as device-level rollups, metrics computation, time-based summaries, or multi-source joins.

You’ll work closely with platform, data, and product teams to design efficient aggregation logic and APIs that serve real-time and historical analytics, and you’ll help make Condense’s data platform more intelligent and scalable.
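
To give a flavour of the device-level rollups and time-based summaries described above, here is a minimal, purely illustrative pandas sketch (the column names and the 5-minute window are assumptions, not part of this role's actual stack):

    import pandas as pd

    # Hypothetical telemetry frame: one row per device reading.
    df = pd.DataFrame({
        "device_id": ["d1", "d1", "d2", "d2"],
        "ts": pd.to_datetime([
            "2024-01-01 00:01", "2024-01-01 00:03",
            "2024-01-01 00:02", "2024-01-01 00:07",
        ]),
        "speed_kmph": [40.0, 52.0, 31.0, 29.0],
    })

    # Device-level rollup over 5-minute windows: mean speed and reading count.
    rollup = (
        df.set_index("ts")
          .groupby("device_id")
          .resample("5min")["speed_kmph"]
          .agg(["mean", "count"])
          .reset_index()
    )
    print(rollup)

In production the same grouping would typically run incrementally or against a database or streaming source rather than an in-memory frame.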


Key Responsibilities :

· Design and implement data aggregation logic for device, customer, or time-window-based metrics using Python.

· Build clean, maintainable backend services or microservices that perform aggregations and expose results through APIs or data sinks (a minimal API sketch follows this list).

· Work with internal teams to translate business or analytics needs into efficient aggregation pipelines.

· Optimize data handling — caching, indexing, and computation efficiency for large-scale telemetry data.

· Collaborate with DevOps and data teams to integrate with databases, message queues, or streaming systems.

· Write high-quality, tested, and observable code — ensuring performance and reliability in production.

· Contribute to design discussions, reviews, and documentation across backend and data infrastructure components.
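
As referenced in the list above, here is a minimal sketch of exposing precomputed aggregation results through an API, assuming FastAPI; the route, fields, and in-memory store are illustrative only:

    from fastapi import FastAPI
    from pydantic import BaseModel

    app = FastAPI()

    # Stand-in for precomputed per-device rollups; in practice these would
    # come from a database, cache, or streaming sink.
    ROLLUPS = {
        "d1": {"avg_speed_kmph": 46.0, "readings": 2},
        "d2": {"avg_speed_kmph": 30.0, "readings": 2},
    }

    class DeviceMetrics(BaseModel):
        device_id: str
        avg_speed_kmph: float
        readings: int

    @app.get("/devices/{device_id}/metrics", response_model=DeviceMetrics)
    async def device_metrics(device_id: str) -> DeviceMetrics:
        # Fall back to zeroed metrics for unknown devices (illustrative choice).
        data = ROLLUPS.get(device_id, {"avg_speed_kmph": 0.0, "readings": 0})
        return DeviceMetrics(device_id=device_id, **data)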


Required Qualifications :

· 2–4 years of professional experience as a Python Developer / Backend Engineer.

· Strong proficiency with Python (async programming, data structures, I/O, concurrency).

· Experience with data aggregation, metrics computation, or analytics workflows (batch or incremental).

· Sound understanding of REST APIs, microservice architecture, and database design (SQL/NoSQL).

· Familiarity with cloud-native development and containerized deployment (Docker, Kubernetes).

· Hands-on with data access and transformation using libraries like pandas, SQLAlchemy, or FastAPI/Flask for backend services (see the SQLAlchemy sketch after this list).

· Excellent debugging, profiling, and optimization skills.
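
As referenced in the list above, a small illustrative sketch of a grouped aggregation with SQLAlchemy Core; the telemetry table and its columns are hypothetical:

    from sqlalchemy import (
        Column, DateTime, Float, MetaData, String, Table,
        create_engine, func, select,
    )

    metadata = MetaData()

    # Hypothetical telemetry table; column names are illustrative only.
    telemetry = Table(
        "telemetry", metadata,
        Column("device_id", String),
        Column("ts", DateTime),
        Column("speed_kmph", Float),
    )

    engine = create_engine("sqlite:///:memory:")
    metadata.create_all(engine)

    # Per-device aggregation: average speed and number of readings.
    stmt = select(
        telemetry.c.device_id,
        func.avg(telemetry.c.speed_kmph).label("avg_speed_kmph"),
        func.count().label("readings"),
    ).group_by(telemetry.c.device_id)

    with engine.connect() as conn:
        for row in conn.execute(stmt):
            print(row.device_id, row.avg_speed_kmph, row.readings)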


Good to Have :

· Exposure to real-time data pipelines (Kafka, Kinesis, Pulsar, etc.) or streaming frameworks (Kafka Streams, ksqlDB, Faust).

· Experience with time-series databases or analytics stores (ClickHouse, Timescale, Druid, etc.).

· Understanding of event-driven or stateful aggregation patterns such as tumbling/sliding windows and deduplication (a plain-Python sketch follows this list).

· Familiarity with CI/CD, observability tools (Prometheus, Grafana), and monitoring best practices.

· Experience working in IoT, mobility, or telemetry-heavy product environments.
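
As referenced in the list above, a dependency-free sketch of tumbling-window aggregation with deduplication; the 60-second window and event fields are assumptions, and streaming frameworks such as Kafka Streams or Faust provide these primitives natively:

    from collections import defaultdict

    WINDOW_SECONDS = 60  # tumbling-window size (illustrative)

    def window_start(ts: float) -> int:
        # Map an event timestamp (epoch seconds) to its window's start.
        return int(ts // WINDOW_SECONDS) * WINDOW_SECONDS

    def aggregate(events):
        # Sum a metric per (device, window), dropping duplicate event ids.
        seen = set()
        totals = defaultdict(float)
        for event in events:
            if event["event_id"] in seen:  # simple dedup on a unique event id
                continue
            seen.add(event["event_id"])
            key = (event["device_id"], window_start(event["ts"]))
            totals[key] += event["value"]
        return dict(totals)

    events = [
        {"event_id": "e1", "device_id": "d1", "ts": 10.0, "value": 2.0},
        {"event_id": "e1", "device_id": "d1", "ts": 10.0, "value": 2.0},  # duplicate
        {"event_id": "e2", "device_id": "d1", "ts": 75.0, "value": 3.0},
    ]
    print(aggregate(events))  # {('d1', 0): 2.0, ('d1', 60): 3.0}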


What Success Looks Like :

· You deliver robust, scalable aggregation logic that enables downstream analytics and dashboards.

· Code is clean, performant, and maintainable, following engineering best practices.

· Aggregation jobs and APIs are well-monitored and observable, enabling smooth production operation.

· You work effectively across teams — platform, DevOps, and product — to deliver data-backed insights faster.

· You continuously learn and adopt best practices from the Python and data engineering ecosystem.
