Experience: 0 years

Salary: 5 - 8 Lacs

Posted: 6 days ago | Platform: GlassDoor

Work Mode: On-site

Job Type: Full Time

Job Description

Key Responsibilities

  • Develop and maintain complex SQL queries, functions, procedures, and optimized data models in PostgreSQL.
  • Build interactive dashboards and visualizations using Superset, Metabase, or similar tools.
  • Design and implement ETL pipelines to ingest, transform, and load data from multiple sources (a sketch follows this list).
  • Work with replication setups (logical/physical replication) to ensure high availability and reliability of data.
  • Manage data ingestion workflows, scheduling, and automation using scripting tools or orchestration frameworks.
  • Perform query tuning, define indexing strategies, and troubleshoot performance issues in PostgreSQL.
  • Collaborate with business teams to gather requirements and translate them into analytical datasets.
  • Maintain data quality and validation checks, and ensure accuracy across reports and data pipelines.
  • Build reusable data marts, staging layers, and semantic layers for reporting use cases.
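
The ETL responsibility above might look something like the following minimal sketch: a plain Python script using the psycopg2 driver (an assumption; Airflow or dbt would wrap the same steps). The file, schema, and column names are hypothetical.

    import csv
    import psycopg2  # assumed driver; any PostgreSQL client would do

    # Hypothetical source file and connection string, for illustration only.
    SOURCE_CSV = "daily_orders.csv"
    DSN = "dbname=analytics user=etl host=localhost"

    def extract(path):
        # Extract: stream raw rows from a flat file.
        with open(path, newline="") as f:
            yield from csv.DictReader(f)

    def transform(rows):
        # Transform: coerce types and drop rows that fail validation.
        for row in rows:
            try:
                yield (row["order_id"], row["customer_id"], float(row["amount"]))
            except (KeyError, ValueError):
                continue  # a production pipeline would log rejects instead

    def load(records):
        # Load: idempotent upsert into a staging table, in one transaction.
        with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO staging.orders (order_id, customer_id, amount) "
                "VALUES (%s, %s, %s) "
                "ON CONFLICT (order_id) DO UPDATE SET amount = EXCLUDED.amount",
                records,
            )

    if __name__ == "__main__":
        load(list(transform(extract(SOURCE_CSV))))

The same extract/transform/load split carries over directly to an Airflow DAG (one task per stage) or a cron-scheduled script.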

Core Skills Required

  • Strong command of PostgreSQL (query optimization, indexing, CTEs, partitioning, vacuuming, EXPLAIN plans); a worked tuning example follows this list.
  • Hands-on experience with BI tools like:
      • Apache Superset
      • Metabase
      • (Bonus: Power BI / Tableau)
  • Experience in building ETL pipelines using tools or scripts (Python, Airflow, dbt, cron, custom ETL scripts), as in the sketch under Key Responsibilities.
  • Understanding of PostgreSQL replication techniques (sketched after this list):
      • Logical replication
      • Physical/streaming replication
      • Replication monitoring & troubleshooting
  • Strong knowledge of data modeling (star/snowflake schema); a star-schema sketch follows this list.
  • Ability to validate, transform, clean, and optimize data for analytical consumption.
  • Experience working with APIs, JSON data, or flat-file ingestion (CSV/Parquet); a JSON ingestion sketch follows this list.
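
For the query-optimization bullet, a hedged sketch of how an EXPLAIN check is typically scripted; the query, table, and index names are invented for illustration, and psycopg2 is again assumed as the driver.

    import psycopg2  # assumed driver

    DSN = "dbname=analytics user=analyst host=localhost"  # placeholder

    # A CTE-based report query of the kind the role describes.
    QUERY = """
        WITH recent AS (
            SELECT customer_id, amount
            FROM sales.orders
            WHERE order_date >= now() - interval '30 days'
        )
        SELECT customer_id, sum(amount) AS total
        FROM recent
        GROUP BY customer_id
    """

    with psycopg2.connect(DSN) as conn, conn.cursor() as cur:
        cur.execute("EXPLAIN (ANALYZE, BUFFERS) " + QUERY)
        for (line,) in cur.fetchall():
            print(line)  # look for Seq Scan nodes, bad row estimates, buffer misses

    # If the plan shows a sequential scan on order_date, one plausible fix:
    # CREATE INDEX CONCURRENTLY idx_orders_order_date ON sales.orders (order_date);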
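
The logical-replication items map onto PostgreSQL's publication/subscription commands. A minimal sketch, assuming psycopg2 and placeholder hosts, users, and object names; it also shows one common way to monitor lag from the publisher.

    import psycopg2  # assumed driver; all DSNs and names are placeholders

    PRIMARY_DSN = "host=primary.example.internal dbname=app user=repl_admin"
    REPLICA_DSN = "host=replica.example.internal dbname=app user=repl_admin"

    # On the primary (requires wal_level = logical): publish the tables.
    pub = psycopg2.connect(PRIMARY_DSN)
    pub.autocommit = True  # DDL here is simplest outside an explicit transaction
    pub.cursor().execute("CREATE PUBLICATION orders_pub FOR TABLE staging.orders")

    # On the replica: subscribe. CREATE SUBSCRIPTION cannot run inside a
    # transaction block, so autocommit is required.
    sub = psycopg2.connect(REPLICA_DSN)
    sub.autocommit = True
    sub.cursor().execute(
        "CREATE SUBSCRIPTION orders_sub "
        "CONNECTION 'host=primary.example.internal dbname=app user=repl_admin' "
        "PUBLICATION orders_pub"
    )

    # Monitoring: replication lag in bytes, as seen from the primary.
    cur = pub.cursor()
    cur.execute(
        "SELECT application_name, state, "
        "pg_wal_lsn_diff(pg_current_wal_lsn(), replay_lsn) AS lag_bytes "
        "FROM pg_stat_replication"
    )
    for name, state, lag_bytes in cur.fetchall():
        print(name, state, lag_bytes)

    pub.close()
    sub.close()

Physical/streaming replication is configured differently (primary_conninfo plus a standby.signal file on the standby) but is monitored through the same pg_stat_replication view.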
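
For the star-schema modeling requirement, a sketch of the shape such a model usually takes; the dimensions, fact table, and columns here are hypothetical, not a prescribed design.

    import psycopg2  # assumed driver

    DDL = """
    -- Dimensions hold descriptive attributes, one row per entity.
    CREATE TABLE IF NOT EXISTS dim_customer (
        customer_key serial PRIMARY KEY,
        customer_id  text UNIQUE NOT NULL,
        region       text
    );
    CREATE TABLE IF NOT EXISTS dim_date (
        date_key  int  PRIMARY KEY,   -- e.g. 20240131
        full_date date NOT NULL,
        month     int  NOT NULL,
        year      int  NOT NULL
    );
    -- The fact table holds measures plus foreign keys into each dimension.
    CREATE TABLE IF NOT EXISTS fact_orders (
        order_id     text PRIMARY KEY,
        customer_key int  NOT NULL REFERENCES dim_customer (customer_key),
        date_key     int  NOT NULL REFERENCES dim_date (date_key),
        amount       numeric(12, 2) NOT NULL
    );
    """

    with psycopg2.connect("dbname=analytics user=etl host=localhost") as conn:
        with conn.cursor() as cur:
            cur.execute(DDL)  # psycopg2 accepts multiple ;-separated statements

A snowflake variant would simply normalize the dimensions further (e.g. split region out of dim_customer into its own table).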
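
Finally, for API/JSON ingestion, a small sketch of landing a raw payload in a jsonb column and extracting typed fields with PostgreSQL's JSON operators; the table and payload are made up.

    import json
    import psycopg2  # assumed driver

    EVENT = {"order_id": "A-1001", "source": "api", "amount": 249.0}  # sample payload

    with psycopg2.connect("dbname=analytics user=etl host=localhost") as conn:
        with conn.cursor() as cur:
            # Land the raw payload untouched in a jsonb column.
            cur.execute(
                "INSERT INTO raw.events (payload) VALUES (%s::jsonb)",
                (json.dumps(EVENT),),
            )
            # Then pull typed fields out with ->> for downstream modeling.
            cur.execute(
                "SELECT payload->>'order_id', (payload->>'amount')::numeric "
                "FROM raw.events WHERE payload->>'source' = 'api'"
            )
            print(cur.fetchall())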

Good-to-Have Skills

  • Scripting knowledge (Python, Bash) for automation and ETL tasks.
  • Familiarity with cloud data platforms (AWS RDS, Aurora, Redshift, BigQuery, or Azure).
  • Experience with containerization (Docker) or CI/CD pipelines.
  • Understanding of data warehousing concepts and dimensional modeling.
  • Knowledge of version control (Git) and collaborative development workflows.

Ideal Candidate Profile

  • Strong SQL background with deep PostgreSQL experience.
  • Able to design efficient ETL workflows and maintain data replication systems.
  • Comfortable working with business teams to convert requirements into dashboards and models.
  • Can independently handle reporting, data pipeline development, and database optimization.

Job Type: Full-time

Pay: ₹500,000.00 - ₹800,000.00 per year

Work Location: In person
