Data Integration & Architecture Engineer

Experience: 4 years

Salary: 0 Lacs

Posted: 3 weeks ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

Location:

Team:

Type:



About Us

RevAi Pro

foundational data architecture



The Role

Data Integration & Architecture Engineer

data engineering and platform architecture



What You’ll Do

  • Build ingestion pipelines (CDC, batch, webhook, file-based) from CRMs (Salesforce, Dynamics, HubSpot), billing systems (Stripe, Zuora), and support tools (Zendesk, JSM) into Azure Fabric Bronze/Silver layers.
  • Curate standardized models in Fabric Silver → Gold using the Canonical RevAi Schema (CRS), including SCD2 handling, PII tagging, and business-rule conformance.
  • Operate low-latency egress pipelines from Fabric → Postgres (staging → ext_* → app consumption), ensuring subsecond query performance for AI agents.
  • Implement tenant isolation via RLS in Postgres and Purview policies in Fabric, ensuring strict multi-tenancy and compliance with residency rules (see the RLS sketch after this list).
  • Own observability & SLAs: freshness checks, rowcount anomaly detection, and DQ tests across Bronze/Silver/Gold.
  • Contribute to FinOps dashboards by instrumenting pipeline cost metrics, cache hit rates, and egress efficiency.
  • Collaborate with AI engineers to ensure pgvector embeddings and agent run data integrate seamlessly into the operational data store.
  • Write runbooks for DR/retention/erasure, and actively participate in quarterly pen-test preparations.
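
To give a flavor of the tenant-isolation work, below is a minimal sketch of the Postgres RLS pattern named above. The table name (ext_accounts), the tenant_id column, and the app.current_tenant session setting are illustrative assumptions, not RevAi Pro's actual schema.

```python
# Hedged sketch: per-tenant row-level security keyed off a session setting.
# All schema names here are placeholders, not the real RevAi Pro schema.
import psycopg2

RLS_DDL = """
ALTER TABLE ext_accounts ENABLE ROW LEVEL SECURITY;
ALTER TABLE ext_accounts FORCE ROW LEVEL SECURITY;
CREATE POLICY tenant_isolation ON ext_accounts
    USING (tenant_id = current_setting('app.current_tenant')::uuid);
"""

def enable_tenant_rls(dsn: str) -> None:
    # One-time DDL: every later query on ext_accounts is filtered by tenant.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(RLS_DDL)

def query_as_tenant(dsn: str, tenant_id: str):
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # Scope the session to one tenant; RLS hides every other tenant's rows.
        cur.execute("SELECT set_config('app.current_tenant', %s, false)", (tenant_id,))
        cur.execute("SELECT account_id, name FROM ext_accounts")
        return cur.fetchall()
```

In practice each pooled connection would set app.current_tenant immediately after checkout, and the same policy pattern would be repeated across every ext_* table.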



What We’re Looking For

Must-Have Skills:

  • 2–4 years in data engineering / integration roles.
  • Hands-on with Azure Data Factory, Azure Fabric (OneLake/Lakehouse), or equivalent (Databricks/Snowflake acceptable).
  • Strong SQL and PostgreSQL experience (MERGE, staging, RLS, partitioning); see the upsert sketch after this list.
  • Experience with CDC, batch ingestion, or event streaming (Airflow, dbt, Kafka, or Fabric pipelines).
  • Comfort with data governance concepts (Purview, PII tagging, masked views, data lineage).
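
As one concrete illustration of the MERGE and staging skills listed above, here is a minimal sketch of promoting a staging batch into an app-facing ext_* table. Table and column names (staging_accounts, ext_accounts, arr) are assumptions, and MERGE requires PostgreSQL 15+.

```python
# Hedged sketch: upsert one staging batch into an ext_* table with MERGE
# (PostgreSQL 15+). All table/column names are illustrative placeholders.
import psycopg2

MERGE_SQL = """
MERGE INTO ext_accounts AS tgt
USING staging_accounts AS src
    ON tgt.tenant_id = src.tenant_id AND tgt.account_id = src.account_id
WHEN MATCHED THEN
    UPDATE SET name = src.name, arr = src.arr, updated_at = src.updated_at
WHEN NOT MATCHED THEN
    INSERT (tenant_id, account_id, name, arr, updated_at)
    VALUES (src.tenant_id, src.account_id, src.name, src.arr, src.updated_at);
"""

def promote_staging_batch(dsn: str) -> None:
    # Upsert the batch, then clear staging so the next egress cycle starts clean.
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(MERGE_SQL)
        cur.execute("TRUNCATE staging_accounts")
```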

Nice-to-Have Skills:

  • Knowledge of pgvector, embeddings, or AI data models.
  • Prior exposure to multi-tenant SaaS architectures.
  • Experience with observability tooling (freshness checks, data quality gates, anomaly detection); see the freshness-check sketch after this list.
  • Familiarity with FinOps or cost-optimization in data pipelines.
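
For the observability item above, the sketch below shows the shape of a simple freshness check. The table name, the updated_at column (assumed to be timestamptz), and the 30-minute SLA are all assumptions for illustration.

```python
# Hedged sketch of a table-freshness check; names and thresholds are placeholders.
from datetime import datetime, timedelta, timezone
import psycopg2

def is_fresh(dsn: str, table: str = "ext_accounts",
             max_lag: timedelta = timedelta(minutes=30)) -> bool:
    """Return True if the newest row in `table` landed within the allowed lag."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        # `table` comes from a trusted config list, never from user input.
        cur.execute(f"SELECT max(updated_at) FROM {table}")
        (latest,) = cur.fetchone()
    if latest is None:
        return False  # an empty table counts as stale
    return datetime.now(timezone.utc) - latest <= max_lag
```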

