Senior Data Engineer (Palantir)

Experience: 7–10 years

Salary: 20–25 Lacs

Platform: Naukri


Work Mode: Work from Office

Job Type: Full Time

Job Description

Senior Data Engineer

Key Responsibilities:

  • Architect, build, and optimize large-scale ETL/ELT pipelines using Palantir Foundry/Gotham.
  • Integrate diverse data sources, including SQL, NoSQL, APIs, and real-time streaming platforms (e.g., Kafka), into Palantir's ontology-driven models.
  • Develop and customize ontologies, transforms, and operational workflows within Palantir to support business intelligence and analytics.
  • Implement scalable, distributed processing using frameworks such as Apache Spark, Hadoop, and other big data technologies to handle petabyte-scale datasets.
  • Write efficient, production-grade Python scripts for data transformation, automation, workflow orchestration, and custom business logic.
  • Optimize data workflows and platform performance through query tuning, caching, partitioning, and incremental data updates.
  • Ensure robust data governance, lineage tracking, and enterprise-grade security (e.g., RBAC, encryption).
  • Collaborate with DevOps and platform engineering teams to implement and maintain CI/CD pipelines and automated, scalable deployment processes.
  • Maintain and version-control data assets and transformations using Git and DevOps best practices.
  • Partner with data scientists, analysts, and stakeholders to translate complex business requirements into scalable data engineering solutions.
  • Create and maintain comprehensive technical documentation, including data models, architecture diagrams, deployment workflows, and operational guides.

Required Skills:

  • Proven expertise in Palantir Foundry or Gotham, with deep knowledge of its ontology framework, code workbooks, and data pipeline architecture.
  • Strong programming experience in Python and advanced SQL.
  • Hands-on experience with ETL/ELT design, data transformation, and workflow automation.
  • Proficiency with Apache Spark, Hadoop, or similar distributed data processing frameworks.
  • Experience integrating data from relational and non-relational databases, streaming sources, and external APIs.
  • Understanding of data modeling, ontology-driven design, and semantic layers.
  • Solid grasp of CI/CD, DevOps practices, and deployment automation tools.
  • Familiarity with cloud platforms (AWS, Azure, or GCP), containerization (Docker), and orchestration tools (Kubernetes, Airflow).
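The ETL/ELT design experience listed above typically includes writing idempotent merge ("upsert") steps. A minimal sketch in plain Python follows; the function name, `id` key, and list-of-dicts representation are illustrative stand-ins for what would be a keyed table merge in SQL or Spark.

```python
def upsert(target, incoming, key="id"):
    """Merge incoming records into target by primary key.

    Rows whose key already exists are replaced in place; new keys are
    appended. Running the same merge twice leaves target unchanged,
    which is the idempotency an ELT merge step needs.
    """
    index = {row[key]: i for i, row in enumerate(target)}
    for row in incoming:
        if row[key] in index:
            target[index[row[key]]] = row   # update existing record
        else:
            index[row[key]] = len(target)   # remember insertion position
            target.append(row)              # insert new record
    return target
```

In production the same logic is usually expressed as a SQL `MERGE` statement or a keyed DataFrame join rather than Python loops.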

Nice to Have:

  • Experience with the Foundry Ontology SDK, Object Explorer, and Python transforms in Palantir.
  • Exposure to data cataloging, metadata management, and monitoring frameworks.
  • Understanding of ML/AI pipeline integration within data platforms.
  • Certifications in Palantir, cloud platforms, or big data technologies.
