Data Engineer - Palantir Foundry

7 - 12 years

Posted: 6 days ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

The core responsibilities for the job include the following:

Palantir Data Engineering And Analytics

  • Design, develop, and maintain scalable, modular data pipelines using Foundry Pipeline Builder (visual and code-based).
  • Ingest, integrate, and process data from diverse sources (S3, RDBMS, REST APIs, flat files, etc.).
  • Implement advanced data processing techniques: incremental processing, anomaly detection, geospatial transformations, and time series analysis using PySpark, Python, and Foundry's no-code tools (an incremental transform is sketched after this list).
  • Parse and process various data formats (CSV, JSON, XML, Parquet, Avro) within Foundry.
  • Reuse and modularize functions and parameters for efficient, maintainable pipeline management.
  • Leverage LLMs for translation, classification, and data enrichment in pipelines via Palantir AIP Logic.
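
For illustration, a minimal sketch of the incremental PySpark pattern referenced above, assuming Foundry's standard transforms.api decorators; the dataset paths and column names are hypothetical:

```python
# A minimal sketch of an incremental Foundry Python transform; the paths
# and column names are hypothetical placeholders.
from pyspark.sql import functions as F
from transforms.api import Input, Output, incremental, transform_df


@incremental()  # each run sees only rows appended since the last build
@transform_df(
    Output("/Example/datasets/clean_events"),   # hypothetical path
    raw=Input("/Example/datasets/raw_events"),  # hypothetical path
)
def clean_events(raw):
    # Normalize timestamps, then drop malformed rows and in-batch
    # duplicates so downstream time series analysis sees a clean schema.
    return (
        raw.withColumn("event_ts", F.to_timestamp("event_ts"))
           .dropna(subset=["event_id", "event_ts"])
           .dropDuplicates(["event_id"])
    )
```

The same logic can often be assembled visually in Pipeline Builder; code repositories are typically reserved for transforms that need PySpark's full expressiveness.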

Ontology And Schema Management

  • Create and manage ontologies using Ontology Manager and Object Explorer to model business entities, relationships, and data lineage.
  • Implement property-level and object-level access controls for secure data modeling and compliance.

Data Quality, Validation, And Monitoring

  • Design and implement Master Data Management (MDM) and Reference Data Management solutions to ensure consistency and accuracy of key business entities across the organization.
  • Lead efforts in entity resolution, de-duplication, and golden record creation within Palantir or integrated MDM platforms.
  • Implement data validation, health checks, and monitoring for production-grade reliability (a sample check is sketched after this list).
  • Ensure data integrity, quality, and consistency across all stages of the data lifecycle.
  • Set up automated alerts and dashboards for pipeline health and anomaly detection.
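
For illustration, a small PySpark health check of the kind meant above; the column names and thresholds are assumptions, not requirements from this posting:

```python
# Illustrative pipeline health check; "customer_id" and the 1% duplicate
# threshold are arbitrary assumptions for the sketch.
from pyspark.sql import DataFrame, functions as F


def check_customer_health(df: DataFrame) -> None:
    total = df.count()
    assert total > 0, "input dataset is empty"

    null_ids = df.filter(F.col("customer_id").isNull()).count()
    dupes = total - df.dropDuplicates(["customer_id"]).count()

    # Failing fast keeps bad records out of downstream golden records.
    assert null_ids == 0, f"{null_ids} rows are missing customer_id"
    assert dupes / total < 0.01, f"duplicate rate {dupes / total:.2%} exceeds 1%"
```

In Foundry, checks like these are typically attached to transform outputs as data expectations or dataset health checks, so failures raise alerts instead of propagating silently.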

Data Security And Governance

  • Enforce data privacy, security, and compliance standards (RBAC, audit logs, data masking) within Palantir and cloud environments (a masking sketch follows this list).
  • Document data lineage, transformations, and access controls for auditability and governance.
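
One common masking pattern, shown as a hedged PySpark sketch; the salted hash, secret handling, and column names are illustrative choices, not a mandated approach:

```python
# Illustrative column-level masking: a deterministic salted hash hides
# the raw identifier while preserving joinability across datasets.
from pyspark.sql import DataFrame, functions as F

SALT = "example-salt"  # assumption: in practice, load from a secret store


def mask_pii(df: DataFrame) -> DataFrame:
    return (
        df.withColumn(
            "email_hash", F.sha2(F.concat(F.lit(SALT), F.col("email")), 256)
        )
        .drop("email")
    )
```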

Collaboration And Best Practices

  • Work closely with business analysts, data scientists, and product owners to translate requirements into robust data solutions.
  • Mentor junior engineers and analysts, contribute to code reviews, and champion best practices.
  • Document technical designs, workflows, and user guides for maintainability and knowledge transfer.

Data Analysis And Visualization

  • Perform data profiling, cleaning, joining, and enrichment to support business decision-making (a profiling sketch follows this list).
  • Conduct statistical and exploratory analysis to uncover trends, patterns, and actionable insights.
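
As a sketch of the profiling step, a small PySpark pass over an arbitrary dataframe; nothing here is specific to this role's datasets:

```python
# Quick profiling pass: row count, per-column null rate, distinct counts.
from pyspark.sql import DataFrame, functions as F


def profile(df: DataFrame) -> None:
    total = df.count()
    print(f"rows: {total}")
    if total == 0:
        return
    for c in df.columns:
        nulls = df.filter(F.col(c).isNull()).count()
        distinct = df.select(c).distinct().count()
        print(f"{c}: {nulls / total:.1%} null, {distinct} distinct")
```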

Dashboarding And Reporting

  • Develop and manage interactive dashboards and reports using Palantir Contour, Quiver, and other BI tools (Tableau, Power BI, Looker).
  • Build pivot tables, advanced reporting, and custom visualizations tailored to business needs.
  • Leverage Palantir's visualization modules for real-time and historical data analysis.

Cloud Platform Integration

  • Integrate AWS, Azure, or GCP data engineering and analytics services (Glue, Data Factory, BigQuery, Redshift, Synapse, etc.) with Palantir workflows.
  • Design and implement end-to-end data pipelines that bridge Palantir and cloud-native ecosystems.
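
As one bridging pattern, a sketch of staging cloud object-store data for a Foundry-bound pipeline; the bucket path is a placeholder and s3a credentials are assumed to be configured in the Spark environment:

```python
# Illustrative: read a cloud object-store dataset into Spark and stage a
# filtered slice for downstream ingestion. The path, filter, and output
# location are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-bridge").getOrCreate()

orders = spark.read.parquet("s3a://example-bucket/raw/orders/")
orders.filter("order_date >= '2024-01-01'").write.mode("overwrite").parquet(
    "staging/orders/"
)
```

In practice, Foundry usually handles this through managed data connections; a hand-rolled Spark job like this is just one way to picture the bridge.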

API And Microservices Integration

  • Develop and consume RESTful APIs, GraphQL endpoints, and microservices for scalable, modular data architectures.
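
For example, a minimal paginated REST ingestion helper; the endpoint, paging parameters, and response shape are hypothetical:

```python
# Illustrative paginated REST fetch; the paging scheme (page/per_page,
# empty batch terminates) is an assumption about the API.
import requests


def fetch_all(url: str, page_size: int = 500) -> list[dict]:
    records: list[dict] = []
    page = 1
    while True:
        resp = requests.get(
            url, params={"page": page, "per_page": page_size}, timeout=30
        )
        resp.raise_for_status()
        batch = resp.json()
        if not batch:
            return records
        records.extend(batch)
        page += 1
```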

DevOps And Best Practices

  • Implement CI/CD pipelines for data pipeline deployment and updates (Foundry, GitHub Actions, Jenkins, etc.).
  • Apply containerization (Docker) and orchestration (Kubernetes) for scalable data processing.

Agile Collaboration

  • Work in Agile/Scrum teams, participate in sprint planning, and contribute to continuous improvement.

Requirements

  • Experience: 7-12 years in data engineering, data analysis, or related roles, including 1-2 years of hands-on work with Palantir Foundry (Pipeline Builder, Contour, Quiver) or equivalent strong experience with AWS/Azure/GCP data engineering and analytics services.
  • Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, Mathematics, or a related field.

Certifications (Preferred But Not Mandatory)

  • Palantir Foundry Data Engineer/Analyst Certification.
  • AWS/Azure/GCP Data Engineering or Analytics Certifications.
  • Relevant BI/Visualization tool certifications.

Palantir Platform

  • Data pipeline development, transformation, and cleaning (Pipeline Builder, Code Workspaces).
  • Ontology creation, management, and data lineage (Ontology Manager, Object Explorer).
  • Data validation, health checks, and monitoring in production pipelines.
  • Data security, RBAC, audit logging, and compliance within Palantir.
  • Dashboarding and visualization (Contour, Quiver).
  • LLM integration for data enrichment (AIP Logic).

Data Engineering

  • Proficiency in SQL and Python; experience with PySpark is highly desirable.
  • Experience with data ingestion, integration, aggregation, and transformation from multiple sources.
  • Geospatial data processing, time series analysis, and anomaly detection (a toy anomaly check is sketched after this list).
  • Parsing and processing structured, semi-structured, and unstructured data.
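
A toy version of the anomaly detection mentioned above, using a per-group z-score in PySpark; the three-sigma cutoff and column names are arbitrary illustrative choices:

```python
# Toy anomaly flag: per-sensor z-score with a 3-sigma cutoff. Groups
# with too few readings (null stddev) are left unflagged via coalesce.
from pyspark.sql import DataFrame, functions as F
from pyspark.sql.window import Window


def flag_anomalies(readings: DataFrame) -> DataFrame:
    w = Window.partitionBy("sensor_id")
    z = (F.col("value") - F.avg("value").over(w)) / F.stddev("value").over(w)
    return readings.withColumn(
        "is_anomaly", F.coalesce(F.abs(z) > 3, F.lit(False))
    )
```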

Data Analysis

  • Data profiling, cleaning, joining, and enrichment.
  • Exploratory and statistical analysis.
  • Dashboarding, reporting, and advanced visualization (Contour, Quiver, Tableau, Power BI).

Cloud Platforms

  • Hands-on experience with AWS, Azure, or GCP data engineering and analytics services.
  • Integration of cloud services with Palantir workflows.

General Skills

  • Strong analytical, problem-solving, and communication skills.
  • Experience working in Agile/Scrum environments.
  • Ability to mentor and guide junior engineers and analysts.

Preferred Skills

  • Experience with Palantir's Pipeline Builder, Code Workspaces, and advanced data transformation modules.
  • Exposure to LLMs for data translation and enrichment.
  • Familiarity with data governance, security, and compliance best practices.
  • Prior experience in industries such as finance, healthcare, manufacturing, or government.
  • Experience with open-source data engineering tools (dbt, Great Expectations, Delta Lake, Iceberg).
  • Knowledge of CI/CD, DevOps, and automation tools.

This job was posted by Veronica A from HashedIn by Deloitte.
