
Data Engineer (Palantir Foundry and PySpark)

Experience: 6 years

Salary: 0 Lacs

Posted: 6 days ago | Platform: LinkedIn


Work Mode: On-site

Job Type: Contractual

Job Description

The Data Engineer will build and maintain data pipelines and workflows that support ML, BI, analytics, and software products. This individual will work closely with data scientists, engineers, analysts, software developers, and SMEs within the business to deliver new and exciting products and services. The main objectives are to develop data pipelines and fully automated workflows. The primary platform used will be Palantir Foundry.


Responsibilities:

• Develop high-quality code for the core data stack, including a data integration hub, warehouse, and pipelines

• Build data flows for data acquisition, aggregation, and modeling, using both batch and streaming paradigms

• Empower data scientists and data analysts to be as self-sufficient as possible by building core systems and developing reusable library code

• Support and optimize data tools and associated cloud environments for consumption by downstream systems, data analysts, and data scientists

• Ensure code, configuration, and other technology artifacts are delivered on the agreed schedule, and escalate any potential delays in advance

• Collaborate with other developers as part of a Scrum team, ensuring collective team productivity

• Participate in peer reviews and QA processes to drive higher quality

• Ensure that 100% of the code is documented and maintained in the source code repository

• Strive for engineering excellence by simplifying, optimizing, and automating processes and workflows

• Ensure their workstation and all processes and procedures follow organization standards
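As a rough illustration of the batch side of these responsibilities, the sketch below shows a warehouse-style aggregation step using only Python's standard library (the table and column names are invented for illustration; the role's actual stack is Palantir Foundry and PySpark, not SQLite):

```python
import sqlite3

# Hypothetical batch pipeline step: aggregate raw trip records into a
# warehouse summary table. Schema and data are invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_trips (vehicle_id TEXT, miles REAL);
    INSERT INTO raw_trips VALUES ('V1', 10.0), ('V1', 5.5), ('V2', 7.0);
    CREATE TABLE trip_summary (vehicle_id TEXT PRIMARY KEY, total_miles REAL);
""")

# The "transform" stage: roll raw facts up into the summary table.
conn.execute("""
    INSERT INTO trip_summary
    SELECT vehicle_id, SUM(miles) FROM raw_trips GROUP BY vehicle_id
""")

summary = dict(conn.execute("SELECT vehicle_id, total_miles FROM trip_summary"))
# summary == {'V1': 15.5, 'V2': 7.0}
```

In Foundry, the same shape of step would typically be expressed as a PySpark transform (e.g. a `groupBy(...).agg(...)`) reading one dataset and writing another.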


Experience And Skills:

• Minimum of 6 years of professional experience as a data engineer

• Hands-on experience with Palantir Foundry

• Experience with relational and dimensional database modeling (relational, Kimball, or Data Vault)

• Proven experience across the data pipeline: data sourcing, transformations, data quality, etc.

• Bachelor’s or Master’s in Computer Science, Information Systems, or an engineering field

• Preferred: experience with event-driven architectures and pub/sub data-streaming technologies such as IBM MQ, Kafka, or Amazon Kinesis

• Strong Python and SQL skills, including stored procedures

• Strong interpersonal, communication, problem-solving, and critical-thinking skills; agile/Scrum experience

• Preferred: experience in the travel, transportation, or hospitality industry, especially fleet management and vehicle maintenance

• Preferred: experience designing application data models for mobile or web applications
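To make the "data quality" part of the pipeline concrete, here is a minimal, hypothetical sketch of a validation gate (field names and thresholds are invented; a production pipeline would enforce these checks inside the platform's own tooling):

```python
# Hypothetical data-quality gate for a pipeline stage: records failing the
# checks are quarantined instead of being loaded downstream.
def validate(record):
    """Return a list of data-quality errors for one record (empty if clean)."""
    errors = []
    if not record.get("vehicle_id"):
        errors.append("missing vehicle_id")
    miles = record.get("miles")
    if not isinstance(miles, (int, float)) or miles < 0:
        errors.append("invalid miles")
    return errors

records = [
    {"vehicle_id": "V1", "miles": 12.3},
    {"vehicle_id": "", "miles": -1.0},
]
clean = [r for r in records if not validate(r)]
quarantined = [r for r in records if validate(r)]
# One record passes, one is quarantined.
```

Splitting bad records out rather than failing the whole batch is a common design choice: the load keeps running, and the quarantined rows can be inspected and replayed later.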
