Data Engineer – Python / PySpark

3–10 years

0 Lacs

Posted: 2 days ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Contractual

Job Description

Job Title: Data Engineer – Python / PySpark


Number of Positions:


Experience: 3–10 years


Location:


Work Model: On-site


Job Type: Contractual


Client Domain:


About the Role

We are seeking a highly skilled Data Engineer with hands-on expertise in Python, PySpark, and big data technologies. The ideal candidate will have experience building large-scale data processing, transformation, and analytics solutions, ideally within the utilities or a related domain. You will collaborate closely with business stakeholders, solution architects, and data analysts to design and deliver efficient, scalable, and high-quality data pipelines.


Key Responsibilities

  • Design, build, and maintain scalable ETL pipelines using Python and PySpark.
  • Ensure data quality, reliability, and governance across systems and pipelines.
  • Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
  • Perform performance tuning and troubleshooting of big data applications.
  • Work with large datasets from multiple sources (structured/unstructured) and prepare them for analytics and reporting.
  • Follow best practices in coding, testing, and deployment for enterprise-grade data applications.


Required Skills & Qualifications

  • 3–10 years of professional experience in data engineering, preferably in utility, energy, or similar industries.
  • Strong proficiency in Python programming.
  • Hands-on experience with PySpark for big data processing.
  • Working knowledge of Palantir Foundry (pipelines, ontology, data modeling, transforms, code repositories, and data integration and transformation).
  • Experience with SQL and handling large datasets.
  • Familiarity with data governance, data security, and compliance requirements in enterprise environments.
  • Strong problem-solving and analytical skills.
  • Excellent communication and collaboration skills.


Nice-to-Have Skills

  • Experience with AWS, Azure, or GCP cloud data services.
  • Knowledge of utilities domain data models and workflows.
  • Exposure to DevOps / CI-CD pipelines for data engineering solutions.
  • Knowledge of visualization tools like Tableau, Power BI, or Palantir dashboards.


Why Join Us

  • Work on cutting-edge Palantir Foundry-based solutions in the utilities sector.
  • Be part of a dynamic, collaborative, and innovation-driven team.
  • Opportunity to grow your technical expertise across modern data platforms.

