Posted: 2 days ago | Platform: LinkedIn

Work Mode: On-site
Job Type: Contractual

Job Description

Job Title: Data Engineer
Location: Hyderabad
Experience: 4 - 10 Years

We are looking for a highly skilled Data Engineer – Data Platforms with strong hands-on experience in PySpark, Spark, SQL, and Python to design, build, and optimize data pipelines for enterprise-scale platforms. The ideal candidate should have a deep understanding of complex data transformations, data modeling, and DevOps practices, with the ability to independently handle both technical implementation and business stakeholder communication.

Key Responsibilities

  • Develop, optimize, and maintain data pipelines and ETL processes using PySpark, Spark, and Python.
  • Translate complex business transformation logic into efficient, scalable PySpark/Spark scripts that load Enterprise Data Domain tables and Data Marts (a minimal illustrative sketch follows this list).
  • Design and implement data ingestion frameworks from multiple structured and unstructured data sources.
  • Work as an individual contributor, owning the full data lifecycle, from requirement analysis and development through deployment and support.
  • Collaborate with business users, data architects, and analysts to understand requirements and deliver data-driven solutions.
  • Ensure data quality, consistency, and governance across all data layers.
  • Implement CI/CD and DevOps best practices for data workflows, version control, and automation.
  • Optimize job performance and resource utilization in distributed data environments.
  • Troubleshoot and resolve issues in data pipelines and workflows proactively.
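
For orientation, here is a minimal sketch of the kind of PySpark transformation described above. It is a hedged example only: the source path, business rule, and mart table name are invented for illustration and do not come from this posting.

    # Minimal PySpark ETL sketch: read, transform, and load a data mart.
    # Table names, columns, and paths are illustrative assumptions.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("orders_mart_load").getOrCreate()

    # Ingest raw orders from a source layer (path is hypothetical).
    orders = spark.read.parquet("/data/raw/orders")

    # Business transformation: completed orders from the last 90 days,
    # aggregated to net revenue and order count per customer.
    mart = (
        orders
        .filter(F.col("status") == "COMPLETED")
        .filter(F.col("order_date") >= F.date_sub(F.current_date(), 90))
        .groupBy("customer_id")
        .agg(
            F.sum(F.col("amount") - F.col("discount")).alias("net_revenue"),
            F.count("order_id").alias("order_count"),
        )
    )

    # Load the result into an enterprise data mart table.
    mart.write.mode("overwrite").saveAsTable("marts.customer_revenue_90d")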

Technical Skills

Core Skills:

  • PySpark / Spark (strong hands-on required)
  • SQL – Advanced query writing, performance tuning, and optimization (see the join-tuning sketch after this list)
  • Python – Data processing, scripting, and automation
  • Big Data Ecosystem: Hadoop, Hive, or similar platforms
  • Cloud / On-Prem Experience: Any (Azure / AWS / GCP / On-premise acceptable)
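
As one concrete example of the SQL/Spark performance tuning this role calls for, the sketch below broadcasts a small dimension table so a join against a large fact table avoids a shuffle. The table names are assumptions, not tables from this employer.

    # Illustrative join tuning: broadcast the small dimension table so the
    # large fact table is not shuffled. All names are hypothetical.
    from pyspark.sql import SparkSession
    from pyspark.sql.functions import broadcast

    spark = SparkSession.builder.appName("join_tuning_demo").getOrCreate()

    facts = spark.table("warehouse.sales_fact")  # large fact table
    dims = spark.table("warehouse.store_dim")    # small lookup table

    # broadcast() hints Spark to ship store_dim to every executor,
    # replacing a shuffle join with a broadcast hash join.
    joined = facts.join(broadcast(dims), on="store_id", how="inner")

    joined.explain()  # inspect the physical plan to confirm the strategy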

DevOps & Deployment

  • Good understanding of DevOps concepts such as CI/CD, version control, and automation (a minimal CI test sketch follows this list)
  • Experience with tools such as Git, Jenkins, Azure DevOps, or Airflow
  • Familiarity with containerization (Docker/Kubernetes) preferred
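
One common way CI/CD applies to data workflows is unit-testing transformation logic before deployment. Below is a minimal pytest-style sketch using a local SparkSession; the function and schema are invented for illustration.

    # Minimal CI sketch: a unit test that exercises transformation logic
    # on a tiny in-memory DataFrame. Function and columns are hypothetical.
    from pyspark.sql import SparkSession, functions as F

    def net_revenue(orders_df):
        """Transformation under test: net revenue per customer."""
        return (
            orders_df.groupBy("customer_id")
            .agg(F.sum(F.col("amount") - F.col("discount")).alias("net_revenue"))
        )

    def test_net_revenue():
        spark = (
            SparkSession.builder.master("local[1]").appName("ci-test").getOrCreate()
        )
        rows = [("c1", 100.0, 10.0), ("c1", 50.0, 0.0), ("c2", 20.0, 5.0)]
        df = spark.createDataFrame(rows, ["customer_id", "amount", "discount"])
        result = {
            r["customer_id"]: r["net_revenue"] for r in net_revenue(df).collect()
        }
        assert result == {"c1": 140.0, "c2": 15.0}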

Additional Preferred Skills

  • Knowledge of data modeling and data warehousing concepts
  • Exposure to Delta Lake / Lakehouse architectures
  • Familiarity with data orchestration tools such as Airflow / Data Factory / NiFi (an illustrative Airflow sketch follows)
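
To show what the orchestration work might look like, here is a minimal Airflow DAG sketch that submits a daily Spark job. The DAG id, schedule, and command are assumptions, and the schedule argument shown is Airflow 2.4+ syntax.

    # Minimal Airflow DAG sketch: run a PySpark job once a day.
    # DAG id, schedule, and spark-submit command are illustrative.
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.bash import BashOperator

    with DAG(
        dag_id="daily_orders_mart",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",  # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        run_etl = BashOperator(
            task_id="run_pyspark_etl",
            bash_command="spark-submit /opt/jobs/orders_mart_load.py",
        )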

Qualifications

  • Bachelor’s or Master’s degree in Computer Science, Information Technology, or related discipline.
  • Certifications in Big Data, Cloud, or DevOps are an added advantage.

Soft Skills

  • Strong analytical and problem-solving abilities.
  • Excellent communication and stakeholder management skills.
  • Self-driven and capable of working independently with minimal supervision.
  • Strong ownership mindset and attention to detail.
