Databricks Engineer (PySpark Developer)

1 - 4 years

9 - 16 Lacs


Work Mode

Remote

Job Type

Full Time

Job Description

Job Title:

Databricks Engineer (PySpark Developer)

Employment Type:

Full-Time | Permanent | Remote (Work From Home)

Industry:

IT Services & Consulting | Software Development | Data Engineering | Cloud Services

Functional Area:

Data Engineering | Big Data | Cloud Platforms | Analytics

About Oblytech:

Oblytech is a fast-growing IT consulting and software services firm, specializing in delivering cutting-edge IT solutions to clients across the United States, Canada, and Australia. We are an official Salesforce Partner and ServiceNow consulting provider, with expertise spanning ServiceNow, Salesforce, cloud platforms (AWS, Azure, Google Cloud), custom application development, AI/ML integrations, and offshore IT staff augmentation.

Our clients rely on us to solve critical IT challenges around scalability, cost optimization, and digital transformation. As we expand into advanced data engineering and analytics services, we are looking to onboard skilled professionals who can architect and deliver solutions leveraging modern big data and cloud platforms.

About the Role:

You will work with global clients to design and implement scalable data engineering solutions that support business intelligence, machine learning, and advanced analytics use cases. This role requires a mix of technical proficiency, problem-solving ability, and communication skills to collaborate with cross-functional teams across geographies.

Key Responsibilities:

  • Design, develop, and optimize data pipelines and ETL workflows using Databricks (PySpark, Spark SQL, Delta Lake); a short sketch follows this list.
  • Work with structured, semi-structured, and unstructured data to build scalable big data solutions.
  • Integrate Databricks with cloud platforms (AWS S3, Azure Data Lake, GCP Storage) for data ingestion, transformation, and analytics.
  • Implement and optimize Delta Lake for data versioning, ACID transactions, and scalable storage.
  • Collaborate with business analysts, data scientists, and product teams to deliver data-driven insights.
  • Ensure performance tuning, monitoring, and troubleshooting of Spark jobs and pipelines.
  • Build and maintain CI/CD pipelines for Databricks deployments using DevOps tools (Azure DevOps, GitHub Actions, Jenkins).
  • Document workflows, maintain code repositories, and adhere to best practices in version control and data governance.
  • Participate in client discussions to understand requirements, propose solutions, and ensure smooth project delivery.
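
To make the first responsibility concrete, here is a minimal sketch of the kind of pipeline described: ingest raw files from cloud object storage with PySpark, apply a simple transformation, and write the result to a Delta Lake table. The bucket path, table name, and column names are hypothetical placeholders, not a client specification.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

    # Ingest semi-structured JSON landed in cloud object storage
    # (an S3 path is assumed here; ADLS and GCS paths work the same way).
    raw = spark.read.json("s3://example-bucket/landing/orders/")

    # Normalize types and derive a date column to partition by.
    orders = (
        raw.withColumn("order_ts", F.to_timestamp("order_ts"))
           .withColumn("order_date", F.to_date("order_ts"))
           .filter(F.col("amount") > 0)
    )

    # Write to Delta Lake: ACID transactions, versioning, scalable storage.
    (orders.write.format("delta")
           .mode("append")
           .partitionBy("order_date")
           .saveAsTable("analytics.orders"))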

Skills and Experience Required:

  • 2 to 5 years of professional experience in data engineering or big data development.
  • Strong hands-on expertise in Databricks (workspace, clusters, notebooks, jobs).
  • Proficiency in PySpark, Spark SQL, and performance tuning of Spark jobs (see the tuning sketch after this list).
  • Experience with Delta Lake for scalable storage and data consistency.
  • Familiarity with at least one major cloud platform (Azure/AWS/GCP), including data services (Azure Data Factory, AWS Glue, GCP Dataflow).
  • Good understanding of data warehousing, ETL concepts, and data modeling.
  • Experience with Git, CI/CD pipelines, and DevOps practices.
  • Strong English communication skills to work with international teams and clients.
  • Exposure to BI/analytics tools (Power BI, Tableau) or ML workflows (MLflow, Databricks ML) is a plus.
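
Since Spark performance tuning is called out above, here is a short illustrative sketch of routine tuning work, reusing the hypothetical analytics.orders table from the earlier pipeline example. The dimension table and partition count are assumptions for illustration, not prescriptions.

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

    # Inspect the physical plan before optimizing; look for wide shuffles
    # and unnecessary full scans.
    orders = spark.table("analytics.orders")
    orders.groupBy("customer_id").agg(F.sum("amount")).explain()

    # Broadcast a small dimension table (hypothetical) so Spark plans a
    # broadcast hash join instead of a shuffle join.
    customers = spark.table("analytics.customers")
    enriched = orders.join(F.broadcast(customers), "customer_id")

    # Coalesce before writing to avoid producing many small files.
    (enriched.coalesce(8)
             .write.format("delta")
             .mode("overwrite")
             .saveAsTable("analytics.orders_enriched"))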

What We Offer:

  • Competitive salary package with performance-based incentives.
  • Opportunity to work on global big data and cloud transformation projects.
  • Remote work flexibility from anywhere in India.
  • Growth path into Senior Data Engineer, Solution Architect, or Cloud Data Specialist roles.
  • Direct exposure to U.S., Canadian, and Australian clients.
  • Mentorship from senior architects and leadership in data engineering and cloud services.

Technologies & Platforms You Will Work With:

  • Databricks (PySpark, Delta Lake, Spark SQL)
  • Cloud Platforms: Azure, AWS, or Google Cloud
  • Data Services: ADF, AWS Glue, GCP Dataflow
  • DevOps Tools: Azure DevOps, Jenkins, GitHub Actions
  • BI/ML Tools: Power BI, Tableau, MLflow (see the sketch after this list)
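
MLflow appears in the tools list above; as a quick orientation, this is a minimal MLflow tracking sketch. The run, parameter, and metric names are purely illustrative.

    import mlflow

    # Record one training run: parameters in, metrics out.
    with mlflow.start_run(run_name="example-run"):
        mlflow.log_param("max_depth", 5)
        mlflow.log_metric("rmse", 0.42)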

Ideal Candidate Profile:

  • Proven experience in Databricks and PySpark-based development.
  • Strong problem-solving skills and the ability to design scalable solutions.
  • Familiarity with data lakehouse architecture and modern ETL pipelines.
  • Comfort working in a remote, international environment.
  • Keen interest in learning new tools and adapting to evolving data technologies.

Location:

Remote (India-based) | Work From Home

Working Hours:

Flexible, with partial overlap with U.S. and/or Australian time zones preferred.

Compensation:

Fixed Salary + Performance-Based Incentives
