
SQL PySpark

3 - 8 years

3 - 6 Lacs

Posted: 4 hours ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

We are looking for a skilled SQL PySpark professional with 3 to 8 years of experience to join our team. The ideal candidate will have expertise in developing data pipelines and transforming data using Databricks, Synapse notebooks, and Azure Data Factory.
Roles and Responsibilities
  • Collaborate with technical architects and cloud solutions teams to design data pipelines, marts, and reporting solutions.
  • Code, test, and optimize Databricks jobs for efficient data processing and report generation.
  • Set up scalable data pipelines integrating with various data sources and cloud platforms using Databricks.
  • Ensure best practices are followed in terms of code quality, data security, and scalability.
  • Participate in code and design reviews to maintain high development standards.
  • Optimize data querying layers to enhance performance and support analytical requirements.
  • Collaborate with data scientists and analysts to support machine learning workflows and analytic needs.
  • Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Job Requirements

  • Proficiency in SQL and in PySpark or Scala for data processing tasks.
  • Hands-on experience with Azure Databricks, Delta Lake, Delta Live Tables, Auto Loader, and Databricks SQL.
  • Expertise with Azure Data Lake Storage (ADLS) Gen2 for optimized data storage and retrieval.
  • Strong knowledge of data modeling, ETL processes, and data warehousing concepts.
  • Experience with Power BI for dashboarding and reporting is a plus.
  • Familiarity with Azure Synapse for analytics and integration tasks is desirable.
  • Knowledge of Spark Streaming for real-time data stream processing is an advantage.
  • MLOps knowledge for integrating machine learning into production workflows is beneficial.
  • Familiarity with Azure Resource Manager (ARM) templates for infrastructure as code (IaC) practices is preferred.
  • 4-5 years of demonstrated experience developing data ingestion and transformation pipelines using Databricks, Synapse notebooks, and Azure Data Factory.
  • Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
  • Experience using Auto Loader and Delta Live Tables for seamless, efficient data ingestion and transformation.
  • Proficiency in building and optimizing query layers using Databricks SQL.
  • Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
  • Prior experience in developing, optimizing, and deploying Power BI reports.
  • Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.
    Apptad

    IT Services and IT Consulting

    Alpharetta, Georgia

    1001-5000 Employees


