Databricks Engineer - ETL / Spark

Experience

3 - 5 years

Salary

8 - 14 Lacs

Posted: 2 months ago | Platform: Naukri

Work Mode

Work from Office

Job Type

Full Time

Job Description

A Databricks Engineer is primarily responsible for designing, implementing, and maintaining data engineering solutions using the Databricks platform. Here's a detailed overview of the job profile:

Responsibilities:

Platform Implementation:
- Deploy and configure Databricks clusters and environments based on project requirements.
- Set up integrations with data sources, data lakes, and other platforms within the organization's ecosystem.

Data Pipeline Development:
- Design and develop scalable data pipelines using Databricks, Spark, and related technologies.
- Implement ETL (Extract, Transform, Load) processes to ingest, process, and transform data from various sources (a minimal sketch of such a pipeline follows the skills list below).

Performance Optimization:
- Tune and optimize Spark jobs and Databricks clusters for performance, scalability, and cost-efficiency.
- Monitor and troubleshoot performance issues, bottlenecks, and resource utilization.

Data Modeling and Architecture:
- Design data models and schemas to support analytical and reporting requirements.
- Collaborate with data architects and analysts to ensure data structures meet business needs.

Data Integration and Transformation:
- Integrate data from different sources and formats into unified datasets suitable for analysis and reporting.
- Implement data transformations and aggregations to prepare data for downstream analytics.

Security and Governance:
- Implement security policies and access controls within Databricks to protect sensitive data.
- Ensure compliance with data governance standards and regulatory requirements.

Automation and Orchestration:
- Automate deployment, monitoring, and management tasks using Databricks APIs, the CLI, and infrastructure-as-code tools.
- Orchestrate data workflows and job scheduling to ensure timely execution and reliability.

Collaboration and Documentation:
- Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver solutions.
- Document technical designs, processes, and configurations for knowledge sharing and future reference.

Skills:

- Databricks Expertise: In-depth knowledge of Databricks platform capabilities, Spark internals, and best practices.
- Big Data Technologies: Proficiency in Apache Spark, Scala/Python programming, and data processing frameworks.
- Cloud Platforms: Experience with cloud platforms (e.g., AWS, Azure, GCP) and services (e.g., S3, Azure Data Lake, BigQuery).
- Database Management: Understanding of database systems and SQL for data manipulation and querying.
- Data Engineering: Strong skills in data modeling, ETL development, and data pipeline optimization.
- Scripting and Automation: Ability to write scripts and automate tasks using Python, shell scripting, or similar.
- Problem-Solving: Analytical mindset and troubleshooting skills to resolve complex technical issues.
- Communication: Effective communication skills to collaborate with cross-functional teams and stakeholders.
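To make the pipeline-development responsibility concrete, here is a minimal sketch of the kind of PySpark ETL job the role describes. The lake path, column names, and target table (raw events path, event_type, event_timestamp, analytics.daily_events) are hypothetical placeholders for illustration, not details from this posting.

# Minimal illustrative sketch only -- paths, columns, and table names below
# are hypothetical, not taken from this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("daily-events-etl").getOrCreate()

# Extract: read raw JSON files landed in the data lake (placeholder path).
raw = spark.read.json("/mnt/datalake/raw/events/")

# Transform: drop incomplete records, derive a date column, and aggregate.
daily = (
    raw.filter(F.col("event_type").isNotNull())
       .withColumn("event_date", F.to_date("event_timestamp"))
       .groupBy("event_date", "event_type")
       .agg(F.count("*").alias("event_count"))
)

# Load: write the aggregate as a managed table for reporting. Delta is the
# usual format on Databricks, though Parquet would work the same way here.
daily.write.mode("overwrite").format("delta").saveAsTable("analytics.daily_events")

In practice a script like this would run as a scheduled Databricks job (the automation and orchestration responsibility above), with cluster size and schedule defined in the job configuration rather than in the code itself.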

Ridgeant

IT Services and IT Consulting

Dover, Delaware

51-200 Employees

11 Jobs

Key People

  • John Doe, CEO
  • Jane Smith, CTO
