Databricks Administrator

Experience: 3 years
Salary: 0 Lacs
Posted: 4 days ago | Platform: LinkedIn
Work Mode: On-site
Job Type: Full Time

Job Description

Cloud Databricks Administrator


Key Responsibilities:

  • Monitor hourly and daily Databricks jobs, investigate failures, and implement fixes to minimize downtime.
  • Identify, log, and track defects/bugs through ticketing systems, ensuring timely resolution.
  • Manage Databricks access via Azure AD groups with Admin, Edit, and Read permissions.
  • Provide production support for Databricks environments, including cluster operations, job failures, and notebook troubleshooting.
  • Collaborate with data engineers and platform teams to resolve platform-related incidents and performance bottlenecks.
  • Proactively monitor system health, resource utilization, and performance metrics.
  • Implement and enforce archival/retention policies for Databricks storage to optimize costs and performance.
  • Support CI/CD pipelines (Jenkins, Azure Automation) and automate repetitive operational tasks.
  • Maintain technical documentation, SOPs, and runbooks for Databricks operations.
  • Ensure security compliance with RBAC, MFA, and encryption best practices.


Preferred Qualifications:

  • 3+ years of hands-on experience with Databricks and Apache Spark in production environments.
  • Strong knowledge of Azure (AWS/GCP acceptable) and cloud-native services.
  • Experience in SRE or production support environments with SLAs and ticketing systems (ServiceNow, Jira).
  • Proficiency in Python or Scala for data processing and automation.
  • Familiarity with Power BI/Tableau for building monitoring and cost dashboards.
  • Knowledge of CI/CD tools, version control (Git), and scripting languages (PowerShell, Bash).
  • Understanding of cloud cost optimization and usage tracking.
  • Excellent problem-solving skills, communication, and cross-team collaboration abilities.


Nice to Have:

  • Experience with Databricks Lakehouse/Medallion architecture.
  • Background in monitoring, logging, and incident response for data platforms.
  • Exposure to Kubernetes, Docker, and Terraform.


Required Skills:

  • Bachelor’s degree in Computer Science, IT, or equivalent professional experience.
  • 5+ years in data engineering, cloud operations, or database administration.
  • Proven ability to troubleshoot, communicate effectively, and collaborate across teams in a fast-paced environment.
