Service Delivery Lead

5 - 8 years

11.0 - 15.0 Lacs P.A.

Chennai, Pune, Delhi, Mumbai, Bengaluru, Hyderabad, Kolkata

Posted: 3 months ago | Platform: Naukri


Skills Required

GitHub, Version control, Git, Coding, Debugging, Data processing, Data quality, Resource management, Monitoring, Python

Work Mode

Work from Office

Job Type

Full Time

Job Description

  • Python Programming: Lead the development of robust, scalable, and efficient Python code for data processing, transformation, and analysis within the Databricks environment. Ensure adherence to coding standards, quality, and performance best practices.
  • Azure Databricks Administration: Oversee Databricks platform configuration, resource management, cluster optimization, and monitoring to ensure high availability and performance. Implement best practices for managing Databricks workspaces, libraries, and notebooks.
  • CI/CD for Databricks Artifacts: Design and implement CI/CD pipelines for automating the deployment of Databricks artifacts such as notebooks, libraries, jobs, and Delta tables. Use tools such as Azure DevOps, GitHub Actions, or Jenkins to streamline the deployment process.
  • Delta Live Tables: Design and manage Delta Live Tables pipelines for real-time data processing, ensuring data quality, reliability, and optimal performance in the processing layers.
  • Auto Loader: Implement and optimize Auto Loader for scalable, fault-tolerant ingestion of data from cloud storage sources into Databricks.
  • Unity Catalog: Manage and configure Unity Catalog for centralized governance of data across workspaces, ensuring proper access controls, data lineage, and compliance.
  • Databricks Asset Bundles: Leverage Databricks Asset Bundles to manage and share reusable components, including notebooks, libraries, and ML models, across teams and projects.
  • Team Leadership: Mentor and guide junior team members, fostering a collaborative environment that encourages knowledge sharing, innovation, and personal growth. Provide technical leadership and support to ensure the success of the team's initiatives.
  • Collaboration and Communication: Work closely with stakeholders, including data engineers, data scientists, and business analysts, to understand requirements and deliver end-to-end solutions.
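The data-quality expectations mentioned for Delta Live Tables pipelines can be loosely illustrated outside Databricks as a plain-Python quality gate. This is a minimal sketch, not the actual DLT API; the helper name `apply_expectations` and the example rules are invented for illustration:

```python
def apply_expectations(rows, expectations):
    """Split rows into (passed, quarantined) by a set of named predicates.

    A stand-in for the idea behind DLT's expect-or-drop behaviour: rows
    failing any expectation are moved to a quarantine list instead of
    the output, so quality issues can be monitored without halting the
    pipeline.
    """
    passed, quarantined = [], []
    for row in rows:
        failures = [name for name, check in expectations.items()
                    if not check(row)]
        if failures:
            quarantined.append((row, failures))
        else:
            passed.append(row)
    return passed, quarantined


# Example: validate raw readings before loading them downstream.
expectations = {
    "id_present": lambda r: r.get("id") is not None,
    "value_in_range": lambda r: 0 <= r.get("value", -1) <= 100,
}
raw = [
    {"id": 1, "value": 42},
    {"id": None, "value": 10},
    {"id": 2, "value": 150},
]
good, bad = apply_expectations(raw, expectations)
# good keeps only the fully valid row; bad records which rule failed.
```

In real Delta Live Tables pipelines, the equivalent behaviour is declared with expectation decorators on table definitions rather than hand-rolled filtering.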
  • Communicate technical challenges and solutions clearly to both technical and non-technical stakeholders.

Your Profile

Experience in Python Programming:
  • Minimum of 5 years of experience in Python programming, with a focus on data engineering and data science workflows.
  • Expertise in designing and building scalable Python-based ETL pipelines, using libraries such as Pandas, NumPy, PySpark, and requests.
  • Proficiency in debugging, optimizing, and maintaining Python code to ensure high performance and reliability in a distributed environment.

Extensive Experience with Azure Databricks:
  • Minimum of 3 years of hands-on experience working with Azure Databricks in a production environment.
  • Strong knowledge of Databricks clusters, workspaces, and the Databricks runtime.
  • Ability to optimize and troubleshoot Spark-based jobs and notebooks, ensuring performance and cost-efficiency in cloud environments.

CI/CD for Databricks Artifacts:
  • Proven experience in setting up and managing Continuous Integration/Continuous Deployment (CI/CD) pipelines for Databricks artifacts (notebooks, libraries, and jobs).
  • Familiarity with version control systems (Git), and experience with tools such as Azure DevOps, GitHub Actions, or Jenkins for automated deployment.
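The kind of Python ETL cleaning step the profile describes can be sketched with the standard library alone; the column names and the title-case normalisation rule here are invented for illustration, and a production version would typically use Pandas or PySpark as listed above:

```python
import csv
import io


def transform(csv_text):
    """Toy ETL cleaning step: parse CSV text, normalise the 'city'
    column to title case, drop rows with a missing 'amount', and cast
    'amount' to float."""
    reader = csv.DictReader(io.StringIO(csv_text))
    out = []
    for row in reader:
        if not row.get("amount"):
            continue  # drop incomplete records
        out.append({
            "city": row["city"].strip().title(),
            "amount": float(row["amount"]),
        })
    return out


raw = "city,amount\n chennai ,120.5\nPUNE,\nmumbai,99\n"
clean = transform(raw)
# The PUNE row is dropped because its amount is empty.
```

At Databricks scale the same extract-validate-cast pattern would be expressed as DataFrame operations rather than row-by-row Python loops.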

ADM

Agriculture & Food Processing

Decatur

39,000 Employees

23 Jobs

Key People
  • Juan Luciano, Chairman and CEO
  • Raymond E. Knauss, CFO
