Lead AI Platform Engineer

Experience: 8 - 13 years

Salary: 15 - 19 Lacs

Posted: Just now | Platform: Naukri

Work Mode: Work from Office

Job Type: Full Time

Job Description

Ecolab is looking for a Data Engineer to be part of a dynamic team that's at the forefront of technological innovation. We're leveraging cutting-edge AI to create novel solutions that optimize operations for our clients, particularly within the restaurant industry. Our work is transforming how restaurants operate, making them more efficient and sustainable.

As a key player in our new division, you'll have the unique opportunity to help shape its culture and direction. Your contributions will directly impact the success of our innovative projects and help define the future of our product offerings. Additionally, you will experience the best of both worlds with this team at Ecolab: the agility and creativity of a startup paired with the stability and resources of a global leader. Our collaborative environment fosters innovation while providing the support and security you need to thrive.

Responsibilities

  • Design, develop, and maintain scalable and robust data pipelines on Databricks (Spark SQL, PySpark, Delta Lake); a minimal illustrative sketch follows this list.
  • Collaborate with data scientists and analysts to understand data requirements and deliver solutions.
  • Optimize and troubleshoot existing data pipelines for performance and reliability.
  • Ensure data quality and integrity across various data sources.
  • Implement data security and compliance best practices.
  • Monitor data pipeline performance, implement data quality checks, and conduct necessary maintenance and updates.
  • Document data pipeline processes and technical specifications.
  • Implement robust pipeline orchestration using tools like Databricks Workflows, dbt, or similar.
  • Generate and maintain data quality reports and dashboards.
  • Implement Infrastructure as Code (IaC) principles for managing Databricks infrastructure.
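
For illustration only, here is a minimal sketch of the kind of work these responsibilities describe: a small PySpark and Delta Lake batch pipeline with a simple data quality gate. It assumes a Databricks runtime (or any environment with PySpark and Delta Lake available); the table names (raw_orders, orders_silver), the column names, and the 5% threshold are hypothetical placeholders, not part of the role description.

    # Minimal illustrative sketch (not production code): a small PySpark +
    # Delta Lake batch pipeline with a basic data quality gate.
    # Assumes a Databricks runtime or any PySpark environment with Delta Lake;
    # table and column names below are hypothetical placeholders.
    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.getOrCreate()

    # Read a raw table registered in the metastore (hypothetical name).
    raw = spark.read.table("raw_orders")

    # Basic cleansing: drop rows missing the key, parse timestamps, dedupe.
    cleaned = (
        raw
        .filter(F.col("order_id").isNotNull())
        .withColumn("order_ts", F.to_timestamp("order_ts"))
        .dropDuplicates(["order_id"])
    )

    # Simple data quality gate: fail the run if cleansing dropped too many rows.
    raw_count = raw.count()
    cleaned_count = cleaned.count()
    if raw_count > 0 and cleaned_count < 0.95 * raw_count:
        raise ValueError(
            f"Data quality check failed: kept {cleaned_count} of {raw_count} rows"
        )

    # Persist the curated layer as a Delta table; an incremental pipeline would
    # typically use MERGE INTO or Structured Streaming instead of overwrite.
    cleaned.write.format("delta").mode("overwrite").saveAsTable("orders_silver")

In practice, a script or notebook like this would be scheduled through Databricks Workflows and its surrounding resources managed with Asset Bundles, in line with the orchestration and Infrastructure as Code responsibilities above.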

Minimum Qualifications

  • Bachelor's degree and 8 years of work experience; or no degree and 12 years of combined education and equivalent work experience.
  • 3 years of experience (work or educational) with a data engineering focus.
  • Proven experience in Databricks (Delta Lake, Workflows, Asset Bundles).
  • Proven experience in distributed data processing technologies (Spark SQL, PySpark).
  • Strong knowledge in designing and developing ETL pipelines.
  • Experience with data quality monitoring and reporting.
  • Experience working in a collaborative environment with data scientists and software engineers.

Preferred Qualifications

  • Master's degree (MS) in Computer Science or a related engineering field.
  • Proficiency in Databricks (Delta Lake, Workflows, Asset Bundles).
  • Proficiency in distributed data processing technologies (Spark SQL, PySpark).
  • Experience with pipeline orchestration tools (Databricks Workflows, dbt, etc.).
  • Experience with data visualization tools (e.g., Tableau, Power BI).
  • Working experience with machine learning platforms and tools.
  • Experience with real-time data streaming technologies (e.g., Kafka, Kinesis).
