Data Engineer - Databricks

3 - 4 years

5.0 - 9.0 Lacs P.A.

Mumbai

Posted: 2 months ago | Platform: Naukri


Skills Required

Computer Science, Automation, Git, Data Security, Analytical, Data Quality, Apache, Information Technology, SQL, Python

Work Mode

Work from Office

Job Type

Full Time

Job Description

We are looking for a talented and driven Databricks Data Engineer with 3-4 years of experience to join our growing data engineering team. The ideal candidate will have hands-on experience with Databricks, Apache Spark, and cloud technologies, and will be passionate about designing and implementing data pipelines and optimizing workflows. This role offers the opportunity to work on cutting-edge data solutions while collaborating with data scientists, analysts, and business stakeholders.

Key Responsibilities:

• Develop and maintain data pipelines using Databricks and Apache Spark to manage large datasets.
• Implement efficient ETL workflows and integrate data from multiple sources into cloud-based environments (AWS, Azure, or GCP).
• Optimize data processing and workflows within Databricks for performance, scalability, and reliability.
• Collaborate with data teams to ensure the availability and accessibility of high-quality data.
• Work with stakeholders to translate business requirements into technical solutions and provide actionable data insights.
• Ensure data quality, governance, and security across all data platforms.
• Monitor and troubleshoot data pipeline performance, providing continuous improvements and automation.
• Document solutions and processes to maintain clarity, consistency, and knowledge sharing within the team.

Qualifications:

• 3-4 years of experience in data engineering or a related role.
• Hands-on experience with Databricks and Apache Spark for building and optimizing data pipelines.
• Proficiency in Python, Scala, or SQL for data processing and automation.
• Experience working with cloud platforms such as AWS, Azure, or GCP.
• Knowledge of ETL processes, data integration, and data warehousing.
• Experience with SQL and NoSQL databases for querying and managing large datasets.
• Familiarity with CI/CD practices and tools such as Git, Jenkins, or Docker.
• Strong analytical and problem-solving skills with the ability to troubleshoot data pipeline issues.

Preferred Qualifications:

• Experience with Databricks Delta Lake or MLflow.
• Familiarity with streaming data and real-time processing using Apache Kafka or Spark Streaming.
• Experience with data visualization tools (e.g., Power BI, Tableau).
• Understanding of data security and governance practices.
• Bachelor's degree in Computer Science, Information Technology, or a related field.
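For illustration only, the sketch below shows the flavor of pipeline work this role involves: a minimal PySpark job that reads raw files, applies basic data-quality transformations, and writes a Delta Lake table. The storage path, table name, and columns are hypothetical placeholders, not part of the role's actual systems.

# Illustrative sketch: a minimal extract-transform-load job on Databricks.
# All paths, table names, and columns below are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw CSV files from cloud storage (placeholder path).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: basic cleansing and typing for data-quality checks.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
)

# Load: write to a Delta Lake table, partitioned for downstream analytics.
(
    clean.write.format("delta")
         .mode("overwrite")
         .partitionBy("order_date")
         .saveAsTable("analytics.orders_clean")
)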

Exponentia Team

Business Consulting and Services

Mumbai, Maharashtra

201-500 Employees

45 Jobs

Key People

• Raghavendra Rao, CEO
