
29 Unity Catalog Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

27 - 35 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office


Band 4C & 4D | Skill set: Unity Catalog + Python, Spark, Kafka

Inviting applications for the role of Lead Consultant, Databricks Developer, with experience in Unity Catalog, Python, Spark, and Kafka for ETL. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems while meeting both functional and non-functional requirements.

Responsibilities:
• Develop and maintain scalable ETL pipelines using Databricks, with a focus on Unity Catalog for data asset management.
• Implement data processing frameworks using Apache Spark for large-scale data transformation and aggregation.
• Integrate real-time data streams using Apache Kafka and Databricks to enable near real-time data processing.
• Develop data workflows and orchestrate data pipelines using Databricks Workflows or other orchestration tools.
• Design and enforce data governance policies, access controls, and security protocols within Unity Catalog.
• Monitor data pipeline performance, troubleshoot issues, and implement optimizations for scalability and efficiency.
• Write efficient Python scripts for data extraction, transformation, and loading.
• Collaborate with data scientists and analysts to deliver data solutions that meet business requirements.
• Maintain data documentation, including data dictionaries, data lineage, and data governance frameworks.

Qualifications we seek in you!

Minimum qualifications:
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• Experience in data engineering with a focus on Databricks development.
• Proven expertise in Databricks, Unity Catalog, and data lake management.
• Strong programming skills in Python for data processing and automation.
• Experience with Apache Spark for distributed data processing and optimization.
• Hands-on experience with Apache Kafka for data streaming and event processing.
• Proficiency in SQL for data querying and transformation.
• Strong understanding of data governance, data security, and data quality frameworks.
• Excellent communication skills and the ability to work in a cross-functional environment.
• Must have experience in the Data Engineering domain.
• Must have implemented at least 2 projects end-to-end in Databricks.
• Must have hands-on experience with Databricks components including Delta Lake, dbConnect, Databricks API 2.0, and Databricks Workflows orchestration.
• Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of how to create complex data pipelines.
• Must have good knowledge of data structures and algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance-optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on a cloud platform (Azure, AWS, or GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit and integration test cases.
• Must have strong communication skills and have worked on teams of 5 or more.
• Must have a great attitude towards learning new skills and upskilling existing ones.

Preferred qualifications:
• Good to have Unity Catalog and basic governance knowledge.
• Good to have an understanding of Databricks SQL endpoints.
• Good to have CI/CD experience for building pipelines for Databricks jobs.
• Good to have worked on a migration project to build a unified data platform.
• Good to have knowledge of dbt.
• Good to have knowledge of Docker and Kubernetes.
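For a sense of the Kafka-plus-Databricks stack this role centres on, here is a minimal PySpark structured-streaming sketch that lands a Kafka topic in a Unity Catalog Delta table. The broker address, topic, event schema, checkpoint path, and three-level table name are all hypothetical placeholders, not anything specified by the posting:

```python
# Hypothetical sketch: ingest a Kafka topic into a Unity Catalog Delta table.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("kafka-to-delta").getOrCreate()

# Placeholder event shape; a real pipeline derives this from the producer contract.
event_schema = StructType([
    StructField("order_id", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "orders")                     # placeholder topic
       .load())

# Kafka delivers bytes; cast and parse the JSON payload into columns.
parsed = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
             .select("e.*"))

query = (parsed.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/orders")  # placeholder path
         .toTable("main.sales.orders"))                            # catalog.schema.table
query.awaitTermination()
```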

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 22 Lacs

Bengaluru

Hybrid


Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities:
• Design and implement ETL/ELT pipelines using Databricks and PySpark.
• Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
• Develop high-performance SQL queries and optimize Spark jobs.
• Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
• Ensure data quality and compliance across all stages of the data lifecycle.
• Implement best practices for data security and lineage within the Databricks ecosystem.
• Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills:
• Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
• Strong hands-on skills with PySpark and Spark SQL.
• Solid experience writing and optimizing complex SQL queries.
• Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
• Experience with cloud platforms like Azure or AWS.
• Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications:
• Databricks Certified Data Engineer Associate or Professional.
• Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
• Exposure to streaming data and real-time processing.
• Knowledge of DevOps practices for data engineering.
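A minimal sketch of the Unity Catalog permission management this role describes, using standard Unity Catalog GRANT statements issued through spark.sql. The catalog (main), schema (sales), table, and group name are placeholders invented for illustration:

```python
# Hypothetical governance sketch: Unity Catalog access control with SQL GRANTs.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Allow one group to discover and read a single schema, nothing more.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.sales TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE main.sales.orders TO `data_analysts`")

# Audit what the table currently exposes and to whom.
spark.sql("SHOW GRANTS ON TABLE main.sales.orders").show(truncate=False)
```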

Posted 1 week ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Greetings from LTIMindtree!

Job Description
Notice Period: 0 to 30 days only
Experience: 5 to 12 years
Interview Mode: 2 rounds (one round is face-to-face)
Hybrid (2-3 days WFO)

Brief Description of Role

Job Summary: We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.

Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers).
• Architect and implement data governance frameworks using Unity Catalog and related tools.
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
• Optimize queries and data structures for performance and cost-efficiency.
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
• Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
• Maintain comprehensive documentation of architecture, processes, and workflows.

Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Data Architect or Senior Data Engineer.
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
• Hands-on experience with data governance, security frameworks, and catalog management.
• Proficiency in cloud platforms (preferably Azure).
• Experience with CI/CD tools and version control systems like GitHub.
• Strong communication and collaboration skills.
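To make the Medallion Architecture concrete, here is a minimal PySpark sketch of the Bronze → Silver → Gold flow the posting names. The source path, column names, and three-level table names are hypothetical placeholders:

```python
# Hypothetical Medallion sketch: Bronze (raw) -> Silver (cleansed) -> Gold (aggregated).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, sum as _sum

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw files with no transformation, preserving the source as-is.
bronze = spark.read.json("/mnt/raw/orders/")  # placeholder landing path
bronze.write.mode("append").saveAsTable("main.bronze.orders")

# Silver: deduplicate and enforce basic quality rules on placeholder columns.
silver = (spark.table("main.bronze.orders")
          .dropDuplicates(["order_id"])
          .filter(col("amount") > 0))
silver.write.mode("overwrite").saveAsTable("main.silver.orders")

# Gold: business-level aggregate ready for reporting.
gold = silver.groupBy("customer_id").agg(_sum("amount").alias("total_spend"))
gold.write.mode("overwrite").saveAsTable("main.gold.customer_spend")
```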

Posted 2 weeks ago

Apply

6.0 - 11.0 years

18 - 33 Lacs

Bengaluru

Remote


Role & responsibilities

Mandatory skills: ADB AND UNITY CATALOG

Job Summary: We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.

Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.

Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.
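On the RBAC/ABAC side of this role, a common Unity Catalog pattern is a dynamic view that filters rows by group membership. This is a minimal sketch: the view, table, column, group names, and the region rule are all invented placeholders, and is_account_group_member is the Databricks group-membership function:

```python
# Hypothetical row-level security sketch: a Unity Catalog dynamic view.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
  CREATE OR REPLACE VIEW main.sales.orders_secured AS
  SELECT order_id, amount, region
  FROM main.sales.orders
  WHERE CASE
          WHEN is_account_group_member('sales_admins') THEN TRUE  -- admins see all rows
          ELSE region = 'IN'                                      -- others see one region
        END
""")

# Consumers query the view, never the base table.
spark.sql("GRANT SELECT ON TABLE main.sales.orders_secured TO `regional_analysts`")
```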

Posted 2 weeks ago

Apply

5.0 - 8.0 years

6 - 24 Lacs

Hyderabad

Work from Office


Notice period: 30 to 45 days.
• Design, develop & maintain data pipelines using PySpark, Databricks, Unity Catalog & cloud.
• Collaborate with cross-functional teams on ETL processes & report development.
Share resume: garima.arora@anetcorp.com

Posted 2 weeks ago

Apply

12.0 - 20.0 years

22 - 37 Lacs

Bengaluru

Hybrid


12+ yrs of experience in Data Architecture. Strong in Azure Data Services & Databricks, including Delta Lake & Unity Catalog. Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, data lakes, NoSQL, OLTP. NP: Immediate. Contact: sachin@assertivebs.com

Posted 2 weeks ago

Apply

8.0 - 13.0 years

10 - 20 Lacs

Hyderabad, Pune

Work from Office


Job Title: Databricks Administrator
Client: Wipro
Employer: Advent Global Solutions
Location: Hyderabad / Pune
Work Mode: Hybrid
Experience: 8+ years (8 years relevant in Databricks administration)
CTC: 22.8 LPA
Notice Period: Immediate joiners to 15 days
Shift: General shift
Education Preferred: B.Tech / M.Tech / MCA / B.Sc (Computer Science)

Keywords:
• Databricks administration
• Unity Catalog
• Cluster creation, tuning & administration
• RBAC in Unity Catalog
• Cloud administration, preferably in GCP; AWS/Azure knowledge is also acceptable
• Databricks at 80% and cloud at 20%

Mandatory Skills: Databricks Admin on GCP/AWS

Job Description:
• Responsibilities will include designing, implementing, and maintaining the Databricks platform, and providing operational support. Operational support responsibilities include platform set-up and configuration, workspace administration, resource monitoring, providing technical support to data engineering, data science/ML, and application/integration teams, performing restores/recoveries, troubleshooting service issues, determining the root causes of issues, and resolving issues.
• The position will also involve the management of security and changes.
• The position will work closely with the Team Lead, other Databricks Administrators, System Administrators, and Data Engineers/Scientists/Architects/Modelers/Analysts.

Responsibilities:
• Responsible for the administration, configuration, and optimization of the Databricks platform to enable data analytics, machine learning, and data engineering activities within the organization.
• Collaborate with the data engineering team to ingest, transform, and orchestrate data.
• Manage privileges over the entire Databricks account, as well as at the workspace, Unity Catalog, and SQL warehouse levels.
• Create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions.
• Install, configure, and maintain Databricks clusters and workspaces.
• Maintain platform currency with security, compliance, and patching best practices.
• Monitor and manage cluster performance, resource utilization, and platform costs, and troubleshoot issues to ensure optimal performance.
• Implement and manage access controls and security policies to protect sensitive data.
• Manage schema data with Unity Catalog: create and configure catalogs, external storage, and access permissions.
• Administer interfaces with Google Cloud Platform.

Required Skills:
• 3+ years of production support of the Databricks platform

Preferred:
• 2+ years of experience in AWS/Azure/GCP PaaS administration
• 2+ years of experience in automation frameworks such as Terraform
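For the resource-monitoring side of this admin role, a minimal sketch against the Databricks REST API 2.0 Clusters endpoint, listing clusters and their states. The workspace host and token are assumed to come from environment variables; nothing here is specific to the client's setup:

```python
# Hypothetical admin sketch: list clusters via the Databricks REST API 2.0.
import os
import requests

host = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace-host>
token = os.environ["DATABRICKS_TOKEN"]  # personal access token

resp = requests.get(
    f"{host}/api/2.0/clusters/list",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()

# Print a quick state report for each cluster in the workspace.
for cluster in resp.json().get("clusters", []):
    print(cluster["cluster_id"], cluster["cluster_name"], cluster["state"])
```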

Posted 3 weeks ago

Apply

4 - 9 years

20 - 30 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office


Band 4C & 4D | Skill set: Unity Catalog + Python, Spark, Kafka

Inviting applications for the role of Lead Consultant, Databricks Developer, with experience in Unity Catalog, Python, Spark, and Kafka for ETL. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems while meeting both functional and non-functional requirements.

Responsibilities:
• Develop and maintain scalable ETL pipelines using Databricks, with a focus on Unity Catalog for data asset management.
• Implement data processing frameworks using Apache Spark for large-scale data transformation and aggregation.
• Integrate real-time data streams using Apache Kafka and Databricks to enable near real-time data processing.
• Develop data workflows and orchestrate data pipelines using Databricks Workflows or other orchestration tools.
• Design and enforce data governance policies, access controls, and security protocols within Unity Catalog.
• Monitor data pipeline performance, troubleshoot issues, and implement optimizations for scalability and efficiency.
• Write efficient Python scripts for data extraction, transformation, and loading.
• Collaborate with data scientists and analysts to deliver data solutions that meet business requirements.
• Maintain data documentation, including data dictionaries, data lineage, and data governance frameworks.

Qualifications we seek in you!

Minimum qualifications:
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• Experience in data engineering with a focus on Databricks development.
• Proven expertise in Databricks, Unity Catalog, and data lake management.
• Strong programming skills in Python for data processing and automation.
• Experience with Apache Spark for distributed data processing and optimization.
• Hands-on experience with Apache Kafka for data streaming and event processing.
• Proficiency in SQL for data querying and transformation.
• Strong understanding of data governance, data security, and data quality frameworks.
• Excellent communication skills and the ability to work in a cross-functional environment.
• Must have experience in the Data Engineering domain.
• Must have implemented at least 2 projects end-to-end in Databricks.
• Must have hands-on experience with Databricks components including Delta Lake, dbConnect, Databricks API 2.0, and Databricks Workflows orchestration.
• Must be well versed in the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of how to create complex data pipelines.
• Must have good knowledge of data structures and algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance-optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on a cloud platform (Azure, AWS, or GCP) and its most common services, such as ADLS/S3, ADF/Lambda, Cosmos DB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit and integration test cases.
• Must have strong communication skills and have worked on teams of 5 or more.
• Must have a great attitude towards learning new skills and upskilling existing ones.

Preferred qualifications:
• Good to have Unity Catalog and basic governance knowledge.
• Good to have an understanding of Databricks SQL endpoints.
• Good to have CI/CD experience for building pipelines for Databricks jobs.
• Good to have worked on a migration project to build a unified data platform.
• Good to have knowledge of dbt.
• Good to have knowledge of Docker and Kubernetes.
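This posting stresses unit and integration test cases, so here is a minimal pytest-style sketch for a PySpark transformation. The function, column names, and sample rows are invented for illustration; it runs against a local SparkSession:

```python
# Hypothetical unit-test sketch for a PySpark transformation.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql.functions import col


def remove_invalid_orders(df):
    """Drop rows with non-positive amounts or null order ids (placeholder rule)."""
    return df.filter((col("amount") > 0) & col("order_id").isNotNull())


@pytest.fixture(scope="session")
def spark():
    # Local single-threaded session keeps the test fast and self-contained.
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_remove_invalid_orders(spark):
    rows = [("o1", 10.0), ("o2", -5.0), (None, 3.0)]
    df = spark.createDataFrame(rows, ["order_id", "amount"])
    result = remove_invalid_orders(df).collect()
    assert [r.order_id for r in result] == ["o1"]
```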

Posted 1 month ago

Apply

5 - 10 years

16 - 27 Lacs

Pune, Chennai, Bengaluru

Hybrid


If interested, please share the following details at PriyaM4@hexaware.com: Total Experience, CTC, Expected CTC, Notice Period, Location.

MUST-have skill: Unity Catalog

We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.

Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.

Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.
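For the data quality frameworks this JD asks for, a minimal fail-fast sketch of batch checks before promoting a table. The table name, key column, and thresholds are placeholders, not anything the posting specifies:

```python
# Hypothetical data-quality sketch: gate a batch on simple integrity checks.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()
df = spark.table("main.silver.orders")  # placeholder table

total = df.count()
null_ids = df.filter(col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail fast so a bad batch never reaches downstream consumers.
assert null_ids == 0, f"{null_ids} rows with null order_id"
assert dupes / max(total, 1) < 0.01, f"{dupes} duplicate order_ids exceed threshold"
```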

Posted 1 month ago

Apply

8 - 10 years

11 - 21 Lacs

Noida, Mumbai (All Areas)

Work from Office


As the Full Stack Developer within the Data and Analytics team, you will be responsible for the delivery of innovative data and analytics solutions, ensuring Al Futtaim Business stays at the forefront of technical development.

Posted 1 month ago

Apply

6 - 9 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid


Sharing the JD for your reference:
Experience: 6-10+ yrs
Primary skill set: Azure Databricks, ADF, SQL, Unity Catalog, PySpark/Python

Kindly share the following details: Updated CV, Relevant Skills, Total Experience, Current CTC, Expected CTC, Notice Period, Current Location, Preferred Location.

Posted 1 month ago

Apply

8 - 12 years

13 - 18 Lacs

Bengaluru

Work from Office


Role & responsibilities

Job Summary: We are seeking a highly skilled and motivated Data Governance Executor to join our team. The ideal candidate will be responsible for implementing data governance frameworks, with a focus on data governance solutions using Unity Catalog and Azure Purview. This role will ensure the implementation of data quality standardization, data classification, and execution of data governance policies.

Key Responsibilities:
• Data Governance Solution Implementation: Develop and implement data governance policies and procedures using Unity Catalog and Azure Purview. Ensure data governance frameworks align with business objectives and regulatory requirements.
• Data Catalog Management: Manage and maintain the Unity Catalog, ensuring accurate and up-to-date metadata. Oversee the classification and organization of data assets within Azure Purview.
• Data Quality Assurance: Implement data quality standards with data engineers and perform regular audits to ensure data accuracy and integrity. Collaborate with data stewards to resolve data quality issues.
• Stakeholder Collaboration: Work closely with data owners, stewards, and business stakeholders to understand data needs and requirements. Provide training and support to ensure effective use of data governance tools.
• Reporting and Documentation: Generate reports on data governance metrics and performance. Maintain comprehensive documentation of data governance processes and policies.

Qualifications:
• Education: Bachelor's degree in Computer Science, Information Systems, or a related field. Master's degree preferred.
• Experience: Proven experience in data governance, data management, or related roles. 2+ years of hands-on experience with Unity Catalog and Azure Purview.
• Skills: Strong understanding of data governance principles and best practices. Proficiency in data cataloging, metadata management, and data quality assurance. Excellent analytical, problem-solving, and communication skills. Ability to work collaboratively with cross-functional teams.

Preferred Qualifications:
• Certification in data governance or related fields.
• Experience with other data governance tools and platforms.
• Knowledge of cloud data platforms and services.
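One way the metadata-maintenance duty above is often handled in Unity Catalog is by querying the information schema for assets missing documentation. A minimal sketch, assuming access to Databricks' system.information_schema tables and using 'main' as a placeholder catalog:

```python
# Hypothetical catalog-audit sketch: find tables with no description comment.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

inventory = spark.sql("""
  SELECT table_catalog, table_schema, table_name, table_owner, comment
  FROM system.information_schema.tables
  WHERE table_catalog = 'main'
""")

# Tables without a comment are candidates for metadata clean-up.
inventory.filter("comment IS NULL OR comment = ''").show(truncate=False)
```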

Posted 1 month ago

Apply

8 - 12 years

20 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid


Role & responsibilities

As a Cloud Technical Lead - Data, you will get to:
• Build and maintain data pipelines to enable faster, better, data-informed decision-making through customer enterprise business analytics
• Collaborate with stakeholders to understand their strategic objectives and identify opportunities to leverage data and data quality
• Design, develop and maintain large-scale data solutions on the Azure cloud platform
• Implement ETL pipelines using Azure Data Factory, Azure Databricks, and other related services
• Develop and deploy data models and data warehousing solutions using Azure Synapse Analytics and Azure SQL Database
• Optimize performant, robust, and resilient data storage solutions using Azure Blob Storage, Azure Data Lake, Snowflake, and other related services
• Develop and implement data security policies to ensure compliance with industry standards
• Provide support for data-related issues, and mentor junior data engineers in the team
• Define and manage data governance policies to ensure data quality and compliance with industry standards
• Collaborate with data architects, data scientists, developers, and business stakeholders to design data solutions that meet business requirements
• Coordinate with users to understand data needs and deliver data with a focus on data quality, data reuse, consistency, security, and regulatory compliance
• Conceptualize and visualize data frameworks

Preferred candidate profile:
• Bachelor's degree in Computer Science, Information Technology, or a related field
• 8+ years of experience in data engineering, with 3+ years of hands-on Databricks (DB) experience
• Strong expertise in the Microsoft Azure cloud platform and services, particularly Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database
• Extensive experience working with large data sets, with hands-on technology skills to design and build robust data architecture, data modeling, and database design
• Strong programming skills in SQL, Python, and PySpark
• Experience in Unity Catalog & DBT, and data governance knowledge
• Good to have experience in Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures
• Agile development environment experience, applying DevOps along with data quality and governance principles
• Good leadership skills to guide and mentor the work of less experienced personnel
• Ability to contribute to continual improvement by suggesting improvements to architecture or new technologies, mentoring junior employees, and being ready to shoulder ad-hoc responsibilities
• Experience with cross-team collaboration and interpersonal/relationship-building skills
• Ability to communicate effectively through presentation, interpersonal, verbal, and written skills
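A minimal sketch of the Azure-side plumbing this role involves: reading raw files from ADLS Gen2 and publishing a governed Delta table. The storage account, container, path, and table name are placeholders, and cluster authentication to the storage account (e.g., via a service principal) is assumed to be configured already:

```python
# Hypothetical Azure sketch: ADLS Gen2 raw files -> curated Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# abfss://<container>@<storage-account>.dfs.core.windows.net/<path> (placeholders)
raw_path = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/2024/"
df = spark.read.option("header", "true").csv(raw_path)

# Publish as a managed Delta table so downstream consumers query by name.
(df.write
 .format("delta")
 .mode("overwrite")
 .saveAsTable("main.curated.sales"))
```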

Posted 1 month ago

Apply

5 - 10 years

15 - 30 Lacs

Bengaluru, Hyderabad, Noida

Work from Office


Mandatory skills: Azure Databricks, Unity Catalog, Azure Data Factory, Azure Data Lake, PySpark, Python. Unity Catalog is mandatory.

Posted 2 months ago

Apply

7 - 12 years

9 - 16 Lacs

Chennai, Pune, Mumbai

Work from Office


Skills: Databricks and AWS, Unity Catalog, Spark, SQL, and Python; big data technologies; data warehousing solutions; CI/CD pipelines and tools like Jenkins or GitLab CI/CD.

Posted 2 months ago

Apply

6 - 9 years

8 - 11 Lacs

Pune

Work from Office


JD for Unity Catalog and Databricks Engineer Role (C2H)

Experience: 6 to 9 years overall, with experience in PySpark and Databricks
Location: PAN India (except Hyderabad and Chennai as preferred locations)
Notice Period: Immediate/15 days
Hybrid: 2 or 3 days WFO

Unity Catalog Adoption Data Engineer: 3-6 years, PySpark, Databricks, Unity Catalog

JD:
• Design, develop, and maintain efficient and scalable data pipelines using PySpark and Databricks.
• Utilize Databricks to orchestrate and automate data workflows.
• Implement and manage data storage solutions using Unity Catalog for effective data governance.
• Collaborate with data scientists, analysts, and business stakeholders to ensure smooth data flow and integration.
• Write optimized SQL queries for data extraction, transformation, and analysis.
• Develop and implement data models and data structures to meet business requirements.
• Perform data quality checks and ensure that all data is accurate and accessible.
• Troubleshoot and resolve issues related to data processing, data integration, and performance.
• Optimize performance and scalability of data systems and workflows.
• Document technical solutions, processes, and best practices for future reference and collaboration.
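Since Unity Catalog adoption is the theme here, a minimal sketch of standing up its three-level namespace (catalog.schema.table) with standard DDL. All object names are placeholders invented for illustration, and metastore-admin privileges are assumed:

```python
# Hypothetical Unity Catalog adoption sketch: create catalog, schema, and a Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("CREATE CATALOG IF NOT EXISTS analytics")
spark.sql("CREATE SCHEMA IF NOT EXISTS analytics.adoption")
spark.sql("""
  CREATE TABLE IF NOT EXISTS analytics.adoption.events (
    event_id STRING,
    event_ts TIMESTAMP,
    payload  STRING
  ) USING DELTA
""")

# Downstream code now addresses the table by its three-level name.
spark.table("analytics.adoption.events").printSchema()
```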

Posted 2 months ago

Apply

6 - 10 years

15 - 20 Lacs

Pune, Delhi NCR, Trivandrum

Work from Office


6 years of experience in the design, development, implementation, and maintenance of scalable and efficient data pipelines using the Azure Databricks platform. Experience in Databricks Unity Catalog, SQL, PySpark, Python, ETL/ELT, and streaming technologies. Required candidate profile: performance tuning and optimization of Spark jobs on Azure Databricks; troubleshooting and resolving issues related to data pipelines and clusters; using Databricks to assemble large, complex data sets.
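On the Spark-job tuning this profile calls for, a minimal sketch of two routine Delta maintenance commands on Databricks: file compaction with Z-ordering, and cleanup of unreferenced files. The table and column names are placeholders:

```python
# Hypothetical maintenance sketch: Delta compaction and cleanup on Databricks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate rows frequently filtered on customer_id.
spark.sql("OPTIMIZE main.silver.orders ZORDER BY (customer_id)")

# Remove files no longer referenced by the table (subject to the retention window).
spark.sql("VACUUM main.silver.orders")
```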

Posted 2 months ago

Apply

8 - 12 years

15 - 25 Lacs

Bengaluru, Kochi, Coimbatore

Work from Office


Candidates with 8 years of experience with Azure, Unity Catalog, SQL, Python, Databricks, and ETL. Hands-on exposure to Spark for big data processing. Deep understanding of ETL/ELT processes, building pipelines, and CI/CD and DevOps practices. Required candidate profile: the ideal candidate will have strong leadership experience, hands-on technical skills, and a deep understanding of cloud-based data solutions, plus exposure to Autoloader and DLT streaming.
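Since this posting names Autoloader, here is a minimal sketch of Databricks Auto Loader (the cloudFiles source) for incremental file ingestion. The landing path, schema location, checkpoint path, and table name are placeholders:

```python
# Hypothetical Auto Loader sketch: incremental JSON ingestion with cloudFiles.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (spark.readStream
          .format("cloudFiles")
          .option("cloudFiles.format", "json")
          .option("cloudFiles.schemaLocation", "/tmp/schemas/events")  # placeholder
          .load("/mnt/landing/events/"))                               # placeholder

# New files are discovered and appended incrementally; state lives in the checkpoint.
query = (stream.writeStream
         .option("checkpointLocation", "/tmp/checkpoints/events")
         .toTable("main.bronze.events"))
query.awaitTermination()
```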

Posted 2 months ago

Apply

12 - 18 years

20 - 35 Lacs

Bengaluru, Kochi, Coimbatore

Work from Office


Candidates with 12 years of experience in IT, with 5+ years as a Data Architect. Working experience in the areas of Data Engineering (Azure Data Factory, Azure Synapse, Data Lake, Databricks, Streaming Analytics) and DevOps practices for CI/CD pipelines. Required candidate profile: design and implementation experience for both ETL/ELT-based solutions, SQL, Spark/Scala, and data analytics; working experience with Logic Apps, API Management, SOLID design principles, and modelling methods.

Posted 2 months ago

Apply

5 - 8 years

7 - 10 Lacs

Pune, Noida

Work from Office


Strong in Azure Data Factory, Azure Data Lake, Databricks, Unity Catalog, PySpark, and Python.

In addition to the JD, good-to-have skills:
• Understanding of the current architecture and design principles of the Databricks ecosystem
• Understanding of managing and securing data assets in Databricks using Unity Catalog
• Understanding of monitoring and observability using Datadog
• Understanding of code quality and security analysis with SonarQube
• Ability to walk through and explain current Python/PySpark scripts for data processing and analysis
• Understanding of version control with GitHub and automating workflows using GitHub Actions
• Understanding of financial operations and cost management in cloud environments
• Understanding of 3rd-party vendor coordination and the escalation matrix (if any)
• Understanding of existing DevOps for the CI/CD pipeline
• Understanding of current performance

Posted 3 months ago

Apply

5 - 7 years

14 - 16 Lacs

Pune, Bengaluru, Gurgaon

Work from Office


Job Title: Data/ML Platform Engineer
Location: Gurgaon, Pune, Bangalore, Chennai, Bhopal, Jaipur, Hyderabad (work from office)
Notice Period: Immediate

iSource Services is hiring for one of their clients for the position of Data/ML Platform Engineer. As a Data Engineer you will be relied on to independently develop and deliver high-quality features for our new ML Platform, refactor and translate our data products, and finish various tasks to a high standard. You'll be part of the Data Foundation Team, which focuses on creating and maintaining the Data Platform for Marktplaats.

• 5 years of hands-on experience using Python, Spark, and SQL.
• Experienced in AWS Cloud usage and management.
• Experience with Databricks (Lakehouse, ML, Unity Catalog, MLflow).
• Experience using various ML models and frameworks such as XGBoost, LightGBM, and Torch.
• Experience with orchestrators such as Airflow and Kubeflow.
• Familiarity with containerization and orchestration technologies (e.g., Docker, Kubernetes).
• Fundamental understanding of Parquet, Delta Lake, and other data file formats.
• Proficiency in an IaC tool such as Terraform, CDK, or CloudFormation.
• Strong written and verbal English communication skills, and proficiency in communicating with non-technical stakeholders.
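Given the MLflow-plus-XGBoost stack this role lists, a minimal experiment-tracking sketch: train a toy classifier and log parameters and a metric to MLflow. The data, run name, and hyperparameters are invented for illustration:

```python
# Hypothetical MLflow sketch: track one training run of an XGBoost classifier.
import mlflow
import xgboost as xgb
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real feature table.
X, y = make_classification(n_samples=1000, random_state=42)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="xgb-baseline"):
    params = {"max_depth": 4, "n_estimators": 100}
    mlflow.log_params(params)

    model = xgb.XGBClassifier(**params).fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_metric("accuracy", acc)
```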

Posted 3 months ago

Apply

4 - 9 years

6 - 12 Lacs

Bangalore Rural

Hybrid


Detailed Job Description: Azure Databricks Data Engineer
• Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
• Have work experience in Databricks Unity Catalog.
• Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
• Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
• Optimize cluster performance and scalability to handle large volumes of data processing.
• Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Conduct performance tuning and optimization for Spark jobs on Azure Databricks.
• Provide technical guidance and mentorship to junior data engineers.
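For the Spark performance-tuning point above, a minimal sketch of two common levers: broadcasting a small dimension to avoid shuffling the large fact table, and partitioning the output on a frequently filtered column. Table and column names are placeholders:

```python
# Hypothetical tuning sketch: broadcast join plus partitioned write.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

facts = spark.table("main.silver.orders")     # large fact table (placeholder)
dims = spark.table("main.silver.customers")   # small dimension table (placeholder)

# Broadcasting the small side ships it to every executor, skipping a full shuffle.
joined = facts.join(broadcast(dims), "customer_id")

# Partitioning on a query-filter column lets readers prune files at scan time.
(joined.write
 .mode("overwrite")
 .partitionBy("order_date")
 .saveAsTable("main.gold.orders_enriched"))
```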

Posted 3 months ago

Apply

4 - 9 years

6 - 12 Lacs

Bengaluru

Hybrid


Detailed Job Description: Azure Databricks Data Engineer
• Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
• Have work experience in Databricks Unity Catalog.
• Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
• Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
• Optimize cluster performance and scalability to handle large volumes of data processing.
• Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Conduct performance tuning and optimization for Spark jobs on Azure Databricks.
• Provide technical guidance and mentorship to junior data engineers.

Posted 3 months ago

Apply

4 - 9 years

6 - 12 Lacs

Hyderabad

Hybrid


Detailed Job Description: Azure Databricks Data Engineer
• Design, develop, and maintain scalable and efficient data pipelines using the Azure Databricks platform.
• Have work experience in Databricks Unity Catalog.
• Collaborate with data scientists and analysts to integrate machine learning models into production pipelines.
• Implement data quality checks and ensure data integrity throughout the data ingestion and transformation processes.
• Optimize cluster performance and scalability to handle large volumes of data processing.
• Troubleshoot and resolve issues related to data pipelines, clusters, and data processing jobs.
• Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
• Conduct performance tuning and optimization for Spark jobs on Azure Databricks.
• Provide technical guidance and mentorship to junior data engineers.

Posted 3 months ago

Apply

5 - 10 years

14 - 24 Lacs

Bengaluru

Hybrid


About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Role: Azure Databricks Engineer
Experience: 5+ years
Skill set: Azure + Databricks + Unity Catalog + Python + SQL
Work Location: Hyderabad / Pune / Gurgaon / Bangalore

Job Summary: We are seeking a highly skilled Azure Databricks Engineer to join our growing team. In this role, you will be responsible for managing and integrating Databricks services with Azure to build and maintain robust data pipelines. You will play a key role in ensuring our data infrastructure is efficient, scalable, and secure.

Responsibilities:
• Manage Databricks clusters, including provisioning, configuration, and optimization.
• Manage Databricks users in the workspace and the services they use, such as notebooks, SQL, workflows, MLflow, etc.
• Integrate Azure services like Azure Data Factory, Azure Data Lake Storage, and Azure Synapse Analytics with Databricks for a comprehensive data processing solution.
• Write high-performance Spark code for data manipulation and analysis.
• Monitor and troubleshoot Databricks jobs and pipelines for performance and stability.
• Monitor usage of Databricks workspaces and generate reports.
• Collaborate with data analysts, data scientists, and software engineers to design and implement data solutions.
• Stay up to date on the latest advancements in Azure and Databricks technologies.

Qualifications:
• Proven experience working with Microsoft Azure and Azure Databricks.
• Good understanding of data warehousing and data lake concepts.
• Experience with Apache Spark for large-scale data processing.
• Proficiency in programming languages like Python (PySpark).
• Experience with SQL and relational databases.
• Excellent problem-solving and analytical skills.
• Strong communication and collaboration skills.
• Ability to work independently and as part of a team.

Benefits:
• Competitive salary and benefits package.
• Opportunity to work on challenging and impactful projects.
• Collaborative and supportive work environment.
• Continuous learning and development opportunities.

Posted 3 months ago

Apply