Jobs
Interviews

64 Unity Catalog Jobs - Page 3

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

10.0 - 15.0 years

8 - 18 Lacs

Kochi

Remote

10 years of experience working in cloud-native data (Azure preferred), Databricks, SQL, PySpark; migrating from Hive Metastore to Unity Catalog; implementing Row-Level Security (RLS); metadata-driven ETL design patterns; Databricks certifications
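The "metadata-driven ETL design patterns" item above refers to driving pipelines from configuration rather than hard-coded logic. A minimal plain-Python sketch of the idea (table names and transforms are invented for illustration, not taken from the posting; in Databricks the driver would operate on DataFrames instead of lists of dicts):

```python
# Metadata-driven ETL: the pipeline is described as data, and a generic
# driver interprets it. All names here are hypothetical.

PIPELINE_METADATA = [
    {"source": "raw_orders", "target": "clean_orders",
     "transforms": ["strip_strings", "drop_nulls"]},
]

TRANSFORMS = {
    # Trim whitespace from every string field in a row.
    "strip_strings": lambda row: {k: v.strip() if isinstance(v, str) else v
                                  for k, v in row.items()},
    # Discard the row entirely if any field is None.
    "drop_nulls": lambda row: row if all(v is not None for v in row.values()) else None,
}

def run_pipeline(metadata, tables):
    """Apply each configured transform chain to its source table."""
    for step in metadata:
        rows = tables[step["source"]]
        for name in step["transforms"]:
            rows = [r for r in (TRANSFORMS[name](row) for row in rows)
                    if r is not None]
        tables[step["target"]] = rows
    return tables
```

The payoff is that onboarding a new table is a metadata change, not a code change.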

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Bengaluru

Remote

Detailed job description - Skill Set:
• Strong knowledge of Databricks, including creating scalable ETL (Extract, Transform, Load) processes and data lakes
• Strong knowledge of Python and SQL
• Strong experience with the AWS cloud platform is a must
• Good understanding of data modeling principles and data warehousing concepts
• Strong knowledge of optimizing ETL and batch processing jobs to ensure high performance and efficiency
• Implementing data quality checks, monitoring data pipelines, and ensuring data consistency and security
• Hands-on experience with Databricks features like Unity Catalog
Mandatory skills: Databricks, AWS
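The "data quality checks" responsibility above can be sketched without Spark; the same rule shapes carry over to Delta Live Tables expectations or a validation job in Databricks. Column names and rules below are invented for illustration:

```python
# Rule-based data quality checks over rows (list of dicts).
# A dependency-free sketch; in Databricks this logic typically lives in
# DLT expectations or a scheduled validation job.

def check_quality(rows, rules):
    """Return a list of (row_index, rule_name) violations."""
    violations = []
    for i, row in enumerate(rows):
        for name, predicate in rules.items():
            if not predicate(row):
                violations.append((i, name))
    return violations

# Hypothetical rules for an orders table.
RULES = {
    "amount_non_negative": lambda r: (r.get("amount") or 0) >= 0,
    "id_present": lambda r: r.get("id") is not None,
}
```

Reporting violations per rule (rather than failing the whole batch) is what lets a pipeline quarantine bad rows while the rest flows through.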

Posted 1 month ago

Apply

7.0 - 12.0 years

27 - 35 Lacs

Kolkata, Hyderabad, Bengaluru

Work from Office

Band 4C & 4D. Skill set: Unity Catalog + Python, Spark, Kafka.
Inviting applications for the role of Lead Consultant - Databricks Developer with experience in Unity Catalog, Python, Spark, and Kafka for ETL. In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.
Responsibilities:
• Develop and maintain scalable ETL pipelines using Databricks, with a focus on Unity Catalog for data asset management.
• Implement data processing frameworks using Apache Spark for large-scale data transformation and aggregation.
• Integrate real-time data streams using Apache Kafka and Databricks to enable near real-time data processing.
• Develop data workflows and orchestrate data pipelines using Databricks Workflows or other orchestration tools.
• Design and enforce data governance policies, access controls, and security protocols within Unity Catalog.
• Monitor data pipeline performance, troubleshoot issues, and implement optimizations for scalability and efficiency.
• Write efficient Python scripts for data extraction, transformation, and loading.
• Collaborate with data scientists and analysts to deliver data solutions that meet business requirements.
• Maintain data documentation, including data dictionaries, data lineage, and data governance frameworks.
Qualifications we seek in you!
Minimum qualifications:
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• Experience in data engineering with a focus on Databricks development.
• Proven expertise in Databricks, Unity Catalog, and data lake management.
• Strong programming skills in Python for data processing and automation.
• Experience with Apache Spark for distributed data processing and optimization.
• Hands-on experience with Apache Kafka for data streaming and event processing.
• Proficiency in SQL for data querying and transformation.
• Strong understanding of data governance, data security, and data quality frameworks.
• Excellent communication skills and the ability to work in a cross-functional environment.
• Must have experience in the Data Engineering domain.
• Must have implemented at least 2 projects end-to-end in Databricks, covering components such as Delta Lake, dbConnect, DB API 2.0, and Databricks Workflows orchestration.
• Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of creating complex data pipelines.
• Must have good knowledge of data structures & algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance-optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on at least one cloud (Azure, AWS, GCP) and its most common services, such as ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit and integration tests.
• Must have strong communication skills and have worked in a team of 5+.
• Must have a great attitude towards learning new skills and upskilling existing ones.
Preferred qualifications:
• Unity Catalog and basic governance knowledge.
• Databricks SQL Endpoint understanding.
• CI/CD experience to build pipelines for Databricks jobs.
• Experience on a migration project to build a unified data platform.
• Knowledge of DBT.
• Knowledge of Docker and Kubernetes.

Posted 1 month ago

Apply

7.0 - 12.0 years

15 - 22 Lacs

Bengaluru

Hybrid

Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
• Design and implement ETL/ELT pipelines using Databricks and PySpark.
• Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
• Develop high-performance SQL queries and optimize Spark jobs.
• Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
• Ensure data quality and compliance across all stages of the data lifecycle.
• Implement best practices for data security and lineage within the Databricks ecosystem.
• Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
• Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
• Strong hands-on skills with PySpark and Spark SQL.
• Solid experience writing and optimizing complex SQL queries.
• Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
• Experience with cloud platforms like Azure or AWS.
• Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
• Databricks Certified Data Engineer Associate or Professional.
• Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
• Exposure to streaming data and real-time processing.
• Knowledge of DevOps practices for data engineering.
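The "data permissions" skill named above rests on Unity Catalog's model of privileges granted on securables to principals, inherited down the catalog → schema → table hierarchy. The toy model below illustrates only that inheritance idea; it is a simplification, not the product's API, and all names are hypothetical:

```python
# Toy model of hierarchical grants, loosely mirroring Unity Catalog's
# catalog -> schema -> table privilege inheritance. Not the real API.

class Grants:
    def __init__(self):
        self._grants = set()  # (principal, privilege, securable)

    def grant(self, principal, privilege, securable):
        self._grants.add((principal, privilege, securable))

    def is_allowed(self, principal, privilege, securable):
        # A grant on any ancestor (dotted prefix) of the securable applies,
        # so granting on a schema covers every table inside it.
        parts = securable.split(".")
        for i in range(1, len(parts) + 1):
            ancestor = ".".join(parts[:i])
            if (principal, privilege, ancestor) in self._grants:
                return True
        return False
```

In the sketch, granting SELECT on a hypothetical `main.sales` schema implicitly allows `main.sales.orders`, which is the behaviour the real `GRANT SELECT ON SCHEMA` statement produces.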

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Greetings from LTIMindtree!
Job Description
Notice Period: 0 to 30 days only
Experience: 5 to 12 years
Interview Mode: 2 rounds (one round is face-to-face); Hybrid (2-3 days WFO)
Brief Description of Role
Job Summary: We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.
Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and the Medallion Architecture (Bronze, Silver, Gold layers).
• Architect and implement data governance frameworks using Unity Catalog and related tools.
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
• Optimize queries and data structures for performance and cost-efficiency.
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
• Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
• Maintain comprehensive documentation of architecture, processes, and workflows.
Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Data Architect or Senior Data Engineer.
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
• Hands-on experience with data governance, security frameworks, and catalog management.
• Proficiency in cloud platforms (preferably Azure).
• Experience with CI/CD tools and version control systems like GitHub.
• Strong communication and collaboration skills.
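The Medallion Architecture named in the listing promotes data through Bronze (raw), Silver (cleaned and typed), and Gold (aggregated, BI-ready) layers. A dependency-free sketch of the layering, with invented fields; on Databricks each layer would be a Delta table rather than a Python list:

```python
# Bronze -> Silver -> Gold promotion, sketched without Spark.
# Field names and cleaning rules are illustrative only.

def to_silver(bronze_rows):
    """Clean: drop malformed raw records, normalise types and casing."""
    silver = []
    for r in bronze_rows:
        if r.get("amount") is None:
            continue  # malformed raw record stays behind in Bronze
        silver.append({"region": r["region"].lower(),
                       "amount": float(r["amount"])})
    return silver

def to_gold(silver_rows):
    """Aggregate: total revenue per region, ready for BI consumption."""
    gold = {}
    for r in silver_rows:
        gold[r["region"]] = gold.get(r["region"], 0.0) + r["amount"]
    return gold
```

The design point is that each layer only ever reads from the one below it, so reprocessing Gold never requires touching raw ingestion.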

Posted 2 months ago

Apply

6.0 - 11.0 years

18 - 33 Lacs

Bengaluru

Remote

Role & responsibilities
Mandatory skills: ADB (Azure Databricks) and Unity Catalog
Job Summary: We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.
Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.
Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.

Posted 2 months ago

Apply

5.0 - 8.0 years

6 - 24 Lacs

Hyderabad

Work from Office

Notice: 30 to 45 days.
• Design, develop & maintain data pipelines using PySpark, Databricks, Unity Catalog & cloud.
• Collaborate with cross-functional teams on ETL processes & report development.
Share resume: garima.arora@anetcorp.com

Posted 2 months ago

Apply

12.0 - 20.0 years

22 - 37 Lacs

Bengaluru

Hybrid

12+ years of experience in Data Architecture. Strong in Azure Data Services & Databricks, including Delta Lake & Unity Catalog. Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, Data Lakes, NoSQL, OLTP. NP: Immediate. sachin@assertivebs.com

Posted 2 months ago

Apply

8.0 - 13.0 years

10 - 20 Lacs

Hyderabad, Pune

Work from Office

Job Title: Databricks Administrator
Client: Wipro
Employer: Advent Global Solutions
Location: Hyderabad / Pune
Work Mode: Hybrid
Experience: 8+ years (8 years relevant in Databricks Administration)
CTC: 22.8 LPA
Notice Period: Immediate joiners to 15 days
Shift: General shift
Education Preferred: B.Tech / M.Tech / MCA / B.Sc (Computer Science)
Keywords:
• Databricks Administration
• Unity Catalog
• Cluster creation, tuning & administration
• RBAC in Unity Catalog
• Cloud administration, preferably in GCP; AWS/Azure knowledge also acceptable
• Roughly 80% Databricks and 20% cloud
Mandatory Skills: Databricks Admin on GCP/AWS
Job Description:
• Responsibilities will include designing, implementing, and maintaining the Databricks platform, and providing operational support. Operational support responsibilities include platform set-up and configuration, workspace administration, resource monitoring, providing technical support to data engineering, Data Science/ML, and application/integration teams, performing restores/recoveries, troubleshooting service issues, determining the root causes of issues, and resolving them.
• The position will also involve the management of security and changes.
• The position will work closely with the Team Lead, other Databricks Administrators, System Administrators, and Data Engineers/Scientists/Architects/Modelers/Analysts.
Responsibilities:
• Administer, configure, and optimize the Databricks platform to enable data analytics, machine learning, and data engineering activities within the organization.
• Collaborate with the data engineering team to ingest, transform, and orchestrate data.
• Manage privileges over the entire Databricks account, as well as at the workspace, Unity Catalog, and SQL warehouse levels.
• Create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions.
• Install, configure, and maintain Databricks clusters and workspaces.
• Maintain platform currency with security, compliance, and patching best practices.
• Monitor and manage cluster performance, resource utilization, and platform costs, and troubleshoot issues to ensure optimal performance.
• Implement and manage access controls and security policies to protect sensitive data.
• Manage schema data with Unity Catalog: create and configure catalogs, external storage, and access permissions.
• Administer interfaces with Google Cloud Platform.
Required Skills:
• 3+ years of production support of the Databricks platform
Preferred:
• 2+ years of experience in AWS/Azure/GCP PaaS administration
• 2+ years of experience with automation frameworks such as Terraform

Posted 2 months ago

Apply

5 - 10 years

16 - 27 Lacs

Pune, Chennai, Bengaluru

Hybrid

If interested, please share the following details at PriyaM4@hexaware.com: total experience, CTC, expected CTC, notice period, location.
MUST-have skill: Unity Catalog
We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.
Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.
Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.

Posted 2 months ago

Apply

8 - 10 years

11 - 21 Lacs

Noida, Mumbai (All Areas)

Work from Office

As the Full Stack Developer within the Data and Analytics team, you will be responsible for delivery of innovative data and analytics solutions, ensuring Al Futtaim Business stays at the forefront of technical development.

Posted 2 months ago

Apply

6 - 9 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid

Sharing the JD for your reference:
Experience: 6-10+ years
Primary skill set: Azure Databricks, ADF, SQL, Unity Catalog, PySpark/Python
Kindly share the following details: updated CV, relevant skills, total experience, current CTC, expected CTC, notice period, current location, preferred location.

Posted 2 months ago

Apply

8 - 12 years

13 - 18 Lacs

Bengaluru

Work from Office

Role & responsibilities
Job Summary: We are seeking a highly skilled and motivated Data Governance Executor to join our team. The ideal candidate will be responsible for implementing data governance frameworks, with a focus on data governance solutions using Unity Catalog and Azure Purview. This role will ensure the implementation of data quality standardization, data classification, and the execution of data governance policies.
Key Responsibilities:
• Data Governance Solution Implementation: Develop and implement data governance policies and procedures using Unity Catalog and Azure Purview. Ensure data governance frameworks align with business objectives and regulatory requirements.
• Data Catalog Management: Manage and maintain the Unity Catalog, ensuring accurate and up-to-date metadata. Oversee the classification and organization of data assets within Azure Purview.
• Data Quality Assurance: Implement data quality standards with Data Engineers and perform regular audits to ensure data accuracy and integrity. Collaborate with data stewards to resolve data quality issues.
• Stakeholder Collaboration: Work closely with data owners, stewards, and business stakeholders to understand data needs and requirements. Provide training and support to ensure effective use of data governance tools.
• Reporting and Documentation: Generate reports on data governance metrics and performance. Maintain comprehensive documentation of data governance processes and policies.
Qualifications:
• Education: Bachelor's degree in Computer Science, Information Systems, or a related field; Master's degree preferred.
• Experience: Proven experience in data governance, data management, or related roles; 2+ years of hands-on experience with Unity Catalog and Azure Purview.
• Skills: Strong understanding of data governance principles and best practices. Proficiency in data cataloging, metadata management, and data quality assurance. Excellent analytical, problem-solving, and communication skills. Ability to work collaboratively with cross-functional teams.
Preferred Qualifications:
• Certification in data governance or related fields.
• Experience with other data governance tools and platforms.
• Knowledge of cloud data platforms and services.
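The data classification duty described above is often bootstrapped by pattern-matching column names before human review, which is the kind of first pass a governance tool like Purview automates. A minimal sketch; the patterns are illustrative, not a complete PII taxonomy:

```python
import re

# Heuristic column-name classifier. Real classification would also
# sample column values; the regex patterns here are illustrative only.

CLASSIFICATION_PATTERNS = {
    "PII": re.compile(r"(email|phone|ssn|first_name|last_name|dob)", re.I),
    "FINANCIAL": re.compile(r"(salary|account_number|iban|card)", re.I),
}

def classify_columns(columns):
    """Map each column name to the set of classification tags it matches."""
    tags = {}
    for col in columns:
        tags[col] = {label for label, pat in CLASSIFICATION_PATTERNS.items()
                     if pat.search(col)}
    return tags
```

Columns that match nothing get an empty tag set, flagging them for manual review rather than silently passing.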

Posted 2 months ago

Apply

8 - 12 years

20 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & responsibilities
As a Cloud Technical Lead - Data, you will get to:
• Build and maintain data pipelines to enable faster, better, data-informed decision-making through customer enterprise business analytics.
• Collaborate with stakeholders to understand their strategic objectives and identify opportunities to leverage data and improve data quality.
• Design, develop, and maintain large-scale data solutions on the Azure cloud platform.
• Implement ETL pipelines using Azure Data Factory, Azure Databricks, and other related services.
• Develop and deploy data models and data warehousing solutions using Azure Synapse Analytics and Azure SQL Database.
• Build performant, robust, and resilient data storage solutions using Azure Blob Storage, Azure Data Lake, Snowflake, and other related services.
• Develop and implement data security policies to ensure compliance with industry standards.
• Provide support for data-related issues, and mentor junior data engineers in the team.
• Define and manage data governance policies to ensure data quality and compliance with industry standards.
• Collaborate with data architects, data scientists, developers, and business stakeholders to design data solutions that meet business requirements.
• Coordinate with users to understand data needs and deliver data with a focus on data quality, reuse, consistency, security, and regulatory compliance.
• Conceptualize and visualize data frameworks.
Preferred candidate profile:
• Bachelor's degree in Computer Science, Information Technology, or a related field.
• 8+ years of experience in data engineering, with 3+ years of hands-on Databricks experience.
• Strong expertise in the Microsoft Azure cloud platform and services, particularly Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database.
• Extensive experience working with large data sets, with hands-on skills to design and build robust data architecture, data modeling, and database design.
• Strong programming skills in SQL, Python, and PySpark.
• Experience with Unity Catalog & DBT, and data governance knowledge.
• Good to have: experience with Snowflake utilities such as SnowSQL, Snowpipe, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures.
• Experience applying DevOps in an Agile development environment, along with data quality and governance principles.
• Good leadership skills to guide and mentor the work of less experienced personnel.
• Ability to contribute to continual improvement by suggesting architecture improvements or new technologies, mentoring junior employees, and being ready to shoulder ad-hoc tasks.
• Experience with cross-team collaboration and relationship building.
• Ability to communicate effectively through presentation, interpersonal, verbal, and written skills.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies