3.0 - 7.0 years
0 Lacs
Gurgaon / Gurugram, Haryana, India
On-site
Job Title: Consultant / Senior Consultant - Azure Data Engineering
Location: India - Gurgaon preferred
Industry: Insurance Analytics & AI Vertical
Role Overview: We are seeking a hands-on Consultant / Senior Consultant with strong expertise in Azure-based data engineering to support end-to-end development and delivery of data pipelines for our insurance clients. The ideal candidate will have a deep understanding of Azure Data Factory, ADLS, Databricks (preferably with DLT and Unity Catalog), SQL, and Python, and be comfortable working in a dynamic, client-facing environment. This is a key offshore role requiring both technical execution and solution-oriented thinking to support modern data platform initiatives.
• Collaborate with data scientists, analysts, and stakeholders to gather requirements and define data models that effectively support business requirements
• Demonstrate decision-making, analytical, and problem-solving abilities
• Strong verbal and written communication skills to manage client discussions
• Familiar with Agile methodologies - daily scrum, sprint planning, backlog refinement
Key Responsibilities & Skillsets:
• Design and develop scalable and efficient data pipelines using Azure Data Factory (ADF) and Azure Data Lake Storage (ADLS).
• Build and maintain Databricks notebooks for data ingestion, transformation, and quality checks, using Python and SQL.
• Work with Delta Live Tables (DLT) and Unity Catalog (preferred) to improve pipeline automation, governance, and performance.
• Collaborate with data architects, analysts, and onshore teams to translate business requirements into technical specifications.
• Troubleshoot data issues, ensure data accuracy, and apply best practices in data engineering and DevOps.
• Support the migration of legacy SQL pipelines to modern Python-based frameworks.
• Ensure adherence to data security, compliance, and performance standards, especially within insurance domain constraints.
• Provide documentation, status updates, and technical insights to stakeholders as required.
• Excellent communication skills and stakeholder management.
Required Skills & Experience:
• 3-7 years of strong hands-on experience in data engineering with a focus on Azure cloud technologies.
• Proficient in Azure Data Factory, Databricks, ADLS Gen2, and working knowledge of Unity Catalog.
• Strong programming skills in both SQL and Python, especially within Databricks notebooks. PySpark expertise is good to have.
• Experience in Delta Lake / Delta Live Tables (DLT) is a plus.
• Good understanding of ETL/ELT concepts, data modeling, and performance tuning.
• Exposure to Insurance or Financial Services data projects is highly preferred.
• Strong communication and collaboration skills in an offshore delivery model.
Preferred Skills & Experience:
• Experience working in Agile/Scrum teams
• Familiarity with Azure DevOps, Git, and CI/CD practices
• Certifications in Azure Data Engineering (e.g., DP-203) or Databricks
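For orientation, a minimal sketch of the kind of ADLS-to-Delta ingestion this role describes, using Delta Live Tables with a data-quality expectation. The storage path, table, and column names are assumptions for illustration, and the dlt module only runs inside a Databricks DLT pipeline.

```python
# Hypothetical DLT pipeline: bronze ingestion from ADLS plus a silver table
# with a data-quality expectation. Paths and columns are illustrative only.
import dlt
from pyspark.sql import functions as F

@dlt.table(comment="Raw policy records landed from ADLS")
def policies_bronze():
    return (
        spark.readStream.format("cloudFiles")          # Auto Loader
        .option("cloudFiles.format", "json")
        .load("abfss://raw@examplelake.dfs.core.windows.net/policies/")
    )

@dlt.table(comment="Cleansed policy records")
@dlt.expect_or_drop("valid_policy_id", "policy_id IS NOT NULL")
def policies_silver():
    return (
        dlt.read_stream("policies_bronze")
        .withColumn("ingested_at", F.current_timestamp())
    )
```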
Posted 2 months ago
10.0 - 18.0 years
15 - 30 Lacs
Pune, Bengaluru
Work from Office
Role & responsibilities: AWS with Databricks infra lead. Experienced in setting up Unity Catalog; setting out how the group is to consume the model serving processes; developing MLflow routines; experienced with ML models; has used Gen AI features with guardrails, experimentation, and monitoring.
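As a hedged illustration of the MLflow routines mentioned above (not the client's actual workflow), the sketch below logs a run, records parameters and metrics, and registers a model so a serving endpoint can consume it. The model name and data are invented.

```python
# Minimal MLflow tracking and registration sketch; names are placeholders.
import mlflow
import mlflow.sklearn
from sklearn.ensemble import RandomForestRegressor
from sklearn.datasets import make_regression

X, y = make_regression(n_samples=200, n_features=5, random_state=42)

with mlflow.start_run(run_name="demand_forecast_baseline"):
    model = RandomForestRegressor(n_estimators=50, random_state=42)
    model.fit(X, y)
    mlflow.log_param("n_estimators", 50)
    mlflow.log_metric("train_score", model.score(X, y))
    # Registering the model makes it consumable from a model serving endpoint.
    mlflow.sklearn.log_model(model, "model",
                             registered_model_name="demand_forecast")
```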
Posted 3 months ago
3.0 - 8.0 years
4 - 9 Lacs
Ahmedabad
Hybrid
Project Role: Data Engineer
Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems.
Must have skills: Data Governance
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education
Job Summary: We are seeking a highly skilled and motivated Governance Tool Specialist with 4 years of experience to join our team. The candidate will be responsible for the implementation, configuration, and management of our governance tools. This role requires a deep understanding of data governance principles, excellent technical skills, and the ability to work collaboratively with various stakeholders. Optional: Experienced Data Quality Specialist with extensive expertise in using Alex Solutions tools to ensure data accuracy, consistency, and reliability; proficiency in data profiling, cleansing, validation, and governance.
Key Responsibilities:
Data Governance:
• Implement and configure Alex Solutions governance tools to meet client requirements.
• Collaborate with clients to understand their data governance needs and provide tailored solutions.
• Provide technical support and troubleshooting for governance tool issues.
• Conduct training sessions and workshops to educate clients on the use of governance tools.
• Develop and maintain documentation for governance tool configurations and processes.
• Monitor and report on the performance and usage of governance tools.
• Stay up-to-date with the latest developments in data governance and related technologies.
• Work closely with the product development team to provide feedback and suggestions for tool enhancements.
Data Quality:
• Utilize Alex Solutions' data quality tools to develop and implement processes, standards, and guidelines that ensure data accuracy and reliability.
• Conduct comprehensive data profiling using Alex Solutions, identifying and rectifying data anomalies and inconsistencies.
• Monitor data quality metrics through Alex Solutions, providing regular reports on data quality issues and improvements to stakeholders.
• Collaborate with clients to understand their data quality needs and provide tailored solutions using Alex Solutions.
• Implement data cleansing, validation, and enrichment processes within the Alex Solutions platform to maintain high data quality standards.
• Develop and maintain detailed documentation for data quality processes and best practices using Alex Solutions' tools.
Preferred Skills:
Must Have Skills: Alex Solutions
Good to Have: Unity Catalog, Microsoft Purview, Data Quality tools
Secondary Skills: Informatica, Collibra
• Experience with data cataloging, data lineage, data quality, and metadata management.
• Knowledge of regulatory requirements related to data governance (e.g., GDPR, CCPA).
• Familiarity with cloud platforms and services (e.g., AWS, Azure, Google Cloud).
• Certification in data governance or related fields.
• Proven experience with data governance, data quality tools and technologies.
• Strong understanding of data governance principles and best practices.
• Proficiency in SQL, data modeling, and database management.
• Excellent problem-solving and analytical skills.
• Strong communication and interpersonal skills.
Posted 3 months ago
3.0 - 7.0 years
22 - 25 Lacs
Bengaluru
Hybrid
Role & responsibilities:
• 3-6 years of experience in Data Engineering Pipeline Ownership and Quality Assurance, with hands-on expertise in building, testing, and maintaining data pipelines.
• Proficiency with Azure Data Factory (ADF), Azure Databricks (ADB), and PySpark for data pipeline orchestration and processing large-scale datasets.
• Strong experience in writing SQL queries and performing data validation, data profiling, and schema checks.
• Experience with big data validation, including schema enforcement, data integrity checks, and automated anomaly detection.
• Ability to design, develop, and implement automated test cases to monitor and improve data pipeline efficiency.
• Deep understanding of Medallion Architecture (Raw, Bronze, Silver, Gold) for structured data flow management.
• Hands-on experience with Apache Airflow for scheduling, monitoring, and managing workflows.
• Strong knowledge of Python for developing data quality scripts, test automation, and ETL validations.
• Familiarity with CI/CD pipelines for deploying and automating data engineering workflows.
• Solid data governance and data security practices within the Azure ecosystem.
Additional Requirements:
• Ownership of data pipelines, ensuring end-to-end execution, monitoring, and proactive troubleshooting of failures.
• Strong stakeholder management skills, including follow-ups with business teams across multiple regions to gather requirements, address issues, and optimize processes.
• Time flexibility to align with global teams for efficient communication and collaboration.
• Excellent problem-solving skills with the ability to simulate and test edge cases in data processing environments.
• Strong communication skills to document and articulate pipeline issues, troubleshooting steps, and solutions effectively.
• Experience with Unity Catalog or willingness to learn.
Preferred candidate profile: Immediate joiners.
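The schema-check, data-validation, and anomaly-detection duties above can be illustrated with a short PySpark sketch. The table, schema, columns, and thresholds are assumptions, not part of this posting.

```python
# Illustrative pipeline quality checks: schema enforcement, null/negative
# checks, and a naive row-count anomaly guard. All names are invented.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()

expected_schema = StructType([
    StructField("order_id", StringType(), False),
    StructField("amount", DoubleType(), True),
])

df = spark.read.table("silver.orders")

# Schema check: fail fast if the data contract has drifted.
assert df.schema == expected_schema, f"Schema drift detected: {df.schema}"

# Data validation: no null keys, no negative amounts.
bad_rows = df.filter(F.col("order_id").isNull() | (F.col("amount") < 0)).count()
assert bad_rows == 0, f"{bad_rows} rows failed validation"

# Naive anomaly guard: today's volume should be within 50% of yesterday's.
today, yesterday = 120000, 118000  # in practice, read from pipeline metrics
assert abs(today - yesterday) / yesterday < 0.5, "Row-count anomaly detected"
```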
Posted 3 months ago
10.0 - 15.0 years
8 - 18 Lacs
Kochi
Remote
10 yrs of experience working in cloud-native data (Azure preferred), Databricks, SQL, PySpark, migrating from Hive Metastore to Unity Catalog, implementing Row-Level Security (RLS), metadata-driven ETL design patterns, Databricks certifications
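For reference, a hedged sketch of the Unity Catalog Row-Level Security this role calls for: a SQL filter function bound to a table as a row filter, issued here through PySpark. The catalog, table, column, and group names are assumptions, and exact DDL may vary by Databricks runtime version.

```python
# Hedged Unity Catalog RLS sketch; object names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Filter function: admins see everything, everyone else sees only region 'IN'.
spark.sql("""
    CREATE OR REPLACE FUNCTION main.governance.region_filter(region STRING)
    RETURN IF(is_account_group_member('global_admins'), TRUE, region = 'IN')
""")

# Bind the filter so each query only returns rows the caller may see.
spark.sql("""
    ALTER TABLE main.sales.policies
    SET ROW FILTER main.governance.region_filter ON (region)
""")
```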
Posted 3 months ago
5.0 - 10.0 years
14 - 24 Lacs
Bengaluru
Remote
Detailed job description - Skill Set:
• Strong knowledge in Databricks, including creating scalable ETL (Extract, Transform, Load) processes and data lakes
• Strong knowledge in Python and SQL
• Strong experience with the AWS cloud platform is a must
• Good understanding of data modeling principles and data warehousing concepts
• Strong knowledge of optimizing ETL and batch processing jobs to ensure high performance and efficiency
• Implementing data quality checks, monitoring data pipelines, and ensuring data consistency and security
• Hands-on experience with Databricks features like Unity Catalog
Mandatory Skills: Databricks, AWS
Posted 3 months ago
7.0 - 12.0 years
27 - 35 Lacs
Kolkata, Hyderabad, Bengaluru
Work from Office
Band 4C & 4D. Skill set: Unity Catalog + Python, Spark, Kafka
Inviting applications for the role of Lead Consultant - Databricks Developer with experience in Unity Catalog, Python, Spark, and Kafka for ETL! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.
Responsibilities
• Develop and maintain scalable ETL pipelines using Databricks with a focus on Unity Catalog for data asset management.
• Implement data processing frameworks using Apache Spark for large-scale data transformation and aggregation.
• Integrate real-time data streams using Apache Kafka and Databricks to enable near real-time data processing.
• Develop data workflows and orchestrate data pipelines using Databricks Workflows or other orchestration tools.
• Design and enforce data governance policies, access controls, and security protocols within Unity Catalog.
• Monitor data pipeline performance, troubleshoot issues, and implement optimizations for scalability and efficiency.
• Write efficient Python scripts for data extraction, transformation, and loading.
• Collaborate with data scientists and analysts to deliver data solutions that meet business requirements.
• Maintain data documentation, including data dictionaries, data lineage, and data governance frameworks.
Qualifications we seek in you!
Minimum qualifications
• Bachelor's degree in Computer Science, Data Engineering, or a related field.
• Experience in data engineering with a focus on Databricks development.
• Proven expertise in Databricks, Unity Catalog, and data lake management.
• Strong programming skills in Python for data processing and automation.
• Experience with Apache Spark for distributed data processing and optimization.
• Hands-on experience with Apache Kafka for data streaming and event processing.
• Proficiency in SQL for data querying and transformation.
• Strong understanding of data governance, data security, and data quality frameworks.
• Excellent communication skills and the ability to work in a cross-functional environment.
• Must have experience in the Data Engineering domain.
• Must have implemented at least 2 projects end-to-end in Databricks.
• Must have experience with Databricks components such as: Delta Lake, dbConnect, db API 2.0, and Databricks Workflows orchestration.
• Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of how to create complex data pipelines.
• Must have good knowledge of data structures & algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on any cloud (Azure, AWS, GCP) and the most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit test cases and integration tests.
• Must have strong communication skills and have worked in a team of size 5 plus.
• Must have a great attitude towards learning new skills and upskilling existing skills.
Preferred Qualifications
• Good to have Unity Catalog and basic governance knowledge.
• Good to have Databricks SQL Endpoint understanding.
• Good to have CI/CD experience to build pipelines for Databricks jobs.
• Good to have worked on a migration project to build a unified data platform.
• Good to have knowledge of DBT.
• Good to have knowledge of Docker and Kubernetes.
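The near-real-time Kafka integration described above could look roughly like the following Structured Streaming sketch. The broker, topic, checkpoint path, and target table are placeholders.

```python
# Kafka -> Delta streaming ingestion sketch; connection details are invented.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "policy-events")
    .option("startingOffsets", "latest")
    .load()
    # Kafka delivers binary key/value; cast to strings for downstream parsing.
    .select(F.col("key").cast("string"),
            F.col("value").cast("string"),
            F.col("timestamp"))
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/policy_events")
    .outputMode("append")
    .toTable("bronze.policy_events")   # lands the stream in a Delta table
)
```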
Posted 3 months ago
7.0 - 12.0 years
15 - 22 Lacs
Bengaluru
Hybrid
Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
• Design and implement ETL/ELT pipelines using Databricks and PySpark.
• Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
• Develop high-performance SQL queries and optimize Spark jobs.
• Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
• Ensure data quality and compliance across all stages of the data lifecycle.
• Implement best practices for data security and lineage within the Databricks ecosystem.
• Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
• Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
• Strong hands-on skills with PySpark and Spark SQL.
• Solid experience writing and optimizing complex SQL queries.
• Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
• Experience with cloud platforms like Azure or AWS.
• Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
• Databricks Certified Data Engineer Associate or Professional.
• Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
• Exposure to streaming data and real-time processing.
• Knowledge of DevOps practices for data engineering.
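As a hedged example of the Unity Catalog permission and auditing work described above: granting read access to a group and inspecting privilege metadata. Catalog, schema, table, and group names are invented, and syntax details may vary by workspace configuration.

```python
# Unity Catalog access-control sketch; all object names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Grants are made on catalog objects, not on clusters.
spark.sql("GRANT USE CATALOG ON CATALOG main TO `data_analysts`")
spark.sql("GRANT USE SCHEMA ON SCHEMA main.silver TO `data_analysts`")
spark.sql("GRANT SELECT ON TABLE main.silver.claims TO `data_analysts`")

# Auditing: each catalog exposes privilege metadata via information_schema.
grants = spark.sql("""
    SELECT grantee, privilege_type
    FROM main.information_schema.table_privileges
    WHERE table_schema = 'silver' AND table_name = 'claims'
""")
grants.show()
```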
Posted 3 months ago
5.0 - 10.0 years
14 - 24 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Greetings from LTIMindtree!
Job Description
Notice Period: 0 to 30 days only
Experience: 5 to 12 years
Interview Mode: 2 rounds (one round is F2F)
Hybrid (2-3 days WFO)
Brief Description of Role
Job Summary: We are seeking an experienced and strategic Data Architect to design, build, and optimize scalable, secure, and high-performance data solutions. You will play a pivotal role in shaping our data infrastructure, working with technologies such as Databricks, Azure Data Factory, Unity Catalog, and Spark, while aligning with best practices in data governance, pipeline automation, and performance optimization.
Key Responsibilities:
• Design and develop scalable data pipelines using Databricks and Medallion Architecture (Bronze, Silver, Gold layers).
• Architect and implement data governance frameworks using Unity Catalog and related tools.
• Write efficient PySpark and SQL code for data transformation, cleansing, and enrichment.
• Build and manage data workflows in Azure Data Factory (ADF), including triggers, linked services, and integration runtimes.
• Optimize queries and data structures for performance and cost-efficiency.
• Develop and maintain CI/CD pipelines using GitHub for automated deployment and version control.
• Collaborate with cross-functional teams to define data strategies and drive data quality initiatives.
• Implement best practices for DevOps, CI/CD, and infrastructure-as-code in data engineering.
• Troubleshoot and resolve performance bottlenecks across Spark, ADF, and Databricks pipelines.
• Maintain comprehensive documentation of architecture, processes, and workflows.
Requirements:
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• Proven experience as a Data Architect or Senior Data Engineer.
• Strong knowledge of Databricks, Azure Data Factory, Spark (PySpark), and SQL.
• Hands-on experience with data governance, security frameworks, and catalog management.
• Proficiency in cloud platforms (preferably Azure).
• Experience with CI/CD tools and version control systems like GitHub.
• Strong communication and collaboration skills.
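A minimal sketch of one Medallion hop (Bronze to Silver) of the kind referenced above: read raw Delta data, deduplicate and cleanse, and write the curated layer. Table names, columns, and the dedup logic are assumptions for illustration.

```python
# Bronze -> Silver transformation sketch; schema and table names are invented.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.getOrCreate()

bronze = spark.read.table("bronze.customer_events")

# Keep the latest record per customer and standardize fields.
latest = Window.partitionBy("customer_id").orderBy(F.col("event_ts").desc())
silver = (
    bronze.withColumn("rn", F.row_number().over(latest))
    .filter("rn = 1")
    .drop("rn")
    .withColumn("email", F.lower(F.trim("email")))
)

silver.write.format("delta").mode("overwrite").saveAsTable("silver.customers")
```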
Posted 3 months ago
6.0 - 11.0 years
18 - 33 Lacs
Bengaluru
Remote
Role & responsibilities
Mandatory skills: ADB and Unity Catalog
Job Summary: We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.
Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.
Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.
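A hedged sketch of the Delta Lake performance and cost tuning mentioned above: file compaction, Z-ordering on a frequent filter column, vacuuming, and a history check. The table name, Z-order column, and retention window are placeholders.

```python
# Delta Lake maintenance sketch; table and column names are invented.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Compact small files and co-locate data for selective queries.
spark.sql("OPTIMIZE silver.claims ZORDER BY (claim_date)")

# Remove files no longer referenced by the table (7-day retention shown).
spark.sql("VACUUM silver.claims RETAIN 168 HOURS")

# Inspect table history for auditing and troubleshooting.
spark.sql("DESCRIBE HISTORY silver.claims").show(truncate=False)
```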
Posted 3 months ago
5.0 - 8.0 years
6 - 24 Lacs
Hyderabad
Work from Office
Notice: 30 to 45 days. * Design, develop & maintain data pipelines using PySpark, Databricks, Unity Catalog & cloud. * Collaborate with cross-functional teams on ETL processes & report development. Share resume: garima.arora@anetcorp.com
Posted 3 months ago
12.0 - 20.0 years
22 - 37 Lacs
Bengaluru
Hybrid
12+ yrs of experience in Data Architecture. Strong in Azure Data Services & Databricks, including Delta Lake & Unity Catalog. Experience in Azure Synapse, Purview, ADF, DBT, Apache Spark, DWH, Data Lakes, NoSQL, OLTP. NP: Immediate. sachin@assertivebs.com
Posted 3 months ago
8.0 - 13.0 years
10 - 20 Lacs
Hyderabad, Pune
Work from Office
Job Title: Databricks Administrator
Client: Wipro
Employer: Advent Global Solutions
Location: Hyderabad / Pune
Work Mode: Hybrid
Experience: 8+ years (8 years relevant in Databricks Administration)
CTC: 22.8 LPA
Notice Period: Immediate joiners to 15 days
Shift: General shift
Education Preferred: B.Tech / M.Tech / MCA / B.Sc (Computer Science)
Keywords:
• Databricks Administration
• Unity Catalog
• Cluster creation, tuning & administration
• RBAC in Unity Catalog
• Cloud administration, preferably in GCP; AWS/Azure knowledge is also acceptable
• Roughly 80% Databricks and 20% cloud administration
Mandatory Skills: Databricks Admin on GCP/AWS
Job Description:
• Responsibilities will include designing, implementing, and maintaining the Databricks platform, and providing operational support. Operational support responsibilities include platform set-up and configuration, workspace administration, resource monitoring, providing technical support to data engineering, Data Science/ML, and Application/Integration teams, performing restores/recoveries, troubleshooting service issues, determining the root causes of issues, and resolving issues.
• The position will also involve the management of security and changes.
• The position will work closely with the Team Lead, other Databricks Administrators, System Administrators, and Data Engineers/Scientists/Architects/Modelers/Analysts.
Responsibilities:
• Responsible for the administration, configuration, and optimization of the Databricks platform to enable data analytics, machine learning, and data engineering activities within the organization.
• Collaborate with the data engineering team to ingest, transform, and orchestrate data.
• Manage privileges over the entire Databricks account, as well as at the workspace, Unity Catalog, and SQL warehouse levels.
• Create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions.
• Install, configure, and maintain Databricks clusters and workspaces.
• Maintain platform currency with security, compliance, and patching best practices.
• Monitor and manage cluster performance, resource utilization, and platform costs, and troubleshoot issues to ensure optimal performance.
• Implement and manage access controls and security policies to protect sensitive data.
• Manage schema data with Unity Catalog - create and configure catalogs, external storage, and access permissions.
• Administer interfaces with Google Cloud Platform.
Required Skills:
• 3+ years of production support of the Databricks platform
Preferred:
• 2+ years of experience in AWS/Azure/GCP PaaS administration
• 2+ years of experience in automation frameworks such as Terraform
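Purely as an illustration of programmatic workspace administration (not part of this posting), below is a sketch using the databricks-sdk Python package, assuming authentication via environment variables or a .databrickscfg profile. The cluster settings, node type, and names are placeholders.

```python
# Hedged admin sketch with the Databricks Python SDK; values are placeholders.
from databricks.sdk import WorkspaceClient

w = WorkspaceClient()  # picks up host/token from env vars or a config profile

# Create a small, auto-terminating shared cluster.
cluster = w.clusters.create(
    cluster_name="etl-shared",
    spark_version="14.3.x-scala2.12",
    node_type_id="n2-standard-4",   # GCP node type; use an AWS/Azure SKU as needed
    num_workers=2,
    autotermination_minutes=30,
).result()  # blocks until the cluster reaches a running state

# List clusters to monitor utilization and state.
for c in w.clusters.list():
    print(c.cluster_name, c.state)
```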
Posted 3 months ago
5 - 10 years
16 - 27 Lacs
Pune, Chennai, Bengaluru
Hybrid
If interested, please share the following details to PriyaM4@hexaware.com: Total Exp, CTC, ECTC, NP, Location.
MUST-have skill: Unity Catalog
We are looking for a skilled Sr. Data Engineer with expertise in Databricks and Unity Catalog to design, implement, and manage scalable data solutions.
Key Responsibilities:
• Design and implement scalable data pipelines and ETL workflows using Databricks.
• Implement Unity Catalog for data governance, access control, and metadata management across multiple workspaces.
• Develop Delta Lake architectures for optimized data storage and retrieval.
• Establish best practices for data security, compliance, and lineage tracking in Unity Catalog.
• Optimize data lakehouse architecture for performance and cost efficiency.
• Collaborate with data scientists, engineers, and business teams to support analytical workloads.
• Monitor and troubleshoot Databricks clusters, performance tuning, and cost management.
• Implement data quality frameworks and observability solutions to maintain high data integrity.
• Work with Azure/AWS/GCP cloud environments to deploy and manage data solutions.
Required Skills & Qualifications:
• 8-19 years of experience in data engineering, data architecture, or cloud data solutions.
• Strong hands-on experience with Databricks and Unity Catalog.
• Expertise in PySpark, Scala, or SQL for data processing.
• Deep understanding of Delta Lake, Lakehouse architecture, and data partitioning strategies.
• Experience with RBAC, ABAC, and access control mechanisms within Unity Catalog.
• Knowledge of data governance, compliance standards (GDPR, HIPAA, etc.), and audit logging.
• Familiarity with cloud platforms (Azure, AWS, or GCP) and their respective data services.
• Strong understanding of CI/CD pipelines, DevOps, and Infrastructure as Code (IaC).
• Experience integrating BI tools (Tableau, Power BI, Looker) and ML frameworks is a plus.
• Excellent problem-solving, communication, and collaboration skills.
Posted 4 months ago
8 - 10 years
11 - 21 Lacs
Noida, Mumbai (All Areas)
Work from Office
As the Full Stack Developer within the Data and Analytics team, you will be responsible for delivery of innovative data and analytics solutions, ensuring Al Futtaim Business stays at the forefront of technical development.
Posted 4 months ago
6 - 9 years
15 - 25 Lacs
Pune, Chennai, Bengaluru
Hybrid
Sharing the JD for your reference. Experience: 6-10+ yrs. Primary skill set: Azure Databricks, ADF, SQL, Unity Catalog, PySpark/Python. Kindly share the following details: Updated CV, Relevant Skills, Total Experience, Current CTC, Expected CTC, Notice Period, Current Location, Preferred Location
Posted 4 months ago
8 - 12 years
13 - 18 Lacs
Bengaluru
Work from Office
Role & responsibilities
Job Summary: We are seeking a highly skilled and motivated Data Governance Executor to join our team. The ideal candidate will be responsible for implementing data governance frameworks, with a focus on data governance solutions using Unity Catalog and Azure Purview. This role will ensure the implementation of data quality standardization, data classification, and execution of data governance policies.
Key Responsibilities:
Data Governance Solution Implementation:
• Develop and implement data governance policies and procedures using Unity Catalog and Azure Purview.
• Ensure data governance frameworks align with business objectives and regulatory requirements.
Data Catalog Management:
• Manage and maintain the Unity Catalog, ensuring accurate and up-to-date metadata.
• Oversee the classification and organization of data assets within Azure Purview.
Data Quality Assurance:
• Implement data quality standards with Data Engineers and perform regular audits to ensure data accuracy and integrity.
• Collaborate with data stewards to resolve data quality issues.
Stakeholder Collaboration:
• Work closely with data owners, stewards, and business stakeholders to understand data needs and requirements.
• Provide training and support to ensure effective use of data governance tools.
Reporting and Documentation:
• Generate reports on data governance metrics and performance.
• Maintain comprehensive documentation of data governance processes and policies.
Qualifications:
Education: Bachelor's degree in Computer Science, Information Systems, or a related field. Master's degree preferred.
Experience: Proven experience in data governance, data management, or related roles. 2+ years of hands-on experience with Unity Catalog and Azure Purview.
Skills:
• Strong understanding of data governance principles and best practices.
• Proficiency in data cataloging, metadata management, and data quality assurance.
• Excellent analytical, problem-solving, and communication skills.
• Ability to work collaboratively with cross-functional teams.
Preferred Qualifications:
• Certification in data governance or related fields.
• Experience with other data governance tools and platforms.
• Knowledge of cloud data platforms and services.
Posted 4 months ago
8 - 12 years
20 - 30 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Role & responsibilities
As a Cloud Technical Lead - Data, you will get to:
• Build and maintain data pipelines to enable faster, better, data-informed decision-making through customer enterprise business analytics
• Collaborate with stakeholders to understand their strategic objectives and identify opportunities to leverage data and data quality
• Design, develop and maintain large-scale data solutions on the Azure cloud platform
• Implement ETL pipelines using Azure Data Factory, Azure Databricks, and other related services
• Develop and deploy data models and data warehousing solutions using Azure Synapse Analytics and Azure SQL Database
• Build and optimize performant, robust, and resilient data storage solutions using Azure Blob Storage, Azure Data Lake, Snowflake, and other related services
• Develop and implement data security policies to ensure compliance with industry standards
• Provide support for data-related issues, and mentor junior data engineers in the team
• Define and manage data governance policies to ensure data quality and compliance with industry standards
• Collaborate with data architects, data scientists, developers, and business stakeholders to design data solutions that meet business requirements
• Coordinate with users to understand data needs and deliver data with a focus on data quality, data reuse, consistency, security, and regulatory compliance
• Conceptualize and visualize data frameworks
Preferred candidate profile
• Bachelor's degree in Computer Science, Information Technology, or a related field
• 8+ years of experience in data engineering, with 3+ years of hands-on Databricks (DB) experience
• Strong expertise in the Microsoft Azure cloud platform and services, particularly Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database
• Extensive experience working with large data sets, with hands-on technology skills to design and build robust data architecture, data modeling, and database design
• Strong programming skills in SQL, Python and PySpark
• Experience in Unity Catalog & DBT and data governance knowledge
• Good to have experience in Snowflake utilities such as SnowSQL, SnowPipe, Tasks, Streams, Time Travel, Optimizer, Metadata Manager, data sharing, and stored procedures
• Agile development environment experience, applying DevOps along with data quality and governance principles
• Good leadership skills to guide and mentor the work of less experienced personnel
• Ability to contribute to continual improvement by suggesting improvements to architecture or new technologies, mentoring junior employees, and being ready to shoulder ad-hoc work
• Experience with cross-team collaboration and interpersonal/relationship-building skills
• Ability to effectively communicate through presentation, interpersonal, verbal, and written skills
Posted 4 months ago
7.0 - 12.0 years
19 - 34 Lacs
bengaluru
Work from Office
Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically in Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.
Key Responsibilities:
• Design and implement ETL/ELT pipelines using Databricks and PySpark.
• Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
• Develop high-performance SQL queries and optimize Spark jobs.
• Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
• Ensure data quality and compliance across all stages of the data lifecycle.
• Implement best practices for data security and lineage within the Databricks ecosystem.
• Participate in CI/CD, version control, and testing practices for data pipelines.
Required Skills:
• Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
• Strong hands-on skills with PySpark and Spark SQL.
• Solid experience writing and optimizing complex SQL queries.
• Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
• Experience with cloud platforms like Azure or AWS.
• Understanding of data governance, RBAC, and data security standards.
Preferred Qualifications:
• Databricks Certified Data Engineer Associate or Professional.
• Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
• Exposure to streaming data and real-time processing.
• Knowledge of DevOps practices for data engineering.
Posted Date not available
4.0 - 9.0 years
6 - 16 Lacs
hyderabad, pune, bengaluru
Work from Office
Overview: As a Data Engineer, you will work with multiple teams to deliver solutions on the AWS Cloud using core cloud data engineering tools such as Databricks on AWS, AWS Glue, Amazon Redshift, Athena, and other Big Data-related technologies. This role focuses on building the next generation of application-level data platforms and improving recent implementations. Hands-on experience with Apache Spark (PySpark, SparkSQL), Delta Lake, Iceberg, and Databricks is essential.
Responsibilities:
• Design and develop data lakes; manage data flows that integrate information from various sources into a common data lake platform through an ETL tool
• Code and manage delta lake implementations on S3 using technologies like Databricks or Apache Hudi
• Triage, debug and fix technical issues related to data lakes
• Design and develop data warehouses for scale
• Design and evaluate data models (Star, Snowflake and Flattened)
• Design data access patterns for OLTP and OLAP based transactions
• Coordinate with business and technical teams through all phases of the software development life cycle
• Participate in making major technical and architectural decisions
• Maintain and manage code repositories like Git
Must Have:
• 5+ years of experience operating on the AWS Cloud, building Data Lake architectures
• 3+ years of experience with AWS data services like S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS and Redshift
• 3+ years of experience building data warehouses on Snowflake, Redshift, HANA, Teradata, Exasol etc.
• 3+ years of working knowledge in Spark
• 3+ years of experience in building Delta Lakes using technologies like Apache Hudi or Databricks
• 3+ years of experience working on any ETL tools and technologies
• 3+ years of experience in any programming language (Python, R, Scala, Java)
• Bachelor's degree in computer science, information technology, data science, data analytics or related field
• Experience working on Agile projects and Agile methodology in general
Good To Have:
• Strong understanding of RDBMS principles and advanced data modelling techniques.
• AWS cloud certification (e.g., AWS Certified Data Analytics - Specialty) is a strong plus.
Key Skills:
• Languages: Python, SQL, PySpark
• Big Data Tools: Apache Spark, Databricks, Apache Hudi
• Databricks on AWS
• AWS Services: S3, Glue, Lake Formation, EMR, Kinesis, RDS, DMS, Redshift
• Data warehouses: Snowflake, Redshift, HANA, Teradata, Exasol
• Data Modelling: Star Schema, Snowflake Schema, Flattened Models
• DevOps & CI/CD: Git, Agile Methodology, ETL Methodology
Posted Date not available
4.0 - 8.0 years
8 - 16 Lacs
kolkata, hyderabad, bengaluru
Hybrid
Role: Sr. Databricks Developer
Description - External
With a startup spirit and 100,000+ curious and courageous minds, we have the expertise to go deep with the world's biggest brands, and we have fun doing it. Now, we're calling all you rule-breakers and risk-takers who see the world differently and are bold enough to reinvent it.
Responsibilities
• Work closely with the Architect and Lead to design solutions that meet functional and non-functional requirements.
• Participate to understand architecture and solution design artifacts.
• Evangelize re-use through the implementation of shared assets.
• Proactively implement engineering methodologies, standards, and leading practices.
• Provide insight and direction on roles and responsibilities required for solution operations.
• Identify, communicate and mitigate Risks, Assumptions, Issues, and Decisions throughout the full lifecycle.
• Consider the art of the possible, compare various solution options based on feasibility and impact, and propose actionable plans.
• Demonstrate strong analytical and technical problem-solving skills.
• Ability to analyze and operate at various levels of abstraction.
• Ability to balance what is strategically right with what is practically realistic.
Qualifications we seek in you!
Minimum qualifications
• Excellent technical skills to enable the creation of future-proof, complex global solutions.
• Bachelor's Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
• Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
• Work with the architect and leads on solutioning to meet functional and non-functional requirements.
• Demonstrated knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have excellent coding skills in either Python or Scala, preferably Python.
• Must have at least 5+ years of experience in the Data Engineering domain, with 7+ years in total.
• Must have implemented at least 2 projects end-to-end in Databricks.
• Must have at least 2+ years of experience on Databricks, covering components such as: Delta Lake, dbConnect, db API 2.0, and Databricks Workflows orchestration.
• Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a strong understanding of data warehousing and of the governance and security standards around Databricks.
• Must have knowledge of cluster optimization and its integration with various cloud services.
• Must have a good understanding of how to create complex data pipelines.
• Must have good knowledge of data structures & algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on any cloud (Azure, AWS, GCP) and the most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit test cases and integration tests.
• Must have strong communication skills and have worked in a team of size 5 plus.
• Must have a great attitude towards learning new skills and upskilling existing skills.
Posted Date not available
10.0 - 19.0 years
40 - 50 Lacs
hyderabad, bengaluru
Hybrid
Role & responsibilities
As a trusted advisor, you'll forge strong relationships with our customers, gaining deep insights into their business objectives and long-term aspirations. Armed with this understanding, you'll provide expert guidance on how technology can be leveraged to propel them towards unprecedented success. From aligning processes to technology to developing and deploying bespoke solutions, you'll be the visionary architect behind their digital transformation journey. Your expertise in application development and deployment best practices will ensure seamless integration and optimized performance.
• Data integration from SAP and non-SAP sources to SAP Business Data Cloud.
• Create complex SAC stories using multiple sources.
• Integrate Power BI with BDC for analytical capabilities.
• Handle complex transformation requirements using SQL and Python in BDC environments.
• Create notebooks in the Databricks platform for data integration and transformation scenarios.
In this role, your impact will be immense. You'll conduct thorough needs assessments, uncovering the requirements for new applications or upgrades to existing ones, and document these specifications with utmost precision using cutting-edge Business Analysis (BA) methodologies. Whether it's crafting comprehensive use cases, tracing requirements meticulously, or visualizing process flows, your attention to detail will be unmatched. Drawing upon your exceptional analytical prowess, you'll gather requirements from stakeholders and masterfully translate them into functional and nonfunctional specifications. As the driving force behind our customers' application modernization endeavors, you'll guide them through every step of the process, ensuring their systems are future-proofed and optimized for success. Your expertise will also come into play as you assist customers in selecting and customizing the perfect packaged solutions to fulfill their unique business needs.
Preferred candidate profile
• Minimum 3 years of experience in data integration from SAP and non-SAP sources to Business Data Cloud/Datasphere
• Minimum 5 years of experience in SAP Analytics Cloud
• Minimum 1 year of experience in Databricks Unity Catalog
• Premium outbound integrations with hyperscaler and ISV platforms
• Creation of standard and API-based CDS views in S/4HANA Private and Public Cloud
• Integration of external SaaS / web-based applications with BDC
• SAP Analytics planning experience
• SAP Datasphere and SAC security experience
• SAP BW, BW/4HANA experience
Posted Date not available
5.0 - 10.0 years
15 - 25 Lacs
hyderabad/secunderabad, bangalore/bengaluru, delhi / ncr
Hybrid
Ready to build the future with AI?
At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.
Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.
Inviting applications for the role of Principal Consultant - Databricks Developer AWS! In this role, the Databricks Developer is responsible for solving real-world, cutting-edge problems to meet both functional and non-functional requirements.
Responsibilities
• Maintain close awareness of new and emerging technologies and their potential application for service offerings and products.
• Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
• Demonstrated knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have experience in the Data Engineering domain.
Qualifications we seek in you!
Minimum qualifications
• Bachelor's Degree or equivalency (CS, CE, CIS, IS, MIS, or engineering discipline) or equivalent work experience.
• Maintains close awareness of new and emerging technologies and their potential application for service offerings and products.
• Work with architects and lead engineers on solutions that meet functional and non-functional requirements.
• Demonstrated knowledge of relevant industry trends and standards.
• Demonstrate strong analytical and technical problem-solving skills.
• Must have excellent coding skills in either Python or Scala, preferably Python.
• Must have experience in the Data Engineering domain.
• Must have implemented at least 2 projects end-to-end in Databricks.
• Must have experience with Databricks components such as: Delta Lake, dbConnect, db API 2.0, and Databricks Workflows orchestration.
• Must be well versed with the Databricks Lakehouse concept and its implementation in enterprise environments.
• Must have a good understanding of how to create complex data pipelines.
• Must have good knowledge of data structures & algorithms.
• Must be strong in SQL and Spark SQL.
• Must have strong performance optimization skills to improve efficiency and reduce cost.
• Must have worked on both batch and streaming data pipelines.
• Must have extensive knowledge of the Spark and Hive data processing frameworks.
• Must have worked on any cloud (Azure, AWS, GCP) and the most common services like ADLS/S3, ADF/Lambda, CosmosDB/DynamoDB, ASB/SQS, and cloud databases.
• Must be strong in writing unit test cases and integration tests.
• Must have strong communication skills and have worked in a team of size 5 plus.
• Must have a great attitude towards learning new skills and upskilling existing skills.
Preferred Qualifications
• Good to have Unity Catalog and basic governance knowledge.
• Good to have Databricks SQL Endpoint understanding.
• Good to have CI/CD experience to build pipelines for Databricks jobs.
• Good to have worked on a migration project to build a unified data platform.
• Good to have knowledge of DBT.
• Good to have knowledge of Docker and Kubernetes.
Why join Genpact?
• Lead AI-first transformation - Build and scale AI solutions that redefine industries.
• Make an impact - Drive change for global enterprises and solve business challenges that matter.
• Accelerate your career - Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
• Grow with the best - Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
• Committed to ethical AI - Work in an environment where governance, transparency, and security are at the core of everything we build.
• Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.
Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
Posted Date not available
5.0 - 10.0 years
9 - 19 Lacs
pune, chennai, bengaluru
Work from Office
Exp: 4-10 years. Location: Pan India. Notice: Immediate to 15 days.
Required Skills & Qualifications:
• Proven experience with Databricks, including Unity Catalog in production environments.
• Strong understanding of data governance, security, and compliance frameworks.
• Hands-on expertise in PySpark, Parquet, and data engineering tools.
• Experience with Delta Lake, Delta Sharing, and data quality frameworks.
• Familiarity with cloud platforms (AWS, Azure, or GCP).
• Excellent problem-solving and communication skills.
• Ability to work independently and in a collaborative team environment.
Posted Date not available
4.0 - 8.0 years
7 - 17 Lacs
hyderabad
Work from Office
Position Overview: We are seeking a self-driven Data Engineer with 4-7 years of experience to build and optimize scalable ETL/ELT pipelines using Azure Databricks, PySpark, and Delta Lake. The role involves working across scrum teams to develop data solutions, ensure data governance with Unity Catalog, and support real-time and batch processing. Strong problem-solving skills, T-SQL expertise, and hands-on experience with Azure cloud tools are essential. Healthcare domain knowledge is a plus.
Job Description:
• Work with different scrum teams to develop all the quality database programming requirements of the sprint.
• Experience with Azure cloud platform services, including advanced Python programming, Databricks, Azure SQL, Data Factory (ADF), Data Lake, Data Storage, and SSIS.
• Create and deploy scalable ETL/ELT pipelines with Azure Databricks by utilizing PySpark and SQL.
• Create Delta Lake tables with ACID transactions and schema evolution to support real-time and batch processing.
• Experience in Unity Catalog for centralized data governance, access control, and data lineage tracking.
• Independently analyse, solve, and correct issues in real time, providing end-to-end problem resolution.
• Develop unit tests so they can be run automatically.
• Use SOLID development principles to maintain data integrity and cohesiveness.
• Interact with the product owner and business representatives to determine and satisfy needs.
• Sense of ownership and pride in your performance and its impact on the company's success.
• Critical thinker with problem-solving skills.
• Team player with good time-management skills.
• Great interpersonal and communication skills.
Mandatory Qualifications:
• 4-7 years of experience as a Data Engineer.
• Self-driven with minimal supervision.
• Proven experience with T-SQL programming, Azure Databricks, Spark (PySpark/Scala), Delta Lake, Unity Catalog, ADLS Gen2.
• Microsoft TFS, Visual Studio, DevOps exposure.
• Experience with cloud platforms such as Azure (or equivalent).
• Analytical, problem-solving mindset.
Preferred Qualifications:
• Healthcare domain knowledge
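As a short illustration of the Delta Lake behaviors called out above (ACID appends with schema evolution, plus time travel), here is a hedged sketch; the landing path, table name, and version number are assumptions.

```python
# Delta Lake append with schema evolution and a time-travel read.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

new_batch = spark.read.json(
    "abfss://landing@examplelake.dfs.core.windows.net/claims/2024-06-01/"
)

# mergeSchema lets compatible new columns evolve the target table's schema
# while the write itself remains a single ACID transaction.
(new_batch.write.format("delta")
    .mode("append")
    .option("mergeSchema", "true")
    .saveAsTable("silver.claims"))

# Time travel: query the table as of a previous version for audits/repro.
previous = spark.sql("SELECT * FROM silver.claims VERSION AS OF 10")
print(previous.count())
```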
Posted Date not available