
20 Databricks Engineer Jobs

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

6.0 - 10.0 years

7 - 14 Lacs

Bengaluru

Hybrid

Naukri logo

Roles and Responsibilities
- Architect and implement an effective data framework enabling end-to-end data solutions.
- Understand business needs, use cases and drivers for insights, and translate them into detailed technical specifications.
- Create epics, features and user stories with clear acceptance criteria for execution and delivery by the data engineering team.
- Create scalable and robust data solution designs that incorporate governance, security and compliance aspects.
- Develop and maintain logical and physical data models, working closely with data engineers, data analysts and data testers on their implementation.
- Analyze, assess and design data integration strategies across various sources and platforms.
- Create project plans and timelines while monitoring and mitigating risks and controlling progress of the project.
- Conduct daily scrums with the team, with a clear focus on meeting sprint goals and timely resolution of impediments.
- Act as a liaison between technical teams and business stakeholders, ensuring alignment between the two.
- Guide and mentor the team on best practices for data solutions and delivery frameworks.
- Actively support stakeholders/clients through User Acceptance Testing and ensure strong adoption of the data products after launch.
- Define and measure KPIs/KRAs for features, verifying the data roadmap through measurable outcomes.

Prerequisites
- 5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years in a Data Architect role.
- Proven hands-on experience building pipelines for data lakes, data lakehouses, data warehouses and data visualization solutions.
- Sound understanding of modern data technologies such as Databricks, Snowflake, Data Mesh and Data Fabric.
- Experience managing the data life cycle in a fast-paced Agile/Scrum environment.
- Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas clearly and concisely to technical and non-technical audiences.
- Ability to collaborate effectively with cross-functional teams, project stakeholders and end users to produce quality deliverables within stipulated timelines.
- Ability to manage, coach and mentor a team of data engineers, data testers and data analysts.
- Strong process driver with Agile/Scrum expertise on tools such as Azure DevOps, Jira or Confluence.
- Exposure to Machine Learning, Gen AI and modern AI-based solutions.

Experience: Technical Lead, Data Analytics, with 6+ years of overall experience, of which 2+ years is in data architecture.
Education: Engineering degree from a Tier 1 institute preferred.
Compensation: As per industry standards.

Posted 1 week ago

Apply

5.0 - 7.0 years

1 - 1 Lacs

Hyderabad

Remote


Databricks and Informatica Intelligent Data Management Cloud (IDMC) consultant

Posted 1 week ago

Apply

7.0 - 12.0 years

15 - 22 Lacs

Bengaluru

Hybrid


Job Summary: We are seeking a talented Data Engineer with strong expertise in Databricks, specifically Unity Catalog, PySpark, and SQL, to join our data team. You'll play a key role in building secure, scalable data pipelines and implementing robust data governance strategies using Unity Catalog.

Key Responsibilities
- Design and implement ETL/ELT pipelines using Databricks and PySpark.
- Work with Unity Catalog to manage data governance, access controls, lineage, and auditing across data assets.
- Develop high-performance SQL queries and optimize Spark jobs.
- Collaborate with data scientists, analysts, and business stakeholders to understand data needs.
- Ensure data quality and compliance across all stages of the data lifecycle.
- Implement best practices for data security and lineage within the Databricks ecosystem.
- Participate in CI/CD, version control, and testing practices for data pipelines.

Required Skills
- Proven experience with Databricks and Unity Catalog (data permissions, lineage, audits).
- Strong hands-on skills with PySpark and Spark SQL.
- Solid experience writing and optimizing complex SQL queries.
- Familiarity with Delta Lake, data lakehouse architecture, and data partitioning.
- Experience with cloud platforms such as Azure or AWS.
- Understanding of data governance, RBAC, and data security standards.

Preferred Qualifications
- Databricks Certified Data Engineer Associate or Professional.
- Experience with tools like Airflow, Git, Azure Data Factory, or dbt.
- Exposure to streaming data and real-time processing.
- Knowledge of DevOps practices for data engineering.
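The MERGE-style upsert behavior that Delta Lake pipelines like this rely on can be sketched in plain Python. This is a hypothetical illustration of the semantics only, not the Databricks API; the `merge_upsert` helper and row format are made up for the example:

```python
def merge_upsert(target, updates, key):
    """Merge `updates` into `target` on `key`: matched rows are
    overwritten, unmatched rows are inserted (Delta MERGE semantics)."""
    merged = {row[key]: row for row in target}   # index target rows by key
    for row in updates:
        merged[row[key]] = row                   # update if matched, else insert
    return sorted(merged.values(), key=lambda r: r[key])

target = [{"id": 1, "qty": 10}, {"id": 2, "qty": 5}]
updates = [{"id": 2, "qty": 7}, {"id": 3, "qty": 1}]
result = merge_upsert(target, updates, "id")
# result: [{"id": 1, "qty": 10}, {"id": 2, "qty": 7}, {"id": 3, "qty": 1}]
```

In Databricks itself the same outcome is expressed declaratively (`MERGE INTO ... WHEN MATCHED THEN UPDATE ... WHEN NOT MATCHED THEN INSERT`), with the engine handling distribution and transactionality.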

Posted 1 week ago

Apply

9.0 - 14.0 years

8 - 18 Lacs

Bhopal, Hyderabad, Pune

Hybrid


Urgent Opening: Data Architect
Experience: Min 9 yrs | Salary: As per industry | Notice Period: Immediate joiners preferred
Skills: Databricks, ADF, Python, Synapse

Job Description
Work you'll do: This role is responsible for data architecture and support in a fast-paced, cross-cultural, diverse environment leveraging the Agile methodology. It requires a solid understanding of translating business requirements into data models and the underlying data structures that support a data architecture's design. You will work closely with product owners and a cross-functional team comprising business analysts, software engineers, functional and non-functional testers, operations engineers and project management.

Key Responsibilities
- Collaborate with product managers, designers, and fellow developers to design, develop, and maintain web-based applications and software solutions.
- Write clean, maintainable, and efficient code, adhering to coding standards and best practices.
- Perform code reviews to ensure code quality and consistency.
- Troubleshoot and debug software defects and issues, providing timely solutions.
- Participate in the entire software development lifecycle, from requirements gathering to deployment and support.
- Stay up to date with industry trends and emerging technologies, incorporating them into the development process when appropriate.
- Mentor junior developers and provide technical guidance when needed.

Qualifications and Technical Skills
- Bachelor's degree in Computer Science, Software Engineering, or a related field.
- Strong experience in data engineering; well versed in data architecture.
- 10+ years of professional experience in data engineering and architecture.
- Advanced understanding of data modelling and design.
- Strong database management and design; SQL Server preferred.
- Strong understanding of Databricks and Azure Synapse.
- Understanding of data pipeline/ETL frameworks and libraries.
- Experience with Azure Cloud Components (PaaS) and DevOps is required.
- Experience working in Agile and SAFe development processes.
- Excellent problem-solving and analytical skills.

Other Skills
- Strong organizational and communication skills.
- Flexibility, energy and the ability to work well with others in a team environment.
- The ability to effectively manage multiple assignments and responsibilities in a fast-paced environment.
- Expert problem solver, finding simple answers to complex questions or problems.
- Able to learn and upskill on new technologies.
- Drive for results; partner with product owners to deliver on short- and long-term milestones.
- Experience working with product owners and development teams to document and clarify business and user requirements, and to manage the scope of defined features and functions during the project lifecycle.
- Critical thinking: able to think outside the box, using knowledge gained through prior experience, education, and training to resolve issues and remove project barriers.
- Strong written and verbal communication skills, with the ability to present and to collaborate with business leaders.
- Experience interfacing with external software design and development vendors preferred.
- A team player who can deliver in a high-pressure, high-demand environment.

If interested, kindly share your resume at vidya.raskar@newvision-software.com.
Regards, Vidya

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Pune, Bengaluru

Hybrid


Hi All, we have a senior position for a Databricks expert.
Job Location: Pune and Bangalore (hybrid)
Perks: pick-up and drop provided
Note: Overall experience should be 7+ years; immediate joiners only.

Role & Responsibilities
- Data engineering: data pipeline development using Azure Databricks (5+ years).
- Optimizing data processing for performance, efficient resource utilization and execution time.
- Workflow orchestration (5+ years): using Databricks features such as Databricks SQL, Delta Lake, and Workflows to orchestrate and manage complex data workflows.
- Data modelling (5+ years).

Nice to have: knowledge of PySpark; good knowledge of data warehousing.

Posted 1 week ago

Apply

6.0 - 8.0 years

15 - 19 Lacs

Pune

Work from Office


Responsibilities: Design, build, and maintain scalable and efficient data pipelines using PySpark, Spark SQL, and optionally Scala. Develop and manage data solutions on the Databricks platform, utilizing Workspace, Jobs, Delta Live Tables (DLT), Repos, and Unity Catalog.

Posted 1 week ago

Apply

5.0 - 10.0 years

15 Lacs

Noida, Chennai, Bengaluru

Work from Office


Responsibilities
- Lead the design, development, and implementation of big data solutions using Apache Spark and Databricks.
- Architect and optimize data pipelines and workflows to process large volumes of data efficiently.
- Utilize Databricks features such as Delta Lake, Databricks SQL, and Databricks Workflows to enhance data processing and analytics capabilities.
- Collaborate with data engineers, data scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions.
- Implement best practices for data engineering, including data quality, data governance, and data security.
- Monitor and troubleshoot performance issues in Spark jobs and Databricks clusters.
- Mentor and guide junior engineers in the team, promoting a culture of continuous learning and improvement.
- Stay up to date with the latest advancements in Spark and Databricks technologies and incorporate them into the team's practices.
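The data partitioning these Spark roles keep mentioning comes down to hashing a key to choose a bucket, so that work can be spread across executors. A minimal stdlib-only sketch (the `hash_partition` helper is hypothetical; integer keys are used so the example is deterministic, since Python randomizes string hashing per process):

```python
def hash_partition(rows, key, num_partitions):
    """Assign each row to a partition by hashing its key, the same idea
    Spark's hash partitioner uses to distribute data across executors."""
    partitions = [[] for _ in range(num_partitions)]
    for row in rows:
        partitions[hash(row[key]) % num_partitions].append(row)
    return partitions

parts = hash_partition([{"id": i} for i in range(6)], "id", 3)
# parts[0]: [{"id": 0}, {"id": 3}]  (ids congruent to 0 mod 3)
```

Skewed keys land many rows in one bucket, which is why partition-key choice dominates Spark job performance tuning.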

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad

Work from Office


Job Description: In this vital role you will be responsible for the development and implementation of our data strategy. The ideal candidate possesses a strong blend of technical expertise and data-driven problem-solving skills. As a Data Engineer, you will play a crucial role in building and optimizing our data pipelines and platforms in a SAFe Agile product team.

Responsibilities
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Deliver data pipeline projects from development to deployment, managing timelines and risks.
- Ensure data quality and integrity through meticulous testing and monitoring.
- Leverage cloud platforms (AWS, Databricks) to build scalable and efficient data solutions.
- Work closely with the product team and key collaborators to understand data requirements.
- Adhere to data engineering industry standards.
- Develop in an Agile environment, comfortable with Agile terminology and ceremonies.
- Use code versioning with Git, code migration tools, and Jira.
- Stay up to date with the latest data technologies and trends.

What we expect of you
Basic Qualifications:
- Doctorate degree; OR Master's degree and 4 to 6 years of Information Systems experience; OR Bachelor's degree and 6 to 8 years of Information Systems experience; OR Diploma and 10 to 12 years of Information Systems experience.
- Demonstrated hands-on experience with cloud platforms (AWS, Azure, GCP).
- Proficiency in Python, PySpark, and SQL.
- Development knowledge of Databricks.
- Good analytical and problem-solving skills to address sophisticated data challenges.

Preferred Qualifications:
- Experience with data modeling and ETL orchestration technologies.
- Experience with software engineering best practices, including version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps.
- Familiarity with SQL/NoSQL databases.

Soft Skills:
- Skilled in breaking down problems, documenting problem statements, and estimating effort.
- Effective communication and interpersonal skills to collaborate with multi-functional teams.
- Excellent analytical and problem-solving skills; strong verbal and written communication skills.
- Ability to work successfully with global teams.
- High degree of initiative and self-motivation; team-oriented, with a focus on achieving team goals.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 - 3 Lacs

Hyderabad

Work from Office


Job Overview: We are seeking a skilled and proactive Machine Learning Engineer to join our smart manufacturing initiative. You will play a pivotal role in building data pipelines, developing ML models for defect prediction, and implementing closed-loop control systems to improve production quality.

Responsibilities
Data Engineering & Pipeline Support:
- Validate and ensure correct data flow from InfluxDB/CDL to Smart box/Databricks platforms.
- Collaborate with data scientists to support model development through accurate data provisioning.
- Provide ongoing support in resolving data pipeline issues and performing ad-hoc data extractions.

ML Model Development:
- Develop three distinct ML models to predict different types of defects using historical production data.
- Predict short-term outcomes (next 5 minutes) using techniques such as artificial sampling and dimensionality reduction.
- Ensure high model performance: accuracy of at least 95%, precision and recall of at least 80%.
- Extract and present feature importance to support model interpretability.

Closed-loop Control Architecture:
- Implement end-to-end ML-driven automation to proactively correct machine settings based on model predictions.
- Key architecture components: real-time data ingestion from PLCs via InfluxDB/CDL; model deployment and inference on Smart box; an output pipeline sharing actionable recommendations via PLC tags; and an automated cloud retraining pipeline triggered by model drift or recommendation deviations.

Qualifications
- Proven experience with real-time data streaming from industrial systems (PLCs, InfluxDB/CDL).
- Hands-on experience building and deploying ML models in production.
- Strong understanding of data preprocessing, dimensionality reduction, and synthetic data techniques.
- Familiarity with cloud-based retraining workflows and model performance monitoring.
- Experience in smart manufacturing or predictive maintenance is a plus.
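The precision and recall targets this role quotes are simple ratios over prediction counts. A minimal plain-Python sketch (the `precision_recall` helper is hypothetical and not tied to any framework; in practice sklearn.metrics would do this):

```python
def precision_recall(y_true, y_pred):
    """Precision = TP/(TP+FP), recall = TP/(TP+FN) for a binary
    defect predictor (1 = defect), as in the >= 80% targets above."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# 3 true defects, 3 flagged; 2 overlap -> precision 2/3, recall 2/3
p, r = precision_recall([1, 1, 1, 0, 0], [1, 1, 0, 1, 0])
```

Tracking both matters here because a high-accuracy model that never flags defects would be useless for closed-loop correction.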

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Chennai

Hybrid


The Operations Engineer will work in collaboration with, and under the direction of, the Manager of Data Engineering, Advanced Analytics to provide operational services, governance, and incident management solutions for the Analytics team. This includes modifying existing data ingestion workflows, releases to QA and Prod, working closely with cross-functional teams, and providing production support for daily issues.

Essential Job Functions
- Takes ownership of reported customer issues and sees problems through to resolution.
- Researches, diagnoses, troubleshoots and identifies solutions to resolve customer issues.
- Follows standard procedures for proper escalation of unresolved issues to the appropriate internal teams.
- Provides prompt and accurate feedback to customers.
- Ensures proper recording and closure of all issues.
- Prepares accurate and timely reports.
- Documents knowledge in the form of knowledge-base tech notes and articles.

Other Responsibilities
- Be part of the on-call rotation.
- Support QA and production releases, off-hours if needed.
- Work with developers to troubleshoot issues.
- Attend daily standups.
- Create and maintain support documentation (Jira/Confluence).

Minimum Qualifications and Job Requirements
- Proven working experience in enterprise technical support.
- Basic knowledge of systems, utilities, and scripting.
- Strong problem-solving skills and excellent client-facing skills.
- Excellent written and verbal communication skills.
- Experience with Microsoft Azure, including Azure Data Factory (ADF), Databricks, and ADLS (Gen2).
- Experience with system administration and SFTP.
- Experience leveraging analytics team tools such as Alteryx or other ETL tools.
- Experience with data visualization software (e.g. Domo, Datorama).
- Experience with SQL programming.
- Experience automating routine data tasks using various software tools (e.g. Jenkins, Nexus, SonarQube, Rundeck, Task Scheduler).

Posted 2 weeks ago

Apply

4.0 - 7.0 years

5 - 15 Lacs

Bengaluru

Hybrid


Job Description: PySpark Developer / Data Engineer
- Must have experience with PySpark; strong programming experience in Python and PySpark, with Scala preferred.
- Experience designing and implementing CI/CD, build management, and development strategy.
- Experience with SQL and SQL analytical functions; experience participating in key business, architectural and technical decisions.
- Proficient in leveraging Spark for distributed data processing and transformation.
- Skilled in optimizing data pipelines for efficiency and scalability.
- Experience with real-time data processing and integration.
- Familiarity with Apache Hadoop ecosystem components.
- Strong problem-solving abilities in handling large-scale datasets.
- Ability to collaborate with cross-functional teams and communicate effectively with stakeholders.

Primary Skills: PySpark + SQL + Databricks
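The SQL analytical (window) functions this role asks about can be tried directly with the sqlite3 module from the Python standard library. The schema and data below are illustrative only, and window functions require SQLite 3.25 or newer; the same `SUM(...) OVER (PARTITION BY ...)` syntax works in Spark SQL:

```python
import sqlite3

# A per-customer running total: a typical analytical-function exercise.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE orders (customer TEXT, amount INTEGER);
    INSERT INTO orders VALUES ('a', 10), ('a', 20), ('b', 5);
""")
rows = conn.execute("""
    SELECT customer,
           amount,
           SUM(amount) OVER (PARTITION BY customer ORDER BY amount) AS running
    FROM orders
    ORDER BY customer, amount
""").fetchall()
# rows: [('a', 10, 10), ('a', 20, 30), ('b', 5, 5)]
```

Unlike GROUP BY, the window aggregate keeps one output row per input row, which is what makes it suited to running totals, rankings, and lag/lead comparisons.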

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Chennai, Bengaluru

Hybrid


Roles & Responsibilities
- Lead and manage end-to-end data pipeline projects leveraging Databricks and Azure Data Factory.
- Collaborate with cross-functional teams, including data engineers, analysts, and business stakeholders, to gather requirements and deliver data products.
- Ensure proper unit testing and system testing of data pipelines to capture all files/transactions.
- Monitor and troubleshoot data ingestion processes, ensuring accuracy and completeness of the data.
- Facilitate Agile ceremonies (sprint planning, standups, retrospectives) and ensure the team adheres to Agile best practices.
- Define, prioritize, and manage product backlogs based on stakeholder inputs and data insights.
- Drive continuous improvement by evaluating emerging technologies and optimizing data integration workflows.

Must-Have Skills
- Databricks: minimum 3 years of hands-on experience.
- Azure Data Factory: minimum 3 years of development and orchestration experience.
- CQL (Cassandra Query Language): minimum 5 years of strong understanding and practical application.
- Strong verbal communication skills; ability to collaborate effectively with technical and non-technical teams.
- Deep understanding of the Agile development process.

Good-to-Have Skills
- Experience with Azure Data Lake, Synapse Analytics, or Power BI.
- Familiarity with DevOps and CI/CD pipelines in Azure.
- Exposure to Data Governance and Data Quality frameworks.
- Experience with cloud security best practices related to data handling.
- Background in Business Intelligence (BI) and reporting systems.

Posted 3 weeks ago

Apply

10.0 - 15.0 years

25 - 37 Lacs

Pune

Work from Office


Looking for a Java Fullstack Developer for our client.

Posted 3 weeks ago

Apply

5 - 10 years

0 Lacs

Chennai, Coimbatore, Bengaluru

Hybrid


Open & Direct Walk-in Drive | Hexaware Technologies - Azure Data Engineer/Architect in Chennai, Tamil Nadu on 10th May (Saturday) 2025 - Azure Databricks / Data Factory / SQL & PySpark

Dear Candidate,
We are thrilled to announce an exciting opportunity for talented professionals like yourself to join our team as an Azure Data Engineer. We are hosting an open walk-in drive in Chennai, Tamil Nadu on 10th May (Saturday) 2025, and we believe your skills in Databricks, Data Factory, SQL, and PySpark align with what we are seeking.

Details of the Walk-in Drive
- Date: 10th May (Saturday) 2025
- Experience: 5 to 12 years (candidates with less than 4 years of total experience will not be shortlisted for the interview)
- Time: 9.00 AM to 5.00 PM
- Venue: Hexaware Technologies, H-5, SIPCOT IT Park, Post, Navalur, Siruseri, Tamil Nadu 603103
- Point of Contact: Azhagu Kumaran Mohan / +91-9789518386

Key Skills and Experience: As an Azure Data Engineer, we are looking for candidates with expertise in Databricks, Data Factory, SQL, and PySpark/Spark.

Roles and Responsibilities
- Designing, implementing, and maintaining data pipelines.
- Collaborating with cross-functional teams to understand data requirements.
- Optimizing and troubleshooting data processes.
- Leveraging Azure data services to build scalable solutions.

What to Bring: Updated resume, photo ID, and a passport-size photo.

How to Register: To express your interest and confirm your participation, please reply to this email with your updated resume attached. Walk-ins are also welcome on the day of the event. If you have any questions or require further information, please reach out at AzhaguK@hexaware.com / +91-9789518386.

We look forward to meeting you and exploring the potential of having you as a valuable member of our team.

Posted 1 month ago

Apply

7 - 12 years

25 - 35 Lacs

Hyderabad

Remote


Job Title: Data Engineer

Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integrating SAP IS-Auto data, optimizing performance, and ensuring data quality and governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you!

Key Responsibilities
- Develop & optimize ETL pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
- Data modeling & systematic layer modeling: Design logical, physical, and systematic data models for structured and unstructured data.
- Integrate SAP IS-Auto: Extract, transform, and load data from SAP IS-Auto into Azure-based data platforms.
- Database management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
- Big data processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
- Data quality & governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data.
- Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.
- Testing and debugging: Write unit tests and perform debugging to ensure the implementation is robust and error-free; conduct performance optimization and security audits.

Required Skills and Qualifications
- Azure cloud expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: Proficiency in Python for data processing, automation, and scripting.
- SQL & database skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- SAP IS-Auto data handling: Experience integrating SAP IS-Auto as a data source into data pipelines.
- Data modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling.
- Big data frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance optimization: Expertise in query optimization, indexing, and performance tuning.
- Data governance & security: Knowledge of RBAC, encryption, and data privacy standards.

Preferred Qualifications
- Experience with CI/CD for data pipelines using Azure DevOps.
- Knowledge of Kafka/Event Hub for real-time data processing.
- Experience with Power BI/Tableau for data visualization (not mandatory, but a plus).
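Incremental ingestion pipelines of the kind described above commonly track a high-water mark so each run pulls only new or changed rows. A stdlib-only sketch of that pattern (the `incremental_load` helper and `updated_at` field are hypothetical, standing in for an ADF watermark variable or a Delta change feed):

```python
def incremental_load(source_rows, watermark):
    """Return rows newer than the last high-water mark, plus the new
    watermark to persist for the next run (incremental-load pattern)."""
    new_rows = [r for r in source_rows if r["updated_at"] > watermark]
    new_watermark = max((r["updated_at"] for r in new_rows), default=watermark)
    return new_rows, new_watermark

source = [{"id": 1, "updated_at": 10}, {"id": 2, "updated_at": 25},
          {"id": 3, "updated_at": 30}]
loaded, wm = incremental_load(source, 20)
# loaded: rows 2 and 3; wm: 30
```

Persisting the watermark between runs is what keeps re-runs idempotent: an empty extraction simply carries the old watermark forward.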

Posted 1 month ago

Apply

7 - 12 years

20 - 35 Lacs

Hyderabad

Remote


Job Title: Databricks Data Modeler

Job Summary: We are looking for a Data Modeler to design and optimize data models supporting automotive industry analytics and reporting. The ideal candidate will work with SAP ECC as a primary data source, leveraging Databricks and Azure Cloud to design scalable and efficient data architectures. This role involves developing logical and physical data models, ensuring data consistency, and collaborating with data engineers, business analysts, and domain experts to enable high-quality analytics solutions.

Key Responsibilities
1. Data modeling & architecture: Design and maintain conceptual, logical, and physical data models for structured and unstructured data.
2. SAP ECC data integration: Define data structures for extracting, transforming, and integrating SAP ECC data into Azure Databricks.
3. Automotive domain modeling: Develop and optimize industry-specific data models covering customer, vehicle, material, and location data.
4. Databricks & Delta Lake optimization: Design efficient data models for Delta Lake storage and Databricks processing.
5. Performance tuning: Optimize data structures, indexing, and partitioning strategies for performance and scalability.
6. Metadata & data governance: Implement data standards, data lineage tracking, and governance frameworks to maintain data integrity and compliance.
7. Collaboration: Work closely with business stakeholders, data engineers, and data analysts to align models with business needs.
8. Documentation: Create and maintain data dictionaries, entity-relationship diagrams (ERDs), and transformation logic documentation.

Skills & Qualifications
1. Data modeling expertise: Strong experience in dimensional modeling, 3NF, and hybrid modeling approaches.
2. Automotive industry knowledge: Understanding of customer, vehicle, material, and dealership data models.
3. SAP ECC data structures: Hands-on experience with SAP ECC tables, business objects, and extraction processes.
4. Azure & Databricks proficiency: Experience working with Azure Data Lake, Databricks, and Delta Lake for large-scale data processing.
5. SQL & database management: Strong skills in SQL, T-SQL, or PL/SQL, with a focus on query optimization and indexing.
6. ETL & data integration: Experience collaborating with data engineering teams on data transformation and ingestion processes.
7. Data governance & quality: Understanding of data governance principles, lineage tracking, and master data management (MDM).
8. Strong documentation skills: Ability to create ER diagrams, data dictionaries, and transformation rules.

Preferred Qualifications
1. Experience with data modeling tools such as Erwin, Lucidchart, or dbt.
2. Knowledge of Databricks Unity Catalog and Azure Synapse Analytics.
3. Familiarity with Kafka/Event Hub for real-time data streaming.
4. Exposure to Power BI/Tableau for data visualization and reporting.
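The dimensional (star-schema) modeling this role centers on splits flat records into a dimension keyed by a surrogate key and a fact table that references it. A minimal stdlib-only sketch (the `build_star` helper and toy vehicle data are hypothetical, for illustration only):

```python
def build_star(records, natural_key, measure):
    """Split flat records into a dimension (natural key -> surrogate key)
    and a fact table keyed by the surrogate key (star-schema sketch)."""
    dim = {}    # natural key -> surrogate key
    fact = []   # fact rows reference the dimension via dim_sk
    for row in records:
        sk = dim.setdefault(row[natural_key], len(dim) + 1)
        fact.append({"dim_sk": sk, measure: row[measure]})
    return dim, fact

sales = [{"vin": "V1", "amount": 100}, {"vin": "V2", "amount": 50},
         {"vin": "V1", "amount": 70}]
dim, fact = build_star(sales, "vin", "amount")
# dim:  {"V1": 1, "V2": 2}
# fact: [{"dim_sk": 1, "amount": 100}, {"dim_sk": 2, "amount": 50},
#        {"dim_sk": 1, "amount": 70}]
```

Surrogate keys decouple facts from source-system identifiers (here, SAP ECC keys), which is what makes slowly changing dimensions and cross-source conformance tractable.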

Posted 1 month ago

Apply

3 - 6 years

12 - 15 Lacs

Hyderabad

Remote


Job Title: Data Engineer

Job Summary: Are you passionate about building scalable data pipelines, optimizing ETL processes, and designing efficient data models? We are looking for a Databricks Data Engineer to join our team and play a key role in managing and transforming data in Azure cloud environments. In this role, you will work with Azure Data Factory (ADF), Databricks, Python, and SQL to develop robust data ingestion and transformation workflows. You'll also be responsible for integrating data, optimizing performance, and ensuring data quality and governance. If you have strong experience in big data processing, distributed computing (Spark), and data modeling, we'd love to hear from you!

Key Responsibilities
1. Develop & optimize ETL pipelines: Build robust and scalable data pipelines using ADF, Databricks, and Python for data ingestion, transformation, and loading.
2. Data modeling & systematic layer modeling: Design logical, physical, and systematic data models for structured and unstructured data.
3. Database management: Develop and optimize SQL queries, stored procedures, and indexing strategies to enhance performance.
4. Big data processing: Work with Azure Databricks for distributed computing, Spark for large-scale processing, and Delta Lake for optimized storage.
5. Data quality & governance: Implement data validation, lineage tracking, and security measures for high-quality, compliant data.
6. Collaboration: Work closely with business analysts, data scientists, and DevOps teams to ensure data availability and usability.
7. Testing and debugging: Write unit tests and perform debugging to ensure the implementation is robust and error-free; conduct performance optimization and security audits.

Required Skills and Qualifications
- Azure cloud expertise: Strong experience in Azure Data Factory (ADF), Databricks, and Azure Synapse.
- Programming: Proficiency in Python for data processing, automation, and scripting.
- SQL & database skills: Advanced knowledge of SQL, T-SQL, or PL/SQL for data manipulation.
- Data modeling: Hands-on experience in dimensional modeling, systematic layer modeling, and entity-relationship modeling.
- Big data frameworks: Strong understanding of Apache Spark, Delta Lake, and distributed computing.
- Performance optimization: Expertise in query optimization, indexing, and performance tuning.
- Data governance & security: Knowledge of RBAC, encryption, and data privacy standards.

Preferred Qualifications
- Experience with CI/CD for data pipelines using Azure DevOps.
- Knowledge of Kafka/Event Hub for real-time data processing.
- Experience with Power BI/Tableau for data visualization (not mandatory, but a plus).

Posted 1 month ago

Apply

5 - 8 years

5 - 9 Lacs

Bengaluru

Hybrid


Roles and Responsibilities
- Architect and implement an effective data framework enabling end-to-end data solutions.
- Understand business needs, use cases and drivers for insights, and translate them into detailed technical specifications.
- Create epics, features and user stories with clear acceptance criteria for execution and delivery by the data engineering team.
- Create scalable and robust data solution designs that incorporate governance, security and compliance aspects.
- Develop and maintain logical and physical data models, working closely with data engineers, data analysts and data testers on their implementation.
- Analyze, assess and design data integration strategies across various sources and platforms.
- Create project plans and timelines while monitoring and mitigating risks and controlling progress of the project.
- Conduct daily scrums with the team, with a clear focus on meeting sprint goals and timely resolution of impediments.
- Act as a liaison between technical teams and business stakeholders, ensuring alignment between the two.
- Guide and mentor the team on best practices for data solutions and delivery frameworks.
- Actively support stakeholders/clients through User Acceptance Testing and ensure strong adoption of the data products after launch.
- Define and measure KPIs/KRAs for features, verifying the data roadmap through measurable outcomes.

Prerequisites
- 5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years in a Data Architect role.
- Proven hands-on experience building pipelines for data lakes, data lakehouses, data warehouses and data visualization solutions.
- Sound understanding of modern data technologies such as Databricks, Snowflake, Data Mesh and Data Fabric.
- Experience managing the data life cycle in a fast-paced Agile/Scrum environment.
- Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas clearly and concisely to technical and non-technical audiences.
- Ability to collaborate effectively with cross-functional teams, project stakeholders and end users to produce quality deliverables within stipulated timelines.
- Ability to manage, coach and mentor a team of data engineers, data testers and data analysts.
- Strong process driver with Agile/Scrum expertise on tools such as Azure DevOps, Jira or Confluence.
- Exposure to Machine Learning, Gen AI and modern AI-based solutions.

Experience: Technical Lead, Data Analytics, with 6+ years of overall experience, of which 2+ years is in data architecture.
Education: Engineering degree from a Tier 1 institute preferred.
Compensation: As per industry standards.

Posted 1 month ago

Apply

5 - 8 years

8 - 18 Lacs

Bengaluru

Remote


We are seeking a Databricks SQL Engineer with a Pharma or Life Sciences background to join our offshore Data Engineering team. This role focuses on building efficient, scalable SQL-based data models and pipelines using Databricks SQL, Spark SQL, and Delta Lake.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies