
1265 Azure Databricks Jobs - Page 14

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 8.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Role Purpose: The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Responsibilities:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and all successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries per the SLAs defined in the contract
- Develop understanding of the process/product for team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot recurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations per SLA and quality requirements; if unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis and guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining the customer's and client's business
- Organize ideas and communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge skill gaps identified through interviews with Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists per target, and inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates; enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Mandatory Skills: PySpark. Experience: 5-8 years.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

4 - 7 Lacs

Pune

Work from Office

JC-67103 | Band: B2, B3 | Location: Chennai, Coimbatore, Bangalore, Pune

Key skills: Azure Data Factory (primary), Azure Databricks Spark (PySpark, SQL). Experience: 7 to 10 years.

Must-have skills:
- Cloud certified in one of these categories: Azure Data Engineer
- Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, curation
- Semantic modelling/optimization of the data model to work within Rahona
- Experience in Azure ingestion from on-prem sources, e.g. mainframe, SQL Server, Oracle
- Experience in Sqoop/Hadoop
- Microsoft Excel (for metadata files with ingestion requirements)
- Any other certificate in Azure/AWS/GCP and hands-on cloud data engineering experience
- Strong programming skills in at least one of Python, Scala, or Java
- Strong SQL skills (T-SQL or PL-SQL)
- Data file movement via mailbox
- Source-code versioning/promotion tools, e.g. Git/Jenkins
- Orchestration tools, e.g. Autosys, Oozie

Nice-to-have skills:
- Experience working with mainframe files
- Experience in an Agile environment with JIRA/Confluence

Mandatory Skills: DataBricks - Data Engineering. Experience: 5-8 years.

Posted 3 weeks ago

Apply

10.0 - 14.0 years

25 - 30 Lacs

Noida

Work from Office

JD for Data Architect

Preferred Experience Level: 10-14 years
Educational Requirements: B.Tech/MCA
Certifications (if any): DP-203/DP-600

Required Skills & Qualifications:
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field
- Experience: At least 8+ years of experience leading and managing Data & AI projects, with a proven track record of successful project delivery
- MSBI Expertise: Strong knowledge and experience with the Microsoft Business Intelligence (MSBI) stack, including SSIS, SSAS, and SSRS
- Azure Data Services Expertise: Extensive hands-on experience with Azure Data Factory, Azure SQL Database, Azure Databricks, Azure Synapse Analytics, and Azure Data Fabric
- Architectural Design: Ability to design and implement complex data architecture solutions that are scalable, efficient, and secure

Key Responsibilities:
- Lead the Design & Delivery: Take ownership of end-to-end data architecture delivery, ensuring solutions are robust, scalable, and meet business requirements while maintaining high quality standards
- Team Leadership: Lead and manage a team of data engineers, data scientists, and business analysts to execute Data & BI projects successfully
- Collaboration: Work closely with cross-functional teams, including business stakeholders, to gather and analyze requirements, define project scope, and set clear objectives for project milestones
- Data Architecture Design: Architect and implement innovative, high-performance data solutions using MSBI (SSIS, SSAS, SSRS) and Azure data services such as Azure Data Factory, Azure SQL Database, Azure Databricks, Azure Synapse Analytics, and Azure Data Fabric

Posted 3 weeks ago

Apply

5.0 - 10.0 years

8 - 18 Lacs

Pune

Work from Office

Job Title: Senior Databricks Engineer / Azure Databricks Engineer
Location: Baner, Pune, Maharashtra
Experience: 5+ years
Availability: Immediate joiner or within 15 days
Work Type: Full-time, permanent

Company Overview: Newscape is a fast-growing digital services provider focused on transforming the healthcare ecosystem through advanced data and cloud solutions. We help clients modernize legacy systems, enabling them to stay agile in a rapidly changing digital landscape. Our specialization lies in delivering scalable, intelligent, and user-centric healthcare technology solutions.

Position Summary: We are seeking a seasoned Senior Databricks Engineer to join our data engineering team in Pune. The ideal candidate brings deep expertise in Databricks, Spark technologies, Delta Lake, and cloud platforms (Azure/AWS/GCP), and a passion for building highly scalable data pipelines. You will play a key role in implementing Lakehouse architecture, ensuring data quality, and integrating robust CI/CD and orchestration pipelines.

Key Responsibilities:
- Design, develop, and optimize large-scale data pipelines using Databricks, PySpark, and Spark SQL
- Implement Lakehouse architecture using Delta Lake, following medallion architecture principles (Bronze, Silver, Gold layers); see the sketch after this listing
- Develop and manage Databricks components: Jobs, Delta Live Tables (DLT), Repos, Unity Catalog, and Workflows
- Collaborate with data architects, data scientists, and business stakeholders to deliver scalable and maintainable data solutions
- Ensure data governance, security, and compliance using tools like Unity Catalog and Azure Purview
- Build CI/CD pipelines for data projects using Azure DevOps, GitHub Actions, or equivalent
- Schedule and monitor workflows using Airflow, Azure Data Factory, or Databricks Workflows
- Perform data modeling, transformation, and loading using strong SQL and data warehousing concepts
- Translate complex business requirements into technical implementations with clear documentation and stakeholder alignment
- Provide mentorship and technical guidance to junior team members

Required Qualifications:
- 6+ years of experience in data engineering, including 4+ years on Databricks
- Expert-level proficiency in Databricks Workspace, DLT, Jobs, Repos, and Unity Catalog
- Strong hands-on knowledge of PySpark, Spark SQL, and optionally Scala
- Experience with one or more major cloud platforms: Azure, AWS, or GCP (Azure preferred)
- Solid understanding and hands-on experience with Delta Lake, Lakehouse, and medallion architecture
- Proven experience with CI/CD tools such as Azure DevOps, GitHub, Bitbucket, etc.
- Familiarity with orchestration tools like Apache Airflow, ADF, or Databricks Workflows
- Understanding of data governance, lineage, and metadata management practices
- Strong communication and collaboration skills, with the ability to interact effectively with technical and non-technical stakeholders

Nice to Have:
- Experience in the healthcare domain and understanding of healthcare data standards
- Exposure to machine learning workflows or support for data science teams
- Certifications in Databricks, Azure, or other cloud platforms

What We Offer:
- Opportunity to work on cutting-edge technologies and transformative healthcare projects
- Collaborative work environment with a focus on learning and innovation
- Competitive salary and performance-based growth
- Work-life balance with flexible engagement for high performers
- Exposure to global healthcare leaders and next-gen data platforms

Thanks & regards,
Swapnil Supe
HR Executive
+91 8233829595
swapnil.supe@newscapeconsulting.com
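For illustration only: a minimal PySpark sketch of the medallion-style Bronze-to-Silver promotion this listing describes, assuming a Databricks environment with Delta tables; all table, column, and partition names are hypothetical placeholders.

```python
# Minimal, illustrative Bronze-to-Silver promotion in the medallion style.
# Table and column names are hypothetical, not part of the listing.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Bronze: raw ingested events, stored as-is in Delta format
bronze_df = spark.read.format("delta").table("bronze.raw_events")

# Silver: cleaned, deduplicated, conformed records
silver_df = (
    bronze_df
    .dropDuplicates(["event_id"])                     # remove replayed events
    .filter(F.col("event_ts").isNotNull())            # drop malformed rows
    .withColumn("event_date", F.to_date("event_ts"))  # conform a partition column
)

(
    silver_df.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("silver.events")
)
```

In practice teams often express the same promotion declaratively as a Delta Live Tables pipeline; the imperative form above is simply the shortest way to show the layering idea.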

Posted 3 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Hyderabad

Work from Office

We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles
- Create data models that are scalable, flexible, and adaptable to changing business needs
- Integrate data models with existing data infrastructure and applications

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs
- Define and implement data governance processes and standards for ontology development and maintenance

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions
- Communicate complex technical concepts clearly and effectively to diverse audiences

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field

Experience:
- 5+ years of experience in data engineering or a related role
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL (a minimal example follows this listing)
- Proficiency in Python, SQL, and other programming languages used for data engineering
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus

Desired Skills:
- Familiarity with machine learning and natural language processing techniques
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP)
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon
- Strong problem-solving and analytical skills
- Excellent communication and interpersonal skills
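As a hedged illustration of the semantic-web stack this listing names (RDF and SPARQL), here is a minimal sketch using the Python rdflib library; the namespace and class names are invented for the example and are not BFO or CCO terms.

```python
# Illustrative only: build a tiny RDF graph and query it with SPARQL.
# The ex: namespace and Supplier class are hypothetical, not BFO/CCO terms.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.org/ontology#")

g = Graph()
g.bind("ex", EX)

# Declare a class and an instance with a label
g.add((EX.Supplier, RDF.type, RDFS.Class))
g.add((EX.acme, RDF.type, EX.Supplier))
g.add((EX.acme, RDFS.label, Literal("Acme Corp")))

# SPARQL: find the labels of all suppliers
results = g.query("""
    PREFIX ex: <http://example.org/ontology#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        ?s a ex:Supplier ;
           rdfs:label ?label .
    }
""")
for row in results:
    print(row.label)  # -> Acme Corp
```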

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

Location: Remote
Experience: 10+ years. Healthcare experience is mandatory.

Position Overview: We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities:

Data Architecture & Modeling:
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment)
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets
- Establish data modeling standards and best practices across the organization

Technical Leadership:
- Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica
- Architect scalable data solutions that handle large volumes of healthcare transactional data
- Collaborate with data engineers to optimize data pipelines and ensure data quality

Healthcare Domain Expertise:
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI)
- Design data models that support analytical, reporting, and AI/ML needs
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions

Data Governance & Quality:
- Implement data governance frameworks specific to healthcare data privacy and security requirements
- Establish data quality monitoring and validation processes for critical health plan metrics
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources

Required Qualifications:

Technical Skills:
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches (a dimensional-modeling sketch follows this listing)
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks)
- Proficiency with data modeling tools (Hackolade, ERwin, or similar)

Healthcare Industry Knowledge:
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data
- Experience with healthcare data standards and medical coding systems
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment)
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI)

Leadership & Communication:
- Proven track record of leading data modeling projects in complex healthcare environments
- Strong analytical and problem-solving skills with the ability to work with ambiguous requirements
- Excellent communication skills with the ability to explain technical concepts to business stakeholders
- Experience mentoring team members and establishing technical standards

Preferred Qualifications:
- Experience with Medicare Advantage, Medicaid, or commercial health plan operations
- Cloud platform certifications (AWS, Azure, or GCP)
- Experience with real-time data streaming and modern data lake architectures
- Knowledge of machine learning applications in healthcare analytics
- Previous experience in a lead or architect role within a healthcare organization
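For illustration, a minimal sketch of the dimensional (star-schema) style this role centers on, expressed as Spark SQL DDL runnable on Databricks; every table and column name here is a hypothetical placeholder, not a prescribed health plan model.

```python
# Illustrative star-schema fragment for claims data: one dimension, one fact.
# All names are hypothetical; real health plan models are far richer.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_member (
        member_key BIGINT,        -- surrogate key
        member_id STRING,         -- natural/business key
        plan_code STRING,
        effective_date DATE
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_claim (
        claim_key BIGINT,
        member_key BIGINT,        -- FK to dim_member
        provider_key BIGINT,      -- FK to dim_provider (not shown)
        service_date DATE,
        allowed_amount DECIMAL(12, 2),
        paid_amount DECIMAL(12, 2)
    ) USING DELTA
""")
```

The surrogate keys keep the fact table narrow and let dimensions change (e.g. a member switching plans) without rewriting claim history, which is the core trade-off dimensional modeling makes.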

Posted 3 weeks ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Hyderabad

Work from Office

About the Job:

Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment
- Collaborate with stakeholders to translate business needs into actionable data solutions
- Troubleshoot and optimize existing Fabric implementations for enhanced performance

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized)
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading
- Experience ingesting data from SAP systems (SAP ECC/S4HANA/SAP BW, etc.) is a plus
- Nice to have: ability to develop dashboards or reports using tools like Power BI

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation

Posted 3 weeks ago

Apply

6.0 - 10.0 years

9 - 13 Lacs

Hyderabad

Work from Office

The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Azure Databricks. The engineer will work closely with data science and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks
- Develop ETL processes to efficiently extract, transform, and load data
- Collaborate with data scientists and analysts to define and refine data requirements
- Optimize Spark jobs for performance and efficiency
- Monitor and troubleshoot production workflows and jobs
- Implement data quality checks and validation processes
- Create and maintain technical documentation related to data architecture
- Conduct code reviews to ensure best practices are followed
- Integrate data from various sources, including databases, APIs, and third-party services
- Utilize SQL and Python for data manipulation and analysis
- Collaborate with DevOps teams to deploy and maintain data solutions
- Stay updated with the latest trends and updates in Azure Databricks and related technologies
- Facilitate data visualization initiatives for better data-driven insights
- Provide training and support to team members on data tools and practices
- Participate in cross-functional projects to enhance data sharing and access

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Minimum of 6 years of experience in data engineering or a related domain
- Strong expertise in Azure Databricks and data lake concepts
- Proficiency with SQL, Python, and Spark
- Solid understanding of data warehousing concepts
- Experience with ETL tools and frameworks
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud
- Excellent problem-solving and analytical skills
- Ability to work collaboratively in a diverse team environment
- Experience with data visualization tools such as Power BI or Tableau
- Strong communication skills with the ability to convey technical concepts to non-technical stakeholders
- Knowledge of data governance and data quality best practices
- Hands-on experience with big data technologies and frameworks
- A relevant Azure certification is a plus
- Ability to adapt to changing technologies and evolving business requirements

Posted 3 weeks ago

Apply

6.0 - 11.0 years

10 - 20 Lacs

Bengaluru

Work from Office

Required Skills: 6-12 years of experience in Azure or AWS.

Please send profiles to payal.kumari@nam-it.com

Regards,
Payal Kumari
Senior Executive - Staffing
NAM Info Pvt Ltd, 29/2B-01, 1st Floor, K.R. Road, Banashankari 2nd Stage, Bangalore - 560070
Email: payal.kumari@nam-it.com
Website: www.nam-it.com
USA | CANADA | INDIA

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 27 Lacs

Hyderabad, Bengaluru

Hybrid

Databricks/PySpark Offshore Developer

Looking for an offshore Lead Databricks/PySpark Developer who is willing to learn new technologies if needed and able to work with a team.

Essential Job Functions:
- Design and develop data ingestion pipelines (Databricks background preferred)
- Performance-tune and optimize Databricks jobs; see the tuning sketch after this listing
- Evaluate new features and refactor existing code
- Mentor junior developers and make sure all patterns are documented
- Perform data migration and conversion activities
- Develop and integrate software applications using suitable development methodologies and standards, applying standard architectural patterns and taking into account critical performance characteristics and security measures
- Collaborate with business analysts, architects, and senior developers to establish the physical application framework (e.g. libraries, modules, execution environments)
- Perform end-to-end automation of the ETL process for the various datasets being ingested into the big data platform
- Maintain and support the application; must be willing to flex work hours to support application launches and manage production outages if necessary
- Understand requirements thoroughly and in detail, and identify gaps in requirements
- Ensure detailed unit testing is done, handle negative scenarios, and document the same
- Work with the QA and automation team
- Work on best practices and document the process for code merges and releases (Bitbucket)
- Work with the architect and manager on designs and best practices
- Good data analysis skills

Other Responsibilities:
- Safeguard the company's assets
- Adhere to the company's compliance program
- Maintain comprehensive knowledge of industry standards, methodologies, processes, and best practices
- Maintain a focus on customer service, efficiency, quality, and growth
- Collaborate with additional team members
- Other duties as assigned

Minimum Qualifications and Job Requirements:
- Must be a team player
- Must have the following: Scala, SQL, Spark/Spark Streaming, big data toolset, Linux, Python/PySpark, Kafka
- Experience collaborating with dev teams, project managers, and engineers
- Excellent communication and teamwork skills
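A hedged sketch of two routine Spark tuning moves of the kind this listing means by performance tuning: broadcasting a small dimension table to avoid a shuffle join, and right-sizing shuffle partitions. The table names and partition count are illustrative assumptions, not recommendations for any specific workload.

```python
# Illustrative Spark tuning moves; table names and settings are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

# Reduce shuffle overhead for a modest-sized job (the default is 200 partitions)
spark.conf.set("spark.sql.shuffle.partitions", "64")

facts = spark.read.table("sales.transactions")   # large fact table
stores = spark.read.table("sales.stores")        # small dimension table

# Broadcasting the small side lets the join run map-side, skipping a full shuffle
joined = facts.join(broadcast(stores), on="store_id", how="left")

# Cache only when the result is reused by several downstream actions
joined.cache()
print(joined.count())
```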

Posted 3 weeks ago

Apply

3.0 - 8.0 years

10 - 13 Lacs

Bengaluru

Hybrid

Share resume to: sowmya.v@acesoftlabs.com

Position: Data Engineer
Experience Range: 3-8 years

Key Responsibilities:

Data Pipeline Development:
- Design, build, and optimize scalable data pipelines
- Ingest, transform, and load data from multiple sources
- Use Azure Databricks, Snowflake, and DBT for pipeline orchestration

Data Architecture & Modeling:
- Develop and manage data models within Snowflake
- Ensure efficient data organization, accessibility, and quality

Data Transformation:
- Implement standardized data transformations using DBT

Performance Optimization:
- Monitor pipeline performance
- Troubleshoot and resolve issues
- Optimize workflows for efficiency

Collaboration:
- Work with data scientists, analysts, and business stakeholders
- Ensure access to reliable, well-structured data for analytics and reporting

Required Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- Proficiency in Azure Databricks for data processing
- Experience with Snowflake as a data warehouse platform
- Hands-on expertise with DBT for data transformations
- Strong SQL skills and understanding of data modeling principles
- Ability to troubleshoot and optimize complex data workflows

Preferred/Additional Skills:
- Experience with MS Azure, Snowflake, DBT, and big data (Hadoop ecosystem)
- Knowledge of Hadoop architecture and storage frameworks
- Hands-on experience with Hadoop, Spark, Hive, and Databricks
- Experience with data lake solutions using Scala and Python
- Experience with Azure Data Factory (ADF) for orchestration
- Familiarity with CI/CD tools such as Jenkins, Azure DevOps, and GitHub
- Strong programming skills in Python or Scala

Posted 3 weeks ago

Apply

10.0 - 16.0 years

18 - 30 Lacs

Chennai

Work from Office

Hi, we have a vacancy for a Sr. Data Engineer.

Location: Chennai
Experience: 10+ years
Salary: up to 30 LPA

We are seeking an experienced Senior Data Engineer to join our dynamic team. The ideal candidate will be responsible for designing and implementing the data engineering framework.

Responsibilities:
- Strong skills in BigQuery, GCP Cloud Data Fusion (for ETL/ELT), and Power BI
- Strong skills in data pipelines; able to work with Power BI and Power BI reporting
- Design and implement the data engineering framework and data pipelines using Databricks and Azure Data Factory
- Document the high-level design components of the Databricks data pipeline framework
- Evaluate and document the current dependencies on the existing DEI toolset and agree a migration plan
- Lead the design and implementation of an MVP Databricks framework
- Document and agree an aligned set of standards to support the implementation of a candidate pipeline under the new framework
- Support integrating a test automation approach into the Databricks framework, in conjunction with the test engineering function, to support CI/CD and automated testing
- Support the development team's capability building by establishing an L&D and knowledge transition approach
- Support the implementation of data pipelines against the new framework in line with the agreed migration plan
- Ensure data quality management, including profiling, cleansing, and deduplication, to support the build of data products for clients

Skill Set:
- Experience working in Azure Cloud using Azure SQL, Azure Databricks, Azure Data Lake, Delta Lake, and Azure DevOps
- Proficient Python, PySpark, and SQL coding skills
- Data profiling and data modelling experience on large data transformation projects, creating data products and data pipelines
- Creating data management frameworks and data pipelines that are metadata- and business-rules-driven using Databricks
- Experience reviewing datasets for data products in terms of data quality management and populating data schemas set by data modellers
- Experience with data profiling, data quality management, and data cleansing tools

Immediate joining or short notice is required. Please call Varsha at 7200847046 for more info.

Thanks,
Varsha
7200847046

Posted 3 weeks ago

Apply

4.0 - 7.0 years

5 - 15 Lacs

Bengaluru

Work from Office

Volvo Group is looking for Azure Data Engineers!

Are you interested in being involved in the biggest data-driven transformation of transport solutions in history? We are in the middle of our digitalization journey, and you are welcome to join and help us drive analytics and artificial intelligence initiatives which enable new business models. We are looking for Data Engineers with 5+ years of relevant experience, ideally with DevOps skills, to develop and optimize data pipelines and work together with Data Scientists and other experts who focus on generating business value out of data. You will work with the latest technology, in agile product teams which discover data to create innovative services.

These are challenges waiting for you: You will ingest and process data coming from our business applications, factory equipment (Industrial Internet of Things), products (vehicle logged data), as well as external sources. You will organize both structured and unstructured data to enable development of new services, such as preventive maintenance of vehicles, fuel consumption and battery lifetime optimization, and many more innovative services which are changing our business models.

Your typical day at work will be filled with the following activities:
- Build complex data pipelines in Microsoft Azure
- Optimize existing data pipelines
- Support in building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources to Azure
- Work closely with data analysts and data scientists to provide required data structures and enable new insights
- Provision resources on Azure
- Keep the data secure across the end-to-end solutions
- Evaluate and improve existing data analytics solutions
- Develop your competences, learn new tools and ways of working

This is you:
- An experienced data engineer with 4+ years of experience and a relevant educational background
- Good hands-on skills in Azure Data Analytics, Spark, and Databricks (cost optimization techniques, cluster optimization, performance tuning)
- Strong hands-on ETL skills and experience building data pipelines using Databricks (with good knowledge of Spark) and orchestrating data workloads on Azure Data Factory
- Experience in handling real-time streaming/transactional data, preferably Spark Streaming, Kafka/Event Hubs/Event Grid/Stream Analytics/Service Bus, etc. (see the streaming sketch after this listing)
- Significant experience processing data in scripting languages (Python, PySpark, Scala)
- Working with Git and following a Git workflow (branching, managing conflicts, etc.)
- Databases (like SQL Server or Netezza) don't have any secrets for you
- Comfortable with DevOps (or DevSecOps) and able to provision resources in Azure
- Comfortable working in a diverse, complex, and fast-changing landscape of data sources
- You communicate fluently in English; you are a proactive problem solver with innovative thinking and a strong team player

Extras good to have:
- Experience in Agile project methodology and practice with ARM Templates/Bicep on the DevOps front
- Understanding of the DevOps architecture (agent pools, accounts, variable groups, templating, etc.)
- Azure networking model (public and private connectivity, VNET injection, etc.)
- Familiarity with Microsoft Power BI or other data visualization tools such as Qlik, SAP BusinessObjects
- Knowledge of other services and tools used for ETL workflows, like Informatica IICS
- Experience with business intelligence platforms such as Microsoft SSIS/SSAS/SSRS, Teradata, IBM Netezza
- Familiarity with database management systems and online analytical processing (OLAP)

This is what we can offer you:
- Being part of a product team focusing both on maintenance and innovations
- A steep learning curve with a state-of-the-art and individualized training program
- A collaborative environment
- Work-life balance: we make sure you enjoy quality time away from work
- Remote work opportunities, employment contracts, and flexible working hours
- A clear career path and extensive development opportunities such as mentoring or coaching programs
- Private healthcare (also available onsite)
- Participation in international projects and various trainings
- Unlimited access to learning, including Azure academy, Pluralsight, and many more

Work Mode: Work from office, all 5 days
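To make the streaming requirement concrete, here is a minimal Spark Structured Streaming sketch that reads from a Kafka-compatible endpoint (Azure Event Hubs exposes one) and lands the feed in Delta. The broker address, topic, and paths are hypothetical, and the sketch assumes the Kafka connector is available on the cluster, as it is on Databricks.

```python
# Illustrative streaming ingest: Kafka-compatible source -> Delta sink.
# Broker, topic, and storage paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "vehicle-telemetry")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast before parsing downstream
decoded = stream.select(col("value").cast("string").alias("payload"))

query = (
    decoded.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/vehicle-telemetry")  # enables exactly-once recovery
    .outputMode("append")
    .start("/mnt/bronze/vehicle_telemetry")
)
```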

Posted 3 weeks ago

Apply

4.0 - 9.0 years

20 - 32 Lacs

Pune, Chennai

Work from Office

Key Responsibilities:
- Design, develop, and optimize data pipelines using Databricks (PySpark) for batch and real-time data processing
- Implement CDC (Change Data Capture) and Delta Live Tables/Auto Loader to support near-real-time ingestion (a minimal Auto Loader sketch follows this listing)
- Integrate various structured and semi-structured data sources using ADF, ADLS, and Kafka (Confluent)
- Develop CI/CD pipelines for data engineering workflows using GitHub Actions or Azure DevOps
- Write efficient and reusable SQL and Python code for data transformations and validations
- Ensure data quality, lineage, governance, and security across all ingestion and transformation layers
- Collaborate closely with business analysts, data scientists, and data stewards to support use cases in risk, finance, compliance, and operations
- Participate in code reviews, architectural discussions, and documentation efforts

Required Skills & Qualifications:
- Strong proficiency in SQL, Python, and PySpark
- Proven experience with Azure Databricks, including notebooks, jobs, clusters, and Delta Lake
- Experience with Azure Data Lake Storage (ADLS Gen2) and Azure Data Factory (ADF)
- Hands-on experience with Confluent Kafka for streaming data integration
- Strong understanding of Auto Loader, CDC mechanisms, and Delta Lake-based architecture
- Experience implementing CI/CD pipelines using GitHub and/or Azure DevOps
- Knowledge of data modeling, data warehousing, and data security best practices
- Exposure to regulatory and risk data use cases in the banking/financial sector is a strong plus

Preferred Qualifications:
- Azure certifications (e.g., Azure Data Engineer Associate)
- Experience with tools such as Delta Live Tables, Unity Catalog, and Lakehouse architecture
- Familiarity with business glossaries, data lineage tools, and data governance frameworks
- Understanding of financial data, including GL, loan, customer, transaction, or market risk domains
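A minimal sketch of Databricks Auto Loader, which the listing pairs with CDC and Delta Live Tables for near-real-time ingestion. Auto Loader (`cloudFiles`) is Databricks-specific; the storage paths and table name below are placeholders invented for the example.

```python
# Illustrative Auto Loader ingest: incrementally pick up new files in cloud
# storage and append them to a bronze Delta table. Paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

raw = (
    spark.readStream.format("cloudFiles")                 # Auto Loader source
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation", "/mnt/schemas/trades")  # schema tracking and evolution
    .load("abfss://landing@account.dfs.core.windows.net/trades/")
)

(
    raw.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/trades")
    .trigger(availableNow=True)                           # process the backlog, then stop
    .toTable("bronze.trades")
)
```

The `availableNow` trigger gives batch-style scheduling with streaming-style bookkeeping: each run ingests only files it has not seen before, which is what makes the pattern suitable for near-real-time CDC landing zones.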

Posted 3 weeks ago

Apply

8.0 - 12.0 years

0 - 20 Lacs

Hyderabad, Bengaluru

Work from Office

TECH MAHINDRA is hiring for a Data Engineer role.

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to integrate various data sources into a centralized repository
- Collaborate with cross-functional teams to gather requirements for new data models and implement changes to existing ones using PySpark
- Develop continuous integration/continuous deployment (CI/CD) pipelines using Databricks to ensure seamless delivery of high-quality data solutions
- Troubleshoot complex issues related to data processing, modeling, and visualization in real time

Job Requirements:
- Experience in designing and developing large-scale data engineering projects on the Azure Databricks platform
- Strong expertise in ADF, CI/CD, data modeling, PySpark, and other relevant technologies
- Experience working with big data technologies such as Hadoop ecosystem components, including Hive and Pig

Posted 3 weeks ago

Apply

15.0 - 20.0 years

50 - 55 Lacs

Bengaluru

Work from Office

Mode: Contract

As an Azure Data Architect, you will:
- Lead architectural design and migration strategies, especially from Oracle to Azure Data Lake
- Architect and build end-to-end data pipelines leveraging Databricks, Spark, and Delta Lake
- Design secure, scalable data solutions integrating ADF, SQL Data Warehouse, and on-prem/cloud systems
- Optimize cloud resource usage and pipeline performance
- Set up CI/CD pipelines with Azure DevOps
- Mentor team members and align architecture with business needs

Qualifications:
- 10-15 years in Data Engineering/Architecture roles
- Extensive hands-on experience with Databricks, Azure Data Factory, and Azure SQL Data Warehouse
- Data integration, migration, cluster configuration, and performance tuning
- Azure DevOps and cloud monitoring tools
- Excellent interpersonal and stakeholder management skills

Posted 3 weeks ago

Apply

4.0 - 6.0 years

4 - 9 Lacs

Pune

Remote

Azure Data Engineer

The Data Engineer builds and maintains data pipelines and infrastructure within Microsoft Fabric, enabling a seamless migration from Oracle/Informatica. This offshore role requires deep expertise in data engineering techniques to support enterprise data needs. The successful candidate will excel in creating scalable data solutions.

Responsibilities:
- Develop and maintain data pipelines for Microsoft Fabric, handling ETL processes from Oracle/Informatica
- Ensure seamless data flow, integrity, and performance in the new platform
- Collaborate with the offshore data modeler and onsite data modernization architect to align with modernization goals
- Optimize code and queries for performance using tools like PySpark and SQL
- Conduct unit testing and debugging to ensure robust pipeline functionality (a unit-test sketch follows this listing)
- Report technical progress and issues to the offshore project manager

Skills:
- Bachelor's degree in computer science, data engineering, or a related field
- 4+ years of data engineering experience with PySpark, Python, and SQL
- Strong knowledge of Microsoft Fabric, Azure services (e.g., Data Lake, Synapse), and ETL processes
- Experience with code versioning (e.g., Git) and optimization techniques
- Ability to refactor legacy code and write unit tests for reliability
- Problem-solving skills with a focus on scalability and performance
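For illustration, a sketch of the kind of unit test the listing asks for: a pure PySpark transformation exercised with pytest against a local SparkSession. The function and column names are invented for the example.

```python
# Illustrative pytest for a PySpark transform; names are hypothetical.
import pytest
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def normalize_customers(df: DataFrame) -> DataFrame:
    """Trim names and drop rows without a customer id."""
    return (
        df.filter(F.col("customer_id").isNotNull())
          .withColumn("name", F.trim(F.col("name")))
    )


@pytest.fixture(scope="session")
def spark():
    # A small local session is enough to test transformation logic
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()


def test_normalize_customers(spark):
    source = spark.createDataFrame(
        [(1, "  Ada "), (None, "Ghost")],
        ["customer_id", "name"],
    )
    result = normalize_customers(source).collect()
    assert len(result) == 1
    assert result[0]["name"] == "Ada"
```

Keeping the transform as a plain function of DataFrame in, DataFrame out is what makes it testable without any cluster or storage dependency.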

Posted 3 weeks ago

Apply

7.0 - 12.0 years

12 - 22 Lacs

Chennai

Hybrid

About the Company: Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently.

About the Role: We are seeking a Senior Data Engineer to join our growing cloud data team. In this role, you will design and implement scalable data pipelines and ETL processes using Azure Databricks, Azure Data Factory, PySpark, and Spark SQL. You'll work with cross-functional teams to develop high-quality, secure, and efficient data solutions in a data lakehouse architecture on Azure.

Key Responsibilities:
- Design, develop, and optimize scalable data pipelines using Databricks, ADF, PySpark, Spark SQL, and Python
- Build robust ETL workflows to transform and load data into a lakehouse architecture on Azure
- Ensure data quality, security, and compliance with data governance and privacy standards
- Collaborate with stakeholders to gather business requirements and deliver technical data solutions
- Create and maintain technical documentation for workflows, architecture, and data models
- Work within an Agile environment and track tasks using tools like Azure DevOps

Required Skills & Experience:
- 8+ years of experience in data engineering and enterprise data platform development
- Proven expertise in Azure Databricks, Azure Data Factory, PySpark, and Spark SQL
- Strong understanding of data warehouses, data marts, and operational data stores
- Proficient in writing complex SQL/PL-SQL queries and understanding data models and data lineage
- Knowledge of data management best practices: data quality, lineage, metadata, reference/master data
- Experience working in Agile teams with tools like Azure DevOps
- Strong problem-solving skills, attention to detail, and the ability to multi-task effectively
- Excellent communication skills for interacting with both technical and business teams

Benefits and Perks:
- Opportunity to work with leading global clients
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities

Posted 3 weeks ago

Apply

12.0 - 15.0 years

40 - 45 Lacs

New Delhi, Pune

Hybrid

Role & Responsibilities:
Experience: 10-14 years
Key skills: Azure Data Architect, SQL data modeling, dimensional data modeling, Databricks or Synapse

Posted 3 weeks ago

Apply

3.0 - 7.0 years

6 - 15 Lacs

Bengaluru

Remote

Job Responsibilities: We are looking for a passionate and technically skilled Cloud Support Engineer who brings hands-on experience in Microsoft Azure and cloud automation, with a strong grasp of programming and data engineering tools. Ideal for someone who bridges the gap between support and development. Looking for immediate joiners; some overlap with Canada working hours is required.

Key Responsibilities:
- Provide Level 2/3 support for Azure-based environments, including VMs, Logic Apps, Azure Functions, and Storage Accounts
- Design and maintain automated cloud workflows using Azure Data Factory (ADF) and Databricks
- Debug and resolve issues in cloud data pipelines and integration solutions
- Collaborate with development teams to troubleshoot Python-, Java-, or PowerShell-based applications
- Contribute to deploying scalable cloud infrastructure and full-stack solutions
- Document and optimize cloud support procedures and technical workflows
- Actively contribute to continuous improvement in cloud monitoring, performance, and cost optimization

Technical Skills Required:
- Cloud Platforms: Strong experience with Microsoft Azure, including VMs, ADF, Logic Apps, and Azure Functions; exposure to AWS services like Lambda and S3 is a plus
- Programming Languages: Python (primary), basic Java, PowerShell scripting
- DevOps & Automation: Familiarity with deployment automation tools and CI/CD practices
- Data Engineering: Exposure to Databricks, Azure Data Factory, DynamoDB, and relational databases like MySQL
- OS & Platforms: Linux environments and scripting
- Monitoring & Debugging: Skilled in identifying issues across cloud environments using logs, alerts, and diagnostics

Posted 3 weeks ago

Apply

6.0 - 8.0 years

15 - 20 Lacs

Chennai

Work from Office

Job Title: Senior Data Engineer
Experience: 6 to 8 years
Location: Chennai

Job Description: Movate is seeking a highly skilled Senior Data Engineer to lead the development of scalable, modular, and high-performance data pipelines. You will work closely with cross-functional teams to support data integration, transformation, and delivery for analytics and business intelligence.

Key Responsibilities:
- Design and maintain ETL/ELT pipelines using Apache Airflow, Azure Databricks, and Azure Data Factory (a minimal Airflow sketch follows this listing)
- Build scalable data infrastructure and optimize data workflows
- Ensure data quality, security, and governance across platforms
- Collaborate with data scientists and BI developers to support analytics and reporting
- Monitor and troubleshoot data pipelines for reliability and performance
- Document data processes and workflows for knowledge sharing

Technical Skills Required:
- Strong proficiency in Python (Pandas, NumPy, REST APIs)
- Advanced SQL skills (joins, CTEs, performance tuning)
- Experience with Databricks, Apache Airflow, and Azure cloud services
- Knowledge of Spark SQL, PySpark, and containerization using Docker
- Familiarity with data lake vs. data warehouse architectures
- Experience in data security, encryption, and access provisioning

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field
- Excellent problem-solving and communication skills
- Ability to work independently and manage end-to-end delivery
- Comfortable in agile development environments

EEO Statement: Movate provides equal opportunity in all our employment practices to all qualified employees and applicants without regard to race, color, religion, sex (including gender identity, sexual orientation, and pregnancy), national origin, age, disability or genetic information, and other characteristics that are protected by applicable law.
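A minimal Apache Airflow sketch of the ETL orchestration this listing describes; the DAG id, schedule, and task bodies are hypothetical, and the `schedule` argument shown is Airflow 2.4+ syntax.

```python
# Illustrative two-step Airflow DAG; all names and bodies are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pull data from the source system")


def load():
    print("write the transformed data to the warehouse")


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task  # load runs only after extract succeeds
```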

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Bangalore/ Bengaluru

Hybrid

Role & Responsibilities: The candidate will leverage strong collaboration skills and the ability to independently develop and design highly complex data sets and ETL processes to build a data warehouse, and to ask the right questions. You'd be engaged in a fast-paced learning environment and will be solving problems for the largest organizations in the world, mostly Fortune 500. The candidate will also work closely with internal business teams and clients on various kinds of data engineering problems, such as development of a data warehouse, advanced stored procedures, ETL pipelines, reporting, data governance, and BI development.

Basic Qualifications:
- Bachelor's degree in Computer Science, Engineering, Operations Research, Math, Economics, or a related discipline
- Strong SQL, Python, PySpark ETL development, Azure Data Factory, and Power BI knowledge and hands-on experience
- Proficient in understanding business requirements and converting them into process flows and code (special preference for SQL-based stored procedures; a minimal example follows this listing)
- Develop and design data architecture and frameworks for optimal performance and response time
- Strong analytical skills and the ability to start from ambiguous problem statements, identify and access relevant data, make appropriate assumptions, perform insightful analysis, and draw conclusions relevant to the business problem
- Excellent communication skills to communicate efficiently (written and spoken) in English; demonstrated ability to communicate complex technical problems in simple, plain stories
- Ability to present information professionally and concisely with supporting data
- Ability to work effectively and independently in a fast-paced environment with tight deadlines
- Ability to engage with cross-functional teams to implement project and program requirements
- 5+ years of hands-on experience in data engineer or tech lead roles
- 5+ years of experience in data engineering on the Azure cloud; highly proficient in the Azure ecosystem and its services
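Since the listing calls out SQL-based stored procedures, here is a hedged sketch of invoking one from Python with pyodbc. The server, database, credentials, and procedure name are all placeholders, not anything the listing specifies.

```python
# Illustrative call to a SQL Server stored procedure from Python.
# Server, database, credentials, and procedure name are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;"
    "DATABASE=warehouse;"
    "UID=etl_user;PWD=<secret>"
)

cursor = conn.cursor()
# A parameterized EXEC keeps the transformation logic inside the database
cursor.execute("EXEC dbo.usp_load_daily_sales @run_date = ?", "2024-01-01")
conn.commit()
cursor.close()
conn.close()
```

The design choice the listing prefers is to push set-based transformations into the stored procedure and keep the Python side as a thin, parameterized orchestrator.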

Posted 3 weeks ago

Apply

5.0 - 10.0 years

7 - 17 Lacs

Ahmedabad

Work from Office

Greetings from Dev Information Technology Ltd!

Company Details: We are trusted as one of the leading IT-enabled services providers, with a remarkable track record of consistently delivering workable and robust solutions. This is possible because we adopt continual innovation and remain committed to quality, implement and refine processes, and leverage technological prowess. With the best software and hardware environments coupled with state-of-the-art communication facilities, our offices are fully equipped to work as virtual extensions of clients' environments, providing 24x7 services.
- Founded in 1997 in Ahmedabad, India, one of the fastest growing metros in India
- Branch offices in India, USA, and Canada
- Multi-million US$ turnover with a CAGR of 20%
- 1000+ certified and skilled professionals serving more than 300 clients globally
- Offering end-to-end solutions to meet clients' IT and ICT needs
- Website: http://www.devitpl.com/

Profile Summary:
Designation: Project Lead (Data)
Experience: 5+ years
Work Location: Ahmedabad

Key Responsibilities:
- Translate business needs to technical specifications
- Design, build, and deploy BI solutions (e.g. reporting tools)
- Maintain and support data analytics platforms
- Collaborate with teams to integrate systems
- Develop and execute database queries and conduct analyses
- Create visualizations and reports for requested projects
- Develop and update technical documentation

Skills and Experience:
- 5+ years of experience in designing and implementing reports/dashboards, ETL, and warehouses
- 3+ years of direct management/supervisory experience
- In-depth understanding of data warehousing and database concepts
- Understands the lifecycle of report development work
- In-depth understanding of BI fundamentals
- Experience in Microsoft SQL Server, SSIS, SSRS, Azure Data Factory, and Azure Synapse
- Expert in Power BI
- Define all aspects of software development, from appropriate technology and workflow to coding standards
- Communicate all concepts and guidelines successfully to the development team
- Provide technical guidance and coaching to the reporting team
- Oversee progress of report/dashboard development to ensure consistency with DW/RDBMS design
- Engage with stakeholders to identify business KPIs, with the correct tools/mechanisms to record them, and present actionable insights through reports and dashboards
- Proven analytical and problem-solving abilities
- Excellent interpersonal and written communication skills

Qualifications and Certifications: BE/MCA/B.Tech/M.Tech

Perks & Benefits: Health insurance, employee rewards and recognition, flexible working hours, gratuity, professional development, food coupons, comprehensive leave benefits

Posted 3 weeks ago

Apply

5.0 - 8.0 years

2 - 7 Lacs

Ahmedabad

Work from Office

Key Responsibilities:
- Translate business needs to technical specifications
- Design, build, and deploy BI solutions (e.g. reporting tools)
- Maintain and support data analytics platforms (e.g. MicroStrategy)
- Create tools to store data (e.g. OLAP cubes)
- Collaborate with teams to integrate systems
- Develop and execute database queries and conduct analyses
- Create visualizations and reports for requested projects
- Design, implement, and maintain databases to ensure optimal data storage
- Work closely with stakeholders to understand business requirements and translate them into technical solutions

Mandatory Skills:
- Proficiency in Power BI, Azure Data Factory, Azure Synapse, and SSIS
- Solid understanding of data warehousing
- Experience with database design and management (SQL)

Optional Skills:
- Azure Databricks, AWS Glue, SSAS, Azure Analysis Services

Posted 3 weeks ago

Apply

7.0 - 10.0 years

30 - 37 Lacs

Thane

Work from Office

Role & Responsibilities:
- Collaborate with workforce management and operational teams to analyze data formats and structures from in-house workforce management systems or client CRM/WFM systems
- Design and implement a scalable data warehouse or data lake with appropriate data pipelines
- Ensure automated processing of operational data to generate standardized and customizable KPI matrices and dashboards
- Lead the creation of systems that allow flexibility for client-specific rules and calculations
- Integrate AI/ML solutions for predictive analytics and business insights
- Establish data governance and ensure compliance with security and regulatory standards
- Monitor data quality and optimize performance across the data ecosystem
- Act as a technical leader, mentoring and managing a small team of data engineers
- Provide regular updates to stakeholders and collaborate on defining long-term data strategies

About the Candidate:

Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field

Experience:
- 8+ years of IT experience with at least 5 years in data engineering or architecture roles
- Hands-on experience with data pipeline tools and cloud platforms (e.g., Azure Data Factory, Azure Databricks, AWS Glue)
- Proficient in SQL, Python, and ETL processes
- Experience in designing data warehouses and data lakes and creating dashboards using BI tools like Power BI or Tableau
- Familiarity with AI/ML technologies for analytics and predictive modeling
- Strong understanding of data lifecycle management and compliance requirements

Mandatory Skills:
- Expertise in managing multi-source data integrations
- Knowledge of industry-standard KPI frameworks and dashboarding techniques
- Ability to handle client-specific customization requirements

Preferred Skills:
- Experience in BPO or IT outsourcing environments
- Knowledge of workforce management tools and CRM integrations
- Hands-on experience with AI technologies and their applications in data analytics
- Familiarity with Agile/Scrum methodologies

Soft Skills:
- Strong analytical and problem-solving capabilities
- Excellent communication and stakeholder management skills
- Ability to thrive in a fast-paced, dynamic environment
- Leadership qualities to mentor and guide a team effectively

Posted 3 weeks ago

Apply