
1265 Azure Databricks Jobs - Page 16

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 5.0 years

4 - 6 Lacs

Chennai, Bengaluru

Work from Office

Job Overview: We are seeking a highly skilled Technical Data Analyst for a remote contract position (6 to 12 months) to help build a single source of truth for our high-volume direct-to-consumer accounting and financial data warehouse. You will work closely with Finance & Accounting teams and play a pivotal role in dashboard creation, data transformation, and the migration from Snowflake to Databricks.

Key Responsibilities:

1. Data Analysis & Reporting
- Develop month-end accounting and tax dashboards using SQL in Snowflake (Snowsight)
- Migrate and transition reports/dashboards to Databricks
- Gather, analyze, and transform business requirements from finance/accounting stakeholders into data products

2. Data Transformation & Aggregation
- Build transformation pipelines in Databricks to support balance-sheet look-forward views (a migration sketch follows below)
- Maintain data accuracy and consistency throughout the Snowflake-to-Databricks migration
- Partner with Data Engineering to optimize pipeline performance

3. ERP & Data Integration
- Support integration of financial data with NetSuite ERP
- Validate transformed data to ensure correct ingestion and mapping into ERP systems

4. Ingestion & Data Ops
- Work with Fivetran for ingestion and resolve any pipeline or data accuracy issues
- Monitor data workflows and collaborate with engineering teams on troubleshooting

Required Skills & Qualifications:
- 5+ years of experience as a Data Analyst (preferably in the Finance/Accounting domain)
- Strong SQL, with proven experience in Snowflake and Databricks
- Experience building financial dashboards (month-end close, tax reporting, balance sheets)
- Understanding of financial/accounting data: GL, journal entries, balance sheets, income statements
- Familiarity with Fivetran or similar data ingestion tools
- Experience with data transformation in a cloud environment
- Strong communication and stakeholder management skills

Nice to have: Experience working with NetSuite ERP

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
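To make the migration task concrete, here is a minimal PySpark sketch of pulling a month-end GL aggregate out of Snowflake and landing it as a Delta table in Databricks. The connection options, secret scope, and table/column names (gl_journal, finance.month_end_balances, etc.) are illustrative assumptions, not details from the posting; `spark` and `dbutils` are predefined in Databricks notebooks.

```python
# Sketch only: Snowflake -> Databricks hand-off for a month-end aggregate.
# All names, credentials, and schemas below are hypothetical.
sf_options = {
    "sfUrl": "myaccount.snowflakecomputing.com",
    "sfUser": "svc_finance",
    "sfPassword": dbutils.secrets.get("finance", "sf-password"),  # Databricks secret scope
    "sfDatabase": "FINANCE",
    "sfSchema": "ACCOUNTING",
    "sfWarehouse": "REPORTING_WH",
}

month_end = (
    spark.read.format("snowflake")   # Snowflake connector bundled with Databricks runtimes
    .options(**sf_options)
    .option("query", """
        SELECT account_id,
               DATE_TRUNC('month', posting_date) AS period,
               SUM(amount) AS balance
        FROM gl_journal
        GROUP BY account_id, DATE_TRUNC('month', posting_date)
    """)
    .load()
)

# Persist in the lakehouse so dashboards can be rebuilt on Databricks instead of Snowsight.
month_end.write.format("delta").mode("overwrite").saveAsTable("finance.month_end_balances")
```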

Posted 3 weeks ago

Apply

3.0 - 8.0 years

9 - 14 Lacs

Chennai

Remote

Healthcare experience is mandatory.

Position Overview: We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities:

Data Architecture & Modeling:
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment)
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets
- Establish data modeling standards and best practices across the organization

Technical Leadership:
- Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica
- Architect scalable data solutions that handle large volumes of healthcare transactional data
- Collaborate with data engineers to optimize data pipelines and ensure data quality

Healthcare Domain Expertise:
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI)
- Design data models that support analytical, reporting, and AI/ML needs
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions

Data Governance & Quality:
- Implement data governance frameworks specific to healthcare data privacy and security requirements
- Establish data quality monitoring and validation processes for critical health plan metrics
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources

Required Qualifications:

Technical Skills:
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks)
- Proficiency with data modeling tools (Hackolade, ERwin, or similar)

Healthcare Industry Knowledge:
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data
- Experience with healthcare data standards and medical coding systems
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment)
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI)

Leadership & Communication:
- Proven track record of leading data modeling projects in complex healthcare environments
- Strong analytical and problem-solving skills, with the ability to work with ambiguous requirements
- Excellent communication skills, with the ability to explain technical concepts to business stakeholders
- Experience mentoring team members and establishing technical standards

Preferred Qualifications:
- Experience with Medicare Advantage, Medicaid, or Commercial health plan operations
- Cloud platform certifications (AWS, Azure, or GCP)
- Experience with real-time data streaming and modern data lake architectures
- Knowledge of machine learning applications in healthcare analytics
- Previous experience in a lead or architect role within a healthcare organization

Posted 3 weeks ago

Apply

4.0 - 9.0 years

8 - 13 Lacs

Kolkata

Work from Office

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate current solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements (see the tuning sketch below).
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.
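Where "optimize performance and reliability" tends to start in practice: a small, hedged PySpark sketch of common tuning levers (adaptive execution, broadcast joins, selective caching, partitioned output). The table and column names are invented for illustration.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Adaptive Query Execution lets Spark coalesce shuffle partitions and
# mitigate skewed joins at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")
spark.conf.set("spark.sql.adaptive.skewJoin.enabled", "true")

events = spark.table("raw.events")        # hypothetical large fact table
dim_users = spark.table("dim.users")      # hypothetical small dimension

# Broadcasting the small dimension avoids a full shuffle join.
enriched = events.join(F.broadcast(dim_users), "user_id")

# Cache only when the result is reused by several downstream actions.
enriched.cache()

# Partitioning output by date lets downstream jobs prune files instead of
# scanning the whole table.
(
    enriched.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("curated.enriched_events")
)
```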

Posted 3 weeks ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Bengaluru

Work from Office

About the job:

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, transformation, and loading processes (a notebook sketch follows below).
- Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, or SAP BW is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
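For the pipeline bullet above, here is a minimal sketch of the kind of PySpark cell a Fabric lakehouse notebook might run (ingest a raw CSV drop, apply a light transformation, load a Delta table). The paths, schema, and table name are assumptions, not details from the posting; `spark` is predefined in Fabric notebooks.

```python
# Minimal Fabric lakehouse notebook cell: CSV landing zone -> Delta table.
# Paths and names are illustrative.
from pyspark.sql import functions as F

raw = (
    spark.read.option("header", "true")
    .csv("Files/landing/orders/2024-06/*.csv")   # OneLake 'Files' area of the default lakehouse
)

clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
)

# Tables written via saveAsTable land in the lakehouse 'Tables' area as Delta.
clean.write.mode("append").format("delta").saveAsTable("orders_bronze")
```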

Posted 3 weeks ago

Apply

6.0 - 7.0 years

9 - 13 Lacs

Chennai

Work from Office

The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making. The engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. The role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across functional teams to enhance operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes (see the sketch after this listing).
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Integrate data from various sources, including databases, APIs, and third-party services.
- Utilize SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay updated with the latest trends in Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant Azure certification is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
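The "data quality checks and validation processes" bullet might look like the following minimal PySpark sketch. The table, columns, and rules are invented for illustration, and `spark` is assumed to be a notebook-provided session.

```python
# Minimal sketch of post-load data quality checks on a Delta table.
# Table, columns, and thresholds are illustrative assumptions.
from pyspark.sql import functions as F

df = spark.table("curated.transactions")

total = df.count()
null_keys = df.filter(F.col("transaction_id").isNull()).count()
dupe_keys = total - df.dropDuplicates(["transaction_id"]).count()
bad_amounts = df.filter(F.col("amount") < 0).count()

checks = {
    "null transaction_id": null_keys == 0,
    "duplicate transaction_id": dupe_keys == 0,
    "negative amounts": bad_amounts == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Failing loudly lets the surrounding job/workflow mark the run as failed.
    raise ValueError(f"Data quality checks failed: {failed}")
```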

Posted 3 weeks ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL (a small sketch follows below).
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies, including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
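To make the RDF/SPARQL requirement concrete, here is a small self-contained Python sketch using rdflib (a common choice, though the posting names no library). The namespace, classes, and triples are invented for illustration.

```python
# Tiny knowledge-graph sketch with rdflib: declare a class, add an instance
# and a relationship, then query with SPARQL. All data is illustrative.
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/ontology/")
g = Graph()
g.bind("ex", EX)

g.add((EX.Supplier, RDF.type, RDFS.Class))          # minimal schema ("T-box")
g.add((EX.acme, RDF.type, EX.Supplier))             # instance data ("A-box")
g.add((EX.acme, RDFS.label, Literal("Acme Corp")))
g.add((EX.acme, EX.suppliesTo, EX.globex))

results = g.query("""
    PREFIX ex:   <http://example.com/ontology/>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?label WHERE {
        ?s a ex:Supplier ;
           rdfs:label ?label .
    }
""")
for row in results:
    print(row.label)   # -> Acme Corp
```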

Posted 3 weeks ago

Apply

3.0 - 8.0 years

9 - 14 Lacs

Kolkata

Work from Office

Position Overview: We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities:

Data Architecture & Modeling:
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment)
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets
- Establish data modeling standards and best practices across the organization

Technical Leadership:
- Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica
- Architect scalable data solutions that handle large volumes of healthcare transactional data
- Collaborate with data engineers to optimize data pipelines and ensure data quality

Healthcare Domain Expertise:
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI)
- Design data models that support analytical, reporting, and AI/ML needs
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions

Data Governance & Quality:
- Implement data governance frameworks specific to healthcare data privacy and security requirements
- Establish data quality monitoring and validation processes for critical health plan metrics
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources

Required Qualifications:

Technical Skills:
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks)
- Proficiency with data modeling tools (Hackolade, ERwin, or similar)

Healthcare Industry Knowledge:
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data
- Experience with healthcare data standards and medical coding systems
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment)
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI)

Leadership & Communication:
- Proven track record of leading data modeling projects in complex healthcare environments
- Strong analytical and problem-solving skills, with the ability to work with ambiguous requirements
- Excellent communication skills, with the ability to explain technical concepts to business stakeholders
- Experience mentoring team members and establishing technical standards

Preferred Qualifications:
- Experience with Medicare Advantage, Medicaid, or Commercial health plan operations
- Cloud platform certifications (AWS, Azure, or GCP)
- Experience with real-time data streaming and modern data lake architectures
- Knowledge of machine learning applications in healthcare analytics
- Previous experience in a lead or architect role within a healthcare organization

Posted 3 weeks ago

Apply

4.0 - 9.0 years

8 - 13 Lacs

Bengaluru

Work from Office

Role: Senior Databricks Engineer

As a Mid Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. This role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate current solutions to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, with 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP, and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 3 weeks ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Mumbai

Work from Office

About the job:

Role: Microsoft Fabric Data Engineer

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, transformation, and loading processes.
- Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, or SAP BW is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

9 - 13 Lacs

Kolkata

Work from Office

About the job:

Role: Microsoft Fabric Data Engineer

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Dataflow Gen2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, transformation, and loading processes.
- Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, or SAP BW is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

9 - 18 Lacs

Bengaluru

Work from Office

Experience implementing ETL pipelines, data warehousing solutions, and BI reporting. Cloud experience in Azure Databricks, Azure Data Factory pipelines, Azure Functions, Logic Apps, Azure DevOps, and SQL. Experience with data warehouse concepts.

Benefits: Health insurance, Provident Fund.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

5 - 9 Lacs

Gurugram

Work from Office

Educational Requirements: Bachelor of Engineering

Service Line: Data & Analytics Unit

Responsibilities: A day in the life of an Infoscion - as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance, issue resolution, and ensuring high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs/systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

Technical and Professional Requirements: Primary skills: Technology - Machine Learning - Python

Preferred Skills: Technology - Machine Learning - Python

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Hyderabad

Work from Office

We are looking for excellent Azure Data Engineering trainers to join us in a part-time classroom capacity. The ideal candidate should have solid hands-on experience with Azure Data Engineering technologies. Prior training experience is mandatory.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Coimbatore

Hybrid

Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP)

Job Description:
1. 5-12 years of experience in Big Data and data-related technologies
2. Expert-level understanding of distributed computing principles
3. Expert-level knowledge of and experience with Apache Spark
4. Hands-on programming with Python
5. Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
6. Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming (see the sketch below)
7. Experience with messaging systems such as Kafka or RabbitMQ
8. Good understanding of Big Data querying tools such as Hive and Impala
9. Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP systems, and files
10. Good understanding of SQL queries, joins, stored procedures, and relational schemas
11. Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
12. Knowledge of ETL techniques and frameworks
13. Performance tuning of Spark jobs
14. Experience with native cloud data services: AWS, Azure Databricks, or GCP
15. Ability to lead a team efficiently
16. Experience designing and implementing Big Data solutions
17. Practitioner of Agile methodology
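For the stream-processing item, here is a minimal Spark Structured Streaming sketch that consumes a Kafka topic and maintains running counts per key. The broker address, topic, and message schema are invented for illustration.

```python
# Minimal Structured Streaming sketch: Kafka topic -> running counts.
# Broker, topic, and schema are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

schema = StructType().add("event_type", StringType()).add("user_id", StringType())

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # hypothetical broker
    .option("subscribe", "clickstream")                  # hypothetical topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

counts = events.groupBy("event_type").count()

query = (
    counts.writeStream.outputMode("complete")
    .format("console")                                   # swap for a Delta sink in a real pipeline
    .option("checkpointLocation", "/tmp/chk/clickstream")  # enables fault-tolerant restarts
    .start()
)
query.awaitTermination()
```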

Posted 3 weeks ago

Apply

5.0 - 8.0 years

11 - 16 Lacs

Pune

Work from Office

We are looking for a candidate for our APAC-based asset manager client. Details are provided below.

Job description - Release Manager

The release manager will oversee the planning, scheduling, and controlling of software releases to ensure the smooth and coordinated deployment of new features, bug fixes, and updates across various applications and systems. The individual will collaborate with multiple stakeholders, including developers, QA teams, and the infrastructure team, to manage the release lifecycle and deliver high-quality software efficiently.

Key responsibilities:

Release planning
- Define and manage the release calendar, ensuring release windows and cycles are well coordinated and align with project milestones
- Work closely with project management teams to ensure that release processes are integrated into overall project planning
- Run weekly release planning meetings where the next production release is discussed, including any preparations that are needed, as well as prioritising and discussing release candidates for upcoming SIT and UAT releases
- Ensure Confluence documentation is prepared for each production release
- Facilitate release readiness meetings to discuss progress on release preparations for upcoming production releases
- Liaise with developers and QA teams to ensure readiness for any upcoming scheduled production release

Release coordination
- Collaborate with development, QA, and infrastructure teams to ensure all components of the release are ready and meet the relevant quality standards
- Facilitate release readiness reviews, stakeholder meetings, and Go/No-Go decision-making processes

Technical release execution
- Perform DEV to SIT, SIT to UAT, and UAT to PROD releases
- Perform commits and pull requests as part of the release process
- Ensure the various stage gates are completed and signed off by the relevant owners, e.g. unit test sign-offs, SIT sign-offs
- Identify any issues that happen during deployment and delegate them to the relevant person; decide the best path forward (e.g. reverting the change that caused the issue and redeploying) so that SIT/UAT is not blocked for others
- Plan which release branch a fix should go into (e.g. SIT/UAT) when a production release fix needs to be deployed
- Aid developers by answering queries, such as whether their JIRA ticket has gone into SIT, how to cherry-pick into the SIT branch, when the next SIT release will take place, or handling requests to delay a release so they can get their JIRA tickets into it
- Help developers diagnose why their PR has conflicts with the SIT branch, and fix the conflicts locally if they cannot
- Update the CI/CD pipelines as needed with updated parameters or new features
- Configure global parameters for each environment

Risk management
- Identify potential risks in the release pipeline and work with relevant teams to mitigate them before they impact timelines
- Implement and monitor rollback plans and contingencies in the event of release failures

Communication and documentation
- Maintain clear communication with all stakeholders about release status, including any potential issues or delays
- Document release notes, processes, and lessons learned from each release cycle to ensure continuous improvement

Automation and continuous improvement
- Promote the adoption of DevOps practices, including continuous integration and continuous delivery (CI/CD)
- Identify opportunities to automate release processes to improve efficiency and reduce human error

Post-release monitoring
- Ensure proper monitoring and validation of system performance after releases, coordinating with support and infrastructure teams to address any post-release issues promptly

Key qualifications:

Experience:
- 5+ years of experience in release management and software development
- Experience with Agile and DevOps environments essential
- Proven experience managing complex software releases across multiple teams and environments

Technical skills:
- Solid understanding of the software development lifecycle (SDLC)
- Familiarity with release management tools (e.g. Git, Azure DevOps, JIRA, Bitbucket, Confluence)
- Knowledge of automation and CI/CD practices
- Expert software knowledge of Azure Data Factory, Azure Databricks, SQL, Python, and Bash scripts

Soft skills:
- Strong organisational and multi-tasking abilities
- Excellent communication and stakeholder management skills
- Ability to manage stress and make decisions under pressure

Preferred qualifications:
- Certification in project management (e.g. PMP, PRINCE2)
- Experience with cloud-based environments (e.g. Azure)
- Experience in financial services or highly regulated industries

Posted 3 weeks ago

Apply

5.0 - 10.0 years

3 - 5 Lacs

Bengaluru, Delhi / NCR, Mumbai (All Areas)

Work from Office

Job Title: Azure Databricks Developer
Experience: 5+ Years
Location: PAN India (Remote/Hybrid as per project requirement)
Employment Type: Full-time

Job Summary: We are hiring an experienced Azure Databricks Developer to join our dynamic data engineering team. The ideal candidate will have strong expertise in building and optimizing big data solutions using Azure Databricks, Spark, and other Azure data services.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Databricks and Apache Spark.
- Integrate and manage large datasets using Azure Data Lake, Azure Data Factory, and other Azure services.
- Implement Delta Lake for efficient data versioning and performance optimization (a sketch follows below).
- Collaborate with cross-functional teams, including data scientists and BI developers.
- Ensure best practices for data security, governance, and compliance.
- Monitor performance and troubleshoot Spark clusters and data pipelines.

Skills & Requirements:
- Minimum 5 years of experience in data engineering, with at least 2+ years in Azure Databricks.
- Proficiency in Apache Spark (PySpark/Scala).
- Strong hands-on experience with Azure services: ADF, ADLS, Synapse Analytics.
- Expertise in building and managing ETL/ELT pipelines.
- Strong SQL skills and experience with performance tuning.
- Experience with CI/CD pipelines and Azure DevOps is a plus.
- Good understanding of data modeling, partitioning, and data lake architecture.
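For the Delta Lake versioning bullet, here is a minimal PySpark sketch: upsert new records with MERGE, then read an earlier version back via time travel. The table and column names are invented, and the target table is assumed to already exist; `spark` is a notebook-provided session.

```python
# Minimal Delta Lake sketch: idempotent upsert plus time travel.
# Table and columns are illustrative assumptions.
from delta.tables import DeltaTable

updates = spark.createDataFrame(
    [("c-001", "active"), ("c-002", "churned")], ["customer_id", "status"]
)

target = DeltaTable.forName(spark, "curated.customers")  # assumed existing table

# MERGE gives idempotent upserts instead of full rewrites.
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)

# Time travel: read the table as it was at an earlier version, e.g. for audits or rollbacks.
v0 = spark.sql("SELECT * FROM curated.customers VERSION AS OF 0")
```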

Posted 3 weeks ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are seeking a skilled Lead Data Engineer with extensive experience in Snowflake, ADF, SQL, and other relevant data technologies to join our team. As a key member of our data engineering team, you will play an instrumental role in designing, developing, and managing data pipelines, working closely with cross-functional teams to drive the success of our data initiatives.

Key Responsibilities:
- Design, implement, and maintain data solutions using Snowflake, ADF, and SQL Server to ensure data integrity, scalability, and high performance.
- Lead and contribute to the development of data pipelines, ETL processes, and data integration solutions, ensuring the smooth extraction, transformation, and loading of data from diverse sources.
- Work with MSBI, SSIS, and Azure Data Lake Storage to optimize data flows and storage solutions.
- Collaborate with business and technical teams to identify project needs, estimate tasks, and set intermediate milestones to achieve final outcomes.
- Implement industry best practices related to Business Intelligence and Data Management, ensuring adherence to usability, design, and development standards.
- Perform in-depth data analysis to resolve data issues and improve overall data quality.
- Mentor and guide junior data engineers, providing technical expertise and supporting the development of their skills.
- Collaborate effectively with geographically distributed teams to ensure project goals are met in a timely manner.

Required Technical Skills:
- T-SQL, SQL Server, MSBI (SQL Server Integration Services, Reporting Services), Snowflake, Azure Data Factory (ADF), SSIS, Azure Data Lake Storage.
- Proficient in designing and developing data pipelines, data integration, and data management workflows.
- Strong understanding of cloud data solutions, with a focus on Azure-based tools and technologies.

Nice to Have:
- Experience with Power BI for data visualization and reporting.
- Familiarity with Azure Databricks for data processing and advanced analytics.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

0 Lacs

Bengaluru

Work from Office

About Client: Hiring for one of the most prestigious multinational corporations.

Job Title: Azure Databricks
Experience: 5 to 15 years

Key Responsibilities:
- Collaborate with data scientists, analysts, and business stakeholders to build data models and analytics workflows.
- Design, develop, and deploy scalable data pipelines using Azure Databricks and Apache Spark.
- Optimize and tune data processing jobs for performance and scalability.
- Integrate Databricks with other Azure data services like Azure Data Lake, Azure Synapse Analytics, Azure Data Factory, and Azure Blob Storage.
- Implement data quality checks, error handling, and logging mechanisms.
- Write clean, reusable, and well-documented code in Python, Scala, or SQL.
- Participate in code reviews, solution design discussions, and agile ceremonies.
- Ensure security and compliance standards are met for data access and processing.

Technical Skills:
- Strong hands-on experience with Azure Databricks and Apache Spark.
- 3+ years of experience in data engineering or big data development.
- Proficient in Python and SQL, and optionally Scala.
- Deep understanding of Delta Lake, data partitioning, caching, and streaming/batch processing.
- Experience working with Azure Data Factory, Azure Data Lake, Azure Synapse, and other Azure services.
- Familiarity with CI/CD practices and tools like Azure DevOps, Git, and Terraform (optional).
- Experience with ETL/ELT development and data modeling.
- Strong problem-solving and communication skills.

Note: Only immediate and 15-day joiners.
Notice period: Only immediate and 15-day joiners
Location: Bangalore
Mode of work: Hybrid (4 days work from office)
Mode of interview: Face to face (walk-in)

Thanks & Regards,
Narmadha S
Black and White Business Solutions Pvt. Ltd.
Bangalore, Karnataka, India
Direct number: 8067432451
Narmadha.s@blackwhite.in

Posted 3 weeks ago

Apply

3.0 - 5.0 years

13 - 15 Lacs

Hyderabad, Gurugram, Chennai

Work from Office

- Classic pipelines, PowerShell, YAML, Bicep, ARM templates, Terraform/Bicep CI/CD
- Experience with data lake and analytics technologies in Azure (e.g., Azure Data Lake Storage, Azure Data Factory, Azure Databricks) - most important
- Data background with Azure and PowerShell

Location: Chennai, Hyderabad, Kolkata, Pune, Ahmedabad, Remote

Posted 3 weeks ago

Apply

4.0 - 8.0 years

4 - 8 Lacs

Mumbai, Hyderabad, Pune

Work from Office

Greetings from Future Focus Infotech!

We have multiple opportunities for Azure Data Engineers.

Experience: 4+ years
Location: Mumbai / Pune / Hyderabad
Job type: This is a permanent position with Future Focus Infotech Pvt Ltd, and you will be deputed to our client. (Company URL: www.focusinfotech.com)

If you are interested in the above opportunity, send your updated CV and the below information to reema.b@focusinfotech.com:
- Total years of experience
- Current CTC
- Expected CTC
- Notice period
- Current location
- Availability for interview on weekdays
- PAN card

Thanks & Regards,
Reema
reema.b@focusinfotech.com
8925798887

Posted 3 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

Hyderabad

Work from Office

Job Title: Java Back End Engineer - Elasticsearch
Position: Senior Software Engineer
Experience: 4-8 Years
Category: Software Development
Main location: Hyderabad
Employment Type: Full Time

- 5-8 years of experience as a Java back-end developer.
- Strong in building web APIs with a microservices architecture using Java 11 and above.
- Extensive knowledge of monolithic vs. microservices architectures.
- Experience building a scalable and maintainable code base.
- Experience building containerized services (APIs) using Docker and Kubernetes.
- Solid understanding of design patterns (MediatR, Singleton, Adapter, Dependency Injection), OOP concepts, and SOLID principles.
- Experience with continuous integration and continuous delivery, with embedded testing practices and quality gates.
- Strong knowledge of ORMs like NHibernate.
- Good to have: Spring Boot 3.x, Java 11 and above, Kafka, JUnit, Mockito.
- Mandatory knowledge of and experience with Elasticsearch.
- Good to have: knowledge of Azure Data Factory.

If you have relevant experience, please share your resume with anusha@orcapod.work.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Gurugram

Work from Office

Capgemini Invent

Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design to help CxOs envision and build what's next for their businesses.

Your Role:
- Proficiency in MS Fabric, Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
- Extensive knowledge of MS Fabric components: Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and the semantic model.
- Integrate Fabric capabilities for seamless data flow, governance, and collaboration across teams.
- Strong understanding of Delta Lake, Parquet, and distributed data systems.
- Strong programming skills in Python, PySpark, Scala, or Spark SQL/T-SQL for data transformations.

Your Profile:
- Strong experience in the implementation and management of lakehouses using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL).
- Proficiency in data integration techniques, ETL processes, and data pipeline architectures.
- Understanding of machine learning algorithms and AI/ML frameworks (e.g. TensorFlow, PyTorch) and Power BI is an added advantage.
- MS Fabric and PySpark are a must.

What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

10 - 20 Lacs

Hyderabad, Pune, Greater Noida

Work from Office

We're Hiring: Azure Data Engineer at Coforge Ltd.

How to Apply:
- Send your CV to: Gaurav.2.Kumar@coforge.com
- WhatsApp for queries: 9667427662

Experience Required: 5 to 8 Years
Job Locations: Greater Noida, Pune, Hyderabad
Employment Type: Full-Time
Start Date: Immediate Joiners Preferred

About the Role: Coforge Ltd is seeking a skilled and experienced Azure Data Engineer to join our dynamic team. This is an urgent requirement, and we are looking for professionals who can start immediately and contribute to high-impact projects.

Key Responsibilities:
- Design and develop scalable data engineering solutions using Azure cloud services.
- Collaborate with stakeholders to gather requirements and deliver effective solutions.
- Build and manage data pipelines using Azure Data Factory and Azure Databricks.
- Work within a DevOps team to support and enhance data infrastructure.

Required Skills:
- Azure Data Lake Storage
- Azure Data Factory
- Azure Functions
- Event Hub
- Azure Stream Analytics
- Azure Databricks
- Python / PySpark
- SQL

Qualifications & Experience:
- 4-8 years of experience in data engineering and cloud development.
- Strong hands-on expertise in Azure technologies (ADF, ADX, ASA, Event Hub, etc.).
- Solid experience in SQL and data analytics.
- Familiarity with Agile, DevOps, and product-based development models.
- Bachelor's degree in Computer Science or a related technical field.

Posted 3 weeks ago

Apply

4.0 - 8.0 years

5 - 15 Lacs

Chennai, Delhi / NCR, Mumbai (All Areas)

Hybrid

Job Description (JD): Azure Databricks / ADF / Synapse, with a strong emphasis on Python, SQL, Data Lake, and Data Warehouse.

Job Title: Data Engineer - Azure (Databricks / ADF / Synapse)
Experience: 4 to 7 Years
Location: Pan India
Employment Type: Full-Time
Notice Period: Immediate to 30 Days

Job Summary: We are looking for a skilled and experienced Data Engineer with 4 to 8 years of experience in building scalable data solutions on the Microsoft Azure ecosystem. The ideal candidate must have strong hands-on experience with Azure Databricks, Azure Data Factory (ADF), or Azure Synapse Analytics, along with Python and SQL expertise. Familiarity with Data Lake and Data Warehouse concepts and end-to-end data pipelines is essential.

Key Responsibilities:
- Requirement gathering and analysis
- Experience with different databases like Synapse, SQL DB, Snowflake, etc.
- Design and implement data pipelines using Azure Data Factory, Databricks, and Synapse
- Create and manage Azure SQL Data Warehouses and Azure Cosmos DB databases
- Extract, transform, and load (ETL) data from various sources into Azure Data Lake Storage
- Implement data security and governance measures
- Monitor and optimize data pipelines for performance and efficiency
- Troubleshoot and resolve data engineering issues
- Provide optimized solutions for any problem related to data engineering
- Ability to work with a variety of sources such as relational DBs, APIs, file systems, real-time streams, CDC, etc.
- Strong knowledge of Databricks and Delta tables

Required Skills:
- 4-8 years of experience in Data Engineering or related roles.
- Hands-on experience in Azure Databricks, ADF, or Synapse Analytics.
- Proficiency in Python for data processing and scripting.
- Strong command of SQL: writing complex queries, performance tuning, etc.
- Experience working with Azure Data Lake Storage and data warehouse concepts (e.g., dimensional modeling, star/snowflake schemas); a small star-schema sketch follows below.
- Understanding of CI/CD practices in a data engineering context.
- Excellent problem-solving and communication skills.

Good to Have:
- Experience in Delta Lake, Power BI, or Azure DevOps.
- Knowledge of Spark, Scala, or other distributed processing frameworks.
- Exposure to BI tools like Power BI, Tableau, or Looker.
- Familiarity with data security and compliance in the cloud.
- Experience leading a development team.
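As a pointer to the dimensional-modeling requirement, here is a minimal star-schema sketch in Spark SQL run from PySpark: one fact table keyed to two dimensions. The schema, table, and column names are invented for illustration, and `spark` is a notebook-provided session.

```python
# Minimal star-schema sketch (Spark SQL via PySpark). All names are illustrative.
spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.dim_customer (
        customer_key BIGINT,
        customer_id  STRING,
        segment      STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.dim_date (
        date_key      INT,     -- e.g. 20240630
        calendar_date DATE,
        fiscal_period STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS dw.fact_sales (
        customer_key BIGINT,   -- FK to dim_customer
        date_key     INT,      -- FK to dim_date
        quantity     INT,
        amount       DECIMAL(18,2)
    ) USING DELTA
    PARTITIONED BY (date_key)  -- date partitioning for file pruning
""")
```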

Posted 3 weeks ago

Apply

4.0 - 8.0 years

10 - 20 Lacs

Pune

Work from Office

Role & responsibilities:
- Build and operationalize data solutions for the business using Azure services in combination with custom solutions: Azure Data Factory, Azure Data Flows, Azure Databricks, Azure Data Lake Gen2, Azure SQL, etc.
- Experience in migrating on-premise data warehouses to data platforms on the Azure cloud
- Designing and implementing data engineering, ingestion, and transformation functions using ADF and Databricks
- Proficient in PySpark
- Experience in building Python-based APIs on Azure Function Apps (a minimal sketch follows below)
- Experience with Azure Logic Apps
- Experience in lakehouse/data warehouse implementation using a modern data platform architecture
- Capacity planning and performance tuning of ADF and Databricks pipelines
- Support data visualization development using Power BI
- Exposure across the whole SDLC, including testing and deployment
- Experience in relational and dimensional modelling, including big data technologies
- Experience in Azure DevOps: build CI/CD pipelines for ADF, ADLS, Databricks, Azure SQL DB, etc.
- Experience working in secured Azure environments using Azure Key Vaults, service principals, and managed identities
- Understanding of data masking, encryption, and other practices used in handling sensitive data
- Ability to interact with the business for requirement gathering and query resolution
- Working in offshore office-based development teams, collaborating within a team environment and participating in typical project lifecycle activities such as requirement analysis, testing, and release
- Develop Azure data skills within the team through knowledge-sharing sessions, articles, etc.
- Adherence to the organisation's Risk & Controls requirements
- Participate in various initiatives/forums/groups and contribute effectively from idea creation to prototype development; support strategic programs
- Good to have: knowledge of Apigee (Google's API management)
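For the Azure Function Apps bullet, here is a minimal sketch using the Azure Functions Python v2 programming model. The route and response payload are invented for illustration; a real function would typically query a database or a Databricks SQL endpoint.

```python
# Minimal Azure Functions (Python v2 model) HTTP API sketch.
# Route and response shape are illustrative assumptions.
import json
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.FUNCTION)

@app.route(route="orders/{order_id}", methods=["GET"])
def get_order(req: func.HttpRequest) -> func.HttpResponse:
    order_id = req.route_params.get("order_id")
    # Placeholder lookup; swap in a real data-source call here.
    body = {"order_id": order_id, "status": "shipped"}
    return func.HttpResponse(
        json.dumps(body),
        mimetype="application/json",
        status_code=200,
    )
```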

Posted 3 weeks ago

Apply