Jobs
Interviews

1265 Azure Databricks Jobs - Page 22

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

6.0 - 9.0 years

18 - 20 Lacs

Bengaluru

Hybrid

Job Title: Data Engineer
Experience Range: 6-9 years
Location: Bengaluru
Notice Period: Immediate to 15 days

Job Summary:
We are looking for a skilled Data Engineer to design, build, and maintain robust, scalable data pipelines and infrastructure. This role is essential in enabling data accessibility, quality, and insights across the organization. You will work with modern cloud and big data technologies such as Azure Databricks, Snowflake, and DBT, collaborating with cross-functional teams to power data-driven decision-making.

Key Responsibilities (External Candidates):
- Data Pipeline Development: Build and optimize data pipelines to ingest, transform, and load data from multiple sources using Azure Databricks, Snowflake, and DBT.
- Data Modeling & Architecture: Design efficient data models and structures within Snowflake, ensuring optimal performance and accessibility.
- Data Transformation: Implement standardized and reusable data transformations in DBT for reliable analytics and reporting.
- Performance Optimization: Monitor and tune data workflows for performance, scalability, and fault tolerance.
- Cross-Team Collaboration: Partner with data scientists, analysts, and business users to support analytics and machine learning projects with reliable, well-structured datasets.

Additional Responsibilities (Internal Candidates):
- Implement and manage CI/CD pipelines using tools such as Jenkins, Azure DevOps, or GitHub.
- Develop data lake solutions using Scala and Python in a Hadoop/Spark ecosystem.
- Work with Azure Data Factory and orchestration tools to schedule, monitor, and maintain workflows.
- Apply a deep understanding of Hadoop architecture, Spark, Hive, and storage optimization.

Mandatory Skills:
- Hands-on experience with Azure Databricks (data processing and orchestration), Snowflake (data warehousing), DBT (data transformation), and Azure Data Factory (pipeline orchestration)
- Strong SQL and data modeling capabilities
- Proficiency in Scala and Python for data engineering use cases
- Experience with big data ecosystems: Hadoop, Spark, Hive
- Knowledge of CI/CD pipelines (Jenkins, GitHub, Azure DevOps)

Qualifications:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- 6-9 years of relevant experience in data engineering or data infrastructure roles
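The "standardized and reusable data transformations in DBT" mentioned above are, at heart, SQL models that clean a raw table into a staging table. A minimal, illustrative sketch of that pattern follows, with invented table and column names and SQLite standing in for Snowflake:

```python
import sqlite3

# Hypothetical DBT-style staging model: dedupe and filter raw order rows
# before they feed analytics. All names and data here are invented.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE raw_orders (order_id INTEGER, amount REAL, status TEXT);
    INSERT INTO raw_orders VALUES
        (1, 120.0, 'complete'),
        (1, 120.0, 'complete'),   -- duplicate from a double ingest
        (2, -5.0,  'error'),      -- bad row, filtered out below
        (3, 80.5,  'complete');
""")

# The transformation itself, as a DBT model body would express it.
conn.execute("""
    CREATE TABLE stg_orders AS
    SELECT DISTINCT order_id, amount, status
    FROM raw_orders
    WHERE amount >= 0 AND status = 'complete'
""")

rows = conn.execute(
    "SELECT order_id, amount FROM stg_orders ORDER BY order_id").fetchall()
print(rows)  # -> [(1, 120.0), (3, 80.5)]
```

In DBT proper, the SELECT would live in a model file and DBT would materialize `stg_orders` on each run; the value is that the same cleaning logic is version-controlled and reused by every downstream report.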

Posted 1 month ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Noida

Work from Office

Company: Apptad Technologies Pvt Ltd. | Industry: Employment Firms/Recruitment Services Firms | Experience: 5 to 12 years | Ref: 6566567

Job Title: Azure Data Engineer
Job Type: Full Time
Job Location: Bangalore

We are looking for a skilled Azure Data Engineer to design, develop, and maintain data solutions on the Microsoft Azure cloud platform. The ideal candidate will have experience in data engineering, data pipeline development, ETL/ELT processes, and cloud-based data services. They will be responsible for implementing scalable and efficient data architectures, ensuring data quality, and optimizing data workflows.

Key Responsibilities:
- Design and implement data pipelines using Azure Data Factory (ADF), Azure Databricks, and Azure Synapse Analytics.
- Develop and optimize ETL/ELT processes to extract, transform, and load data from various sources into Azure Data Lake, Azure SQL Database, and Azure Synapse.
- Work with Azure Data Lake Storage (ADLS) and Azure Blob Storage to manage large-scale structured and unstructured data.
- Implement data modeling, data partitioning, and indexing techniques for optimized performance in Azure-based databases.
- Develop and maintain real-time and batch processing solutions using Azure Stream Analytics and Event Hub.
- Implement data governance, data security, and compliance best practices using Azure Purview, RBAC, and encryption mechanisms.
- Optimize query performance and improve data accessibility through SQL tuning and indexing strategies.
- Collaborate with data scientists, analysts, and business stakeholders to define and implement data solutions that support business insights and analytics.
- Monitor and troubleshoot data pipeline failures, performance issues, and cloud infrastructure challenges.
- Stay updated with the latest advancements in Azure data services, big data technologies, and cloud computing.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field.
- 5-8 years of experience in data engineering, cloud data platforms, and ETL development.
- Strong expertise in Azure services such as Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks (PySpark, Scala, or Python), Azure Data Lake Storage (ADLS), Azure Blob Storage, Azure SQL Database / Cosmos DB, Azure Functions & Logic Apps, and Azure DevOps for CI/CD automation.
- Proficiency in SQL, Python, Scala, or Spark for data transformation and processing.
- Experience with big data frameworks (Apache Spark, Hadoop) and data pipeline orchestration.
- Hands-on experience with data warehousing concepts, dimensional modelling, and performance optimization.
- Understanding of data security, governance, and compliance frameworks.
- Experience with CI/CD pipelines, Terraform, ARM templates, or Infrastructure as Code (IaC).
- Knowledge of Power BI or other visualization tools is a plus.
- Strong problem-solving and troubleshooting skills.

Preferred Qualifications:
- Familiarity with SAP, Salesforce, or third-party APIs for data integration.
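The "SQL tuning and indexing strategies" bullet above is about turning full table scans into index lookups. A small runnable sketch of the idea, with invented table names and SQLite standing in for Azure SQL Database:

```python
import sqlite3

# Illustrative only: compare the query plan for the same filter before and
# after adding an index. Table and index names are invented for the sketch.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (event_id INTEGER, customer_id INTEGER, payload TEXT)")
conn.executemany("INSERT INTO events VALUES (?, ?, ?)",
                 [(i, i % 100, "x") for i in range(1000)])

plan_before = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE customer_id = 42").fetchall()

conn.execute("CREATE INDEX idx_events_customer ON events(customer_id)")
plan_after = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM events WHERE customer_id = 42").fetchall()

print(plan_before[0][-1])  # a full scan of the table
print(plan_after[0][-1])   # a search using idx_events_customer
```

The same discipline applies at warehouse scale, where the analogous tools are partitioning and clustering keys rather than B-tree indexes.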

Posted 1 month ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Chennai

Work from Office

Azure Data Engineer

Posted 1 month ago

Apply

5.0 - 8.0 years

25 - 40 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Salary: 25 to 40 LPA
Experience: 5 to 11 years
Location: Gurgaon/Bangalore/Pune/Chennai
Notice: Immediate to 30 days

Key Responsibilities & Skillsets:

Common Skillsets:
- 5+ years of experience in analytics, PySpark, Python, Spark, SQL, and associated data engineering jobs.
- Must have experience with managing and transforming big data sets using PySpark, Spark-Scala, NumPy, and pandas.
- Experience with presales.
- Experience in Gen AI POCs.
- Excellent communication and presentation skills.
- Experience in managing Python code and collaborating with customers on model evolution.
- Good knowledge of database management and Hadoop/Spark, SQL, Hive, Python (expertise).
- Superior analytical and problem-solving skills.
- Should be able to work on a problem independently and prepare client-ready deliverables with minimal or no supervision.
- Good communication skills for client interaction.

Data Management Skillsets:
- Ability to understand data models and identify ETL optimization opportunities.
- Exposure to ETL tools is preferred.
- Strong grasp of advanced SQL functionality (joins, nested queries, and procedures).
- Strong ability to translate functional specifications/requirements into technical requirements.
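"Managing and transforming big data sets using pyspark" in practice usually means groupBy/aggregate pipelines. A standard-library stand-in for the pattern (record layout invented; in PySpark this would be roughly `df.groupBy("region").agg(F.sum("sales"))`):

```python
from itertools import groupby
from operator import itemgetter

# Invented records standing in for a distributed DataFrame.
records = [
    {"region": "south", "sales": 10},
    {"region": "north", "sales": 25},
    {"region": "south", "sales": 5},
    {"region": "north", "sales": 15},
]

records.sort(key=itemgetter("region"))        # groupby requires sorted input
totals = {
    region: sum(r["sales"] for r in rows)
    for region, rows in groupby(records, key=itemgetter("region"))
}
print(totals)  # {'north': 40, 'south': 15}
```

The Spark version distributes the same grouping across partitions; the shape of the transformation, key extraction then per-group aggregation, is identical.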

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Hyderabad, Pune, Delhi / NCR

Work from Office

Job Description:
Should be an experienced professional with a Data Engineering background, able to work without much guidance.

Responsibilities:
- Design, develop, and maintain scalable ETL pipelines using Azure services to process, transform, and load large datasets into AWS Datalake or other data stores.
- Collaborate with cross-functional teams, including data architects, analysts, and business stakeholders, to gather data requirements and deliver efficient data solutions.
- Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure.
- Work with data scientists and analysts to understand data needs and create effective data workflows.
- Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage.
- Create and maintain ETL (Extract, Transform, Load) operations using Azure Data Factory or comparable technologies.
- Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
- Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
- Monitor and resolve data pipeline problems to guarantee consistency and availability of the data.

Key Skill Sets Required:
- Experience in designing and hands-on development of cloud-based analytics solutions.
- Expert-level understanding of Azure Data Factory, Azure Synapse, Azure SQL, Azure Data Lake, and Azure App Service.
- Designing and building data pipelines using API ingestion and streaming ingestion methods.
- Knowledge of DevOps processes (including CI/CD) and Infrastructure as Code is desirable.
- Strong experience in common data warehouse modelling principles, including Kimball and Inmon.
- Knowledge of Azure Databricks, Azure IoT, Azure HDInsight + Spark, Azure Stream Analytics, and Power BI is desirable.
- Working knowledge of Python is desirable.
- Experience developing security models.
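The "Kimball" modelling principle named above is the star schema: a central fact table keyed to dimension tables. A minimal sketch with invented tables and SQLite standing in for the warehouse:

```python
import sqlite3

# Illustrative star schema: one fact table, two dimensions. All names and
# data are invented for the sketch.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE dim_date    (date_key INTEGER PRIMARY KEY, iso_date TEXT);
    CREATE TABLE fact_sales  (
        product_key INTEGER REFERENCES dim_product(product_key),
        date_key    INTEGER REFERENCES dim_date(date_key),
        quantity    INTEGER
    );
    INSERT INTO dim_product VALUES (1, 'widget'), (2, 'gadget');
    INSERT INTO dim_date VALUES (20240101, '2024-01-01');
    INSERT INTO fact_sales VALUES
        (1, 20240101, 3), (2, 20240101, 7), (1, 20240101, 2);
""")

# The typical analytical query: join facts to a dimension, then aggregate.
result = conn.execute("""
    SELECT p.name, SUM(f.quantity)
    FROM fact_sales f JOIN dim_product p ON f.product_key = p.product_key
    GROUP BY p.name ORDER BY p.name
""").fetchall()
print(result)  # [('gadget', 7), ('widget', 5)]
```

Inmon-style modelling differs mainly in normalizing the central store first; both end with dimensional marts shaped like the one above.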

Posted 1 month ago

Apply

2.0 - 6.0 years

10 - 15 Lacs

Hyderabad

Remote

We are seeking a skilled Azure Data Engineer with hands-on experience in modern data engineering tools and platforms within the Azure ecosystem. The ideal candidate will have a strong foundation in data integration, transformation, and migration, along with a passion for working on complex data migration projects.

Job Title: Azure Data Engineer
Location: Remote
Work Timings: 2:00 PM to 11:00 PM IST
No. of Openings: 3

Please Note: This is a pure Azure-specific role. If your expertise is primarily in AWS or GCP, we kindly request that you do not apply.

Key Responsibilities:
- Design, develop, and maintain data pipelines using Azure Data Factory / Synapse Data Factory to orchestrate and automate data workflows.
- Build and manage data lakes using Azure Data Lake, enabling secure and scalable storage for structured and unstructured data.
- Lead and support data migration initiatives (on-prem to cloud, cloud-to-cloud), ensuring minimal disruption and high integrity of data.
- Perform advanced data transformations using Python, PySpark, and Azure Databricks or Synapse Spark Pools.
- Develop and optimize SQL / T-SQL queries for data extraction, manipulation, and reporting across Azure SQL services.
- Design and maintain ETL solutions using SSIS, where applicable.
- Collaborate with cross-functional teams to understand requirements and deliver data-driven solutions.
- Monitor, troubleshoot, and continuously improve data workflows to ensure performance, reliability, and scalability.
- Uphold best practices in data governance, security, and compliance.

Required Skills and Qualifications:
- 2+ years of experience as a Data Engineer, with a strong emphasis on Azure technologies.
- Proven expertise in Azure Data Factory / Synapse Data Factory, Azure Data Lake, Azure Databricks / Synapse Spark, Python and PySpark, SQL / T-SQL, and SSIS.
- Demonstrated experience in data migration projects and eagerness to take on new migration challenges.
- Microsoft Certified: Azure Data Engineer Associate certification preferred.
- Strong analytical and problem-solving skills.
- Excellent communication and collaboration abilities.
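"Ensuring high integrity of data" during a migration usually comes down to reconciling source and target extracts. One common check, sketched here with invented data, is comparing row counts plus an order-insensitive content hash:

```python
import hashlib

def table_fingerprint(rows):
    """Row count plus an order-insensitive digest of the rows."""
    digests = sorted(
        hashlib.sha256(repr(r).encode()).hexdigest() for r in rows)
    combined = hashlib.sha256("".join(digests).encode()).hexdigest()
    return len(rows), combined

# Same data in a different physical order should fingerprint identically,
# since migrations rarely preserve row order.
source = [(1, "alice"), (2, "bob"), (3, "carol")]
target = [(2, "bob"), (1, "alice"), (3, "carol")]

match = table_fingerprint(source) == table_fingerprint(target)
print(match)  # True
```

At real scale the same idea is pushed into the databases themselves (per-partition counts and checksums) rather than pulling all rows to the client; this sketch only shows the reconciliation logic.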

Posted 1 month ago

Apply

5.0 - 7.0 years

15 - 25 Lacs

Pune, Bengaluru

Hybrid

Job Role & Responsibilities:
- Responsible for architecting, designing, building, and deploying data systems, pipelines, etc.
- Responsible for designing and implementing agile, scalable, and cost-efficient solutions on cloud data services.
- Responsible for design, implementation, development, and migration.
- Migrate data from traditional database systems to a cloud environment.
- Architect and implement ETL and data movement solutions.

Technical Skills, Qualification & Experience Required:
- 5-7 years of experience in Data Engineering, Azure Cloud Data Engineering, Azure Databricks, Data Factory, PySpark, SQL, and Python.
- Hands-on experience in Azure Databricks, Data Factory, PySpark, and SQL.
- Proficient in cloud services (Azure).
- Strong hands-on experience working with streaming datasets.
- Hands-on expertise in data refinement using PySpark and Spark SQL.
- Familiarity with building datasets using Scala.
- Familiarity with tools such as Jira and GitHub.
- Experience leading agile scrum, sprint planning, and review sessions.
- Good communication and interpersonal skills.
- Comfortable working in a multidisciplinary team within a fast-paced environment.
- Immediate joiners will be preferred.

Posted 1 month ago

Apply

6.0 - 10.0 years

12 - 22 Lacs

Pune, Bengaluru

Hybrid

Job Role & Responsibilities:
- Design, develop, and maintain Power BI based dashboards.
- Create effective data models in Azure Databricks and Power BI.
- Prepare presentations and give walkthroughs of the dashboards to stakeholders.
- Provide production monitoring, incident management, and regular maintenance support.
- Manage 2-5 team members and drive/facilitate some common priorities.
- The role may involve doing independent analysis or supporting projects.
- Work closely with the business on problem scoping, development, execution, deployment, and regular maintenance.

Technical Skills, Experience & Qualification Required:
- Overall experience of 7-10 years.
- SQL, Python, and PySpark coding and debugging.
- Hands-on experience in Azure, ADF, Databricks, Azure SQL, and Python.
- Proven experience working with the Azure stack.
- Experience with cubes.
- Experience in the finance, risk, and investment management domain.
- Creating effective data models in Power BI, SQL, or Databricks.
- Development of dashboards using Power BI Desktop.
- Publishing dashboards in Power BI Service.
- Implementation of Power BI gateways.
- Implementation of row-level security.
- Understanding complex and unstructured requirements and translating them into specs.
- Understanding of relational and other types of databases.
- Understanding of Azure architecture and services.
- IT/ETL technological principles.
- Hands-on experience in building interactive and insightful dashboards using Power BI.
- End-to-end understanding of Power BI Desktop & Service.
- Expertise in writing DAX functions.

Soft Skills and Competencies:
- Excellent English communication and stakeholder management skills.
- Excellent analytical skills.
- Experience managing multiple stakeholders.
- Analytical and problem-solving skills.
- Team mentoring and training experience.
- Immediate joiners will be preferred.

Posted 1 month ago

Apply

6.0 - 10.0 years

27 - 30 Lacs

Bengaluru

Work from Office

We are looking for an experienced Azure Data Factory (ADF) Developer to design, develop, and optimize data integration and ETL pipelines on Azure. The ideal candidate will have strong expertise in ADF, Azure Synapse, Azure Databricks, and other Azure data services. They should be skilled in ETL processes, data warehousing, and cloud-based data solutions while ensuring performance, security, and scalability.

Key Responsibilities:
- Design and develop ETL pipelines using Azure Data Factory (ADF) to ingest, transform, and process data.
- Integrate ADF with other Azure services like Azure Synapse Analytics, Azure Data Lake, Azure Databricks, and SQL Database.
- Develop data transformations using Mapping Data Flows, SQL, and Python/PySpark.
- Optimize ADF performance, data flow, and cost efficiency for scalable data solutions.
- Automate data pipelines, scheduling, and orchestration using triggers and event-driven workflows.
- Troubleshoot ADF pipeline failures, performance bottlenecks, and debugging issues.
- Work with Azure Monitor, Log Analytics, and Application Insights for data pipeline monitoring.
- Ensure data security, governance, and compliance with Azure security best practices.
- Collaborate with data engineers, cloud architects, and business stakeholders to define data strategies.
- Implement CI/CD for data pipelines using Azure DevOps, Git, and Infrastructure as Code (Terraform, ARM templates, or Bicep).
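Troubleshooting transient ADF pipeline failures, as described above, is partly a retry discipline; real pipelines configure retries on the activity itself, but the underlying backoff logic looks like this sketch (the flaky activity function is an invented stand-in):

```python
import time

def run_with_retries(activity, max_attempts=3, base_delay=0.01):
    """Retry a callable with exponential backoff on transient failures."""
    for attempt in range(1, max_attempts + 1):
        try:
            return activity()
        except RuntimeError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))  # 1x, 2x, 4x, ...

# Invented stand-in for a copy activity that throttles twice, then succeeds.
calls = {"n": 0}
def flaky_copy_activity():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient throttling")
    return "Succeeded"

status = run_with_retries(flaky_copy_activity)
print(status, calls["n"])  # Succeeded 3
```

The key design point is distinguishing transient errors (retry) from deterministic ones (fail fast and alert), which is what the monitoring bullet above exists to support.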

Posted 1 month ago

Apply

6.0 - 8.0 years

1 - 6 Lacs

Hyderabad, Bengaluru

Work from Office

Data Architect

Job Description:
Senior Data Architect with experience in the design, build, and optimization of complex data landscapes and legacy modernization projects. The ideal candidate will have deep expertise in database management, data modeling, cloud data solutions, and ETL (Extract, Transform, Load) processes. This role requires a strong leader capable of guiding data teams and driving the design and implementation of scalable data architectures.

Key Areas of Expertise:
- Design and implement scalable and efficient data architectures to support business needs.
- Develop data models (conceptual, logical, and physical) that align with organizational goals.
- Lead database design and optimization efforts for structured and unstructured data.
- Establish ETL pipelines and data integration strategies for seamless data flow.
- Define data governance policies, including data quality, security, privacy, and compliance.
- Work closely with engineering, analytics, and business teams to understand requirements and deliver data solutions.
- Oversee cloud-based data solutions (AWS, Azure, GCP) and modern data warehouses (Snowflake, BigQuery, Redshift).
- Ensure high availability, disaster recovery, and backup strategies for critical databases.
- Evaluate and implement emerging data technologies, tools, and frameworks to improve efficiency.
- Conduct data audits, performance tuning, and troubleshooting to maintain optimal performance.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 13+ years of experience in data modeling, including conceptual, logical, and physical data design.
- 5-8 years of experience in cloud data lake platforms such as AWS Lake Formation, Delta Lake, Snowflake, or Google BigQuery.
- Proven experience with NoSQL databases and data modeling techniques for non-relational data.
- Experience with data warehousing concepts, ETL/ELT processes, and big data frameworks (e.g., Hadoop, Spark).
- Hands-on experience delivering complex, multi-module projects in diverse technology ecosystems.
- Strong understanding of data governance, data security, and compliance best practices.
- Proficiency with data modeling tools (e.g., ER/Studio, ERwin, PowerDesigner).
- Excellent leadership and communication skills, with a proven ability to manage teams and collaborate with stakeholders.

Preferred Skills:
- Experience with modern data architectures, such as data fabric or data mesh.
- Knowledge of graph databases and modeling for technologies like Neo4j.
- Proficiency with programming languages like Python, Scala, or Java.
- Understanding of CI/CD pipelines and DevOps practices in data engineering.

Please Note: Must have experience in Data Governance & Data Modeling.
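The "data audits" responsibility above typically starts with mechanical quality checks: key uniqueness and required-field completeness. A minimal sketch with invented field names and thresholds:

```python
def audit(rows, key_field, required_fields):
    """Report duplicate keys and missing required values in a row set."""
    issues = []
    keys = [r.get(key_field) for r in rows]
    if len(keys) != len(set(keys)):
        issues.append(f"duplicate values in {key_field}")
    for field in required_fields:
        missing = sum(1 for r in rows if r.get(field) in (None, ""))
        if missing:
            issues.append(f"{field}: {missing} missing value(s)")
    return issues

# Invented sample extract with one duplicate key and one missing email.
rows = [
    {"id": 1, "email": "a@x.com"},
    {"id": 2, "email": ""},
    {"id": 2, "email": "c@x.com"},
]
problems = audit(rows, "id", ["email"])
print(problems)  # ['duplicate values in id', 'email: 1 missing value(s)']
```

Governance tooling (Purview, dbt tests, Great Expectations and the like) industrializes exactly these checks; the audit logic itself stays this simple.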

Posted 1 month ago

Apply

7.0 - 10.0 years

20 - 30 Lacs

Pune, Chennai

Hybrid

YOU'LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES

Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Senior Data Engineer, you'll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, PySpark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins, and Bitbucket/GitHub.

WHAT YOU'LL DO:
- Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies.
- Deploy application components using CI/CD pipelines.
- Build utilities for monitoring and automating repetitive functions.
- Collaborate with Agile cross-functional teams, internal and external clients, including Operations, Infrastructure, and Tech Ops.
- Collaborate with the Data Science team to productionize ML models.
- Participate in a rotational support schedule to respond to customer queries and deploy bug fixes in a timely and accurate manner.

WE'RE LOOKING FOR PEOPLE WHO HAVE:
- 8-10 years of applicable software engineering experience.
- Strong fundamentals with experience in big data technologies: Spark, PySpark, Scala, Pandas, Databricks, Airflow, and SQL.
- Experience in cloud technologies, preferably Microsoft Azure (must have).
- Experience in performance optimization of Spark workloads (must have).
- Experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, and Docker (good to have).
- Knowledge of relational databases, preferably PostgreSQL (good to have).
- Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business.
- Minimum B.S. degree in Computer Science, Computer Engineering, or a related field.

Our Benefits:
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

Posted 1 month ago

Apply

2.0 - 6.0 years

5 - 8 Lacs

Pune

Work from Office

Supports, develops, and maintains a data and analytics platform. Effectively and efficiently processes, stores, and makes data available to analysts and other consumers. Works with Business and IT teams to understand requirements and best leverage technologies to enable agile data delivery at scale.

Note: Although the role category in the GPP is listed as Remote, the requirement is for a Hybrid work model.

Key Responsibilities:
- Oversee the development and deployment of end-to-end data ingestion pipelines using Azure Databricks, Apache Spark, and related technologies.
- Design high-performance, resilient, and scalable data architectures for data ingestion and processing.
- Provide technical guidance and mentorship to a team of data engineers.
- Collaborate with data scientists, business analysts, and stakeholders to integrate various data sources into the data lake/warehouse.
- Optimize data pipelines for speed, reliability, and cost efficiency in an Azure environment.
- Enforce and advocate for best practices in coding standards, version control, testing, and documentation.
- Work with Azure services such as Azure Data Lake Storage, Azure SQL Data Warehouse, Azure Synapse Analytics, and Azure Blob Storage.
- Implement data validation and data quality checks to ensure consistency, accuracy, and integrity.
- Identify and resolve complex technical issues proactively.
- Develop reliable, efficient, and scalable data pipelines with monitoring and alert mechanisms.
- Use agile development methodologies, including DevOps, Scrum, and Kanban.

Technical Skills:
- Expertise in Spark, including optimization, debugging, and troubleshooting.
- Proficiency in Azure Databricks for distributed data processing.
- Strong coding skills in Python and Scala for data processing.
- Experience with SQL for handling large datasets.
- Knowledge of data formats such as Iceberg, Parquet, ORC, and Delta Lake.
- Understanding of cloud infrastructure and architecture principles, especially within Azure.

Leadership & Soft Skills:
- Proven ability to lead and mentor a team of data engineers.
- Excellent communication and interpersonal skills.
- Strong organizational skills with the ability to manage multiple tasks and priorities.
- Ability to work in a fast-paced, constantly evolving environment.
- Strong problem-solving, analytical, and troubleshooting abilities.
- Ability to collaborate effectively with cross-functional teams.

Competencies:
- System Requirements Engineering: Uses appropriate methods to translate stakeholder needs into verifiable requirements.
- Collaborates: Builds partnerships and works collaboratively to meet shared objectives.
- Communicates Effectively: Delivers clear, multi-mode communications tailored to different audiences.
- Customer Focus: Builds strong customer relationships and delivers customer-centric solutions.
- Decision Quality: Makes good and timely decisions to keep the organization moving forward.
- Data Extraction: Performs ETL activities and transforms data for consumption by downstream applications.
- Programming: Writes and tests computer code, version control, and build automation.
- Quality Assurance Metrics: Uses measurement science to assess solution effectiveness.
- Solution Documentation: Documents information for improved productivity and knowledge transfer.
- Solution Validation Testing: Ensures solutions meet design and customer requirements.
- Data Quality: Identifies, understands, and corrects data flaws.
- Problem Solving: Uses systematic analysis to address and resolve issues.
- Values Differences: Recognizes the value that diverse perspectives bring to an organization.

Preferred Knowledge & Experience:
- Exposure to big data open-source technologies (Spark, Scala/Java, MapReduce, Hive, HBase, Kafka, etc.).
- Experience with SQL and working with large datasets.
- Clustered compute cloud-based implementation experience.
- Familiarity with developing applications requiring large file movement in a cloud-based environment.
- Exposure to Agile software development and analytical solutions.
- Exposure to IoT technology.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
- Experience: 3 to 5 years of experience in data engineering or a related field.
- Strong hands-on experience with Azure Databricks, Apache Spark, Python/Scala, CI/CD, Snowflake, and Qlik for data processing.
- Experience working with multiple file formats like Parquet, Delta, and Iceberg.
- Knowledge of Kafka or similar streaming technologies.
- Experience with data governance and data security in Azure.
- Proven track record of building large-scale data ingestion and ETL pipelines in cloud environments.
- Deep understanding of Azure Data Services.
- Experience with CI/CD pipelines, version control (Git), Jenkins, and agile methodologies.
- Familiarity with data lakes, data warehouses, and modern data architectures.
- Experience with Qlik Replicate (optional).
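The Kafka/streaming knowledge asked for above usually shows up first as windowed aggregation: bucketing timestamped events into fixed intervals. A stdlib sketch of a tumbling window (event data invented; stream processors like Spark Structured Streaming express the same idea declaratively):

```python
from collections import defaultdict

def tumbling_window_counts(events, window_seconds):
    """Count (timestamp, payload) events per fixed, non-overlapping window."""
    counts = defaultdict(int)
    for ts, _payload in events:
        window_start = ts - (ts % window_seconds)  # floor to window boundary
        counts[window_start] += 1
    return dict(counts)

# Invented event stream: timestamps in seconds.
events = [(0, "a"), (3, "b"), (5, "c"), (9, "d"), (10, "e")]
windows = tumbling_window_counts(events, 5)
print(windows)  # {0: 2, 5: 2, 10: 1}
```

Production streaming adds the hard parts this sketch omits, late-arriving events, watermarks, and exactly-once state, but the windowing arithmetic is the same.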

Posted 1 month ago

Apply

4.0 - 9.0 years

8 - 12 Lacs

Chennai

Remote

Expertise in ADF, Azure Databricks and Python. The ideal candidate will be responsible for developing and optimizing data pipelines, integrating cloud data services, and building scalable data processing workflows in the Azure ecosystem.

Posted 1 month ago

Apply

4.0 - 8.0 years

12 - 20 Lacs

Hyderabad

Work from Office

Share updated CV to salveen.shaik@covasant.com

Job Title: Azure Data Engineer with Databricks
Location: Hyderabad (WFO)
Experience: 5-8 years
Job Type: Full-time
Client Industry: [e.g., Healthcare, BFSI, ]
Job Level: Senior Engineer / Lead / Architect

Role Overview:
We are looking for a skilled Azure Data Engineer with expertise in Databricks to join our high-performing data and AI team for a critical client engagement. The ideal candidate will have strong hands-on experience in building scalable data pipelines, data transformation, and real-time data processing using Azure Data Services and Databricks. You will work closely with cross-functional teams including data scientists, architects, business analysts, and client stakeholders to design and implement end-to-end data solutions in a cloud-native environment.

Key Responsibilities:
- Design, develop, and deploy end-to-end data pipelines using Azure Databricks, Azure Data Factory, and Azure Synapse Analytics.
- Perform data ingestion, data wrangling, and ETL/ELT processes from various structured and unstructured data sources (e.g., APIs, on-prem databases, flat files).
- Optimize and tune Spark-based jobs and Databricks notebooks for performance and scalability.
- Implement best practices for CI/CD, code versioning, and testing in a Databricks environment using DevOps pipelines.
- Design data lake and data warehouse solutions using Delta Lake and Synapse Analytics.
- Ensure data security, governance, and compliance using Azure-native tools (e.g., Azure Purview, Key Vault, RBAC).
- Collaborate with data scientists to enable feature engineering and model training within Databricks.
- Write efficient SQL and PySpark code for data transformation and analytics.
- Monitor and maintain existing data pipelines and troubleshoot issues in a production environment.
- Document technical solutions, architecture diagrams, and data lineage as part of delivery.

Mandatory Skills & Technologies:
- Azure Cloud Services: Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Data Lake Storage (Gen2), Azure Key Vault, Azure Functions, Azure Monitor
- Databricks Platform: Delta Lake, Databricks Notebooks, Job Clusters, MLflow (optional), Unity Catalog
- Programming Languages: PySpark, SQL, Python
- Data Pipelines: ETL/ELT pipeline design and orchestration
- Version Control & DevOps: Git, Azure DevOps, CI/CD pipelines
- Data Modeling: Star/Snowflake schema, dimensional modeling
- Performance Tuning: Spark job optimization, data partitioning strategies
- Data Governance & Security: Azure Purview, RBAC, data masking

Nice to Have:
- Experience with Kafka, Event Hub, or other real-time streaming platforms
- Exposure to Power BI or other visualization tools
- Knowledge of Terraform or ARM templates for infrastructure as code
- Experience in MLOps and integration with MLflow for model lifecycle management

Certifications (Good to Have):
- Microsoft Certified: Azure Data Engineer Associate
- Databricks Certified Data Engineer Associate / Professional
- DP-203: Data Engineering on Microsoft Azure

Soft Skills:
- Strong communication and client interaction skills
- Analytical thinking and problem-solving
- Agile mindset with familiarity in Scrum/Kanban
- Team player with mentoring ability for junior engineers
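The "data partitioning strategies" item under performance tuning rests on one idea: hash a key so each record deterministically lands in the same partition, which is what Spark does when it shuffles. A sketch using a stable checksum (names and partition count invented; Python's built-in `hash` is avoided because it is salted per process):

```python
import zlib

def assign_partition(key, num_partitions=4):
    """Deterministically map a string key to a partition number."""
    return zlib.crc32(key.encode()) % num_partitions

keys = ["cust-1", "cust-2", "cust-1", "cust-3"]
parts = [assign_partition(k) for k in keys]

# The property that matters: the same key always lands on the same partition,
# so all records for one customer can be aggregated without a further shuffle.
print(parts[0] == parts[2])  # True
```

Choosing the partition key is the tuning decision: a low-cardinality or skewed key concentrates data on a few partitions and starves the rest of the cluster.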

Posted 1 month ago

Apply

10.0 - 14.0 years

20 - 35 Lacs

Greater Noida

Remote

Roles and Responsibilities:
- Design, develop, test, and deploy data pipelines using Azure Data Factory (ADF) to extract, transform, and load large datasets from various sources into Azure storage solutions such as Azure Blob Storage, Azure Data Lake Storage, and Cosmos DB.
- Collaborate with cross-functional teams to gather requirements for ETL processes and design scalable architectures that meet business needs.
- Develop complex data transformations using PySpark on Azure Databricks to integrate with other systems and services.
- Troubleshoot issues related to ADF pipeline failures by analyzing logs, using debugging techniques, and working closely with stakeholders.

Posted 1 month ago

Apply

4.0 - 9.0 years

8 - 12 Lacs

Hyderabad, Gurugram

Work from Office

- Develop and maintain SQL and NoSQL databases in Azure, including schema design, stored procedures, and data integrity
- Continuous improvement of data pipelines using Azure Data Factory
- As a foundation Developing insightful and interactive business
- Minimum of 4 years of Database Administrator experience with Microsoft SQL Server
- Minimum of 2 years of Azure SQL Database experience
- Kindly sent

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 12 Lacs

Hyderabad, Gurugram

Work from Office

- Develop and maintain SQL and NoSQL databases in Azure, including schema design, stored procedures, and data integrity
- Continuous improvement of data pipelines using Azure Data Factory
- As a foundation Developing insightful and interactive business
- Minimum of 4 years of Database Administrator experience with Microsoft SQL Server
- Minimum of 2 years of Azure SQL Database experience

Posted 1 month ago

Apply

6.0 - 8.0 years

3 - 6 Lacs

Pune

Work from Office

Role & Responsibilities:
Job Title: Developer
Work Location: Pune, MH
Skill Required: Azure Data Factory
Experience Range in Required Skills: 6-8 years
Job Description: (6+ years) Azure, ADF, Databricks, Python
Essential Skills: (6+ years) Azure, ADF, Databricks, Python

Posted 1 month ago

Apply

3.0 - 8.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must-Have Skills: Microsoft 365, Microsoft PowerShell, Microsoft 365 Security & Compliance
Good-to-Have Skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full-time education

Summary: As an Application Tech Support Practitioner, you will serve as the vital link between clients and the systems or applications they utilize. Your typical day will involve engaging with clients to understand their needs, addressing any issues they encounter, and ensuring that our high-quality systems operate seamlessly. You will leverage your exceptional communication skills to provide clarity and support, while also utilizing your in-depth product knowledge to design effective resolutions tailored to client requirements. Your commitment to quality will be evident in every interaction, as you strive to maintain the integrity and performance of our world-class systems.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Facilitate training sessions for team members to enhance their understanding of system functionalities.
- Develop and maintain comprehensive documentation for troubleshooting processes and client interactions.

Professional & Technical Skills:
- Must-Have Skills: Proficiency in Microsoft 365, Microsoft PowerShell, Microsoft 365 Security & Compliance.
- Strong understanding of cloud-based solutions and their implementation.
- Experience with system integration and application support.
- Ability to analyze and resolve technical issues efficiently.
- Familiarity with security protocols and compliance standards related to Microsoft 365.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft 365.
- This position is based at our Bengaluru office.
- A 15 years full-time education is required.

Qualification: 15 years full-time education

Posted 1 month ago

Apply

3.0 - 8.0 years

1 - 5 Lacs

Bengaluru

Work from Office

Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must Have Skills: Microsoft 365, Microsoft PowerShell, Microsoft 365 Security & Compliance
Good to Have Skills: NA
Minimum Experience: 3 year(s)
Educational Qualification: 15 years of full-time education

Summary: As an Application Tech Support Practitioner, you will serve as the vital link between clients and the systems or applications they use. Your typical day will involve engaging with clients to understand their needs, addressing their concerns, and ensuring that our world-class systems operate seamlessly. You will leverage your exceptional communication skills to provide clarity and support, while also drawing on your deep product knowledge to design effective resolutions for any issues that arise. Your commitment to quality will be evident in every interaction as you strive to enhance the client experience and maintain system integrity.

Roles & Responsibilities:
- Perform independently and grow into an SME.
- Participate actively and contribute in team discussions.
- Contribute to solutions for work-related problems.
- Facilitate training sessions for team members to enhance their understanding of system functionalities.
- Develop and maintain comprehensive documentation for processes and procedures to ensure consistency and quality.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Microsoft 365, Microsoft PowerShell, and Microsoft 365 Security & Compliance.
- Strong understanding of cloud-based solutions and their implementation.
- Experience troubleshooting and resolving technical issues related to Microsoft 365 applications.
- Familiarity with security protocols and compliance measures within Microsoft 365 environments.
- Ability to communicate technical information effectively to non-technical stakeholders.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft 365.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years of full-time education

Posted 1 month ago

Apply

3.0 - 4.0 years

8 - 13 Lacs

Noida, Gurugram

Work from Office

R1 India is proud to be recognized amongst the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion, and diversity is demonstrated through prestigious recognitions, with R1 India ranked amongst Best in Healthcare and Top 100 Best Companies for Women by Avtar & Seramount, and amongst the Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India, with a presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

Position Title: Specialist
Reports to: Program Manager - Analytics BI
Location: Noida

Position summary: The Specialist will work with the development team and be responsible for development tasks as an individual contributor. He/she should be technically sound and able to communicate effectively with clients.

Key duties & responsibilities:
- Work as a Specialist on an end-to-end (E2E) analytics data engineering project.
- Ensure on-time project delivery.
- Mentor and guide other team members.
- Gather requirements from the client and communicate with them directly.
- Ensure timely creation of documents for knowledge bases, user guides, and other communication systems.
- Ensure delivery against business needs, team goals, and objectives, i.e., meeting commitments and coordinating the overall schedule.
- Work with large datasets in various formats, including integrity/QA checks and reconciliation for accounting systems.
- Lead efforts to troubleshoot and solve process- or system-related issues.
- Understand, support, enforce, and comply with company policies, procedures, and Standards of Business Ethics and Conduct.
- Experience working with Agile methodology.

Experience, Skills and Knowledge:
- Bachelor's degree in computer science or equivalent experience is required; B.Tech/MCA preferable.
- Minimum 3-4 years of experience.
- Excellent communication skills and a strong commitment to delivering the highest level of service.

Technical Skills:
- Expert knowledge and experience working with Spark and Scala.
- Experience with Azure Data Factory, Azure Databricks, and Azure Data Lake.
- Experience working with SQL and Snowflake.
- Experience with data integration tools such as SSIS and ADF.
- Experience with programming languages such as Python, Spark, and Scala.
- Expertise in Astronomer Airflow.
- Experience with or exposure to Microsoft Azure Data Fundamentals.

Key competency profile:
- Own your development by implementing and sharing your learnings.
- Motivate each other to perform at our highest level.
- Work the right way by acting with integrity and living our values every day.
- Succeed by proactively identifying problems and solutions for yourself and others.
- Communicate effectively when there are challenges.
- Demonstrate accountability and responsibility.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com or visit us on Facebook.

Posted 1 month ago

Apply

7.0 - 10.0 years

6 - 10 Lacs

Noida

Work from Office

Staff Software Engineer (Data Engineer)

R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

R1 RCM Inc. is a leading provider of technology-enabled revenue cycle management services which transform and solve challenges across health systems, hospitals, and physician practices. Headquartered in Chicago, R1 is a publicly traded organization with employees throughout the US and multiple India locations. Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients, and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience.

Position summary: We are seeking a Staff Data Engineer with 7-10 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.

Key duties & responsibilities:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
- Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data.
- Share your passion for experimenting with and learning new technologies.
- Perform thorough data analysis, uncover opportunities, and address business problems.

Qualification: B.E/B.Tech/MCA or equivalent professional degree

Experience, Skills and Knowledge:
- Deep knowledge and experience working with SSIS and T-SQL.
- Experienced in Azure Data Factory, Azure Databricks, and Azure Data Lake.
- Experience working with a language such as Python or Scala.
- Experience working with SQL and NoSQL database systems such as MongoDB.
- Experience in distributed system architecture design.
- Experience with cloud environments (Azure preferred).
- Experience with acquiring and preparing data from primary and secondary disparate data sources (real-time preferred).
- Experience working on large-scale data product implementation, responsible for technical delivery, mentoring, and managing peer engineers.
- Experience working with Databricks preferred.
- Experience working with Agile methodology preferred.
- Healthcare industry experience preferred.

Key competency profile:
- Spot new opportunities by anticipating change and planning accordingly.
- Find ways to better serve customers and patients.
- Be accountable for customer service of the highest quality.
- Create connections across teams by valuing differences and including others.
- Own your development by implementing and sharing your learnings.
- Motivate each other to perform at our highest level.
- Help people improve by learning from successes and failures.
- Work the right way by acting with integrity and living our values every day.
- Succeed by proactively identifying problems and solutions for yourself and others.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com or visit us on Facebook.

Posted 1 month ago

Apply

5.0 - 7.0 years

9 - 14 Lacs

Noida, Gurugram

Work from Office

R1 India is proud to be recognized amongst the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion, and diversity is demonstrated through prestigious recognitions, with R1 India ranked amongst Best in Healthcare and Top 100 Best Companies for Women by Avtar & Seramount, and amongst the Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India, with a presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

Position Title: Senior Specialist
Reports to: Program Manager - Analytics BI

Position summary: The Senior Specialist will work with the development team and be responsible for development tasks as an individual contributor. He/she should be able to mentor the team and help resolve issues, and should be technically sound and able to communicate effectively with clients.

Key duties & responsibilities:
- Work as Lead Developer on an end-to-end (E2E) analytics data engineering project.
- Ensure on-time project delivery.
- Mentor and guide other team members.
- Gather requirements from the client and communicate with them directly.
- Ensure timely creation of documents for knowledge bases, user guides, and other communication systems.
- Ensure delivery against business needs, team goals, and objectives, i.e., meeting commitments and coordinating the overall schedule.
- Work with large datasets in various formats, including integrity/QA checks and reconciliation for accounting systems.
- Lead efforts to troubleshoot and solve process- or system-related issues.
- Understand, support, enforce, and comply with company policies, procedures, and Standards of Business Ethics and Conduct.
- Experience working with Agile methodology.

Experience, Skills and Knowledge:
- Bachelor's degree in Computer Science or equivalent experience is required; B.Tech/MCA preferable.
- Minimum 5-7 years of experience.
- Excellent communication skills and a strong commitment to delivering the highest level of service.

Technical Skills:
- Expert knowledge and experience working with Spark and Scala.
- Experience with Azure Data Factory, Azure Databricks, and Azure Data Lake.
- Experience working with SQL and Snowflake.
- Experience with data integration tools such as SSIS and ADF.
- Experience with programming languages such as Python, Spark, and Scala.
- Expertise in Astronomer Airflow.
- Experience with or exposure to Microsoft Azure Data Fundamentals.

Key competency profile:
- Own your development by implementing and sharing your learnings.
- Motivate each other to perform at our highest level.
- Work the right way by acting with integrity and living our values every day.
- Succeed by proactively identifying problems and solutions for yourself and others.
- Communicate effectively when there are challenges.
- Demonstrate accountability and responsibility.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com or visit us on Facebook.

Posted 1 month ago

Apply

5.0 - 7.0 years

6 - 10 Lacs

Noida

Work from Office

R1 is a leading provider of technology-driven solutions that help hospitals and health systems manage their financial systems and improve the patient experience. We are the one company that combines the deep expertise of a global workforce of revenue cycle professionals with the industry's most advanced technology platform, encompassing sophisticated analytics, AI, intelligent automation, and workflow orchestration. R1 is a place where we think boldly to create opportunities for everyone to innovate and grow; a place where we partner with purpose through transparency and inclusion. We are a global community of engineers, front-line associates, healthcare operators, and RCM experts who work together to go beyond for all those we serve, because we know that all this adds up to something more: a place where we're all together better.

R1 India is proud to be recognized amongst the Top 25 Best Companies to Work For 2024 by the Great Place to Work Institute. This is our second consecutive recognition on this prestigious Best Workplaces list, building on the Top 50 recognition we achieved in 2023. Our focus on employee wellbeing, inclusion, and diversity is demonstrated through prestigious recognitions, with R1 India ranked amongst Best in Healthcare and Top 100 Best Companies for Women by Avtar & Seramount, and amongst the Top 10 Best Workplaces in Health & Wellness. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare work better for all by enabling efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 16,000+ strong in India, with a presence in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

R1 RCM Inc. is a leading provider of technology-enabled revenue cycle management services which transform and solve challenges across health systems, hospitals, and physician practices. Headquartered in Chicago, R1 is a publicly traded organization with employees throughout the US and multiple India locations. Our mission is to be the one trusted partner to manage revenue, so providers and patients can focus on what matters most. Our priority is to always do what is best for our clients, patients, and each other. With our proven and scalable operating model, we complement a healthcare organization's infrastructure, quickly driving sustainable improvements to net patient revenue and cash flows while reducing operating costs and enhancing the patient experience.

Description: We are seeking a Data Engineer with 5-7 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.

Qualifications:
- Deep knowledge and experience working with Python/Scala and Spark.
- Experienced in Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure Data Lake, and Delta Lake.
- Experience working with Unity Catalog and Apache Parquet.
- Experience with Azure cloud environments.
- Experience with acquiring and preparing data from primary and secondary disparate data sources.
- Experience working on large-scale data product implementation, responsible for technical delivery.
- Experience working with Agile methodology preferred.
- Healthcare industry experience preferred.

Responsibilities:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
- Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data.
- Share your passion for experimenting with and learning new technologies.
- Perform thorough data analysis, uncover opportunities, and address business problems.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com or visit us on Facebook.

Posted 1 month ago

Apply

7.0 - 10.0 years

6 - 10 Lacs

Noida

Work from Office

R1 RCM India is proud to be recognized amongst India's Top 50 Best Companies to Work For 2023 by the Great Place To Work Institute. We are committed to transforming the healthcare industry with our innovative revenue cycle management services. Our goal is to make healthcare simpler and enable efficiency for healthcare systems, hospitals, and physician practices. With over 30,000 employees globally, we are about 14,000 strong in India, with offices in Delhi NCR, Hyderabad, Bangalore, and Chennai. Our inclusive culture ensures that every employee feels valued, respected, and appreciated, with a robust set of employee benefits and engagement activities.

Description: We are seeking a Staff Data Engineer with 7-10 years of experience to join our Data Platform team. This role will report to the Manager of Data Engineering and be involved in the planning, design, and implementation of our centralized data warehouse solution for ETL, reporting, and analytics across all applications within the company.

Experience, Skills and Knowledge:
- Deep knowledge and experience working with Scala and Spark.
- Experienced in Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake.
- Experience with full-stack development in .NET and Angular.
- Experience working with SQL and NoSQL database systems such as MongoDB and Couchbase.
- Experience in distributed system architecture design.
- Experience with cloud environments (Azure preferred).
- Experience with acquiring and preparing data from primary and secondary disparate data sources (real-time preferred).
- Experience working on large-scale data product implementation, responsible for technical delivery, mentoring, and managing peer engineers.
- Experience working with Databricks preferred.
- Experience working with Agile methodology preferred.
- Healthcare industry experience preferred.

Job Responsibilities:
- Collaborate with and across Agile teams to design, develop, test, implement, and support technical solutions.
- Work with other teams with deep experience in ETL processes, distributed microservices, and data science domains to understand how to centralize their data.
- Share your passion for experimenting with and learning new technologies.
- Perform thorough data analysis, uncover opportunities, and address business problems.

Working in an evolving healthcare setting, we use our shared expertise to deliver innovative solutions. Our fast-growing team has opportunities to learn and grow through rewarding interactions, collaboration, and the freedom to explore professional interests. Our associates are given valuable opportunities to contribute, to innovate, and to create meaningful work that makes an impact in the communities we serve around the world. We also offer a culture of excellence that drives customer success and improves patient care. We believe in giving back to the community and offer a competitive benefits package. To learn more, visit r1rcm.com or visit us on Facebook.

Posted 1 month ago

Apply