
1265 Azure Databricks Jobs - Page 11


6.0 - 10.0 years

8 - 12 Lacs

Noida

Work from Office

The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions built on Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to turn raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, this role is vital for driving data-driven decision-making. The engineer will also optimize data workflows, ensure data quality, and deploy scalable data solutions aligned with organizational goals. The role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across functional teams to improve operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks (see the ETL sketch after this listing).
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Integrate data from various sources, including databases, APIs, and third-party services.
- Use SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay current with Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant Azure certification is a plus.
- Ability to adapt to changing technologies and evolving business requirements.
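For illustration, a minimal PySpark ETL sketch of the kind of pipeline this listing describes. The storage paths, column names, and the quality rule are hypothetical, not part of the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: hypothetical raw CSV landed in ADLS Gen2
raw = (spark.read.option("header", True)
       .csv("abfss://raw@examplelake.dfs.core.windows.net/orders/"))

# Transform: deduplicate, enforce types, drop rows missing the key
clean = (raw.dropDuplicates(["order_id"])
         .filter(F.col("order_id").isNotNull())
         .withColumn("order_ts", F.to_timestamp("order_ts"))
         .withColumn("amount", F.col("amount").cast("double")))

# A simple data quality gate before loading
bad = clean.filter(F.col("amount") < 0).count()
if bad:
    raise ValueError(f"{bad} rows failed the amount >= 0 rule")

# Load: write a Delta table partitioned by order date
(clean.withColumn("order_date", F.to_date("order_ts"))
 .write.format("delta").mode("overwrite")
 .partitionBy("order_date")
 .save("abfss://curated@examplelake.dfs.core.windows.net/orders/"))
```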

Posted 2 weeks ago

Apply

3.0 - 8.0 years

9 - 14 Lacs

Noida

Remote

Role: Data Modeler Lead
Location: Remote
Experience: 10+ years (healthcare experience is mandatory)

Position Overview: We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities:

Data Architecture & Modeling:
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management.
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment).
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets.
- Establish data modeling standards and best practices across the organization.

Technical Leadership:
- Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica.
- Architect scalable data solutions that handle large volumes of healthcare transactional data.
- Collaborate with data engineers to optimize data pipelines and ensure data quality.

Healthcare Domain Expertise:
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI).
- Design data models that support analytical, reporting, and AI/ML needs.
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations.
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions.

Data Governance & Quality:
- Implement data governance frameworks specific to healthcare data privacy and security requirements.
- Establish data quality monitoring and validation processes for critical health plan metrics.
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources.

Required Qualifications:

Technical Skills:
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data.
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches (see the sketch after this listing).
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing.
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks).
- Proficiency with data modeling tools (Hackolade, ERwin, or similar).

Healthcare Industry Knowledge:
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data.
- Experience with healthcare data standards and medical coding systems.
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment).
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI).

Leadership & Communication:
- Proven track record of leading data modeling projects in complex healthcare environments.
- Strong analytical and problem-solving skills, with the ability to work with ambiguous requirements.
- Excellent communication skills, with the ability to explain technical concepts to business stakeholders.
- Experience mentoring team members and establishing technical standards.

Preferred Qualifications:
- Experience with Medicare Advantage, Medicaid, or Commercial health plan operations.
- Cloud platform certifications (AWS, Azure, or GCP).
- Experience with real-time data streaming and modern data lake architectures.
- Knowledge of machine learning applications in healthcare analytics.
- Previous experience in a lead or architect role within a healthcare organization.
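To make the dimensional modeling requirement concrete, here is a hedged sketch of a tiny claims star schema as Delta tables, run from a Databricks notebook; the table and column names are illustrative, not the employer's actual model:

```python
# Hypothetical type-2 member dimension
spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_member (
        member_sk      BIGINT,
        member_id      STRING,
        plan_code      STRING,
        effective_date DATE,   -- type-2 history window start
        end_date       DATE    -- type-2 history window end
    ) USING DELTA
""")

# Hypothetical claims fact keyed by surrogate keys into the dimensions
spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_claim (
        claim_id     STRING,
        member_sk    BIGINT,   -- foreign key to dim_member
        provider_sk  BIGINT,   -- foreign key to a provider dimension
        service_date DATE,
        icd10_code   STRING,
        cpt_code     STRING,
        paid_amount  DECIMAL(18, 2)
    ) USING DELTA
    PARTITIONED BY (service_date)
""")
```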

Posted 2 weeks ago

Apply

8.0 - 13.0 years

8 - 13 Lacs

Telangana

Work from Office

Key Responsibilities:

Team Leadership:
- Lead and mentor a team of Azure Data Engineers, providing technical guidance and support.
- Foster a collaborative and innovative team environment.
- Conduct regular performance reviews and set development goals for team members.
- Organize training sessions to enhance team skills and technical capabilities.

Azure Data Platform:
- Design, implement, and optimize scalable data solutions using Azure data services such as Azure Databricks, Azure Data Factory, Azure SQL Database, and Azure Synapse Analytics.
- Ensure data engineering best practices and data governance are followed.
- Stay up to date with Azure data technologies and recommend improvements to enhance data processing capabilities.

Data Architecture:
- Collaborate with data architects to design efficient and scalable data architectures.
- Define data modeling standards and ensure data integrity, security, and governance compliance.

Project Management:
- Work with project managers to define project scope, goals, and deliverables.
- Develop project timelines, allocate resources, and track progress.
- Identify and mitigate risks to ensure successful project delivery.

Collaboration & Communication:
- Collaborate with cross-functional teams, including data scientists, analysts, and business stakeholders, to deliver data-driven solutions.
- Communicate effectively with stakeholders to understand requirements and provide updates.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Team Lead or Manager in data engineering.
- Extensive experience with Azure data services and cloud technologies.
- Expertise in Azure Databricks, PySpark, and SQL.
- Strong understanding of data engineering best practices, data modeling, and ETL processes.
- Experience with agile development methodologies.
- Certifications in Azure data services (preferred).

Preferred Skills:
- Experience with big data technologies and data warehousing solutions.
- Familiarity with industry standards and compliance requirements.
- Ability to lead and mentor a team.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

15 Lacs

Hyderabad, Bengaluru

Work from Office

Role: Advanced Data Engineer
Location: Hyderabad
Experience: 4-8 years
Notice Period: Immediate to 30 days
Tech Stack: Python, SQL, Databricks, Snowflake, data modeling, ETL processes, Apache Spark and PySpark, data integration and workflow orchestration, real-time data processing frameworks, cloud experience (Azure preferred)

Job Description:
- Minimum 5 years of experience in a Data Engineering role, analyzing and organizing raw data and building data systems and pipelines on a cloud platform (Azure).
- Experienced in migrating data from on-premises systems to cloud-based solution architectures (Azure).
- Extensive experience with Python, Spark, and SQL.
- Experience in developing ETL processes.
- Proficient with Azure Data Lake, Azure Data Factory, Azure SQL, Azure Databricks, Azure Synapse Analytics, or equivalent tools and technologies.
- Experience building data lakes and data warehouses to support operational intelligence and business intelligence.
- Excellent written and verbal communication skills.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role & responsibilities:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience), with demonstrable proficiency in programming fundamentals.
- 5-10 years of experience overall.
- At least 3 years of proven experience as a Data Engineer or in a similar role dealing with data and ETL processes.
- Strong knowledge of Microsoft Azure services, including Azure Data Factory, Azure Synapse, Azure Databricks, Azure Blob Storage, and Azure Data Lake Gen 2.
- Experience using SQL DML to query modern RDBMSs efficiently (e.g., SQL Server, PostgreSQL).
- Strong understanding of software engineering principles and how they apply to data engineering (e.g., CI/CD, version control, testing).
- Experience with big data technologies (e.g., Spark).
- Strong problem-solving skills and attention to detail.
- Excellent communication and collaboration skills.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Hyderabad

Remote

We are seeking a skilled Azure Data Engineer with strong Power BI capabilities to design, build, and maintain enterprise data lakes on Azure, ingest data from diverse sources, and develop insightful reports and dashboards. The role requires hands-on experience in Azure data services, ETL processes, and BI visualization to support data-driven decision-making.

Key Responsibilities:
- Design and implement end-to-end data pipelines using Azure Data Factory (ADF) for batch ingestion from various enterprise sources.
- Build and maintain a multi-zone Medallion Architecture data lake in Azure Data Lake Storage Gen2 (ADLS Gen2), including raw staging with metadata tracking, silver-layer transformations (cleansing, enrichment, schema standardization), and gold-layer curation (joins, aggregations).
- Perform data processing and transformations using Azure Databricks (PySpark/SQL) and ADF, ensuring data lineage, traceability, and compliance.
- Integrate data governance and security using Databricks Unity Catalog, Azure Active Directory (Azure AD), Role-Based Access Control (RBAC), and Access Control Lists (ACLs) for fine-grained access.
- Develop and optimize analytical reports and dashboards in Power BI, including KPI identification, custom visuals, responsive designs, and export to Excel/Word.
- Conduct data modeling, mapping, and extraction during discovery phases, aligning with functional requirements for enterprise analytics.
- Collaborate with cross-functional teams to define schemas, handle API-based ingestion (REST/OData), and implement audit trails, logging, and compliance with data protection policies.
- Participate in testing (unit, integration, performance), UAT support, and production deployment, ensuring high availability and scalability.
- Create training content and provide knowledge transfer on the data lake implementation and Power BI usage.
- Monitor and troubleshoot pipelines, optimizing for batch processing efficiency and data quality.

Required Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- 5+ years of experience in data engineering, with at least 3 years focused on Azure cloud services.
- Proven expertise in Azure Data Factory (ADF) for ETL/orchestration, Azure Data Lake Storage Gen2 (ADLS Gen2) for data lake management, and Azure Databricks for Spark-based transformations.
- Strong proficiency in Power BI for report and dashboard development, including DAX, custom visuals, data modeling, and integration with Azure data sources (e.g., DirectQuery or Import modes).
- Hands-on experience with Medallion Architecture (raw/silver/gold layers), data wrangling, and multi-source joins; see the sketch after this listing.
- Familiarity with API ingestion (REST, OData) from enterprise systems.
- Solid understanding of data governance tools like Databricks Unity Catalog, Azure AD for authentication, and RBAC/ACLs for security.
- Proficiency in SQL, PySpark, and data modeling techniques for dimensional and analytical schemas.
- Experience with agile methodologies, with the ability to deliver phased outcomes.

Preferred Skills:
- Certifications such as Microsoft Certified: Azure Data Engineer Associate (DP-203) or Power BI Data Analyst Associate (PL-300).
- Knowledge of Azure Synapse Analytics, Azure Monitor for logging, and integration with hybrid/on-premises sources.
- Experience in domains like energy, mobility, or enterprise analytics, with exposure to moderate data volumes.
- Strong problem-solving skills, with the ability to handle rate limits, pagination, and dynamic data in APIs.
- Familiarity with tools like Azure DevOps for CI/CD and version control of pipelines/notebooks.

What We Offer:
- Opportunity to work on cutting-edge data transformation projects.
- Competitive salary and benefits package.
- Collaborative environment with access to advanced Azure tools and training.
- Flexible work arrangements and professional growth opportunities.

If you are a proactive engineer passionate about building scalable data solutions and delivering actionable insights, apply now.
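As a sketch of the Medallion flow this listing describes (raw/bronze staging with ingestion metadata, silver cleansing, gold curation), assuming hypothetical container paths and an asset_id-keyed schema:

```python
from pyspark.sql import functions as F

lake = "abfss://lake@examplestorage.dfs.core.windows.net"  # placeholder account

# Raw/bronze: land source data as-is, stamped with ingestion metadata
bronze = (spark.read.json(f"{lake}/landing/assets/")
          .withColumn("_ingested_at", F.current_timestamp())
          .withColumn("_source_file", F.input_file_name()))
bronze.write.format("delta").mode("append").save(f"{lake}/bronze/assets")

# Silver: cleanse, standardize the schema, deduplicate
silver = (spark.read.format("delta").load(f"{lake}/bronze/assets")
          .filter(F.col("asset_id").isNotNull())
          .dropDuplicates(["asset_id"])
          .withColumn("asset_name", F.trim(F.lower(F.col("asset_name")))))
silver.write.format("delta").mode("overwrite").save(f"{lake}/silver/assets")

# Gold: curated aggregate ready to serve a Power BI model
gold = (silver.groupBy("site_id")
        .agg(F.count("*").alias("asset_count"),
             F.max("_ingested_at").alias("last_refresh")))
gold.write.format("delta").mode("overwrite").save(f"{lake}/gold/asset_summary")
```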

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Karnataka

On-site

This role is a 6-month contract at a monthly rate of 1.60 Lacs. The ideal candidate has 4-7 years of experience; the work location is Bangalore with a hybrid setup.

Key Responsibilities:
- Demonstrated strong proficiency in Python, LLMs, LangChain, prompt engineering, and related Gen AI technologies.
- Proficiency in working with Azure Databricks.
- Strong analytical skills, problem-solving capabilities, and effective stakeholder communication.
- A solid understanding of data governance frameworks, compliance requirements, and internal controls.
- Hands-on experience in data quality rule development, profiling, and implementation (see the sketch after this listing).
- Familiarity with Azure Data Services such as Data Lake, Synapse, and Blob Storage.

Preferred Qualifications:
- Previous experience supporting AI/ML pipelines, particularly with GenAI or LLM-based models.
- Proficiency in Python, PySpark, and SQL, and knowledge of Delta Lake architecture.
- Hands-on experience with Azure Data Lake, Azure Data Factory, and Azure Synapse Analytics.
- Prior experience in data engineering, with strong expertise in Databricks.
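A minimal sketch of data quality profiling and rule checks in PySpark, since the listing calls for data quality rule development; the table name and the email rule are hypothetical:

```python
from pyspark.sql import functions as F

df = spark.table("bronze.customers")  # hypothetical source table

# Profile: null rate per column (fraction of rows where the column is null)
null_rates = df.select([
    F.avg(F.col(c).isNull().cast("int")).alias(c) for c in df.columns
])
null_rates.show()

# Rule: email must be present and roughly well-formed
violations = df.filter(
    F.col("email").isNull() | ~F.col("email").rlike(r"^[^@\s]+@[^@\s]+$")
)
print(f"{violations.count()} of {df.count()} rows fail the email rule")
```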

Posted 2 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Bengaluru

Work from Office

About Chubb: Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance, and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength, and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 43,000 people worldwide. Additional information can be found at www.chubb.com.

About Chubb India: At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work for the third consecutive year, a reflection of the culture at Chubb, where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2,500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

Position Details:
- Function/Department: Advanced Analytics
- Location: Bangalore, India
- Employment Type: Full-time

Role Overview - Full Stack Data Scientist: We are seeking a full stack data scientist for the Advanced Analytics team who will be at the forefront of developing innovative, data-driven solutions with leading-edge machine learning and AI, end to end. This is a technical role that uses AI and machine learning techniques to automate underwriting processes and improve claims outcomes and/or risk solutions. This person will develop data science solutions that require data engineering, AI/ML algorithms, and Ops engineering skills to build and deploy them for the business. The ideal candidate has a strong education in computer science, data science, statistics, applied math, or a related field, and is eager to tackle problems with innovative thinking without compromising detailed business insights. You are adept at solving diverse problems by utilizing a variety of tools, strategies, machine learning techniques, algorithms, and programming languages.

Major Responsibilities:
- Work with business partners globally, determine analyses to be performed, manage deliverables against timelines, present results, and implement the model.
- Use a broad spectrum of machine learning, text, and image AI models to extract impactful features from structured and unstructured data.
- Develop and implement models that help with automation, insight generation, and smart decision-making; ensure models meet the desired KPIs post-production.
- Develop and deploy scalable and efficient machine learning models.
- Package and publish code and solutions in reusable Python package formats (PyPI, scikit-learn pipelines, etc.); see the sketch after this listing.
- Keep code ready for seamless building of CI/CD pipelines and workflows for machine learning applications.
- Ensure high-quality code that meets business objectives, quality standards, and secure development guidelines.
- Build reusable tools to streamline the modeling pipeline and share knowledge.
- Build real-time monitoring and alerting systems for machine learning systems.
- Develop and maintain automated testing and validation infrastructure.
- Troubleshoot pipelines across multiple touchpoints such as the CI server, artifact storage, and the deployment cluster.
- Implement best practices for versioning, monitoring, and reusability.

Skills and Qualifications:
- Sound understanding of ML concepts: supervised/unsupervised learning, ensemble techniques, hyperparameter tuning.
- Good knowledge of Random Forest, XGBoost, SVM, clustering, building data pipelines in Azure/Databricks, deep learning models, OpenCV, BERT and newer transformer models for NLU, and LLM applications in ML.
- Strong experience with Azure cloud computing and containerization technologies (such as Docker and Kubernetes).
- 4-6 years of experience delivering end-to-end data science models.
- Experience with Python/OOP programming and data science frameworks (Pandas, NumPy, TensorFlow, Keras, PyTorch, scikit-learn).
- Knowledge of DevOps tools such as Git, Jenkins, Sonar, and Nexus is a must.
- Building Python wheels and debugging the build process.
- Data pipeline building and debugging (by creating and following log traces).
- Basic knowledge of DevOps practices; concepts of unit testing and test-driven development.
- SDE skills such as OOP and functional programming are an added advantage.
- Experience with Databricks and its ecosystem is an added advantage.
- Degree in analytics, statistics, mathematics, or a related domain.
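For the "package and publish reusable solutions" point, a small sketch of a scikit-learn pipeline trained and persisted as a single artifact; the synthetic dataset and the file name are placeholders:

```python
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a modeling dataset
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Bundle preprocessing and the model so the artifact ships as one unit
model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", RandomForestClassifier(n_estimators=200, random_state=42)),
])
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))

# Persist for packaging, e.g. inside a wheel published to an internal index
joblib.dump(model, "risk_model.joblib")
```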

Posted 2 weeks ago

Apply

4.0 - 8.0 years

10 - 15 Lacs

Ahmedabad, Bengaluru, Mumbai (All Areas)

Work from Office

Responsibilities for Azure Data Engineer:
- Hands-on experience with Azure data components such as ADF, Databricks, and Azure SQL.
- Good programming logic sense in SQL.
- Good PySpark knowledge for Azure Databricks.
- Understanding of data lake and data warehouse concepts.
- Understanding of unit and integration testing.
- Good communication skills to express thoughts and interact with business users.
- Understanding of data security and data compliance.
- Understanding of the agile model and project documentation.
- Certification (good to have).
- Domain knowledge.

Mandatory skill sets: Azure DE, ADB, ADF, ADL

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As a Revenue Automation Senior Associate at PricewaterhouseCoopers Acceleration Centre (Kolkata) Private Limited, you will play a crucial role in supporting clients by ensuring compliance with accounting standards, implementing revenue recognition systems, and driving cross-functional collaboration to achieve business objectives. Your responsibilities will include revenue system implementation, data conversion, and optimization of revenue recognition processes. You will be an integral part of a team of problem solvers, working closely with clients and overseas engagement teams.

To excel in this role, you must have in-depth knowledge of revenue recognition principles and accounting standards, including ASC 606/IFRS 15. A strong understanding of business processes, systems, and controls related to revenue recognition is essential. Experience with revenue management systems such as Zuora Revenue and Oracle RMCS, along with proficiency in Alteryx, SQL, and Microsoft Visio, is preferred. Excellent analytical skills, effective communication, and strong interpersonal skills are key to success in this position.

Your role will involve hands-on work with data management for analytics, financial data analysis, data transformation, and data quality checks. You will be expected to leverage tools like MS SQL, ACL, Excel, PowerPoint, and data manipulation technologies for successful project execution. Familiarity with data visualization tools like Power BI and Tableau will be advantageous.

The ideal candidate will hold a Bachelor's degree in Accounting and Information Systems or a related field, along with a minimum of 4 years of relevant experience in revenue recognition roles, preferably in a public accounting firm or a large corporation. A CPA or equivalent certification is preferred.

PricewaterhouseCoopers values purpose-led and values-driven leadership at every level. By embracing the PwC Professional global leadership development framework, you will have a clear roadmap to success and progression in your career. Join us in our mission to navigate complex situations, foster client relationships, and drive innovation in the dynamic environment of our Delivery Center in Kolkata, India.

Posted 2 weeks ago

Apply

6.0 - 10.0 years

0 - 2 Lacs

Hyderabad

Work from Office

Job Title: Senior Data Engineer - Azure Databricks & Azure Stack
Location: Onsite - Hyderabad
Experience: 6-8 years
Employment Type: Full-Time

Job Summary: TechNavitas is seeking a highly skilled Senior Data Engineer with 6-8 years of experience in designing and implementing modern data engineering solutions on Azure Cloud. The ideal candidate will have deep expertise in Azure Databricks, Azure Stack, and building data dashboards using Databricks. You will play a critical role in developing scalable, secure, and high-performance data pipelines that power advanced analytics and machine learning workloads.

Key Responsibilities:
- Design and develop data pipelines: build and optimize robust ETL/ELT workflows using Azure Databricks to process large-scale datasets from diverse sources.
- Azure Stack integration: implement and manage data workflows within Azure Stack environments for hybrid cloud scenarios.
- Dashboards and visualization: develop interactive dashboards and visualizations in Databricks for business and technical stakeholders.
- Performance optimization: tune Spark jobs for performance and cost efficiency, leveraging Delta Lake, Parquet, and advanced caching strategies.
- Data modeling: design and maintain logical and physical data models that support structured and unstructured data needs.
- Collaboration: work closely with data scientists, analysts, and business teams to understand requirements and deliver data solutions that enable insights.
- Security and compliance: ensure adherence to enterprise data security, privacy, and governance standards, especially in hybrid Azure environments.
- Automation and CI/CD: implement CI/CD pipelines for Databricks workflows using Azure DevOps or similar tools.

Required Skills and Experience:

Technical skills:
- Minimum 6-8 years of data engineering experience with a strong focus on the Azure ecosystem.
- Deep expertise in Azure Databricks (PySpark/Scala/Spark SQL) for big data processing.
- Solid understanding of Azure Stack Hub/Edge for hybrid cloud architecture.
- Hands-on experience with Delta Lake, data lakes, and data lakehouse architectures.
- Proficiency in developing dashboards within Databricks SQL and integrating with BI tools like Power BI or Tableau.
- Strong knowledge of data modeling, data warehousing (e.g., Synapse Analytics), and ELT/ETL best practices.
- Experience with event-driven architectures and streaming data pipelines using Azure Event Hubs, Kafka, or Databricks Structured Streaming (see the sketch after this listing).
- Familiarity with Git, Azure DevOps, and CI/CD automation for data workflows.

Soft skills:
- Strong problem-solving and analytical thinking.
- Ability to communicate technical concepts effectively to non-technical stakeholders.
- Proven track record of working in Agile/Scrum teams.

Preferred Qualifications:
- Experience working with hybrid or multi-cloud environments (Azure Stack + Azure public cloud).
- Knowledge of the ML lifecycle and MLOps practices for data pipelines feeding ML models.
- Azure certifications such as Azure Data Engineer Associate or Azure Solutions Architect Expert.

Why Join Us?
- Work on cutting-edge data engineering projects across hybrid cloud environments.
- Be part of a dynamic team driving innovation in big data and advanced analytics.
- Competitive compensation and professional growth opportunities.
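A hedged sketch of the streaming pattern this listing names (Databricks Structured Streaming into Delta). It assumes the Event Hubs namespace exposes its Kafka-compatible endpoint; the namespace, topic, and paths are placeholders, and the SASL credentials are deliberately elided:

```python
# Read from Azure Event Hubs via its Kafka-compatible endpoint
stream = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers",
                  "example-ns.servicebus.windows.net:9093")  # placeholder namespace
          .option("subscribe", "telemetry")                  # placeholder event hub
          .option("kafka.security.protocol", "SASL_SSL")
          # SASL mechanism/JAAS config carrying the connection string omitted here
          .load())

parsed = stream.selectExpr("CAST(value AS STRING) AS body", "timestamp")

# Append into a Delta table with checkpointing for exactly-once semantics
query = (parsed.writeStream.format("delta")
         .option("checkpointLocation", "/mnt/checkpoints/telemetry")
         .outputMode("append")
         .toTable("bronze.telemetry"))
```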

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 18 Lacs

Chennai

Remote

Role: Azure Databricks Data Engineer
Location: Remote (one week of onboarding in Chennai)
Preferred: Tamil Nadu candidates
Experience: 5-6 years (5 years relevant in Data Engineering)

Role Summary: The offshore technical resource will support ongoing development and maintenance activities by delivering high-quality technical solutions.

Key Responsibilities:
- Develop, test, and deploy technical components per the specifications provided by the onshore team.
- Provide timely resolution of technical issues and production support tickets.
- Participate in code reviews, ensuring adherence to coding standards and best practices.
- Contribute to system integrations, data migrations, and configuration tasks as needed.
- Document technical specifications, procedures, and support guides.
- Collaborate with QA teams to support testing activities and defect resolution.
- Maintain effective communication with onshore leads to align on priorities and deliverables.

Qualifications:
- Proficiency in Azure Databricks (must be very strong), Spark, SQL, and Python for data engineering and remediation tasks.
- Strong problem-solving and debugging skills.
- Good verbal and written communication skills.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

8 - 13 Lacs

Pune, Anywhere in India / Multiple Locations

Work from Office

Role: Senior Databricks Engineer

As a Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. The role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 2 weeks ago

Apply

8.0 - 10.0 years

25 - 30 Lacs

Bengaluru

Work from Office

We are looking for an experienced Data Modelling professional, proficient in tools such as Erwin and ER/Studio. A strong understanding of Azure Databricks, Snowflake/Redshift, SAP HANA, and advanced SQL is required. Prior experience in leading teams is also preferred.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

8 - 14 Lacs

Kochi, Chennai, Coimbatore

Work from Office

Role Summary: The offshore technical resource will support ongoing development and maintenance activities by delivering high-quality technical solutions.

Key Responsibilities:
- Develop, test, and deploy technical components per the specifications provided by the onshore team.
- Provide timely resolution of technical issues and production support tickets.
- Participate in code reviews, ensuring adherence to coding standards and best practices.
- Contribute to system integrations, data migrations, and configuration tasks as needed.
- Document technical specifications, procedures, and support guides.
- Collaborate with QA teams to support testing activities and defect resolution.
- Maintain effective communication with onshore leads to align on priorities and deliverables.

Qualifications:
- Proficiency in Azure Databricks (must be very strong), Spark, SQL, and Python for data engineering and remediation tasks.
- Strong problem-solving and debugging skills.
- Good verbal and written communication skills.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Pune

Work from Office

Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models (see the sketch after this listing).
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies, including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
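To ground the ontology and knowledge graph work, a minimal sketch using rdflib; the namespace, classes, and instances are invented for illustration:

```python
from rdflib import Graph, Literal, Namespace, RDF, RDFS

EX = Namespace("http://example.com/ontology#")  # invented namespace

g = Graph()
g.bind("ex", EX)

# Tiny ontology: a class hierarchy plus one labeled, related instance
g.add((EX.Pump, RDFS.subClassOf, EX.Equipment))
g.add((EX.pump_101, RDF.type, EX.Pump))
g.add((EX.pump_101, EX.locatedIn, EX.plant_a))
g.add((EX.pump_101, RDFS.label, Literal("Feedwater pump 101")))

# SPARQL over the knowledge graph: everything located in plant A
results = g.query("""
    PREFIX ex: <http://example.com/ontology#>
    PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
    SELECT ?item ?label WHERE {
        ?item ex:locatedIn ex:plant_a ;
              rdfs:label ?label .
    }
""")
for item, label in results:
    print(item, label)
```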

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Pune

Remote

Healthcare experience is mandatory.

Position Overview: We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities:

Data Architecture & Modeling:
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management.
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment).
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets.
- Establish data modeling standards and best practices across the organization.

Technical Leadership:
- Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica.
- Architect scalable data solutions that handle large volumes of healthcare transactional data.
- Collaborate with data engineers to optimize data pipelines and ensure data quality.

Healthcare Domain Expertise:
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI).
- Design data models that support analytical, reporting, and AI/ML needs.
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations.
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions.

Data Governance & Quality:
- Implement data governance frameworks specific to healthcare data privacy and security requirements.
- Establish data quality monitoring and validation processes for critical health plan metrics.
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources.

Required Qualifications:

Technical Skills:
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data.
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches.
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing.
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks).
- Proficiency with data modeling tools (Hackolade, ERwin, or similar).

Healthcare Industry Knowledge:
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data.
- Experience with healthcare data standards and medical coding systems.
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment).
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI).

Leadership & Communication:
- Proven track record of leading data modeling projects in complex healthcare environments.
- Strong analytical and problem-solving skills, with the ability to work with ambiguous requirements.
- Excellent communication skills, with the ability to explain technical concepts to business stakeholders.
- Experience mentoring team members and establishing technical standards.

Preferred Qualifications:
- Experience with Medicare Advantage, Medicaid, or Commercial health plan operations.
- Cloud platform certifications (AWS, Azure, or GCP).
- Experience with real-time data streaming and modern data lake architectures.
- Knowledge of machine learning applications in healthcare analytics.
- Previous experience in a lead or architect role within a healthcare organization.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

8 - 11 Lacs

Pune

Work from Office

About the job:

Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading (see the sketch after this listing).
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
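For the ingestion/transformation/loading bullet, a hedged sketch of an incremental upsert in a PySpark notebook using the Delta Lake MERGE API; the lakehouse path, table names, and key column are placeholders:

```python
from delta.tables import DeltaTable
from pyspark.sql import functions as F

# Hypothetical batch of changed rows landed by a pipeline
updates = (spark.read.parquet("Files/landing/customers/")
           .withColumn("_loaded_at", F.current_timestamp()))

target = DeltaTable.forName(spark, "curated.customers")  # placeholder table

# Upsert: update rows whose key matches, insert the rest
(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```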

Posted 2 weeks ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Pune

Work from Office

The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions built on Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to turn raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, this role is vital for driving data-driven decision-making. The engineer will also optimize data workflows, ensure data quality, and deploy scalable data solutions aligned with organizational goals. The role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across functional teams to improve operational efficiency.

Responsibilities:
- Design and implement scalable data pipelines using Azure Databricks.
- Develop ETL processes to efficiently extract, transform, and load data.
- Collaborate with data scientists and analysts to define and refine data requirements.
- Optimize Spark jobs for performance and efficiency.
- Monitor and troubleshoot production workflows and jobs.
- Implement data quality checks and validation processes.
- Create and maintain technical documentation related to data architecture.
- Conduct code reviews to ensure best practices are followed.
- Integrate data from various sources, including databases, APIs, and third-party services.
- Use SQL and Python for data manipulation and analysis.
- Collaborate with DevOps teams to deploy and maintain data solutions.
- Stay current with Azure Databricks and related technologies.
- Facilitate data visualization initiatives for better data-driven insights.
- Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access.

Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Minimum of 6 years of experience in data engineering or a related domain.
- Strong expertise in Azure Databricks and data lake concepts.
- Proficiency with SQL, Python, and Spark.
- Solid understanding of data warehousing concepts.
- Experience with ETL tools and frameworks.
- Familiarity with cloud platforms such as Azure, AWS, or Google Cloud.
- Excellent problem-solving and analytical skills.
- Ability to work collaboratively in a diverse team environment.
- Experience with data visualization tools such as Power BI or Tableau.
- Strong communication skills, with the ability to convey technical concepts to non-technical stakeholders.
- Knowledge of data governance and data quality best practices.
- Hands-on experience with big data technologies and frameworks.
- A relevant Azure certification is a plus.
- Ability to adapt to changing technologies and evolving business requirements.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 11 Lacs

Ahmedabad

Work from Office

Role: Senior Databricks Engineer

As a Databricks Engineer, you will play a pivotal role in designing, implementing, and optimizing data processing pipelines and analytics solutions on the Databricks platform. You will collaborate closely with cross-functional teams to understand business requirements, architect scalable solutions, and ensure the reliability and performance of our data infrastructure. The role requires deep expertise in Databricks, strong programming skills, and a passion for solving complex engineering challenges.

What you'll do:
- Design and develop data processing pipelines and analytics solutions using Databricks.
- Architect scalable and efficient data models and storage solutions on the Databricks platform.
- Collaborate with architects and other teams to migrate the current solution to Databricks.
- Optimize the performance and reliability of Databricks clusters and jobs to meet SLAs and business requirements.
- Apply best practices for data governance, security, and compliance on the Databricks platform.
- Mentor junior engineers and provide technical guidance.
- Stay current with emerging technologies and trends in data engineering and analytics to drive continuous improvement.

You'll be expected to have:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 5 to 8 years of overall experience, including 2+ years designing and implementing data solutions on the Databricks platform.
- Proficiency in programming languages such as Python, Scala, or SQL.
- Strong understanding of distributed computing principles and experience with big data technologies such as Apache Spark.
- Experience with cloud platforms such as AWS, Azure, or GCP and their associated data services.
- Proven track record of delivering scalable and reliable data solutions in a fast-paced environment.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Good to have: experience with containerization technologies such as Docker and Kubernetes.
- Knowledge of DevOps practices for automated deployment and monitoring of data pipelines.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Ahmedabad

Work from Office

About the job:

Role: Microsoft Fabric Data Engineer
Experience: 6+ years as an Azure Data Engineer, including at least one end-to-end implementation in Microsoft Fabric.

Responsibilities:
- Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses.
- Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions.
- Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment.
- Collaborate with stakeholders to translate business needs into actionable data solutions.
- Troubleshoot and optimize existing Fabric implementations for enhanced performance.

Skills:
- Solid foundational knowledge of data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized).
- Design and implement scalable and efficient data pipelines using Data Factory in Fabric (Data Pipeline, Dataflow Gen2, etc.), PySpark notebooks, Spark SQL, and Python, covering data ingestion, transformation, and loading.
- Experience ingesting data from SAP systems (SAP ECC, S/4HANA, SAP BW, etc.) is a plus.
- Nice to have: ability to develop dashboards or reports using tools like Power BI.

Coding Fluency:
- Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.

Posted 2 weeks ago

Apply

3.0 - 8.0 years

9 - 14 Lacs

Ahmedabad

Remote

Healthcare experience is mandatory.

Position Overview: We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards.

Key Responsibilities:

Data Architecture & Modeling:
- Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management.
- Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment).
- Create and maintain data lineage documentation and data dictionaries for healthcare datasets.
- Establish data modeling standards and best practices across the organization.

Technical Leadership:
- Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica.
- Architect scalable data solutions that handle large volumes of healthcare transactional data.
- Collaborate with data engineers to optimize data pipelines and ensure data quality.

Healthcare Domain Expertise:
- Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI).
- Design data models that support analytical, reporting, and AI/ML needs.
- Ensure compliance with healthcare regulations, including HIPAA/PHI and state insurance regulations.
- Partner with business stakeholders to translate healthcare business requirements into technical data solutions.

Data Governance & Quality:
- Implement data governance frameworks specific to healthcare data privacy and security requirements.
- Establish data quality monitoring and validation processes for critical health plan metrics.
- Lead efforts to standardize healthcare data definitions across multiple systems and data sources.

Required Qualifications:

Technical Skills:
- 10+ years of experience in data modeling, with at least 4 years focused on healthcare/health plan data.
- Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches.
- Hands-on experience with Informatica PowerCenter/IICS or the Databricks platform for large-scale data processing.
- Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks).
- Proficiency with data modeling tools (Hackolade, ERwin, or similar).

Healthcare Industry Knowledge:
- Deep understanding of health plan data structures, including claims, eligibility, provider data, and pharmacy data.
- Experience with healthcare data standards and medical coding systems.
- Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment).
- Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI).

Leadership & Communication:
- Proven track record of leading data modeling projects in complex healthcare environments.
- Strong analytical and problem-solving skills, with the ability to work with ambiguous requirements.
- Excellent communication skills, with the ability to explain technical concepts to business stakeholders.
- Experience mentoring team members and establishing technical standards.

Preferred Qualifications:
- Experience with Medicare Advantage, Medicaid, or Commercial health plan operations.
- Cloud platform certifications (AWS, Azure, or GCP).
- Experience with real-time data streaming and modern data lake architectures.
- Knowledge of machine learning applications in healthcare analytics.
- Previous experience in a lead or architect role within a healthcare organization.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Ahmedabad

Work from Office

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies, including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Dear Candidate,

GyanSys is looking for an Azure Databricks Data Engineer for our overseas customers' consulting projects based in the Americas/Europe/APAC region. Please apply for the role, share your CV directly with kiran.devaraj@gyansys.com, or call 8867163603 to discuss the fitment in detail.

Designation: Sr/Lead/Principal Consultant (based on experience)
Experience: 5+ years relevant
Location: Bangalore, ITPL
Notice Period: Immediate, or 30 days maximum

Job Description: We are seeking a Data Engineer with 5-10 years of experience in Databricks, Python, and APIs. The primary responsibility of this role is to migrate on-premises big data Spark and Impala/Hive scripts to the Databricks environment. The ideal candidate will have a strong background in data migration projects and be proficient in transforming ETL pipelines to Databricks. The role requires excellent problem-solving skills and the ability to work independently on complex data migration tasks. Experience with big data technologies and cloud platforms (Azure) is essential. Join our team to lead the migration efforts and optimize our data infrastructure on Databricks. We value excellent problem-solving skills, a passion for data accessibility, effective communication and collaboration skills, and experience with Agile methodologies.

Kindly apply only if your profile fits the above prerequisites, and please share this job post with your acquaintances as well.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Hyderabad

Work from Office

Overview - DataOps L3: The role will leverage and enhance existing technologies in the area of data and analytics solutions, such as Power BI, Azure data engineering technologies, ADLS, ADB (Azure Databricks), Synapse, and other Azure services. The role will be responsible for developing and supporting IT products and solutions using these technologies and deploying them for business users.

Responsibilities:
- 5 to 10 years of IT and Azure data engineering experience.
- Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
- Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet.
- Experience creating ADF pipelines to source and process data sets.
- Experience creating Databricks notebooks to cleanse, transform, and enrich data sets.
- Development experience in the orchestration of pipelines.
- Good understanding of SQL, databases, and data warehouse systems, preferably Teradata.
- Experience in deployment and monitoring techniques.
- Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
- Experience in handling operations/integration with a source repository.
- Must have good knowledge of data warehouse concepts and data warehouse modeling.
- Working knowledge of SNOW, including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights.
- Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data.
- Strong expertise in performance tuning and optimization of data processing systems (see the sketch after this listing).
- Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services.
- Develop and enforce best practices for data management, including data governance and security.
- Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
- Proficient in implementing a DataOps framework.

Qualifications:
- Azure Data Factory
- Azure Databricks
- Azure Synapse
- PySpark/SQL
- ADLS
- Azure DevOps with CI/CD implementation

Nice-to-Have Skill Sets:
- Business intelligence tools (preferred: Power BI)
- DP-203 certified
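A short sketch of the kind of Spark performance tuning this listing emphasizes (shuffle sizing, broadcast joins, selective caching); the paths, table names, and partition count are illustrative:

```python
from pyspark.sql import functions as F
from pyspark.sql.functions import broadcast

# Right-size shuffle parallelism for the cluster instead of the 200 default
spark.conf.set("spark.sql.shuffle.partitions", "64")

facts = spark.read.format("delta").load("/mnt/silver/transactions")  # large table
stores = spark.read.format("delta").load("/mnt/silver/stores")       # small table

# Broadcasting the small dimension avoids shuffling the large fact table
joined = facts.join(broadcast(stores), "store_id")

# Cache only because the joined frame feeds more than one downstream action
joined.cache()
summary = joined.groupBy("region").agg(F.sum("amount").alias("total_amount"))
summary.write.format("delta").mode("overwrite").saveAsTable("gold.sales_by_region")
joined.unpersist()
```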

Posted 2 weeks ago

Apply