
1265 Azure Databricks Jobs - Page 15

JobPe aggregates listings for easy access, but you apply directly on the employer's job portal.

10.0 - 14.0 years

35 - 40 Lacs

Hyderabad

Work from Office

Skills: Cloudera, Big Data, Hadoop, Spark, Kafka, Hive, CDH Clusters

Responsibilities:
- Design and implement Cloudera-based data platforms, including cluster sizing, configuration, and optimization.
- Install, configure, and administer Cloudera Manager and CDP clusters, managing all aspects of the cluster lifecycle.
- Monitor and troubleshoot platform performance, identifying and resolving issues promptly.
- Review and maintain the data ingestion and processing pipelines on the Cloudera platform.
- Collaborate with data engineers and data scientists to design and optimize data models, ensuring efficient data storage and retrieval.
- Implement and enforce security measures for the Cloudera platform, including authentication, authorization, and encryption.
- Manage platform user access and permissions, ensuring compliance with data privacy regulations and internal policies.
- Create technology road maps for the Cloudera platform.
- Stay up to date with the latest Cloudera and big data technologies, and recommend and implement relevant updates and enhancements to the platform.
- Plan, test, and execute upgrades involving Cloudera components, ensuring platform stability and security.
- Document platform configurations, processes, and procedures, and provide training and support to other team members as needed.

Requirements:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Proven experience as a Cloudera platform engineer or similar role, with a strong understanding of Cloudera Manager and CDH clusters.
- Expertise in designing, implementing, and maintaining scalable and high-performance data platforms using Cloudera technologies such as Hadoop, Spark, Hive, and Kafka.
- Strong knowledge of big data concepts and technologies, data modeling, and data warehousing principles.
- Familiarity with data security and compliance requirements, and experience implementing security measures for Cloudera platforms.
- Proficiency in Linux system administration and scripting languages (e.g., Shell, Python).
- Strong troubleshooting and problem-solving skills, with the ability to diagnose and resolve platform issues quickly.
- Excellent communication and collaboration skills, with the ability to work effectively in cross-functional teams.
- Experience with Azure Data Factory, Azure Databricks, or Azure Synapse is a plus.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 25 Lacs

Noida

Work from Office

We are seeking a skilled and proactive Azure Databricks Administrator to manage, monitor, and support our Databricks environment on Microsoft Azure. The ideal candidate will be responsible for system integrations, access control, user support, and CI/CD pipeline administration, ensuring a secure, efficient, and scalable data platform.

Key Responsibilities:
- System Integration & Monitoring: build, monitor, and support integrations between Databricks and enterprise systems such as LogRhythm, ServiceNow, and AppDynamics; ensure seamless data flow and alerting mechanisms across integrated platforms.
- Security & Access Management: administer user and group access to the Databricks environment; implement and enforce security policies and role-based access control (RBAC).
- User Support & Enablement: provide initial system support and act as the point of contact (POC) for Databricks users; assist users with onboarding, workspace setup, and troubleshooting.
- Vendor Coordination: engage with Databricks vendor support for issue resolution and platform optimization.
- Platform Monitoring & Maintenance: monitor Databricks usage, performance, and cost; keep the platform up to date with the latest patches and features.
- Database & CI/CD Administration: manage Databricks database configurations and performance tuning; administer and maintain CI/CD pipelines for Databricks notebooks and jobs.

Required Skills & Qualifications:
- Proven experience administering Azure Databricks in a production environment.
- Strong understanding of Azure services, data engineering workflows, and DevOps practices.
- Experience with integration tools and platforms like LogRhythm, ServiceNow, and AppDynamics.
- Proficiency in CI/CD tools (e.g., Azure DevOps, GitHub Actions).
- Familiarity with Databricks REST APIs, Terraform, or ARM templates is a plus.
- Excellent problem-solving, communication, and documentation skills.

Preferred Certifications:
- Microsoft Certified: Azure Administrator Associate
- Databricks Certification
- Azure Data Engineer Associate
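Much of the access and platform administration described above is scripted against the Databricks REST API (e.g., GET /api/2.0/clusters/list to enumerate clusters). A minimal sketch of building an authenticated request follows; the workspace URL and token are placeholders, and a real deployment would pull both from configuration and a secret store:

```python
import json
import urllib.request

# Hypothetical workspace URL for illustration only; real values come from
# your Azure Databricks deployment.
WORKSPACE_URL = "https://adb-1234567890123456.7.azuredatabricks.net"

def build_request(path, token, payload=None):
    """Build an authenticated request against the Databricks REST API.

    Databricks REST API calls authenticate with a bearer token. A payload
    switches the request from GET to POST with a JSON body.
    """
    data = json.dumps(payload).encode() if payload is not None else None
    return urllib.request.Request(
        url=WORKSPACE_URL + path,
        data=data,
        headers={
            "Authorization": "Bearer " + token,
            "Content-Type": "application/json",
        },
        method="GET" if data is None else "POST",
    )

# Construct (but do not send) a cluster-listing request; against a live
# workspace, urllib.request.urlopen(req) would perform the call.
req = build_request("/api/2.0/clusters/list", token="dapiEXAMPLE")
```

The same helper shape covers group/permission endpoints used for RBAC administration; only the path and payload change.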

Posted 3 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Gurugram

Work from Office

About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Google Cloud Platform Administration
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process. Your role will be pivotal in shaping the direction of application projects and ensuring that they meet the needs of the organization and its clients.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to meet deadlines.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Databricks.
- Good-to-have skills: Experience with Google Cloud Platform Administration.
- Strong understanding of cloud computing concepts and architecture.
- Experience in application design and development methodologies.
- Familiarity with data integration and ETL processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Gurugram office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

4.0 - 6.0 years

8 - 8 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

1. 3+ years of relevant experience in PySpark and Azure Databricks.
2. Proficiency in integrating, transforming, and consolidating data from various structured and unstructured data sources.
3. Good experience in SQL or native SQL query languages.
4. Strong experience in implementing Databricks notebooks using Python.
5. Good experience in Azure Data Factory, ADLS, Storage Services, serverless architecture, and Azure Functions.
6. Experience in SSIS/ETL transformation processes.
7. Experience in Azure DevOps and CI/CD deployments.
8. DP-203 certification.

Essential Skills:
1. Create Databricks notebooks to process data
2. Integrate and consolidate data from multiple sources and load it to ADLS
3. Understand the customer needs

Location - MH / Bangalore, KA / Hyderabad, TL / Chennai, TN, Pune.
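The "integrate and consolidate data from multiple sources" skill above boils down to a keyed merge with deduplication. A minimal sketch of that logic, written in dependency-free Python for portability (in a Databricks notebook the same step would typically be a PySpark DataFrame join followed by dropDuplicates, with the result written to ADLS); the source names and records are invented for illustration:

```python
def consolidate(*sources, key):
    """Merge records from multiple sources into one record per key.

    Later sources win on key collisions, i.e. the last record seen for a
    key replaces earlier ones -- a simple last-writer-wins dedupe policy.
    """
    merged = {}
    for source in sources:
        for record in source:
            merged[record[key]] = record
    return list(merged.values())

# Two hypothetical source extracts sharing the "id" key.
crm = [{"id": 1, "name": "Asha"}, {"id": 2, "name": "Ravi"}]
erp = [{"id": 2, "name": "Ravi K"}, {"id": 3, "name": "Meena"}]

rows = consolidate(crm, erp, key="id")
# rows holds one record per id; the erp value wins for id 2.
```

The deliberate design choice here is the collision policy: last-writer-wins is the simplest, but a real pipeline would often pick the record with the freshest timestamp instead.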

Posted 3 weeks ago

Apply

2.0 - 7.0 years

5 - 9 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: Microsoft SQL Server, Microsoft Azure Analytics Services
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Developer, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the development process and ensure successful project delivery.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Lead the design and development of applications.
- Act as the primary point of contact for the project team.
- Provide guidance and mentorship to junior team members.
- Collaborate with stakeholders to gather requirements and ensure project alignment.
- Ensure timely delivery of high-quality solutions.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Data Services, Microsoft SQL Server, Microsoft Azure Analytics Services.
- Strong understanding of cloud-based data services.
- Experience in designing and implementing data solutions on the Azure platform.
- Knowledge of data warehousing concepts and ETL processes.
- Hands-on experience with Azure data storage and processing services.

Additional Information:
- The candidate should have a minimum of 2 years of experience in Microsoft Azure Data Services.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive successful project outcomes. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring adherence to best practices in application development.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously assess and improve application development processes to increase efficiency.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Databricks.
- Strong understanding of cloud computing principles and architecture.
- Experience with data integration and ETL processes.
- Familiarity with application lifecycle management tools.
- Ability to troubleshoot and resolve application issues effectively.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

0 - 2 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role & Responsibilities

Mandate Skills: Azure Databricks, Spark, Python, SQL, Azure/GCP

Lead Data Engineer:
- Performs detailed design of complex applications and complex architecture components
- May lead a small group of developers in configuring, programming, and testing
- Fixes medium to complex defects and resolves performance problems
- Accountable for service commitments at the individual request level for in-scope applications
- Monitors, tracks, and participates in ticket resolution for assigned tickets
- Manages code reviews and mentors other developers

Minimum Skills Required (Expert Data Engineer):
- 5+ years of experience with Databricks
- 5+ years of experience with Spark
- 5+ years of experience with Python and SQL
- 5+ years of experience with Azure/GCP

Posted 3 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders. You will also be responsible for troubleshooting issues and providing guidance to team members, ensuring that the applications meet the required standards and specifications. Your role will be pivotal in driving innovation and efficiency within the application development process, fostering a collaborative environment that encourages creativity and problem-solving.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training and development opportunities for team members to enhance their skills.
- Monitor project progress and implement necessary adjustments to ensure timely delivery.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Databricks.
- Good-to-have skills: Experience with cloud computing platforms.
- Strong understanding of application development methodologies.
- Familiarity with data integration and ETL processes.
- Experience in performance tuning and optimization of applications.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Microsoft Azure Databricks.
- This position is based in Pune.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

15.0 - 20.0 years

10 - 14 Lacs

Pune

Work from Office

About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Databricks
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure that application development aligns with business objectives, overseeing project timelines, and facilitating communication among stakeholders to drive project success. You will also engage in problem-solving activities, providing guidance and support to your team while ensuring that best practices are followed throughout the development process.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Facilitate regular team meetings to discuss progress, challenges, and solutions.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft Azure Databricks.
- Good-to-have skills: Experience with cloud computing platforms.
- Strong understanding of application development methodologies.
- Familiarity with data integration and ETL processes.
- Experience in performance tuning and optimization of applications.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks.
- This position is based at our Pune office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

1 - 5 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Application Tech Support Practitioner
Project Role Description: Act as the ongoing interface between the client and the system or application. Dedicated to quality, using exceptional communication skills to keep our world-class systems running. Can accurately define a client issue and can interpret and design a resolution based on deep product knowledge.
Must-have skills: Microsoft 365, Microsoft PowerShell, Microsoft 365 Security & Compliance
Good-to-have skills: NA
Minimum 3 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: As an Application Tech Support Practitioner, you will serve as a vital link between the client and the system or application. Your typical day will involve engaging with clients to understand their needs, troubleshooting issues, and ensuring that our systems operate seamlessly. You will utilize your exceptional communication skills to provide high-quality support, ensuring that client concerns are addressed promptly and effectively. Your role will require a deep understanding of the product to accurately diagnose issues and design appropriate resolutions, contributing to the overall success of our operations and client satisfaction.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation/contribution in team discussions is required.
- Contribute to providing solutions to work-related problems.
- Facilitate training sessions for team members to enhance their understanding of the systems.
- Develop and maintain comprehensive documentation for troubleshooting processes and solutions.

Professional & Technical Skills:
- Must-have skills: Proficiency in Microsoft 365, Microsoft PowerShell, Microsoft 365 Security & Compliance.
- Strong understanding of cloud-based applications and their integration.
- Experience with system monitoring tools to ensure optimal performance.
- Ability to analyze and interpret client feedback to improve service delivery.
- Familiarity with security protocols and compliance standards relevant to Microsoft 365.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft 365.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 14 Lacs

Ahmedabad

Work from Office

About The Role
Project Role: Application Lead
Project Role Description: Lead the effort to design, build and configure applications, acting as the primary point of contact.
Must-have skills: Microsoft Azure Analytics Services
Good-to-have skills: NA
Minimum 5 year(s) of experience is required.
Educational Qualification: BE

Summary: As an Application Lead for Packaged Application Development, you will be responsible for designing, building, and configuring applications using Microsoft Azure Analytics Services. Your typical day will involve leading the effort to deliver high-quality applications, acting as the primary point of contact for the project team, and ensuring timely delivery of project milestones.

Roles & Responsibilities:
- Lead the effort to design, build, and configure applications using Microsoft Azure Analytics Services.
- Act as the primary point of contact for the project team, ensuring timely delivery of project milestones.
- Collaborate with cross-functional teams to ensure the successful delivery of high-quality applications.
- Provide technical guidance and mentorship to team members, ensuring adherence to best practices and standards.

Professional & Technical Skills:
- Must-have skills: Strong experience with Microsoft Azure Analytics Services.
- Good-to-have skills: Experience with other Azure services such as Azure Data Factory, Azure Databricks, and Azure Synapse Analytics.
- Experience in designing, building, and configuring applications using Microsoft Azure Analytics Services.
- Must have Databricks and PySpark skills.
- Strong understanding of data warehousing concepts and best practices.
- Experience with ETL processes and tools such as SSIS or Azure Data Factory.
- Experience with SQL and NoSQL databases.
- Experience with Agile development methodologies.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Azure Analytics Services.
- The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality applications.
- This position is based at our Bengaluru office.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

25 - 40 Lacs

Chennai

Work from Office

We are seeking a skilled Data Scientist to join our team.

Job Description: The ideal candidate will have a strong background in data analytics, statistics, programming, and data visualisation, as well as experience using data to generate actionable insights, drive business decisions, and productionize machine learning models. The candidate will be responsible for working with cross-functional teams to identify opportunities for leveraging data to solve business problems, developing and implementing analytical models, and productionizing models for deployment in a production environment.

Responsibilities:
- Work with cross-functional teams to identify opportunities for leveraging data to solve business problems
- Collect, clean, and transform data from various sources
- Analyse and interpret large datasets using statistical techniques and data visualisation tools
- Develop and maintain predictive models using Python
- Evaluate the performance of analytical models and make recommendations for improvements
- Productionize machine learning models for deployment in a production environment
- Develop and maintain code for model deployment, monitoring, and retraining
- Explain the model to both technical and non-technical stakeholders, including the model assumptions, limitations, and potential impact on the business
- Communicate insights and model explanations to both technical and non-technical stakeholders using compelling data visualisations and storytelling techniques
- Develop presentations and reports to share insights and model explanations with senior management and other stakeholders

Requirements:
- Master's or Ph.D. degree in Computer Science, Engineering, Statistics, Mathematics, or a related field
- At least 8+ years of experience working as a data scientist or in a similar role
- Strong programming skills in Python
- Strong understanding of statistical analysis techniques and data visualization tools
- Experience working with large datasets, including data cleaning, transformation, and visualization
- Experience productionizing machine learning models for deployment in a production environment
- Excellent communication skills, with the ability to effectively communicate insights and model explanations to both technical and non-technical stakeholders
- Strong problem-solving skills and attention to detail
- Ability to work collaboratively in a team environment

Expectations for a lead/principal DS:
- Understand the residential real estate business process
- Understand the various ML models that are deployed across the customer lifecycle
- Understand the SFDC source system
- Understand the Azure ML services
- Get familiar with the vendor partners for augmented analytics
- Take part in the weekly cadence with business teams
- Take end-to-end ownership of all the ML models
- Drive ML adoption with the business teams by sharing insights at both the aggregate and lead levels
- Identify any gaps and either retrain or redevelop the ML models

Behavioral competencies:
- Owns deliverables end-to-end; takes responsibility for both success and failure
- Drives cross-functional alignment, even when not directly managing stakeholders
- Can clearly explain complex models and insights to non-technical stakeholders
- Makes progress in ambiguous, noisy, or incomplete data environments
- Believes in experimentation, feedback loops, and continuous improvement
- Breaks down problems methodically rather than relying on templates

Tech stack:
1. Databricks
2. Azure microservices - Azure Functions for model deployment
3. Visual Studio Code IDE
4. Jupyter Notebook
5. Python
6. SQL & PySpark for data processing
7. GitHub for model versioning and setting up CI/CD pipelines

Ready to start within 15 days? Send your resume to radhika@tvsd.ai
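The "evaluate the performance of analytical models" responsibility usually starts with a metric like root-mean-squared error. A dependency-free sketch (the function name and sample values are illustrative, not from the posting):

```python
import math

def rmse(y_true, y_pred):
    """Root-mean-squared error between observed and predicted values:
    sqrt(mean((t - p)^2)). Lower is better; same units as the target."""
    assert len(y_true) == len(y_pred), "series must align"
    squared_errors = [(t - p) ** 2 for t, p in zip(y_true, y_pred)]
    return math.sqrt(sum(squared_errors) / len(squared_errors))

# Errors are 1, 0, 2 -> mean squared error 5/3 -> RMSE ~= 1.291
print(rmse([3.0, 5.0, 7.0], [2.0, 5.0, 9.0]))
```

Tracking this number across retraining runs is one concrete way to back the "recommendations for improvements" bullet with evidence.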

Posted 3 weeks ago

Apply

4.0 - 6.0 years

5 - 15 Lacs

Bengaluru

Work from Office

About Apexon: Apexon is a digital-first technology services firm specializing in accelerating business transformation and delivering human-centric digital experiences. We have been meeting customers wherever they are in the digital lifecycle and helping them outperform their competition through speed and innovation. Apexon brings together distinct core competencies in AI, analytics, app development, cloud, commerce, CX, data, DevOps, IoT, mobile, quality engineering and UX, and our deep expertise in BFSI, healthcare, and life sciences to help businesses capitalize on the unlimited opportunities digital offers. Our reputation is built on a comprehensive suite of engineering services, a dedication to solving clients' toughest technology problems, and a commitment to continuous improvement. Backed by Goldman Sachs Asset Management and Everstone Capital, Apexon now has a global presence of 15 offices (and 10 delivery centers) across four continents. We enable #HumanFirstDigital

Job Title: Databricks ETL Developer
Experience: 4-6 Years
Location: Hybrid, preferably in Bangalore

Job Description: We are seeking a skilled Databricks ETL Developer with 4 to 6 years of experience in building and maintaining scalable data pipelines and transformation workflows on the Azure Databricks platform.

Key Responsibilities:
- Design, develop, and optimize ETL pipelines using Azure Databricks (Spark).
- Ingest data from various structured and unstructured sources (Azure Data Lake, SQL DBs, APIs).
- Implement data transformation and cleansing logic in PySpark or Scala.
- Collaborate with data architects, analysts, and business stakeholders to understand data requirements.
- Ensure data quality, performance tuning, and error handling in data workflows.
- Schedule and monitor ETL jobs using Azure Data Factory or Databricks Workflows.
- Participate in code reviews and maintain coding best practices.

Required Skills:
- Hands-on experience with Azure Databricks, Spark (PySpark/Scala).
- Strong ETL development experience handling large-scale data.
- Proficient in SQL and working with relational databases.
- Familiarity with Azure Data Lake, Data Factory, Delta Lake.
- Experience with version control tools like Git.
- Good understanding of data warehousing concepts and data modeling.

Preferred:
- Experience in CI/CD for data pipelines.
- Exposure to BI tools like Power BI for data validation.

Our Commitment to Diversity & Inclusion: Did you know that Apexon has been Certified™ by Great Place To Work®, the global authority on workplace culture, in each of the three regions in which it operates: USA (for the fourth time in 2023), India (seven consecutive certifications as of 2023), and the UK. Apexon is committed to being an equal opportunity employer and promoting diversity in the workplace. We take affirmative action to ensure equal employment opportunity for all qualified individuals. Apexon strictly prohibits discrimination and harassment of any kind and provides equal employment opportunities to employees and applicants without regard to gender, race, color, ethnicity or national origin, age, disability, religion, sexual orientation, gender identity or expression, veteran status, or any other applicable characteristics protected by law. You can read about our Job Applicant Privacy policy here: Job Applicant Privacy Policy (apexon.com)

Our Perks and Benefits: Our benefits and rewards program has been thoughtfully designed to recognize your skills and contributions, elevate your learning/upskilling experience and provide care and support for you and your loved ones. As an Apexon Associate, you get continuous skill-based development, opportunities for career advancement, and access to comprehensive health and well-being benefits and assistance. We also offer:
- Group Health Insurance covering family of 4
- Term Insurance and Accident Insurance
- Paid Holidays & Earned Leaves
- Paid Parental Leave
- Learning & Career Development
- Employee Wellness
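The "ensure data quality ... and error handling" responsibility above typically means gating each batch on a validation pass before it is written downstream. A minimal sketch of a null-count check, in plain Python over a list of dicts (in Databricks the equivalent check would run over a DataFrame; the column names and batch are invented for illustration):

```python
def quality_report(rows, required):
    """Count missing values (None or empty string) per required column.

    A pipeline would compare this report against thresholds and either
    quarantine the batch or raise, rather than silently loading bad rows.
    """
    report = {col: 0 for col in required}
    for row in rows:
        for col in required:
            if row.get(col) in (None, ""):
                report[col] += 1
    return report

# Hypothetical ingested batch with one missing value in each column.
batch = [
    {"order_id": 1, "amount": 250.0},
    {"order_id": 2, "amount": None},
    {"order_id": None, "amount": 90.5},
]
print(quality_report(batch, required=["order_id", "amount"]))
# {'order_id': 1, 'amount': 1}
```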

Posted 3 weeks ago

Apply

6.0 - 10.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Exp - 6 to 10 Years
Role - Azure Data Engineer
Position - Permanent FTE
Company - Data Analytics MNC - Global Leader
Locations - Pune, Hyderabad, Bengaluru
Mode - Hybrid (2-3 days from office)

MUST HAVE:
- Very strong Python coding skills
- Excellent SQL skills
- Strong PySpark skills
- In-depth hands-on experience in Azure Databricks & Data Factory
- Strong knowledge of data warehousing

Important Note - Candidates must have PF in all companies worked for throughout their career, under one UAN.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

0 Lacs

Pune

Work from Office

NOTE - Only candidates currently based in Pune will be considered.
Experience Range - 4-8 Years.

We are hiring an experienced Azure Data Engineer to design and deliver scalable data engineering solutions on Microsoft Azure. You'll work on large-scale data platform projects using ADF, Databricks, and Azure Data Lake in a secure and collaborative environment.

Roles and Responsibilities:
- Design, build, and deploy robust data pipelines using Azure Data Factory, Azure Data Flows, Azure Databricks, and Azure SQL.
- Implement scalable data ingestion and transformation logic with PySpark.
- Develop APIs using Azure Function Apps and integrate workflows via Logic Apps.
- Implement Lakehouse/Data Warehouse architecture on Azure.
- Work with Azure DevOps to set up CI/CD pipelines for data components.
- Optimize performance and capacity of ADF and Databricks pipelines.

Preferred candidate profile
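The "CI/CD pipelines for data components" responsibility usually takes the form of an Azure DevOps YAML pipeline. The fragment below is a rough sketch only: the trigger branch, pool image, target workspace path, and pipeline variable names are assumptions, and `databricks workspace import_dir` (from the legacy Databricks CLI) is just one of several ways to push notebooks to a workspace.

```yaml
# Hypothetical azure-pipelines.yml: deploy notebooks on merge to main.
trigger:
  branches:
    include:
      - main

pool:
  vmImage: ubuntu-latest

steps:
  - script: pip install databricks-cli
    displayName: Install the (legacy) Databricks CLI

  - script: databricks workspace import_dir notebooks /Shared/etl --overwrite
    displayName: Deploy notebooks to the workspace
    env:
      DATABRICKS_HOST: $(databricksHost)    # assumed pipeline variables,
      DATABRICKS_TOKEN: $(databricksToken)  # set as secrets in Azure DevOps
```

Newer setups often swap the deploy step for Databricks Asset Bundles or Terraform; the pipeline skeleton stays the same.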

Posted 3 weeks ago

Apply

5.0 - 10.0 years

4 - 6 Lacs

Kochi, Pune, Chennai

Hybrid

Required Skills:
- 5+ years of experience
- Azure Databricks
- PySpark
- Azure Data Factory

Posted 3 weeks ago

Apply

6.0 - 10.0 years

17 - 32 Lacs

Ghaziabad, Hyderabad, Delhi / NCR

Hybrid

Job Role: Azure Data Engineer Location: Hyderabad Experience: 5 to 10 years Skills Required: Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.), PySpark, SQL, Python. Job Responsibilities: - Work closely with source data application teams and product owners to design, implement, and support analytics solutions that provide insights for better decisions. - Implement data migration and data engineering solutions using Azure products and services (Azure Data Lake Storage, Azure Data Factory, Azure Functions, Event Hub, Azure Stream Analytics, Azure Databricks, etc.) and traditional data warehouse tools. - Perform multiple aspects of the development lifecycle: design, cloud engineering (infrastructure, network, security, and administration), ingestion, preparation, data modeling, testing, CI/CD pipelines, performance tuning, deployments, consumption, BI, alerting, and production support. - Provide technical leadership and collaborate within a team environment as well as work independently. - Be part of a DevOps team that completely owns and supports its product. - Implement batch and streaming data pipelines using cloud technologies. - Lead development of coding standards, best practices, and privacy and security guidelines. - Mentor others on technical and domain skills to create multi-functional teams. Minimum Qualifications: 1. Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training. 2. 3+ years of data engineering experience using SQL. 3. 2+ years of cloud development (Microsoft Azure preferred), including Azure Event Hub, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI. 4. A combination of development, administration, and support experience in several of the following tools/platforms: a. Scripting: Python, PySpark, Unix, SQL; b. Data platforms: Teradata, SQL Server; c. Azure Data Explorer (administration skills a plus); d. Azure cloud technologies.
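The batch pipeline work described above (ingestion, preparation, and loading) follows a familiar extract-transform-load shape. The sketch below shows that shape in plain Python with made-up records and field names; a production pipeline on this stack would run the same steps as PySpark jobs on Azure Databricks writing to Data Lake Storage.

```python
# Minimal batch ETL sketch in plain Python. Records, field names, and the
# in-memory "sink" are hypothetical stand-ins for real Azure services.
from datetime import date

def extract(rows):
    """Simulate ingestion from a source system (e.g. an Event Hub batch)."""
    return [dict(r) for r in rows]

def transform(rows):
    """Normalize fields and drop records that fail basic validation."""
    out = []
    for r in rows:
        if r.get("amount") is None:
            continue  # a real pipeline would quarantine invalid records
        r["amount"] = round(float(r["amount"]), 2)
        r["load_date"] = date.today().isoformat()
        out.append(r)
    return out

def load(rows, sink):
    """Append to a sink (stand-in for a Data Lake / warehouse write)."""
    sink.extend(rows)
    return len(rows)

sink = []
raw = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": None}]
loaded = load(transform(extract(raw)), sink)
print(loaded, sink)
```

The same extract/transform/load boundaries map directly onto Databricks notebook cells or Data Factory activities, which keeps each stage independently testable.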

Posted 3 weeks ago

Apply

7.0 - 11.0 years

0 Lacs

Karnataka

On-site

As an Azure Data Factory Engineer at Aspire Systems, you will be responsible for designing, developing, and maintaining robust data pipelines and ETL processes. Your role will involve implementing and optimizing data storage solutions in data warehouses and data lakes. You should have strong experience with Microsoft Azure tools, including SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake, coupled with excellent communication skills. Key Responsibilities: - Design, develop, and maintain robust data pipelines and ETL processes. - Implement and optimize data storage solutions in data warehouses and data lakes. - Collaborate with cross-functional teams to understand data requirements and deliver high-quality data solutions. - Utilize Microsoft Azure tools for data integration, transformation, and analysis. - Develop and maintain reports and dashboards using Power BI and other analytics tools. - Ensure data integrity, consistency, and security across all data systems. - Optimize database and query performance to support data-driven decision-making. Qualifications: - 7-10 years of professional experience in data engineering or a related field. - Profound expertise in SQL, T-SQL, database design, and data warehousing principles. - Strong experience with Microsoft Azure tools, including SQL Azure, Azure Data Factory, Azure Databricks, and Azure Data Lake. - Proficiency in Python, PySpark, and PySQL for data processing and analytics tasks. - Experience with Power BI and other reporting and analytics tools. - Demonstrated knowledge of OLAP, data warehouse design concepts, and performance optimizations in database and query processing. - Excellent problem-solving, analytical, and communication skills. Join Aspire Systems, a global technology services firm that serves as a trusted technology partner for over 275 customers worldwide. 
Aspire collaborates with leading enterprises in Banking, Insurance, Retail, and ISVs, helping them leverage technology to thrive in the digital era. With a focus on Software Engineering & Digital Technologies, Aspire enables companies to operate smart business models. The company's core philosophy of "Attention. Always." reflects its commitment to providing exceptional care and attention to customers and employees. Aspire Systems is CMMI Level 3 certified and has a global workforce of over 4900 employees, operating across North America, LATAM, Europe, the Middle East, and Asia Pacific. Aspire Systems has been consistently recognized as one of the Top 100 Best Companies to Work For by the Great Place to Work Institute for 12 consecutive years. Explore more about Aspire Systems at https://www.aspiresys.com/.

Posted 3 weeks ago

Apply

6.0 - 9.0 years

9 - 13 Lacs

Chennai

Work from Office

Experience : 6+ years as an Azure Data Engineer, including at least one end-to-end (E2E) implementation in Microsoft Fabric. Responsibilities : - Lead the design and implementation of Microsoft Fabric-centric data platforms and data warehouses. - Develop and optimize ETL/ELT processes within the Microsoft Azure ecosystem, effectively utilizing relevant Fabric solutions. - Ensure data integrity, quality, and governance throughout the Microsoft Fabric environment. - Collaborate with stakeholders to translate business needs into actionable data solutions. - Troubleshoot and optimize existing Fabric implementations for enhanced performance. Skills : - Solid foundational knowledge in data warehousing, ETL/ELT processes, and data modeling (dimensional, normalized). - Design and implement scalable and efficient data pipelines using Data Factory (Data Pipeline, Data Flow Gen 2, etc.) in Fabric, PySpark notebooks, Spark SQL, and Python. This includes data ingestion, data transformation, and data loading processes. - Experience ingesting data from SAP systems such as SAP ECC, S/4HANA, or SAP BW is a plus. - Nice to have: ability to develop dashboards or reports using tools like Power BI. Coding Fluency : - Proficiency in SQL, Python, or other languages for data scripting, transformation, and automation.
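The Spark SQL transformation work this role describes (raw ingestion into curated summary tables) can be sketched with standard SQL. Here sqlite3 stands in for the Fabric/Spark SQL engine, and all table and column names are illustrative, not from any real workspace.

```python
import sqlite3

# sqlite3 stands in for a Spark SQL / Fabric warehouse engine; table and
# column names are made-up examples of a raw-to-curated ELT step.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE raw_orders (order_id INT, region TEXT, amount REAL);
INSERT INTO raw_orders VALUES
  (1, 'south', 120.0), (2, 'south', 80.0), (3, 'north', 200.0);
""")

# ELT-style aggregation: raw (bronze) table -> curated (gold) summary table.
con.execute("""
CREATE TABLE sales_by_region AS
SELECT region, SUM(amount) AS total_amount, COUNT(*) AS order_count
FROM raw_orders
GROUP BY region
""")

rows = dict(con.execute(
    "SELECT region, total_amount FROM sales_by_region").fetchall())
print(rows)
```

In Fabric the same statement would typically live in a Spark SQL cell of a notebook or a Data Flow, with the curated table landing in a Lakehouse.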

Posted 3 weeks ago

Apply

10.0 - 15.0 years

12 - 18 Lacs

Bengaluru

Work from Office

Mode: Contract. As an Azure Data Architect, you will: - Lead architectural design and migration strategies, especially from Oracle to Azure Data Lake. - Architect and build end-to-end data pipelines leveraging Databricks, Spark, and Delta Lake. - Design secure, scalable data solutions integrating ADF, SQL Data Warehouse, and on-prem/cloud systems. - Optimize cloud resource usage and pipeline performance. - Set up CI/CD pipelines with Azure DevOps. - Mentor team members and align architecture with business needs. Qualifications: - 10-15 years in Data Engineering/Architecture roles. - Extensive hands-on experience with Databricks, Azure Data Factory, and Azure SQL Data Warehouse. - Data integration, migration, cluster configuration, and performance tuning. - Azure DevOps and cloud monitoring tools. - Excellent interpersonal and stakeholder management skills.

Posted 3 weeks ago

Apply

5.0 - 7.0 years

10 - 14 Lacs

Kolkata

Work from Office

Job Title : Sr. Data Engineer Ontology & Knowledge Graph Specialist Department : Platform Engineering Summary : We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company. Responsibilities : Ontology Development : - Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards. - Collaborate with domain experts to capture and formalize domain knowledge into ontological structures. - Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes. Data Modeling : - Design and implement semantic and syntactic data models that adhere to ontological principles. - Create data models that are scalable, flexible, and adaptable to changing business needs. - Integrate data models with existing data infrastructure and applications. Knowledge Graph Implementation : - Design and build knowledge graphs based on ontologies and data models. - Develop algorithms and tools for knowledge graph population, enrichment, and maintenance. - Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems. Data Quality And Governance : - Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs. - Define and implement data governance processes and standards for ontology development and maintenance. 
Collaboration And Communication : - Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions. - Communicate complex technical concepts clearly and effectively to diverse audiences. Qualifications : Education : - Bachelor's or Master's degree in Computer Science, Data Science, or a related field. Experience : - 5+ years of experience in data engineering or a related role. - Proven experience in ontology development using BFO and CCO or similar ontological frameworks. - Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL. - Proficiency in Python, SQL, and other programming languages used for data engineering. - Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus. Desired Skills : - Familiarity with machine learning and natural language processing techniques. - Experience with cloud-based data platforms (e.g., AWS, Azure, GCP). - Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills.
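The semantic web stack this role centers on (RDF, OWL, SPARQL) is built on subject-predicate-object triples and pattern matching over them. The sketch below shows that core idea in a few lines of plain Python; the IRIs are invented for illustration, and a real system would use an OWL-aware triple store such as GraphDB or Stardog.

```python
# Tiny in-memory triple store with single-pattern matching, sketching the
# subject-predicate-object model behind RDF/SPARQL. All IRIs are made up.
triples = {
    ("ex:ClaimsProcess", "rdf:type", "bfo:Process"),
    ("ex:Member", "rdf:type", "owl:Class"),
    ("ex:ClaimsProcess", "ex:hasParticipant", "ex:Member"),
}

def match(s=None, p=None, o=None):
    """Return triples matching the pattern; None acts like a SPARQL variable."""
    return sorted(t for t in triples
                  if (s is None or t[0] == s)
                  and (p is None or t[1] == p)
                  and (o is None or t[2] == o))

# SPARQL analogue: SELECT ?s WHERE { ?s rdf:type ?o }
typed = [s for s, _, o in match(p="rdf:type")]
print(typed)
```

Knowledge-graph population and enrichment then amount to asserting new triples (often inferred from ontology axioms), with the same pattern matching powering search and recommendation queries.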

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

Bengaluru

Remote

Role : Data Modeler Lead Location : Remote Experience : 10+ years Healthcare experience is Mandatory Position Overview : We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards. Key Responsibilities : Data Architecture & Modeling : - Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management - Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment) - Create and maintain data lineage documentation and data dictionaries for healthcare datasets - Establish data modeling standards and best practices across the organization Technical Leadership : - Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica - Architect scalable data solutions that handle large volumes of healthcare transactional data - Collaborate with data engineers to optimize data pipelines and ensure data quality Healthcare Domain Expertise : - Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI) - Design data models that support analytical, reporting, and AI/ML needs - Ensure compliance with healthcare regulations including HIPAA/PHI and state insurance regulations - Partner with business stakeholders to translate healthcare business requirements into technical data solutions Data Governance & Quality : - Implement data governance frameworks specific to healthcare data privacy and security requirements - Establish data quality monitoring and validation processes for critical health plan metrics - Lead efforts to
standardize healthcare data definitions across multiple systems and data sources Required Qualifications : Technical Skills : - 10+ years of experience in data modeling with at least 4 years focused on healthcare/health plan data - Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches - Hands-on experience with Informatica PowerCenter/IICS or Databricks platform for large-scale data processing - Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks) - Proficiency with data modeling tools (Hackolade, ERwin, or similar) Healthcare Industry Knowledge : - Deep understanding of health plan data structures including claims, eligibility, provider data, and pharmacy data - Experience with healthcare data standards and medical coding systems - Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment) - Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI) Leadership & Communication : - Proven track record of leading data modeling projects in complex healthcare environments - Strong analytical and problem-solving skills with ability to work with ambiguous requirements - Excellent communication skills with ability to explain technical concepts to business stakeholders - Experience mentoring team members and establishing technical standards Preferred Qualifications : - Experience with Medicare Advantage, Medicaid, or Commercial health plan operations - Cloud platform certifications (AWS, Azure, or GCP) - Experience with real-time data streaming and modern data lake architectures - Knowledge of machine learning applications in healthcare analytics - Previous experience in a lead or architect role within healthcare organization
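The dimensional modeling expertise required above comes down to star schemas: fact tables of transactions keyed to descriptive dimension tables. The sketch below shows a minimal claims star schema; sqlite3 stands in for the warehouse, and all tables, keys, and amounts are illustrative, not real health-plan data.

```python
import sqlite3

# Minimal star schema: a claims fact table joined to a member dimension.
# Tables, keys, plan types, and amounts are hypothetical examples only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_member (member_key INT PRIMARY KEY, plan_type TEXT);
CREATE TABLE fact_claim (claim_id INT, member_key INT, paid_amount REAL);
INSERT INTO dim_member VALUES (1, 'Medicare Advantage'), (2, 'Commercial');
INSERT INTO fact_claim VALUES (10, 1, 500.0), (11, 1, 250.0), (12, 2, 100.0);
""")

# Typical analytic query: roll up the fact table by a dimension attribute.
totals = dict(con.execute("""
SELECT d.plan_type, SUM(f.paid_amount)
FROM fact_claim f JOIN dim_member d USING (member_key)
GROUP BY d.plan_type
""").fetchall())
print(totals)
```

Keeping measures in narrow fact tables and attributes in dimensions is what makes regulatory rollups (MLR, HEDIS-style measures) straightforward joins rather than bespoke extracts.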

Posted 3 weeks ago

Apply

3.0 - 8.0 years

9 - 14 Lacs

Mumbai

Remote

Role : Data Modeler Lead Location : Remote Experience : 10+ years Healthcare experience is Mandatory Position Overview : We are seeking an experienced Data Modeler/Lead with deep expertise in health plan data models and enterprise data warehousing to drive our healthcare analytics and reporting initiatives. The candidate should have hands-on experience with modern data platforms and a strong understanding of healthcare industry data standards. Key Responsibilities : Data Architecture & Modeling : - Design and implement comprehensive data models for health plan operations, including member enrollment, claims processing, provider networks, and medical management - Develop logical and physical data models that support analytical and regulatory reporting requirements (HEDIS, Stars, MLR, risk adjustment) - Create and maintain data lineage documentation and data dictionaries for healthcare datasets - Establish data modeling standards and best practices across the organization Technical Leadership : - Lead data warehousing initiatives using modern platforms like Databricks or traditional ETL tools like Informatica - Architect scalable data solutions that handle large volumes of healthcare transactional data - Collaborate with data engineers to optimize data pipelines and ensure data quality Healthcare Domain Expertise : - Apply deep knowledge of health plan operations, medical coding (ICD-10, CPT, HCPCS), and healthcare data standards (HL7, FHIR, X12 EDI) - Design data models that support analytical, reporting, and AI/ML needs - Ensure compliance with healthcare regulations including HIPAA/PHI and state insurance regulations - Partner with business stakeholders to translate healthcare business requirements into technical data solutions Data Governance & Quality : - Implement data governance frameworks specific to healthcare data privacy and security requirements - Establish data quality monitoring and validation processes for critical health plan metrics - Lead efforts to
standardize healthcare data definitions across multiple systems and data sources Required Qualifications : Technical Skills : - 10+ years of experience in data modeling with at least 4 years focused on healthcare/health plan data - Expert-level proficiency in dimensional modeling, data vault methodology, or other enterprise data modeling approaches - Hands-on experience with Informatica PowerCenter/IICS or Databricks platform for large-scale data processing - Strong SQL skills and experience with Oracle Exadata and cloud data warehouses (Databricks) - Proficiency with data modeling tools (Hackolade, ERwin, or similar) Healthcare Industry Knowledge : - Deep understanding of health plan data structures including claims, eligibility, provider data, and pharmacy data - Experience with healthcare data standards and medical coding systems - Knowledge of regulatory reporting requirements (HEDIS, Medicare Stars, MLR reporting, risk adjustment) - Familiarity with healthcare interoperability standards (HL7 FHIR, X12 EDI) Leadership & Communication : - Proven track record of leading data modeling projects in complex healthcare environments - Strong analytical and problem-solving skills with ability to work with ambiguous requirements - Excellent communication skills with ability to explain technical concepts to business stakeholders - Experience mentoring team members and establishing technical standards Preferred Qualifications : - Experience with Medicare Advantage, Medicaid, or Commercial health plan operations - Cloud platform certifications (AWS, Azure, or GCP) - Experience with real-time data streaming and modern data lake architectures - Knowledge of machine learning applications in healthcare analytics - Previous experience in a lead or architect role within healthcare organization

Posted 3 weeks ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Mumbai

Work from Office

The Azure Databricks Engineer plays a critical role in establishing and maintaining an efficient data ecosystem within an organization. This position is integral to the development of data solutions leveraging the capabilities of Microsoft Azure Databricks. The engineer will work closely with data scientists and analytics teams to facilitate the transformation of raw data into actionable insights. With increasing reliance on big data technologies and cloud-based solutions, having an expert on board is vital for driving data-driven decision-making processes. The Azure Databricks Engineer will also be responsible for optimizing data workflows, ensuring data quality, and deploying scalable data solutions that align with organizational goals. This role requires not only technical expertise in handling large volumes of data but also the ability to collaborate across various functional teams to enhance operational efficiency. - Design and implement scalable data pipelines using Azure Databricks. - Develop ETL processes to efficiently extract, transform, and load data. - Collaborate with data scientists and analysts to define and refine data requirements. - Optimize Spark jobs for performance and efficiency. - Monitor and troubleshoot production workflows and jobs. - Implement data quality checks and validation processes. - Create and maintain technical documentation related to data architecture. - Conduct code reviews to ensure best practices are followed. - Work on integrating data from various sources including databases, APIs, and third-party services. - Utilize SQL and Python for data manipulation and analysis. - Collaborate with DevOps teams to deploy and maintain data solutions. - Stay updated with the latest trends and updates in Azure Databricks and related technologies. - Facilitate data visualization initiatives for better data-driven insights. - Provide training and support to team members on data tools and practices.
- Participate in cross-functional projects to enhance data sharing and access. - Bachelor's degree in Computer Science, Information Technology, or a related field. - Minimum of 6 years of experience in data engineering or a related domain. - Strong expertise in Azure Databricks and data lake concepts. - Proficiency with SQL, Python, and Spark. - Solid understanding of data warehousing concepts. - Experience with ETL tools and frameworks. - Familiarity with cloud platforms such as Azure, AWS, or Google Cloud. - Excellent problem-solving and analytical skills. - Ability to work collaboratively in a diverse team environment. - Experience with data visualization tools such as Power BI or Tableau. - Strong communication skills with the ability to convey technical concepts to non-technical stakeholders. - Knowledge of data governance and data quality best practices. - Hands-on experience with big data technologies and frameworks. - A relevant certification in Azure is a plus. - Ability to adapt to changing technologies and evolving business requirements.
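The data quality checks and validation processes this role calls for typically run as a gate inside the pipeline: rows either pass all rules or are quarantined with a reason. The sketch below shows that pattern in plain Python; the rules and field names are hypothetical examples, not from any real pipeline.

```python
# Minimal data-quality gate of the kind run inside a Databricks pipeline.
# Validation rules and field names are hypothetical examples.
RULES = {
    "id": lambda v: isinstance(v, int) and v > 0,
    "email": lambda v: isinstance(v, str) and "@" in v,
}

def validate(rows):
    """Split rows into valid rows and (row, failed_fields) rejects."""
    valid, rejected = [], []
    for row in rows:
        failed = [field for field, ok in RULES.items()
                  if not ok(row.get(field))]
        if failed:
            rejected.append((row, failed))  # quarantine with a reason
        else:
            valid.append(row)
    return valid, rejected

good, bad = validate([
    {"id": 1, "email": "a@example.com"},
    {"id": -5, "email": "broken"},
])
print(len(good), len(bad))
```

Recording the failed rule names alongside each rejected row is what makes the quarantine actionable for the upstream source team rather than a silent drop.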

Posted 3 weeks ago

Apply

3.0 - 5.0 years

5 - 8 Lacs

Bengaluru

Remote

As a Senior Azure Data Engineer, your responsibilities will include: - Building scalable data pipelines using Databricks and PySpark - Transforming raw data into usable business insights - Integrating Azure services like Blob Storage, Data Lake, and Synapse Analytics - Deploying and maintaining machine learning models using MLlib or TensorFlow - Executing large-scale Spark jobs with performance tuning on Spark Pools - Leveraging Databricks Notebooks and managing workflows with MLflow Qualifications: - Bachelor's/Master's in Computer Science, Data Science, or equivalent - 7+ years in Data Engineering, with 3+ years in Azure Databricks - Strong hands-on experience with PySpark, Spark SQL, RDDs, Pandas, NumPy, and Delta Lake - Azure ecosystem: Data Lake, Blob Storage, Synapse Analytics
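The RDD skills listed above boil down to composing map, filter, and reduce-by-key operations over records. The sketch below mirrors those operations in plain Python with made-up data; in a real Spark job each step would run distributed across executors on a Spark Pool.

```python
from functools import reduce

# Plain-Python sketch of RDD-style transformations (filter/map/reduce).
# Data is invented; Spark would run these steps distributed on a cluster.
events = [("user1", 3), ("user2", 0), ("user1", 5), ("user3", 2)]

# filter: drop empty events; map: project counts; reduce: total them.
nonzero = [e for e in events if e[1] > 0]
counts = [c for _, c in nonzero]
total = reduce(lambda a, b: a + b, counts, 0)

# reduceByKey analogue: aggregate counts per user key.
per_user = {}
for user, c in nonzero:
    per_user[user] = per_user.get(user, 0) + c
print(total, per_user)
```

The same pipeline in PySpark would read `rdd.filter(...).map(...).reduceByKey(...)`, with the per-key aggregation happening after a shuffle rather than in a local dict.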

Posted 3 weeks ago

Apply