
1265 Azure Databricks Jobs - Page 20

Set up a Job Alert
JobPe aggregates listings for easy access; you apply directly on the original job portal.

5.0 - 9.0 years

36 - 45 Lacs

Gurugram

Work from Office

Senior Data Engineer | Gurugram (Onsite). 5+ years' experience with Azure Data Services, Databricks, PySpark, SQL & Soda. Build scalable data pipelines, ensure data quality, support governance and CI/CD, and mentor juniors; Kafka & Airflow exposure is a plus.

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 16 Lacs

Gurugram

Hybrid

5+ years of total experience in the IT industry as a developer/senior developer/data engineer; 3+ years of experience working extensively with Azure services such as Azure Data Factory, Azure Synapse, Azure Data Lake, Azure SQL, and data management.

Required candidate profile: 3+ years of experience working extensively with Azure SQL and MS SQL Server, with good exposure to writing complex SQL queries. Call Vikas on 8527840989 or email vikasimaginators@gmail.com.

Posted 1 month ago

Apply

9.0 - 14.0 years

20 - 35 Lacs

Pune, Chennai, Bengaluru

Hybrid

Role & responsibilities:
- Design and implement end-to-end data solutions on Microsoft Azure, including data lakes, data warehouses, and ETL/ELT processes.
- Develop scalable, efficient data architectures that support large-scale data processing and analytics workloads.
- Ensure high performance, security, and compliance within Azure data solutions.
- Know various architecture patterns (lakehouse, warehouse) and have experience implementing them.
- Evaluate and choose appropriate Azure services such as Azure SQL Database, Azure Synapse Analytics, Azure Data Lake Storage, Azure Databricks (configuration, costing, etc.), Unity Catalog, and Azure Data Factory; deep knowledge and hands-on experience with these services is required. Microsoft Fabric knowledge and experience is ideal.
- Work closely with business and technical teams to understand and translate data needs into robust, scalable data architecture solutions.
- Experience with data governance, data privacy, and compliance requirements.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively with cross-functional teams.
- Provide expertise and leadership to the development team implementing data engineering solutions.
- Collaborate with Data Scientists, Analysts, and other stakeholders to ensure data architectures align with business goals and data analysis requirements.
- Optimize cloud-based data infrastructure for performance, cost-effectiveness, and scalability; analyze data workloads and recommend optimizations for performance tuning, cost management, and reduced complexity.
- Monitor and address performance and availability issues in cloud-based data solutions.
- Experience in programming languages (e.g., SQL, Python, Scala) and hands-on experience with MS SQL Server, Oracle, or a similar RDBMS platform.
- Experience with Azure DevOps and CI/CD pipeline development; hands-on experience working at a high level in architecture, data science, or a combination of the two.
- In-depth understanding of database structure principles and distributed processing of big-data batch or streaming pipelines.
- Familiarity with data visualization tools (e.g., Power BI, Tableau).
- Strong data modeling and analytics skills: the candidate must be able to take OLTP data structures and convert them into a star schema (see the sketch below). Ideally, the candidate also has DBT experience alongside data modeling experience.
- Problem-solving attitude; highly self-motivated, self-directed, and attentive to detail; able to prioritize and execute tasks effectively. Attitude and aptitude are highly important at Hitachi; we are a very collaborative group.

Preferred candidate profile: Azure SQL Data Warehouse, Azure Data Factory, Azure Data Lake, Azure Analysis Services, Databricks/Spark, Python or Scala (Python preferred), data modeling, Power BI, database migration from legacy systems to new solutions, and designing conceptual, logical, and physical data models using tools like ER Studio or Erwin.
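A minimal sketch of the OLTP-to-star-schema conversion this posting asks about, written in PySpark. All table, schema, and column names here are hypothetical, not part of the actual role.

```python
# Sketch: derive a dimension and fact table from assumed OLTP tables.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

orders = spark.table("oltp.orders")        # assumed OLTP source tables
customers = spark.table("oltp.customers")

# Dimension: one row per customer, with a surrogate key.
dim_customer = (customers
    .select("customer_id", "name", "city")
    .withColumn("customer_sk", F.monotonically_increasing_id()))

# Fact: one row per order, keyed to the dimension via the surrogate key.
fact_orders = (orders
    .join(dim_customer, "customer_id")
    .select("order_id", "customer_sk", "order_date", "amount"))

# Assumes a "dw" schema already exists in the target metastore.
dim_customer.write.mode("overwrite").saveAsTable("dw.dim_customer")
fact_orders.write.mode("overwrite").saveAsTable("dw.fact_orders")
```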

Posted 1 month ago

Apply

6.0 - 8.0 years

8 - 12 Lacs

Kolkata

Work from Office

Job Summary: We are seeking an experienced Data Engineer with strong expertise in Databricks, Python, PySpark, and Power BI, along with a solid background in data integration and the modern Azure ecosystem. The ideal candidate will play a critical role in designing, developing, and implementing scalable data engineering solutions and pipelines.

Key Responsibilities:
- Design, develop, and implement robust data solutions using Azure Data Factory, Databricks, and related data engineering tools.
- Build and maintain scalable ETL/ELT pipelines with a focus on performance and reliability.
- Write efficient and reusable code using Python and PySpark (see the sketch below).
- Perform data cleansing, transformation, and migration across various platforms.
- Hands-on work with Azure Data Factory (ADF); at least 1.5 to 2 years of ADF experience.
- Develop and optimize SQL queries and stored procedures, and manage large data sets using SQL Server, T-SQL, PL/SQL, etc.
- Collaborate with cross-functional teams to understand business requirements and provide data-driven solutions.
- Engage directly with clients and business stakeholders to gather requirements, suggest optimal solutions, and ensure successful delivery.
- Work with Power BI for basic reporting and data visualization tasks.
- Apply strong knowledge of data warehousing concepts, modern data platforms, and cloud-based analytics.
- Adhere to coding standards and best practices, including thorough documentation and testing (unit, integration, performance).
- Support the operation, maintenance, and enhancement of existing data pipelines and architecture.
- Estimate tasks and plan release cycles effectively.

Required Technical Skills:
- Languages & frameworks: Python, PySpark
- Cloud & tools: Azure Data Factory, Databricks, Azure ecosystem
- Databases: SQL Server, T-SQL, PL/SQL
- Reporting & BI tools: Power BI
- Data concepts: data warehousing, ETL/ELT, data cleansing, data migration
- Other: version control, Agile methodologies, good problem-solving skills

Preferred Qualifications:
- Experience coding with Pysense within Databricks (added advantage)
- Solid understanding of cloud data architecture and analytics processes
- Ability to independently initiate and lead conversations with business stakeholders
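A small cleansing-and-transformation sketch in PySpark of the kind this listing describes. Paths and column names are illustrative assumptions only.

```python
# Sketch: read a raw file, apply basic cleansing, land it as a Delta table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("cleanse-sketch").getOrCreate()

raw = spark.read.option("header", True).csv("/mnt/landing/customers.csv")

clean = (raw
    .dropDuplicates(["customer_id"])                     # de-duplicate on the key
    .withColumn("email", F.lower(F.trim("email")))       # normalize text fields
    .withColumn("signup_date", F.to_date("signup_date")) # enforce types
    .filter(F.col("customer_id").isNotNull()))           # reject rows missing the key

clean.write.format("delta").mode("overwrite").save("/mnt/curated/customers")
```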

Posted 1 month ago

Apply

5.0 - 10.0 years

25 - 35 Lacs

Pune

Hybrid

Key Responsibilities:
- Strong programming skills in Python, PySpark, and SQL for data processing and automation.
- Experience with Databricks and Snowflake (preferred) for building and maintaining data pipelines.
- Experience with machine learning model development and Generative/Agentic AI frameworks (e.g., LLMs, Transformers, LangChain), especially in the data management space.
- Experience working with REST APIs and JSON for service integration (see the sketch below).
- Experience working with cloud-based platforms such as Azure, AWS, or GCP.
- Power BI dashboard development experience is a plus.
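A hedged sketch of the "REST APIs & JSON for service integration" requirement: pull JSON from an API and land it as a Spark DataFrame. The URL is a placeholder, not a real service.

```python
# Sketch: fetch a JSON payload over HTTP and load it into Spark.
import json
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-ingest-sketch").getOrCreate()

resp = requests.get("https://example.com/api/v1/records", timeout=30)
resp.raise_for_status()
records = resp.json()  # assumes the endpoint returns a JSON array of objects

# Parallelize the payload as JSON strings and let Spark infer the schema.
rdd = spark.sparkContext.parallelize([json.dumps(r) for r in records])
df = spark.read.json(rdd)
df.show()
```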

Posted 1 month ago

Apply

9.0 - 11.0 years

0 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Snowflake, SQL, stored procedures, Azure Databricks, PySpark, Unity Catalog, Purview, Data Build Tool (DBT), Lakehouse, Delta tables, optimization and troubleshooting skills (see the sketch below), metadata-driven framework. Good to have: security knowledge, Power BI, Scala.
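A minimal sketch of the Delta table "optimization and troubleshooting" skills this listing names, using standard Databricks maintenance commands. The table name is hypothetical; on Databricks a SparkSession is already provided as `spark`.

```python
# Sketch: routine Delta table maintenance and troubleshooting.
spark.sql("OPTIMIZE sales.orders ZORDER BY (customer_id)")       # compact small files, co-locate by key
spark.sql("VACUUM sales.orders RETAIN 168 HOURS")                # purge unreferenced files older than 7 days
spark.sql("DESCRIBE HISTORY sales.orders").show(truncate=False)  # inspect table versions when debugging
```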

Posted 1 month ago

Apply

12.0 - 16.0 years

18 - 25 Lacs

Thane

Work from Office

Architecting a modern data platform; experience with MDM platforms. Manage end-to-end deliveries for the Data Engineering, EDW, and Data Lake platforms. Data modelling; maintain a robust data catalogue. Manage the Azure cloud platform for Data and Analytics.

Posted 1 month ago

Apply

3.0 - 7.0 years

10 - 16 Lacs

Gurugram

Work from Office

5+ years of total experience in the IT industry as a developer/senior developer/data engineer; 3+ years of experience working extensively with Azure services such as Azure Data Factory, Azure Synapse, and Azure Data Lake.

Required candidate profile: 3+ years of experience working extensively with Azure SQL and MS SQL Server, with good exposure to writing complex SQL queries. Call 7042331616 or send your CV to supreet.imaginators@gmail.com.

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Key skills: Python, Azure; lead experience with Python, Flask, Azure Functions, and team handling.

We are looking for:
- 10+ years of relevant hands-on data engineering experience (data ingestion, processing, and exploratory analysis) to build solutions that deliver value through data as an asset.
- Data engineers who build, test, and deploy data pipelines to move data across systems efficiently and reliably, and who stay on top of the latest architectural trends on the Azure cloud.
- People who understand parallel and distributed processing, storage, concurrency, and fault-tolerant systems.
- People who thrive on new technologies and can adapt and learn easily to meet the needs of next-generation engineering challenges.

Technical skills (must-have):
- Applied experience with distributed data processing frameworks: Spark and Databricks with Python and SQL.
- At least 2 end-to-end data analytics projects covering Databricks configuration, Unity Catalog, Delta Sharing, and the medallion architecture.
- Applied experience with Azure data services: ADLS, Delta Lake, Delta Live Tables, Azure Storage, RBAC.
- Applied experience with unit testing and system integration testing using a Python framework (see the sketch below).
- Applied experience with DevOps, designing and deploying CI/CD pipelines using Jenkins.
- Azure Data Engineering (DP-203) or Databricks certification.
- Prior working experience with a high-performance agile team: Scrum, Jira, JFrog, and Confluence.
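A minimal sketch of unit testing a PySpark transformation with pytest, matching the "unit testing using a Python framework" line. The function and columns are illustrative; running it locally assumes a Java runtime is available for Spark.

```python
# Sketch: pytest unit test for a small PySpark transformation.
import pytest
from pyspark.sql import SparkSession, functions as F

def add_total(df):
    # Transformation under test: derive total from quantity and price.
    return df.withColumn("total", F.col("qty") * F.col("price"))

@pytest.fixture(scope="session")
def spark():
    return SparkSession.builder.master("local[1]").appName("tests").getOrCreate()

def test_add_total(spark):
    df = spark.createDataFrame([(2, 5.0)], ["qty", "price"])
    out = add_total(df).collect()[0]
    assert out["total"] == 10.0
```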

Posted 1 month ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Work from Office

Role & responsibilities:
- Experience in Databricks, PL/SQL, PySpark, Python, and Azure Data Factory (ADF).
- Experience in designing, developing, and maintaining data pipelines and data streams.
- Experience moving/transforming data across layers (Bronze, Silver, Gold) using ADF, Python, and PySpark (see the sketch below).
- Experience working with stakeholders to understand their data needs and provide solutions.
- Experience collaborating with other teams to ensure data quality and consistency.
- Experience developing and maintaining data models and data dictionaries.
- Optimize ETL processes for performance and scalability.
- Experience maintaining data integrity and accuracy.
- Experience developing and maintaining data governance policies and procedures.
- Experience developing and maintaining data security policies and procedures.
- Good understanding of deployment processes.
- Ability to manage customer handling; flexible for support-and-maintenance projects.

Notice period: immediate to 15 days. Location: PAN India.
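A hedged sketch of moving data across Bronze/Silver/Gold layers with PySpark, as the role describes. Paths and columns are placeholder assumptions.

```python
# Sketch: medallion-style Bronze -> Silver -> Gold flow on Delta Lake.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion-sketch").getOrCreate()

# Bronze: raw ingest, schema-on-read.
bronze = spark.read.json("/mnt/bronze/events")

# Silver: cleaned, conformed records.
silver = (bronze
    .filter(F.col("event_id").isNotNull())
    .withColumn("event_ts", F.to_timestamp("event_ts")))
silver.write.format("delta").mode("overwrite").save("/mnt/silver/events")

# Gold: business-level aggregate for reporting.
gold = silver.groupBy("event_type").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("/mnt/gold/event_counts")
```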

Posted 1 month ago

Apply

9.0 - 14.0 years

20 - 25 Lacs

Bengaluru

Work from Office

Location: Bangalore. Experience: 8+ years. Work mode: Hybrid.

Hands-on experience with:
- Azure Data Factory (ADF)
- Azure Synapse Analytics
- Azure SQL Database / SQL Server
- Azure Databricks or Apache Spark
- Azure Blob Storage / Data Lake Storage Gen2

Strong SQL skills and experience in performance tuning. Familiarity with data modeling (star/snowflake schemas) and ETL best practices.

Posted 1 month ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Pune, Chennai, Bengaluru

Work from Office

Your Role: As a senior software engineer with Capgemini, you will have 6+ years of experience in Azure Fabric technology with a strong project track record. In this role you will demonstrate:
- Strong customer orientation, decision making, problem solving, communication, and presentation skills.
- Very good judgement and the ability to shape compelling solutions and solve unstructured problems with assumptions.
- Very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies.
- Strong executive presence and entrepreneurial spirit.
- Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority.

Your Profile:
- Design, develop, and maintain data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse.
- Implement ETL solutions to integrate data from various sources into Azure Data Lake and the data warehouse.
- Hands-on experience with SQL, Python, and PySpark for data processing.
- Expertise in building Power BI dashboards and reports; strong DAX and Power Query skills.
- Experience with Power BI Service, gateways, and embedding reports.
- Develop Power BI datasets, semantic models, and row-level security for data access control.

What you'll love about working here: You can shape your career with us; we offer a range of career paths and internal opportunities within the Capgemini group, along with personalized career guidance from our leaders. You will get comprehensive wellness benefits including health checks, telemedicine, insurance with top-ups, elder care, partner coverage, and new-parent support via flexible work. You will have the opportunity to learn on one of the industry's largest digital learning platforms, with access to 250,000+ courses and numerous certifications. We're committed to ensuring that people of all backgrounds feel encouraged and have a sense of belonging at Capgemini: you are valued for who you are, and you can bring your original self to work. Every Monday, kick off the week with a musical performance by our in-house band, The Rubber Band, and take part in internal sports events, yoga challenges, or marathons. At Capgemini, you can work on cutting-edge projects in tech and engineering with industry leaders or create solutions to overcome societal and environmental challenges.

Location: Bengaluru, Pune, Chennai, Mumbai

Posted 1 month ago

Apply

15.0 - 20.0 years

17 - 20 Lacs

Mumbai

Work from Office

This role requires a deep understanding of data warehousing, business intelligence (BI), and data governance principles, with a strong focus on the Microsoft technology stack.

- Data architecture: develop and maintain the overall data architecture, including data models, data flows, and data quality standards; design and implement data warehouses, data marts, and data lakes on the Microsoft Azure platform.
- Business intelligence: design and develop complex BI reports, dashboards, and scorecards using Microsoft Power BI.
- Data engineering: work with data engineers to implement ETL/ELT pipelines using Azure Data Factory.
- Data governance: establish and enforce data governance policies and standards.

Primary skills and experience: 15+ years of relevant experience in data warehousing, BI, and data governance; a proven track record of delivering successful data solutions on the Microsoft stack; experience working with diverse teams and stakeholders.

Required technical skills: strong proficiency in data warehousing concepts and methodologies; expertise in Microsoft Power BI; experience with Azure Data Factory, Azure Synapse Analytics, and Azure Databricks; knowledge of SQL and scripting languages (Python, PowerShell); strong understanding of data modeling and ETL/ELT processes.

Secondary (soft) skills: excellent communication and interpersonal skills; strong analytical and problem-solving abilities; ability to work independently and as part of a team; strong attention to detail and organizational skills.

Posted 1 month ago

Apply

3.0 - 6.0 years

6 - 10 Lacs

Bengaluru

Work from Office

Capgemini Invent: Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses.

Your Role:
- Should have developed or worked on at least 1 Gen AI project.
- Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud databases, cloud data warehousing, and data lake solutions such as Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Good knowledge of cloud compute services and load balancing.
- Good knowledge of cloud identity management, authentication, and authorization.
- Proficiency with cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions (see the sketch below).
- Experience using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.

Your Profile:
- Good knowledge of infrastructure capacity sizing and costing of cloud services, to drive optimized solution architecture and the right balance between infra investment, performance, and scaling.
- Able to contribute to architectural choices across cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on cloud.
- Must understand networking, security, design principles, and best practices in the cloud.

What you will love about working here: We recognize the significance of flexible work arrangements; be it remote work or flexible hours, you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth: our career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with deep industry expertise and a partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
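A minimal sketch of an Azure Function, one of the cloud utility services this posting lists, using the Python v2 programming model. The route and logic are illustrative assumptions.

```python
# Sketch: HTTP-triggered Azure Function (Python v2 programming model).
import azure.functions as func

app = func.FunctionApp()

@app.route(route="ping", auth_level=func.AuthLevel.ANONYMOUS)
def ping(req: func.HttpRequest) -> func.HttpResponse:
    # Echo a query parameter back; real functions would do ingestion, etc.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"hello {name}")
```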

Posted 1 month ago

Apply

7.0 - 11.0 years

7 - 11 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow: informed and validated by science and data, superpowered by creativity and design, and all underpinned by technology created with purpose.

What you will love about working here: We recognize the significance of flexible work arrangements; be it remote work or flexible hours, you will get an environment that supports a healthy work-life balance. At the heart of our mission is your career growth: our career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Your Role: We are looking for a skilled PySpark Developer with experience in Azure Databricks (ADB) and Azure Data Factory (ADF) to join our team. The ideal candidate will play a crucial role in designing, developing, and implementing data solutions using PySpark for large-scale data processing and analytics.

Your Profile:
- Design, develop, and deploy PySpark applications and workflows on Azure Databricks for data transformation, cleansing, and aggregation (see the sketch below).
- Implement data pipelines using Azure Data Factory (ADF) to orchestrate ETL/ELT processes across heterogeneous data sources.
- Conduct regular financial risk assessments to identify potential vulnerabilities in data processing workflows.
- Collaborate with Data Engineers and Data Scientists to integrate and process structured and unstructured data sets into actionable insights.
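A short PySpark aggregation sketch of the kind this role describes (transformation and aggregation on Azure Databricks). Table and column names are made up for illustration.

```python
# Sketch: pick the latest transaction per account with a window function.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("adb-transform-sketch").getOrCreate()

txns = spark.table("silver.transactions")  # assumed curated source table

# Rank transactions per account by recency and keep the newest row.
w = Window.partitionBy("account_id").orderBy(F.col("txn_ts").desc())
latest = (txns
    .withColumn("rn", F.row_number().over(w))
    .filter("rn = 1")
    .drop("rn"))

latest.write.mode("overwrite").saveAsTable("gold.latest_transactions")
```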

Posted 1 month ago

Apply

5.0 - 8.0 years

10 - 14 Lacs

Bengaluru

Work from Office

Your Role:
- Knowledge of cloud computing using Spark, Azure Databricks, and Azure Data Factory.
- Knowledge of a programming language: Python/Scala.
- Knowledge of Spark/PySpark (core and streaming), with hands-on experience transforming data using streaming (see the sketch below).
- Knowledge of building real-time or batch ingestion and transformation pipelines.

This position works in the area of software engineering, which encompasses the development, maintenance, and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge in research, design, development, and maintenance.
3. Exercises original thought and judgement, and supervises the technical and administrative work of other software engineers.
4. Builds skills and expertise in the software engineering discipline to meet the standard expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.

Your Profile:
- Working experience and strong knowledge of Databricks is a plus.
- Analyse existing queries for performance improvements.
- Develop procedures and scripts for data migration.
- Provide timely scheduled management reporting.
- Investigate exceptions regarding asset movements.
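A minimal Structured Streaming sketch matching the real-time transformation skills named above. It uses Spark's built-in rate source so it runs without external infrastructure; a real pipeline would read from Kafka or Event Hubs instead.

```python
# Sketch: windowed streaming aggregation on the built-in rate source.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("stream-sketch").getOrCreate()

stream = spark.readStream.format("rate").option("rowsPerSecond", 5).load()

counts = (stream
    .withWatermark("timestamp", "10 seconds")            # bound state for late data
    .groupBy(F.window("timestamp", "10 seconds"))
    .count())

query = (counts.writeStream
    .outputMode("update")
    .format("console")
    .start())
query.awaitTermination(30)  # run briefly for demonstration
```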

Posted 1 month ago

Apply

7.0 - 11.0 years

5 - 10 Lacs

Bengaluru

Work from Office

Skill required: Tech for Operations - Automation Anywhere. Designation: App Automation Eng Specialist. Qualifications: any graduation. Years of experience: 7 to 11 years.

About Accenture: Combining unmatched experience and specialized skills across more than 40 industries, we offer Strategy and Consulting, Technology and Operations services, and Accenture Song, all powered by the world's largest network of Advanced Technology and Intelligent Operations centers. Our 699,000 people deliver on the promise of technology and human ingenuity every day, serving clients in more than 120 countries. Visit us at www.accenture.com.

What would you do: The RPA Lead Developer will be responsible for the design and development of end-to-end RPA automation leveraging A360 tools and technologies, and should anticipate, identify, track, and resolve technical issues and risks affecting delivery. You should understand the Automation Anywhere RPA platform, its features, capabilities, and best practices, and be proficient in designing and implementing automation workflows that optimize business processes.

What are we looking for:
- Minimum 6-8 years of strong software design and development experience.
- Minimum 5-6 years of programming experience in Automation Anywhere A360, Document Automation, Co-pilot, and Python.
- Effective Gen AI prompt creation for data extraction using Gen AI OCR.
- Experience with APIs, data integration, and automation best practices.
- Experience in VBA, VB, and Python script programming.
- Good knowledge of Gen AI and machine learning.
- Good hands-on knowledge of core .NET concepts and OOP programming; understands OO concepts and consistently applies them in client engagements.
- Hands-on experience with SQL and T-SQL queries and creating complex stored procedures.
- Exceptional presentation, written, and verbal communication skills (English).
- Good understanding of workflow-based logic and hands-on experience using process templates and VBO design and build.
- Understanding of process analysis and pipeline build for automation processes.
- Automation Anywhere A360 Master/Advanced certification.
- Strong programming knowledge of HTML and JavaScript/VB scripts.
- Experience with Agile development methodology.
- Exposure to SAP automation is preferred; exposure to A360 Control Room features.
- Azure Machine Learning, Azure Databricks, and other Azure AI services.
- Exposure to GDPR compliance is preferred.

Roles and Responsibilities:
- Lead the team in developing automation bots and processes using the A360 platform.
- Utilize A360's advanced features (AARI, WLM, API consumption, Document Automation, Co-pilot) to automate complex tasks, streamline processes, and optimize efficiency.
- Integrate A360 with various APIs, databases, and third-party tools to ensure seamless data flow and interaction between systems.
- Identify and build common components to be used across projects.
- Collaborate with cross-functional teams, including business analysts and process architects, to deliver holistic automation solutions that cater to various stakeholder needs.
- Strong SQL database management and troubleshooting skills; serve as a technical expert on development projects.
- Review code for compliance and reuse, and ensure code complies with RPA architectural industry standards.
- Lead the problem identification and error resolution process, including tracking, repairing, and reporting defects.
- Create and maintain documentation to support role responsibilities for training, cross-training, and disaster recovery.
- Monitor and maintain license utilization and subscriptions; maintain and monitor RPA environments (Dev/Test/Prod).
- Review and ensure automation runbooks are complete and maintained.
- Design, develop, document, test, and debug new robotic process automation (RPA) applications for internal use.

Qualification: any graduation.

Posted 1 month ago

Apply

6.0 - 8.0 years

0 Lacs

Hyderabad

Work from Office

We are looking for a skilled and analytical Power BI Consultant to join our team. The ideal candidate will have strong experience in business intelligence and data visualization using Power BI. You will be responsible for designing, developing, and deploying BI solutions that provide actionable insights and help drive strategic decisions across the organization.

Key Responsibilities:
- Work closely with business stakeholders to gather, understand, and document reporting requirements.
- Design, develop, and deploy Power BI dashboards and reports tailored to business needs.
- Transform raw data into meaningful, interactive visualizations.
- Develop and maintain datasets, data models, and data pipelines.
- Optimize data models and DAX queries for performance and usability.
- Perform data analysis and validation to ensure data accuracy and integrity.
- Collaborate with data engineers, analysts, and IT teams to ensure seamless data integration.
- Provide user training and support for Power BI tools and reports.
- Monitor and troubleshoot Power BI reports and dashboards to ensure smooth operation.
- Recommend enhancements and best practices for Power BI architecture and governance.

Required Skills:
- Bachelor's degree in Computer Science, Information Technology, Data Analytics, or a related field.
- Proven experience (typically 3+ years) as a Power BI Developer/Consultant.
- Strong proficiency in Power BI, including Power Query (M), DAX, and data modeling.
- Experience with SQL and relational databases (e.g., SQL Server, Azure SQL, PostgreSQL).
- Familiarity with ETL processes and tools.
- Good understanding of data warehousing and reporting concepts.
- Excellent analytical and problem-solving skills.
- Strong communication and stakeholder-management skills.
- Experience with Power BI Service (publishing, workspaces, sharing, security).
- Knowledge of the Power Platform (Power Apps, Power Automate) is a plus.
- Experience with cloud data platforms like Azure Data Lake, Synapse, or AWS is a plus.

Posted 1 month ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Coimbatore

Work from Office

Project Role: Application Developer. Project Role Description: design, build, and configure applications to meet business process and application requirements. Must-have skills: Microsoft Azure Databricks. Good-to-have skills: NA. Minimum 3 years of experience is required. Educational qualification: any B.Tech degree.

Summary: As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for creating efficient and scalable applications using Microsoft Azure Databricks. Your typical day will involve collaborating with the team to understand business requirements, designing and developing applications, and ensuring the applications meet quality standards and performance expectations.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Collaborate with the team to understand business requirements and translate them into technical specifications.
- Design, develop, and test applications using Microsoft Azure Databricks.
- Ensure the applications meet quality standards and performance expectations.
- Troubleshoot and debug applications to identify and resolve issues.
- Provide technical guidance and support to junior developers.
- Stay updated with the latest industry trends and technologies related to application development.

Professional & Technical Skills:
- Must-have: proficiency in Microsoft Azure Databricks.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on implementation of machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering (see the sketch below).
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 3 years of experience in Microsoft Azure Databricks.
- This position is based at our Hyderabad office.
- Any B.Tech degree is required.
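A compact scikit-learn sketch of one of the algorithms the listing names (logistic regression), run on a toy built-in dataset for illustration.

```python
# Sketch: train and evaluate a logistic regression classifier on iris data.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=42)

model = LogisticRegression(max_iter=500).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, model.predict(X_te)))
```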

Posted 1 month ago

Apply

5.0 - 8.0 years

7 - 10 Lacs

Mumbai, New Delhi, Bengaluru

Work from Office

Expected notice period: 15 days. Shift: (GMT+05:30) Asia/Kolkata (IST).

Must-have skills: Data Governance, Lakehouse architecture, Medallion Architecture, Azure Databricks, Azure Synapse, Data Lake Storage, Azure Data Factory.

Intelebee LLC is looking for a Data Engineer. We are seeking a skilled, hands-on Cloud Data Engineer with 5-8 years of experience to drive end-to-end data engineering solutions. The ideal candidate will have a deep understanding of dimensional modeling, data warehousing (DW), Lakehouse architecture, and the Medallion architecture. This role will focus on leveraging the Azure/AWS ecosystem to build scalable, efficient, and secure data solutions. You will work closely with customers to understand requirements, create technical specifications, and deliver solutions that scale across both on-premise and cloud environments.

Key Responsibilities (end-to-end data engineering):
- Lead the design and development of data pipelines for large-scale data processing, utilizing Azure/AWS tools such as Azure Data Factory, Azure Synapse, Azure Functions, Logic Apps, Azure Databricks, Data Lake Storage, AWS Lambda, and AWS Glue.
- Develop and implement dimensional modeling techniques and data warehousing solutions for effective data analysis and reporting.
- Build and maintain Lakehouse and Medallion architecture solutions for streamlined, high-performance data processing.
- Implement and manage data lakes on Azure/AWS, ensuring that data storage and processing are both scalable and secure.
- Handle large-scale databases (both on-prem and cloud), ensuring high availability, security, and performance.
- Design and enforce data governance policies for data security, privacy, and compliance within the Azure ecosystem.

Posted 1 month ago

Apply

4.0 - 6.0 years

20 - 35 Lacs

Noida, Hyderabad, Bengaluru

Hybrid

Hi All, greetings for the day!

We are currently hiring a Data Engineer (Python, PySpark, and Azure Databricks) for Emids (MNC) at the Bangalore location.

Role: Data Engineer. Experience: 5 to 8 years. Location: Bangalore, Noida, and Hyderabad (hybrid; 2 days a week in office is mandatory). Notice period: immediate to 15 days (immediate joiners preferred). Note: candidates must have experience in Python, Kafka Streams, PySpark, and Azure Databricks.

Role Overview: We are looking for a highly skilled engineer with expertise in Kafka, Python, and Azure Databricks (preferred) to drive our healthcare data engineering projects. The ideal candidate will have deep experience in real-time data streaming, cloud-based data platforms, and large-scale data processing. This role requires strong technical leadership, problem-solving abilities, and the ability to collaborate with cross-functional teams.

Key Responsibilities:
- Lead the design, development, and implementation of real-time data pipelines using Kafka, Python, and Azure Databricks (see the sketch below).
- Architect scalable data streaming and processing solutions to support healthcare data workflows.
- Develop, optimize, and maintain ETL/ELT pipelines for structured and unstructured healthcare data.
- Ensure data integrity, security, and compliance with healthcare regulations (HIPAA, HITRUST, etc.).
- Collaborate with data engineers, analysts, and business stakeholders to understand requirements and translate them into technical solutions.
- Troubleshoot and optimize Kafka streaming applications, Python scripts, and Databricks workflows.
- Mentor junior engineers, conduct code reviews, and ensure best practices in data engineering.
- Stay updated with the latest cloud technologies, big data frameworks, and industry trends.

Required Skills & Qualifications:
- 4+ years of experience in data engineering, with strong proficiency in Kafka and Python.
- Expertise in Kafka Streams, Kafka Connect, and Schema Registry for real-time data processing.
- Experience with Azure Databricks (or willingness to learn and adopt it quickly).
- Hands-on experience with cloud platforms (Azure preferred; AWS or GCP is a plus).
- Proficiency in SQL, NoSQL databases, and data modeling for big data processing.
- Knowledge of containerization (Docker, Kubernetes) and CI/CD pipelines for data applications.
- Experience working with healthcare data (EHR, claims, HL7, FHIR, etc.) is a plus.
- Strong analytical skills, a problem-solving mindset, and the ability to lead complex data projects.
- Excellent communication and stakeholder-management skills.

Note: this is not a contract position; it is a permanent position with Emids. Interested candidates can share an updated profile with the following details to Ravi.chekka@emids.com: Name, CCTC, ECTC, notice period, offers in hand, email ID.
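A hedged sketch of a Kafka-to-Databricks streaming read, matching the Kafka + PySpark stack this role names. The broker address, topic, and paths are placeholders, and the cluster needs the Spark-Kafka connector package installed.

```python
# Sketch: stream Kafka messages into a bronze Delta table.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-sketch").getOrCreate()

events = (spark.readStream
    .format("kafka")  # requires the spark-sql-kafka connector on the cluster
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "patient-events")
    .load()
    .selectExpr("CAST(value AS STRING) AS json"))

query = (events.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/chk/patient-events")  # exactly-once bookkeeping
    .start("/mnt/bronze/patient-events"))
```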

Posted 1 month ago

Apply

7.0 - 10.0 years

5 - 10 Lacs

Hyderabad, Bengaluru

Work from Office

Role & responsibilities:
- Design, build, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Delta Lake.
- Lead the migration of legacy Hive-metastore-based Databricks workspaces to Unity Catalog (see the sketch below).
- Implement data governance and security models using Unity Catalog (external locations, catalogs, schemas, access controls, and tags).
- Develop and manage ETL/ELT pipelines for structured and semi-structured data across Azure Data Lake Storage (ADLS).
- Collaborate with data architects, analysts, and stakeholders to deliver clean, accessible, and governed datasets.
- Optimize Databricks jobs for performance and cost-efficiency, leveraging cluster configurations and job orchestration.
- Apply metadata management, tagging, and data lineage practices for cataloged assets.
- Monitor and troubleshoot data pipelines, ensuring data quality, consistency, and observability.
- Enforce DevOps best practices using CI/CD pipelines with Git, Databricks Repos, and Infrastructure as Code (IaC).

Required Skills:
- 7+ years of experience in data engineering on the Azure ecosystem.
- Expert-level skills in Azure Databricks, Delta Lake, and PySpark.
- Strong understanding of Unity Catalog architecture, the permissions model, and table/volume management.
- Experience migrating data platforms to Unity Catalog (including table remapping, external locations, and access-control setup).
- Proficiency in Azure Data Factory, Azure Data Lake Storage (Gen2), and Azure SQL/Synapse.
- Hands-on with Databricks REST APIs, orchestration workflows, and cluster management.
- Familiarity with data governance, RBAC, ABAC, and sensitivity labeling.
- Solid understanding of version control (Git), DevOps tools, and CI/CD pipelines.
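A hedged sketch of one step in a Hive-metastore-to-Unity-Catalog migration: cloning a legacy Delta table into a governed catalog and granting access through Unity Catalog. Catalog, schema, table, and group names are hypothetical, and this assumes a Databricks notebook where `spark` is predefined.

```python
# Sketch: copy one table out of the legacy workspace-local metastore.
spark.sql("CREATE CATALOG IF NOT EXISTS main_gov")
spark.sql("CREATE SCHEMA IF NOT EXISTS main_gov.sales")

# DEEP CLONE copies data and metadata for Delta tables.
spark.sql("""
  CREATE TABLE IF NOT EXISTS main_gov.sales.orders
  DEEP CLONE hive_metastore.sales.orders
""")

# Grant governed access through Unity Catalog instead of legacy ACLs.
spark.sql("GRANT SELECT ON TABLE main_gov.sales.orders TO `analysts`")
```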

Posted 1 month ago

Apply

5.0 - 10.0 years

22 - 27 Lacs

Hyderabad, Bengaluru, Delhi / NCR

Hybrid

Required Skills (must-have skill sets):
- 5 years of experience in ML Ops, DevOps, or Data Engineering roles.
- Hands-on experience with Azure services, including Azure Machine Learning, Azure DevOps (CI/CD), Azure Kubernetes Service (AKS), Azure Data Lake / Blob Storage, and Azure Functions / Logic Apps.
- Proficiency in Python and experience with ML frameworks (e.g., TensorFlow, PyTorch, scikit-learn).
- Familiarity with containerization (Docker) and orchestration (Kubernetes).
- Experience with ML model monitoring and logging tools.
- Knowledge of ML model versioning and experiment tracking (e.g., MLflow, DVC); see the sketch below.
- Strong understanding of software development best practices and agile methodologies.
- Familiarity with Terraform or Bicep for Azure infrastructure provisioning.
- Exposure to Responsible AI and governance frameworks on Azure.
- Experience working in regulated industries (healthcare, pharma, and life sciences) is a plus.
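A minimal MLflow experiment-tracking sketch, since the role names MLflow for model versioning and experiment tracking. The parameters and model are toy values, not part of the actual role.

```python
# Sketch: log parameters, a metric, and a model artifact with MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier

X, y = load_iris(return_X_y=True)

with mlflow.start_run(run_name="rf-baseline"):
    params = {"n_estimators": 100, "max_depth": 3}
    model = RandomForestClassifier(**params).fit(X, y)
    mlflow.log_params(params)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(model, "model")  # versioned model artifact
```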

Posted 1 month ago

Apply

10.0 - 15.0 years

45 - 50 Lacs

Hyderabad

Hybrid

Work shift timings: 11 AM to 8 PM.

Notes: Overall 10-14 years of experience plus full-time undergraduate education. Must-haves:
- Strong experience in data science for at least 6+ years.
- Hands-on experience in machine learning and deep learning.
- Should have worked with LLM, RAG, SFT, and CPT models.
- Experience with cloud-based tools for machine learning operations (deployment): Azure Databricks, AWS SageMaker.
- Experience with frameworks like Flask and FastAPI (see the sketch below).
- Team management is compulsory.

Job Title: Manager - Government and Public Services Enabling Areas (GPS EA). The Team: GPS GSi. The Role: Senior Data Scientist.

The Team: Do you have a strong background in machine learning and deep learning? Are you interested in utilizing your data science skills and collaborating with a small team in a fast-paced environment to achieve strategic mission goals? If so, Deloitte has an exciting opportunity for you! As a member of our GPS GSi group, you will play a crucial role in the development and maintenance of our data science and business intelligence solutions. This role will specialize in assisting with machine learning, deep learning, and generative AI initiatives that will be utilized by Enabling Area professionals to enhance and expedite decision-making. You will provide expertise within and across business teams, demonstrate the ability to work independently and as part of a team, and apply problem-solving skills to resolve complex issues.

Work you will do.

Technology:
- Deliver exceptional client service; maximize results and drive high performance from people while fostering collaboration across businesses and geographies.
- Interface with business customers and leadership to gather requirements and deliver complete data engineering, data warehousing, and BI solutions.
- Design, train, and deploy machine learning and deep learning models to AWS, Databricks, and Dataiku platforms.
- Develop, design, and/or advise on Large Language Model (LLM) solutions for enterprise-wide documentation (e.g., Retrieval-Augmented Generation (RAG), Continued Pre-training (CPT), Supervised Fine-tuning (SFT), etc.).
- Utilize Machine Learning Operations (MLOps) pipelines, including knowledge of containerization (Docker) and CI/CD for training and deploying models.
- Maintain structured documentation of project development stages, including the use of GitHub and/or Jira for version control and project management.
- Demonstrate effective communication skills, with the ability to provide expertise and break down complex analytical solutions for clients.
- Remain current with the latest industry trends and developments in data science and related fields, with the ability to learn new skills and knowledge to advance the skillset of our Data Science team.
- Apply thorough attention to detail, and carefully review data science solutions for accuracy and quality.

Leadership:
- Develop high-performing teams by providing challenging and meaningful opportunities, and acknowledge their contributions to the organization's success.
- Establish the team's strategy and roadmap, prioritizing initiatives based on their broader business impact.
- Demonstrate leadership in guiding both US and USI teams to deliver advanced technical solutions across the GPS practice.
- Serve as a role model for junior practitioners, inspiring action and fostering positive behaviors.
- Pursue new and challenging initiatives that have a positive impact on our Practice and our personnel.
- Establish a reputation as a Deloitte expert and be acknowledged as a role model and senior member by client teams.
- Support and participate in the recognition and reward of junior team members.

People Development:
- Actively seek, provide, and respond to constructive feedback.
- Offer development guidance to the GSi team, enhancing their people, leadership, and client-management skills.
- Play a pivotal role in recruitment and the onboarding of new hires.
- Engage in formal performance assessment activities for assigned staff and collaborate with Practice leadership to address and resolve performance issues.
- Serve as an effective coach by helping counselees identify their strengths and opportunities to capitalize on them.
- Foster a "One Team" mindset among US and USI team members.

Qualifications (required/preferred):
- Bachelor's degree, preferably in Management Information Systems, Computer Science, Software Engineering, or a related IT discipline.
- Minimum of 10+ years of relevant experience with data science technologies and analytics advisory or consulting firms.
- Strong knowledge of LLMs and RAG.
- Familiarity with AWS, Databricks, and/or Dataiku platforms.
- Working knowledge of MLOps, including familiarity with containerization (e.g., Docker).
- Excellent troubleshooting skills and the ability to work independently.
- Strong organizational skills, including clear documentation of projects and the ability to write clean code.
- Familiarity with agile project methodology and/or the project development lifecycle.
- Experience with GitHub for version control.
- Excellent communication and presentation skills, with the ability to explain complex data science concepts to non-technical audiences.
- Ability to complete work in an acceptable timeframe and manage a variety of detailed tasks and responsibilities simultaneously and accurately, to meet deadlines, goals, and objectives and satisfy internal and external customer needs related to the job.
- Extensive experience with MLOps and associated serving frameworks (e.g., Flask, FastAPI) and orchestration pipelines (e.g., SageMaker Pipelines, Step Functions, Metaflow).
- Extensive experience working with open-source LLMs (e.g., serving via TGI/vLLM, performing CPT and/or SFT).
- Experience using various AWS services (e.g., Textract, Transcribe, Lambda).
- Proficiency in basic front-end web development (e.g., Streamlit).
- Knowledge of Object-Oriented Programming (OOP) concepts.
- At least 3-4 years of people-management experience is required.

Work location: Hyderabad. Timings: 2 PM to 11 PM.

How you'll grow: At Deloitte, we've invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. As part of our efforts, we provide our professionals with a variety of learning and networking opportunities, including exposure to leaders, sponsors, coaches, and challenging assignments, to help accelerate their careers along the way. No two people learn in exactly the same way, so we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning center in the Hyderabad offices, is an extension of Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people's growth and development. Explore DU: The Leadership Center in India.

Benefits: At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Deloitte's culture: Our positive and supportive culture encourages our people to do their best work every day. We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte.

Corporate citizenship: Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with our clients, our people, and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte's impact on the world.
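A minimal sketch of an LLM-serving endpoint with FastAPI, since the posting pairs RAG with serving frameworks like Flask and FastAPI. The in-memory document store, retrieval logic, and endpoint are illustrative stand-ins, not Deloitte's actual stack; a real system would use embeddings, a vector database, and an LLM call.

```python
# Sketch: naive retrieval-augmented prompt construction behind a FastAPI route.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Query(BaseModel):
    question: str

# Toy in-memory "document store" standing in for a real vector index.
DOCS = [
    "Databricks hosts the team's feature store.",
    "Models are deployed through SageMaker pipelines.",
]

def retrieve(question: str, k: int = 1) -> list[str]:
    # Naive keyword-overlap scoring; real retrieval would use embeddings.
    q_words = set(question.lower().split())
    scored = sorted(DOCS, key=lambda d: -len(set(d.lower().split()) & q_words))
    return scored[:k]

@app.post("/ask")
def ask(q: Query) -> dict:
    context = " ".join(retrieve(q.question))
    # A real implementation would send this prompt to an LLM and return its answer.
    prompt = f"Context: {context}\nQuestion: {q.question}"
    return {"prompt_sent_to_llm": prompt}
```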

Posted 1 month ago

Apply

10.0 - 16.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Job Title: Lead Data Engineer. Location: Gurgaon/Pune/Hyderabad/Bengaluru/Chennai. Work Mode: Hybrid (2-3 days in office per week).

Job Description:
- 6+ years of hands-on experience in data-processing-focused projects.
- Proficiency with Java, Python, or Scala, and SQL.
- Knowledge of Apache Spark.
- Experience with one of the major cloud providers: AWS, Azure, or GCP.
- Hands-on experience with selected data processing technologies, such as Hadoop, MongoDB, Cassandra, Kafka, and Elasticsearch, as well as Python libraries (Pandas, NumPy, etc.) and cloud providers' data processing tools (EMR, Glue, Data Factory, BigTable, etc.).
- Relevant experience with version control systems and code review processes.
- Knowledge of Agile methodologies.
- Basic knowledge of Linux and Bash scripting.

Nice to have:
- Hands-on experience with Databricks and Delta Lake.
- Ability to build Apache Airflow pipelines (see the sketch below).
- Experience with the Snowflake platform.
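A minimal Apache Airflow DAG sketch for the "nice to have" pipeline item, with one PythonOperator task. The DAG id and task logic are illustrative, and the `schedule` argument assumes Airflow 2.4 or later.

```python
# Sketch: a daily one-task Airflow DAG.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder task body; a real task would pull from a source system.
    print("pulling data from source")

with DAG(
    dag_id="daily_ingest_sketch",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow >= 2.4; older versions use schedule_interval
    catchup=False,
) as dag:
    PythonOperator(task_id="extract", python_callable=extract)
```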

Posted 1 month ago

Apply