
1265 Azure Databricks Jobs - Page 46

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Azure Data Factory:
- Develop Azure Data Factory objects: ADF pipelines, configuration, parameters, variables, and the integration runtime
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, and Lookup) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience with big data components such as Kafka, Spark SQL, DataFrames, and Hive implemented using Azure Databricks is preferred
- Azure Databricks integration with other services
- Read and write data in Azure Databricks (see the sketch below)
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics
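The "read and write data in Azure Databricks" item above can be made concrete with a minimal PySpark sketch: read a CSV from ADLS Gen2, do light cleanup, and persist it as a Delta table. The storage account, container, paths, and the order_id column are invented placeholders, not values from the posting.

```python
from pyspark.sql import SparkSession

# In a Databricks notebook `spark` already exists; getOrCreate() is a no-op there.
spark = SparkSession.builder.getOrCreate()

# Hypothetical ADLS Gen2 locations (replace with a real storage account/containers).
source_path = "abfss://raw@examplestorage.dfs.core.windows.net/sales/orders.csv"
target_path = "abfss://curated@examplestorage.dfs.core.windows.net/sales/orders"

# Read: CSV with a header row, letting Spark infer column types.
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv(source_path))

# Light cleanup before persisting (assumes an order_id column exists).
df_clean = df.dropDuplicates().na.drop(subset=["order_id"])

# Write: overwrite the curated Delta table.
df_clean.write.format("delta").mode("overwrite").save(target_path)
```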

Posted 2 months ago

Apply

4.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

About the Role:
- Minimum 4 years of experience in a relevant field
- Hands-on experience with Databricks, SQL, Azure Data Factory, and Azure DevOps
- Strong expertise in Microsoft Azure cloud platform services (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage, Azure Synapse Analytics)
- Proficient in CI/CD pipelines in Azure DevOps for automated deployments
- Skilled in performance optimization techniques such as temp tables, CTEs, indexing, merge statements, and joins
- Familiarity with advanced SQL and programming skills (e.g., Python, PySpark)
- Familiarity with data warehousing and data modelling concepts
- Strong in data management and deployment processes using Azure Data Factory, Databricks, and Azure DevOps
- Knowledge of integrating Azure services with DevOps
- Experience designing and implementing scalable data architectures
- Proficient in ETL processes and tools
- Strong communication and collaboration skills
- Certifications in relevant Azure technologies are a plus

Location: Bangalore/Hyderabad

Posted 2 months ago

Apply

5.0 - 6.0 years

7 - 12 Lacs

Hyderabad

Work from Office

About the Role:
We are seeking a highly skilled and experienced Senior Azure Databricks Engineer to join our dynamic data engineering team. In this role you will play a critical part in designing, developing, and implementing data solutions on the Azure Databricks platform. You will be responsible for building and maintaining high-performance data pipelines, transforming raw data into valuable insights, and ensuring data quality and reliability.

Key Responsibilities:
- Design, develop, and implement data pipelines and ETL/ELT processes using Azure Databricks
- Develop and optimize Spark applications using Scala or Python for data ingestion, transformation, and analysis
- Leverage Delta Lake for data versioning, ACID transactions, and data sharing (see the sketch after this listing)
- Utilize Delta Live Tables for building robust and reliable data pipelines
- Design and implement data models for data warehouses and data lakes
- Optimize data structures and schemas for performance and query efficiency
- Ensure data quality and integrity throughout the data lifecycle
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Factory, Azure Synapse Analytics, Azure Blob Storage)
- Leverage cloud-based data services to enhance data processing and analysis capabilities

Performance Optimization & Troubleshooting:
- Monitor and analyze data pipeline performance
- Identify and troubleshoot performance bottlenecks
- Optimize data processing jobs for speed and efficiency
- Collaborate effectively with data engineers, data scientists, data analysts, and other stakeholders
- Communicate technical information clearly and concisely
- Participate in code reviews and contribute to the improvement of development processes

Qualifications (Essential):
- 5+ years of experience in data engineering, with at least 2 years of hands-on experience with Azure Databricks
- Strong proficiency in Python and SQL
- Expertise in Apache Spark and its core concepts (RDDs, DataFrames, Datasets)
- In-depth knowledge of Delta Lake and its features (e.g., ACID transactions, time travel)
- Experience with data warehousing concepts and ETL/ELT processes
- Strong analytical and problem-solving skills
- Excellent communication and interpersonal skills
- Bachelor's degree in Computer Science, Computer Engineering, or a related field
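To ground the Delta Lake bullets above (ACID transactions, time travel), here is a hedged sketch using the delta-spark API: query an earlier table version, then upsert a batch of changes atomically with MERGE. The table paths and the customer_id key are invented for illustration.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Time travel: read the table as it was at version 0.
df_v0 = (spark.read.format("delta")
         .option("versionAsOf", 0)
         .load("/mnt/curated/customers"))

# ACID upsert: merge staged updates into the target in one atomic commit.
target = DeltaTable.forPath(spark, "/mnt/curated/customers")
updates = spark.read.format("delta").load("/mnt/staging/customer_updates")

(target.alias("t")
 .merge(updates.alias("u"), "t.customer_id = u.customer_id")
 .whenMatchedUpdateAll()      # update existing customers
 .whenNotMatchedInsertAll()   # insert new ones
 .execute())
```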

Posted 2 months ago

Apply

10.0 - 15.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Required Qualifications:
- 10+ years of experience in data engineering, with expertise in Azure Databricks, Azure Data Lakehouse, and Delta Lake
- Hands-on experience implementing Medallion Architecture, building and managing Bronze, Silver, and Gold data layers (a layering sketch follows below)
- Strong experience with Azure Data Lake Storage (ADLS) and integration with Delta Lake for efficient data storage and querying
- Customer Data Platform (CDP) experience, with the ability to design and integrate customer-centric data architectures that drive personalized analytics
- Experience in data governance, security, and compliance using Azure tools like Azure Purview, Data Catalog, and Key Vault
- Proven experience in ETL/ELT development and real-time data ingestion pipelines, working with large-scale datasets
- Strong knowledge of SQL, Python, and Spark for data processing, analysis, and pipeline development
- Familiarity with data modeling techniques, big data technologies, and data warehousing in a cloud environment
- Excellent communication and collaboration skills, with the ability to work cross-functionally and lead data-driven initiatives

Preferred Qualifications:
- Experience with machine learning and advanced analytics workflows using Azure Databricks
- Familiarity with API integration and real-time data streaming solutions like Azure Event Hubs or Azure Stream Analytics
- Knowledge of DevOps for data, including automated CI/CD pipelines for data deployments
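The Bronze/Silver/Gold layering named above is commonly expressed as successive Delta reads and writes. A rough sketch under assumed paths, schemas, and cleansing rules (none of which come from the posting):

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw events landed as-is from the source system.
bronze = spark.read.format("delta").load("/mnt/lake/bronze/events")

# Silver: deduplicated, typed, and filtered (illustrative rules).
silver = (bronze
          .dropDuplicates(["event_id"])
          .withColumn("event_ts", F.to_timestamp("event_ts"))
          .filter(F.col("customer_id").isNotNull()))
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/events")

# Gold: a business-level aggregate ready for analytics.
gold = silver.groupBy("customer_id").agg(F.count("*").alias("event_count"))
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/customer_activity")
```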

Posted 2 months ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Noida, Hyderabad

Work from Office

Azure Data Factory, Azure Databricks, SQL, PySpark, Python, Synapse

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Kolkata, Mumbai, New Delhi

Work from Office

Category: Technology

Shuru is a self-managed technology team specializing in accelerating visions through product, technology, and AI leadership. With a focus on bespoke execution, we deliver impactful solutions that are scalable and designed for success. At Shuru, we deliver mobile solutions that meet and exceed customer expectations. Our collaborative and fast-paced environment encourages creativity and innovation.

Our Data Platform team is hiring a Senior Data Engineer to build and maintain scalable pipelines, shape our data architecture on Azure Databricks, and mentor other engineers.

Responsibilities:
- Work closely with source system teams and reporting teams to gather, analyze, and translate data requirements into scalable data pipelines
- Design, develop, and maintain robust ETL/ELT pipelines using PySpark, SQL, and Delta Lake on Azure Databricks
- Ingest and integrate data from multiple sources including MariaDB, Azure Event Hubs, APIs, and flat file systems
- Build and optimize data models across bronze, silver, and gold layers using Delta tables
- Ensure data reliability, accuracy, and freshness through monitoring, testing, and validation
- Apply and enforce data governance, security, and compliance policies throughout the pipeline lifecycle
- Collaborate with analysts, data scientists, and engineers to enable downstream data usage
- Promote engineering excellence through code reviews, documentation, CI/CD, and automation
- Mentor junior engineers and provide guidance on design and best practices
- Explore emerging technologies such as Structured Streaming and support their adoption where beneficial (see the streaming sketch after this listing)

Requirements:
- 5+ years of experience in data engineering or similar roles, with at least 1 year in a senior or technical lead capacity
- Proven experience with Azure Databricks, Delta Lake, and Spark-based data pipelines
- Experience integrating data from multiple sources such as MariaDB, Event Hubs, APIs, or file systems
- Exposure to version control (e.g., Git), CI/CD pipelines, and Agile methodologies
- Familiarity with Structured Streaming and real-time data processing is a plus
- Experience in fintech, broking, or financial services domains is an advantage

Benefits:
- Competitive salary and benefits package
- Opportunity to work with a team of experienced product and tech leaders
- A flexible work environment with remote working options
- Continuous learning and development opportunities
- Chance to make a significant impact on diverse and innovative projects
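The Event Hubs ingestion and Structured Streaming items above could look roughly like this: Spark's Kafka source reading the Kafka-compatible endpoint that Event Hubs exposes, landing raw records in a bronze Delta table. The namespace, topic, and paths are placeholders, and in practice the connection string would come from Key Vault rather than being inlined.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Placeholder secret; in practice fetch via dbutils.secrets / Key Vault.
connection = "Endpoint=sb://example-ns.servicebus.windows.net/;..."

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "example-ns.servicebus.windows.net:9093")
       .option("subscribe", "orders")
       .option("kafka.security.protocol", "SASL_SSL")
       .option("kafka.sasl.mechanism", "PLAIN")
       .option("kafka.sasl.jaas.config",
               'org.apache.kafka.common.security.plain.PlainLoginModule required '
               f'username="$ConnectionString" password="{connection}";')
       .load())

# Land the raw payload in a bronze Delta table with checkpointed progress.
(raw.selectExpr("CAST(value AS STRING) AS body", "timestamp")
 .writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/lake/_checkpoints/orders_bronze")
 .outputMode("append")
 .start("/mnt/lake/bronze/orders"))
```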

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Nashik

Work from Office

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources, and opportunities to unleash their full potential. The power we create together when we combine your strengths with ours is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming operations through tech and analytics. Do you dream big? We need you.

Job Title: Azure Data Engineer
Location: Bengaluru
Reporting to: Senior Manager, Data Engineering

Purpose of the role:
We are seeking a skilled and motivated Azure Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience with Microsoft Azure cloud services and data engineering, and a strong background in designing and implementing scalable data solutions.

Key tasks and accountabilities:
- Design, develop, and maintain scalable data pipelines and workflows using Azure Data Factory, Azure Databricks, and other relevant tools
- Implement and optimize data storage solutions in Azure, including Azure Database for PostgreSQL and Azure Blob Storage
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and implement solutions that align with business objectives
- Ensure data quality, integrity, and security in all data-related processes and implementations
- Work with both structured and unstructured data and implement data transformation and cleansing processes
- Optimize and fine-tune the performance of data solutions to meet both real-time and batch processing requirements
- Troubleshoot and resolve issues related to data pipelines, ensuring minimal downtime and optimal performance
- Stay current with industry trends and best practices, and proactively recommend improvements to existing data infrastructure

Qualifications, experience, and skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proven experience as a Data Engineer with a focus on Microsoft Azure technologies
- Hands-on experience with Azure services such as Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage, and Azure Synapse Analytics
- Strong proficiency in SQL and experience with data modeling and ETL processes
- Familiarity with data integration and orchestration tools
- Knowledge of data warehousing concepts and best practices
- Experience with version control systems, preferably Git
- Excellent problem-solving and communication skills

Level of educational attainment required: B.Tech
Previous work experience: 7+ years

Technical expertise:
- Proven experience in Azure Databricks and ADLS architecture and implementation
- Strong knowledge of medallion architecture and data lake design
- Expertise in SQL, Python, and Spark for building and optimizing data pipelines
- Familiarity with data integration tools and techniques, including Azure-native solutions

And above all of this, an undying love for beer! We dream big to create a future with more cheers.

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Bengaluru

Work from Office

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming operations through tech and analytics. Do you dream big? We need you.

Job Title: Senior Data Engineer
Location: Bengaluru
Reporting to: Senior Manager, Data Engineering

Purpose of the role:
We are seeking a skilled and motivated Azure Data Engineer to join our dynamic team. The ideal candidate will have hands-on experience with Microsoft Azure cloud services and data engineering, and a strong background in designing and implementing scalable data solutions.

Key tasks and accountabilities:
- Design, develop, and maintain scalable data pipelines and workflows using Azure Data Factory, Azure Databricks, and other relevant tools
- Implement and optimize data storage solutions in Azure, including Azure Database for PostgreSQL and Azure Blob Storage
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and implement solutions that align with business objectives
- Ensure data quality, integrity, and security in all data-related processes and implementations
- Work with both structured and unstructured data and implement data transformation and cleansing processes
- Optimize and fine-tune the performance of data solutions to meet both real-time and batch processing requirements
- Troubleshoot and resolve issues related to data pipelines, ensuring minimal downtime and optimal performance
- Stay current with industry trends and best practices, and proactively recommend improvements to existing data infrastructure

Qualifications, experience, and skills:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- Proven experience of at least 7 years as a Data Engineer with a focus on Microsoft Azure technologies
- Hands-on experience with Azure services such as Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage, and Azure Synapse Analytics
- Strong proficiency in SQL and experience with data modeling and ETL processes
- Familiarity with data integration and orchestration tools
- Knowledge of data warehousing concepts and best practices
- Experience with version control systems, preferably Git
- Excellent problem-solving and communication skills

Technical expertise:
- Proven experience in Azure Databricks and ADLS architecture and implementation
- Strong knowledge of medallion architecture and data lake design
- Expertise in SQL, Python, and Spark for building and optimizing data pipelines
- Familiarity with data integration tools and techniques, including Azure-native solutions

And above all of this, an undying love for beer! We dream big to create a future with more cheers.

Posted 2 months ago

Apply

8.0 - 13.0 years

25 - 30 Lacs

Nashik

Work from Office

Dreaming big is in our DNA. It's who we are as a company. It's our culture. It's our heritage. And more than ever, it's our future. A future where we're always looking forward. Always serving up new ways to meet life's moments. A future where we keep dreaming bigger. We look for people with passion, talent, and curiosity, and provide them with the teammates, resources, and opportunities to unleash their full potential. The power we create together when we combine your strengths with ours is unstoppable. Are you ready to join a team that dreams as big as you do?

AB InBev GCC was incorporated in 2014 as a strategic partner for Anheuser-Busch InBev. The center leverages the power of data and analytics to drive growth for critical business functions such as operations, finance, people, and technology. The teams are transforming operations through tech and analytics. Do you dream big? We need you.

Job Title: Azure Data Engineer
Location: Bengaluru
Reporting to: Senior Manager, Data Engineering

Purpose of the role:
We are seeking an experienced Data Engineer with over 4 years of expertise in data engineering and a focus on leveraging GenAI solutions. The ideal candidate will have a strong background in Azure services, relational databases, and programming languages, including Python and PySpark. You will play a pivotal role in designing, building, and optimizing scalable data pipelines while integrating AI-driven solutions to enhance our data capabilities.

Key tasks and accountabilities:
- Data pipeline development: design and implement efficient ETL/ELT pipelines using Azure Data Factory (ADF) and Azure Databricks (ADB); ensure high performance and scalability of data pipelines
- Relational database management: work with relational databases to structure and query data efficiently; design, optimize, and maintain database schemas
- Programming and scripting: write, debug, and optimize Python, PySpark, and SQL code to process large datasets; develop reusable code components and libraries for data processing
- Data quality and governance: implement data validation, cleansing, and monitoring mechanisms; ensure compliance with data governance policies and best practices
- Performance optimization: identify and resolve bottlenecks in data processing and storage; optimize resource utilization on Azure services
- Collaboration and communication: work closely with cross-functional teams, including AI, analytics, and product teams; document processes, solutions, and best practices for future use

Qualifications, experience, and skills:
- 4+ years of experience in data engineering
- Proficiency in Azure Data Factory (ADF) and Azure Databricks (ADB)
- Expertise in relational databases and advanced SQL
- Strong programming skills in Python and PySpark
- Experience with GenAI solutions is a plus
- Familiarity with data governance and best practices

Level of educational attainment required: Bachelor's degree in Computer Science, Information Technology, or a related field

Technical expertise:
- Knowledge of machine learning pipelines and GenAI workflows
- Experience with Azure Synapse or other cloud data platforms
- Familiarity with CI/CD pipelines for data workflows

And above all of this, an undying love for beer! We dream big to create a future with more cheers.

Posted 2 months ago

Apply

6.0 - 11.0 years

22 - 35 Lacs

Chennai

Hybrid

Job Location: Chennai
Notice Period: Immediate to 30 days maximum

Job Description:
- 5-12 years of experience in Big Data and data-related technologies
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience in Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, and Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Experience with messaging systems such as Kafka or RabbitMQ
- Good understanding of Big Data querying tools such as Hive and Impala
- Experience integrating data from multiple sources such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs (see the tuning sketch below)
- Experience with native cloud data services on AWS or Azure Databricks
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology
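Two common moves behind the "performance tuning of Spark jobs" item, sketched with assumed table names: broadcasting a small dimension table so the large fact table is never shuffled, and repartitioning before a partitioned write so output files are evenly sized.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

facts = spark.read.format("delta").load("/mnt/lake/silver/transactions")
stores = spark.read.format("delta").load("/mnt/lake/silver/stores")  # small table

# Broadcast join: ship the small table to every executor instead of shuffling facts.
joined = facts.join(F.broadcast(stores), "store_id")

# Repartition on the write key to avoid many tiny (or few huge) output files.
(joined.repartition(64, "transaction_date")
 .write.format("delta")
 .mode("overwrite")
 .partitionBy("transaction_date")
 .save("/mnt/lake/gold/transactions_enriched"))
```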

Posted 2 months ago

Apply

5.0 - 10.0 years

5 - 15 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role: Azure Data Engineer
Skills: Azure Data Factory, Azure Data Lake Storage, Azure Databricks, PySpark, SQL
Notice Period: Immediate to 30 days

Role & responsibilities:
- Knowledge of how to ingest, cleanse, transform, and load data from varied data sources using the above Azure services
- Strong knowledge of medallion architecture
- Consume data from sources in different file formats such as XML, CSV, Excel, Parquet, and JSON (see the ingestion sketch below)
- Create linked services for different types of sources
- Create automated pipeline flows that can consume data delivered, for example, via email or SharePoint
- Strong problem-solving skills, such as backtracking of datasets and data analysis
- Strong knowledge of advanced SQL techniques for carrying out data analysis per client requirements

Preferred skills:
- Understanding of different data architecture patterns and parallel data processing
- Proficiency in using the following services to create data processing solutions: Azure Data Factory, Azure Data Lake Storage, Azure Databricks
- Strong knowledge of PySpark and SQL
- Good programming skills in Python

Desired skills:
- Ability to query data from a serverless SQL pool in Azure Synapse Analytics
- Knowledge of Azure DevOps
- Knowledge of configuring datasets with VNet and subnet networks
- Knowledge of Microsoft Entra ID, including creating app registrations for single-tenant and multi-tenant scenarios for security purposes
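Three of the file formats listed (CSV, JSON, Parquet) have native Spark readers; a minimal sketch with placeholder paths follows. XML and Excel are deliberately left as comments, since they need extra packages beyond stock Spark.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

csv_df = spark.read.option("header", "true").csv("/mnt/landing/customers.csv")
json_df = spark.read.option("multiLine", "true").json("/mnt/landing/events.json")
parquet_df = spark.read.parquet("/mnt/landing/orders.parquet")

# XML is typically read with the spark-xml package (format "com.databricks.spark.xml"),
# and Excel is often loaded via pandas/openpyxl and converted with spark.createDataFrame.
```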

Posted 2 months ago

Apply

10.0 - 15.0 years

30 - 45 Lacs

Pune

Work from Office

Azure Cloud Data Solutions Architect

Job Title: Azure Cloud Data Solutions Architect
Location: Pune, India
Experience: 10-15 years
Work Mode: Full-time, office-based
Company: Smartavya Analytica Private Limited

Company Overview:
Smartavya Analytica is a niche data and AI company based in Mumbai, established in 2017. We specialize in data-driven innovation, transforming enterprise data into strategic insights. With expertise spanning 25+ data modernization projects and experience handling large datasets of up to 24 PB in a single implementation, we have successfully delivered data and AI projects across multiple industries, including retail, finance, telecom, manufacturing, insurance, and capital markets. We are specialists in Cloud, Hadoop, Big Data, AI, and Analytics, with a strong focus on data modernization for on-premises, private, and public cloud platforms. Visit us at: https://smart-analytica.com

Job Summary:
We are seeking an experienced Azure Cloud Data Solutions Architect to lead end-to-end architecture and delivery of enterprise-scale cloud data platforms. The ideal candidate will have deep expertise in Azure Data Services, data engineering, and data governance, with the ability to architect and guide cloud modernization initiatives.

Key Responsibilities:
- Architect and design data lakehouses, data warehouses, and analytics platforms using Azure Data Services
- Lead implementations using Azure Data Factory (ADF), Azure Synapse Analytics, and Azure Fabric (OneLake ecosystem)
- Define and implement data governance frameworks, including cataloguing, lineage, security, and quality controls
- Collaborate with business stakeholders, data engineers, and developers to translate business requirements into scalable Azure architectures
- Ensure platform design meets performance, scalability, security, and regulatory compliance needs
- Guide migration of on-premises data platforms to Azure cloud environments
- Create architectural artifacts: solution blueprints, reference architectures, governance models, and best-practice guidelines
- Collaborate with sales/presales in customer meetings to understand the business requirements and scope of work and propose relevant solutions
- Drive MVPs/PoCs and capability demos for prospective customers and opportunities

Must-Have Skills:
- 10-15 years of experience in data architecture, data engineering, or analytics solutions
- Hands-on expertise in Azure cloud services: ADF, Synapse, Azure Fabric (OneLake), and Databricks (good to have)
- Strong understanding of data governance, metadata management, and compliance frameworks (e.g., GDPR, HIPAA)
- Deep knowledge of relational and non-relational databases (SQL, NoSQL) on Azure
- Experience with security practices (IAM, RBAC, encryption, data masking) in cloud environments
- Strong client-facing skills with the ability to present complex solutions clearly

Preferred Certifications:
- Microsoft Certified: Azure Solutions Architect Expert
- Microsoft Certified: Azure Data Engineer Associate

Posted 2 months ago

Apply

6.0 - 9.0 years

27 - 42 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: Chennai (CKC)
Mode of Interview: In person
Date: 7th June 2025 (Saturday)

Keywords/skillset:
- AWS SageMaker, Azure ML Studio, GCP Vertex AI
- PySpark, Azure Databricks
- MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline
- Kubernetes, AKS, Terraform, FastAPI

Responsibilities:
- Model deployment, model monitoring, and model retraining
- Deployment, inference, monitoring, and retraining pipelines
- Drift detection: data drift and model drift
- Experiment tracking (see the MLflow sketch below)
- MLOps architecture
- REST API publishing

Job responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our data science projects
- Work on a backlog of activities to raise MLOps maturity in the organization
- Proactively introduce a modern, agile, and automated approach to data science
- Conduct internal training and presentations about MLOps tools' benefits and usage

Required experience and qualifications:
- Wide experience with Kubernetes
- Experience in operationalization of data science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube)
- Good understanding of ML and AI concepts
- Hands-on experience in ML model development
- Proficiency in Python used both for ML and automation tasks
- Good knowledge of Bash and the Unix command-line toolkit
- Experience in CI/CD/CT pipeline implementation
- Experience with cloud platforms, preferably AWS, would be an advantage
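Experiment tracking, one of the responsibilities above, can be sketched with MLflow in a few lines: log parameters, a metric, and a model artifact for a single run. The experiment path and the scikit-learn model are illustrative assumptions, not details from the posting.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data so the sketch is self-contained.
X, y = make_classification(n_samples=1000, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("/Shared/churn-demo")  # hypothetical experiment path

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=42)
    model.fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("accuracy", acc)
    mlflow.sklearn.log_model(model, "model")  # candidate for the Model Registry
```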

Posted 2 months ago

Apply

6.0 - 11.0 years

19 - 27 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

We are looking for "Azure Data bricks Engineer" with Minimum 6 years experience Contact- Atchaya (95001 64554) Required Candidate profile Exp in Azure Data bricks and Python Must Have Data bricks Python Azure The Candidate must have 7-10 yrs of experience in data bricks, delta lake Hands-on exp on Azure Exp on Python scripting

Posted 2 months ago

Apply

3.0 - 5.0 years

10 - 15 Lacs

Bengaluru

Work from Office

Interested candidates can share your resume with aweaz.pasha@wisseninfotech.com

Data Engineer:
- 3 to 5 years of experience building data pipelines using Databricks
- Hands-on experience in PySpark, Spark SQL, and Spark Structured Streaming for processing diverse datasets
- Hands-on experience in Python and SQL
- Working knowledge of Databricks workflows and scheduling concepts
- Working knowledge of Git code repositories, Git branching strategy, and CI/CD
- Good understanding of Apache Spark and Delta Lake core concepts
- Working knowledge of public cloud platforms, Azure preferred
- Good debugging and problem-solving skills, with good interpersonal and communication skills

Posted 2 months ago

Apply

5.0 - 10.0 years

12 - 18 Lacs

Chennai

Work from Office

Looking for a Sr. Data Engineer for a top retail analytics client in Germany!
- Design and optimize data pipelines using Azure Data Factory and DBT
- Manage PostgreSQL, Azure Databricks, PySpark, Azure Storage, Logic Apps, and CI/CD
- Collaborate with DevOps

Required candidate profile:
- 5-8 years of data engineering experience with expertise in SQL/PostgreSQL
- Expertise in Azure Data Factory, Azure Databricks, and PySpark
- Experience in DevOps and any cloud services is an added advantage

Posted 2 months ago

Apply

7.0 - 12.0 years

27 - 42 Lacs

Chennai

Work from Office

Azure Databricks / Data Factory:
- Working with event-based/streaming technologies to ingest and process data
- Working with other members of the project team to support delivery of additional project components (API interfaces, search)
- Evaluating the performance and applicability of multiple tools against customer requirements
- Working within an Agile delivery/DevOps methodology to deliver proof-of-concept and production implementations in iterative sprints
- Strong knowledge of data management principles
- Experience building ETL/data warehouse transformation processes
- Direct experience building data pipelines using Databricks
- Experience using geospatial frameworks on Apache Spark and associated design and development patterns
- Experience working in a DevOps environment with tools such as Terraform

Posted 2 months ago

Apply

6.0 - 9.0 years

10 - 16 Lacs

Pune

Work from Office

Lead end-to-end delivery of Azure-based data projects, ensuring timely execution and quality outcomes using Azure Data Factory, Databricks, and PySpark. Manage and mentor a team, providing technical guidance and ensuring smooth production support.

Required candidate profile:
- Skills: Azure Data Factory, Databricks, data warehousing, production support, and PySpark
- Strong delivery experience plus team-leading experience (minimum 2 years)
- Must have led an Azure project for at least 2 years

Posted 2 months ago

Apply

5 - 10 years

18 - 30 Lacs

Noida

Hybrid

Job Title: AI Platform Lead Location: Noida | Start Date: ASAP | Type: Full-time We’re hiring an AI Platform Lead to manage and scale our AI platforms (Dataiku, Azure ML, Azure AI Services). The role requires 5+ years’ experience in AI/ML platform management, strong cloud and DevOps expertise, and leadership skills. Must-have: Experience with Azure, Dataiku, Agile/DevOps, and a Master’s or PhD in a relevant field.

Posted 2 months ago

Apply

8 - 12 years

18 - 25 Lacs

Pune

Work from Office

We are looking for an experienced Tech Lead with a deep understanding of the Microsoft data technology stack. The candidate should have 8-10 years of professional experience, proven leadership skills, and the ability to manage and mentor a team of 5 to 8 people.

Preferred candidate profile:
- Experience: 8-10 years in the data and analytics domain with expertise in the Microsoft data tech stack
- Leadership: experience managing teams of 8-10 members
- Technical skills: expertise in tools like Microsoft Fabric, Azure Synapse Analytics, Azure Data Factory, Power BI, SQL Server, Azure Databricks, etc.; strong understanding of data architecture, pipelines, and governance
- Understanding of another data platform such as Snowflake, Google BigQuery, or Amazon Redshift is a plus
- Tech stack: DBT and Databricks or Snowflake; Microsoft BI (Power BI, Synapse, and Fabric)
- Project management: proficiency in project management methodologies (Agile, Scrum, or Waterfall)
- Communication: excellent interpersonal, written, and verbal communication skills
- Education: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field

Interested candidates can forward your profile to karthik@busisol.net or WhatsApp 9791876677

Posted 2 months ago

Apply

5 - 8 years

5 - 9 Lacs

Bengaluru

Work from Office

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

About the Role:
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver (performance parameters and measures):
1. Process: number of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2. Team management: productivity, efficiency, absenteeism
3. Capability development: triages completed, technical test performance

Mandatory Skills: Azure Data Factory
Experience: 5-8 years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 months ago

Apply

5 - 7 years

8 - 14 Lacs

Hyderabad

Work from Office

Department: Platform Engineering

Summary:
We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes

Data modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles
- Create data models that are scalable, flexible, and adaptable to changing business needs
- Integrate data models with existing data infrastructure and applications

Knowledge graph implementation:
- Design and build knowledge graphs based on ontologies and data models
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems

Data quality and governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs
- Define and implement data governance processes and standards for ontology development and maintenance

Collaboration and communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions
- Communicate complex technical concepts clearly and effectively to diverse audiences

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field

Experience:
- 5+ years of experience in data engineering or a related role
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL (see the RDF/SPARQL sketch below)
- Proficiency in Python, SQL, and other programming languages used for data engineering
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus

Desired skills:
- Familiarity with machine learning and natural language processing techniques
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP)
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon
- Strong problem-solving and analytical skills
- Excellent communication and interpersonal skills
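The RDF/SPARQL portion of the stack can be exercised with a toy rdflib sketch: build a two-triple graph and query it. The namespace and terms here are invented stand-ins, not actual BFO or CCO classes.

```python
from rdflib import Graph, Literal, Namespace, RDF

EX = Namespace("http://example.org/ontology#")  # hypothetical ontology namespace
g = Graph()

# Assert: ex:acme is an Organization named "Acme Corp".
g.add((EX.acme, RDF.type, EX.Organization))
g.add((EX.acme, EX.hasName, Literal("Acme Corp")))

# SPARQL: find the names of all organizations in the graph.
results = g.query("""
    PREFIX ex: <http://example.org/ontology#>
    SELECT ?name WHERE {
        ?org a ex:Organization ;
             ex:hasName ?name .
    }
""")

for row in results:
    print(row.name)  # -> Acme Corp
```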

Posted 2 months ago

Apply

5 - 7 years

8 - 14 Lacs

Surat

Work from Office

Job Title: Sr. Data Engineer, Ontology & Knowledge Graph Specialist
Department: Platform Engineering

Summary:
We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes

Data modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles
- Create data models that are scalable, flexible, and adaptable to changing business needs
- Integrate data models with existing data infrastructure and applications

Knowledge graph implementation:
- Design and build knowledge graphs based on ontologies and data models
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems

Data quality and governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs
- Define and implement data governance processes and standards for ontology development and maintenance

Collaboration and communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions
- Communicate complex technical concepts clearly and effectively to diverse audiences

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field

Experience:
- 5+ years of experience in data engineering or a related role
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL
- Proficiency in Python, SQL, and other programming languages used for data engineering
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus

Desired skills:
- Familiarity with machine learning and natural language processing techniques
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP)
- Experience with Databricks technologies including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon
- Strong problem-solving and analytical skills
- Excellent communication and interpersonal skills

Posted 2 months ago

Apply

1 - 3 years

8 - 13 Lacs

Bengaluru

Work from Office

Azure Platform Engineer (Databricks)

Platform design:
- Define best practices for the end-to-end Databricks platform
- Work with Databricks and internal teams on evaluation of new Databricks features (private preview/public preview)
- Hold ongoing discussions with Databricks product teams on product features

Platform infrastructure:
- Create new Databricks workspaces (premium, standard, serverless) and clusters, including right-sizing
- Drop unused workspaces

Delta Sharing:
- Work with enterprise teams on connected data (data sharing)

User management:
- Create new security groups and add/delete users
- Assign Unity Catalog permissions to the respective groups/teams
- Manage the Quantum Collaboration platform, a sandbox for enterprise teams for ideation and innovation

Troubleshooting issues in Databricks:
- Investigate and diagnose performance issues or errors within Databricks
- Review and analyze Databricks logs and error messages
- Identify and address problems related to cluster configuration or job failures
- Optimize Databricks notebooks and jobs for performance
- Coordinate with Databricks support for unresolved or complex issues
- Document common troubleshooting steps and solutions
- Develop and test Databricks clusters to ensure stability and scalability

Governance:
- Create dashboards to monitor job performance, cluster utilization, and cost (a cluster-scan sketch follows below)
- Design dashboards to cater to various user roles (e.g., data scientists, admins)
- Use Databricks APIs or integration with monitoring tools for up-to-date metrics

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.
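One of the governance and right-sizing tasks above, sketched with the Databricks SDK for Python (databricks-sdk): scan workspace clusters and flag any without auto-termination, a common cost leak. The heuristic is an assumption for illustration; authentication comes from the environment or a Databricks config profile.

```python
from databricks.sdk import WorkspaceClient

# Reads host/token from environment variables or ~/.databrickscfg.
w = WorkspaceClient()

for cluster in w.clusters.list():
    # Flag clusters that never auto-terminate as right-sizing candidates.
    if not cluster.autotermination_minutes:
        print(f"Review: {cluster.cluster_name} has no auto-termination configured")
```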

Posted 2 months ago

Apply

5 - 8 years

9 - 14 Lacs

Pune

Work from Office

About the Role:
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do:
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequent trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after call/email requests
- Avoid legal challenges by monitoring compliance with service agreements

Handle technical escalations through effective diagnosis and troubleshooting of client queries:
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs

Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client:
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver (performance parameters and measures):
1. Process: number of cases resolved per day; compliance to process and quality standards; meeting process-level SLAs; Pulse score; customer feedback; NSAT/ESAT
2. Team management: productivity, efficiency, absenteeism
3. Capability development: triages completed, technical test performance

Mandatory Skills: Azure Synapse Analytics
Experience: 5-8 years

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention: of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA; as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 months ago

Apply