1265 Azure Databricks Jobs - Page 47

JobPe aggregates listings for easy access; applications are submitted directly on the employer's job portal.

8 - 10 years

14 - 18 Lacs

Hyderabad

Work from Office

About The Role

Role Purpose: Provide solutions and bridge the gap between technology and business know-how to deliver client solutions.

Responsibilities:

1. Bridge the gap between project and support teams through techno-functional expertise:
- For a new business implementation project, drive the end-to-end process from business requirement management to integration, configuration, and production deployment.
- Check the feasibility of new change requirements and provide the client with an optimal solution and clear timelines.
- Provide techno-functional solution support for all new business implementations while building the entire system from scratch.
- Support the solutioning team through architectural design, coding, testing, and implementation.
- Understand the functional and technical design as well as the architecture to be implemented on the ERP system.
- Customize, extend, modify, localize, or integrate the existing product through coding, testing, and production deployment.
- Implement business processes and requirements on the underlying ERP technology, translating them into ERP solutions.
- Write code to development standards and decide on the implementation methodology.
- Provide product support and maintenance for a specific ERP solution, resolving day-to-day queries and technical problems as they arise.
- Create and deploy automation tools and solutions to optimize processes and increase efficiency.
- Sync the technical and functional requirements of the project and provide solutioning/advice to the client or internal teams accordingly.
- Support the on-site manager with the necessary details regarding any change, and provide off-site support.

2. Skill upgradation and competency building:
- Clear Wipro exams and internal certifications from time to time to upgrade skills.
- Attend trainings and seminars to sharpen knowledge in the functional/technical domain.
- Write papers, articles, and case studies and publish them on the intranet.

Performance Parameters and Measures:
1. Contribution to customer projects: quality, SLA, ETA, number of tickets resolved, problems solved, number of change requests implemented, zero customer escalations, CSAT.
2. Automation: process optimization, reduction in process steps, reduction in the number of tickets raised.
3. Skill upgradation: number of trainings and certifications completed, number of papers and articles written in a quarter.

Mandatory Skills: Azure Data Factory. Experience: 8-10 years.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention - of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA: as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 months ago

2 - 7 years

20 - 25 Lacs

Hyderabad

Work from Office

Overview

At PepsiCo, we're redefining operational excellence with a data-driven mindset, and our Global IT team is at the forefront of this transformation. Our technology teams leverage advanced analytics to deliver predictive insights, enhance operational efficiency, and create unmatched consumer and customer experiences. Our culture is guided by our core values, which define our mission to excel in the marketplace and act with integrity in everything we do. We're creating value with every initiative while promoting a sustainable and socially impactful agenda.

Key Areas:
- Predictive AI-based operations
- ServiceNow Now Assist at the Service Desk and Digital Experience
- Descriptive analytics and insight generation on ServiceNow data
- Azure Cloud, data architecture, and Azure ML services for Global Service Desk, IT Service Management (ITSM), and Global Workplace
- Leadership and stakeholder management

Responsibilities:
- Predictive Ops and IT Experience Management: Leverage your extensive domain expertise in ServiceNow ITSM, service desk management, and end-user experience management to identify areas for improvement and opportunities for AI and predictive IT Ops applications, building capabilities and optimizing workplace efficiency.
- Azure Machine Learning: Lead the exploration and identification of predictive and forecasting use cases tailored to the ServiceNow platform, focusing on maximizing business impact and user adoption using the Azure stack. Utilize Azure Machine Learning to develop and deploy predictive models, ensuring integration with Azure services and seamless operationalization.
- Product Management: Prioritize and manage the Digital Brain (i.e., AI use cases) product backlog, ensuring the timely delivery of predictive models, features, and improvements. Oversee the release of high-quality predictive solutions that meet organizational goals.
- Leadership and Management: Partner with leadership to develop a strategic roadmap for applying AI and predictive capabilities across ITSM, service desk, and digital experience functions, leveraging ServiceNow data.
- Stakeholder Collaboration: Collaborate extensively with stakeholders to understand pain points and opportunities, translating business needs into precise user stories and actionable tasks. Ensure clear communication and alignment between business objectives and technical implementation.
- Lead other team members on digital projects, acting as the data science lead. Act as a subject matter expert across digital projects and as stream leader in innovation activities.
- Partner with product managers in taking data science requirements and assessing data science components in roadmaps.
- Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption. Lead ML engineers working on industrialization.
- Coordinate work activities with business teams and other IT services as required.
- Drive the use of the platform toolset, including "art of the possible" demonstrations to the business as needed.
- Communicate with business stakeholders during service design, training, and knowledge transfer.
- Support large-scale experimentation and build data-driven models.
- Set KPIs and metrics to evaluate an analytics solution for a given use case; refine requirements into modelling problems.
- Influence product teams through data-based recommendations; research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer; create reusable packages or libraries.

Qualifications

Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.

Experience: Extensive experience (12+ years) in the ITSM / service desk transformation / IT operations arena with exposure to predictive intelligence, data architecture, data modelling, and data engineering, with a focus on Azure cloud-based solutions. Experience in cloud-based development and deployment (Azure preferred) and in leading contractors or other team members.

Technical Skills: Knowledge of Azure cloud services, including Azure Data Factory (ADF), Azure Data Lake Storage (ADLS), Azure Databricks (ADB), and Azure Machine Learning. Strong knowledge of statistical/ML/AI techniques for supervised (regression, classification) and unsupervised problems, with a focus on time-series forecasting; deep learning experience is a plus.

Domain Knowledge: Deep understanding of ServiceNow modules, specifically ITSM (incident, problem, request, and change management), coupled with extensive knowledge of predictive analytics and data science principles. Understanding of and visibility into IT operations and support services, including global workplace services such as end-user compute, workplace management solutions, and unified communications and collaboration, is an added advantage.

Analytical Skills: Outstanding analytical and problem-solving skills to translate extensive business experience into highly effective predictive intelligence solutions.

Communication: Exceptional communication and interpersonal skills, honed through years of collaboration with diverse stakeholders and vendors.

Methodologies: Extensive experience with agile methodologies and a record of working in highly dynamic and agile development environments.

Project Management: Proven ability to manage multiple projects concurrently, prioritizing tasks effectively to drive impactful results.

Leadership: Demonstrated leadership and management capabilities, with a track record of guiding teams to achieve strategic goals and fostering a collaborative team environment.

Functional Knowledge (at least one of these): IT Service Management (ITSM); IT Service Desk; ServiceNow (ITSM module); Digital Workplace Services.

Technical Knowledge: Azure Machine Learning (AML) - mandatory; Azure Databricks (ADB) - mandatory; Azure Data Factory (ADF) - optional; Azure Data Lake Storage (ADLS) - optional.

Certifications (at least one of these): Azure Fundamentals (AI-900); Azure AI Engineer; Azure Data Scientist; ITIL Foundation or above.
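The role above centers on forecasting ServiceNow ticket volumes with Azure Machine Learning. As a hedged, stdlib-only illustration of the kind of one-step-ahead forecast involved, the sketch below applies simple exponential smoothing to made-up daily incident counts; a real solution would train and deploy models on Azure ML rather than use this toy function, and all names and numbers here are invented.

```python
def ses_forecast(series, alpha=0.5):
    """Simple exponential smoothing: return the one-step-ahead forecast.

    alpha controls how strongly recent observations outweigh older ones.
    """
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

# Hypothetical daily ticket volumes from a service desk queue.
incident_counts = [120, 130, 125, 140, 135]
print(ses_forecast(incident_counts))  # -> 133.75
```

In practice the smoothing factor would be fitted against historical data rather than fixed at 0.5.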

Posted 2 months ago

5 - 7 years

8 - 14 Lacs

Bengaluru

Work from Office

Job Title: Sr. Data Engineer - Ontology & Knowledge Graph Specialist
Department: Platform Engineering

Summary: We are seeking a highly skilled Data Engineer with expertise in ontology development and knowledge graph implementation. This role will be pivotal in shaping our data infrastructure and ensuring the accurate representation and integration of complex data sets. You will leverage industry best practices, including the Basic Formal Ontology (BFO) and Common Core Ontologies (CCO), to design, develop, and maintain ontologies, semantic and syntactic data models, and knowledge graphs on the Databricks Data Intelligence Platform that drive data-driven decision-making and innovation within the company.

Responsibilities:

Ontology Development:
- Design and implement ontologies based on BFO and CCO principles, ensuring alignment with business requirements and industry standards.
- Collaborate with domain experts to capture and formalize domain knowledge into ontological structures.
- Develop and maintain comprehensive ontologies to model various business entities, relationships, and processes.

Data Modeling:
- Design and implement semantic and syntactic data models that adhere to ontological principles.
- Create data models that are scalable, flexible, and adaptable to changing business needs.
- Integrate data models with existing data infrastructure and applications.

Knowledge Graph Implementation:
- Design and build knowledge graphs based on ontologies and data models.
- Develop algorithms and tools for knowledge graph population, enrichment, and maintenance.
- Utilize knowledge graphs to enable advanced analytics, search, and recommendation systems.

Data Quality and Governance:
- Ensure the quality, accuracy, and consistency of ontologies, data models, and knowledge graphs.
- Define and implement data governance processes and standards for ontology development and maintenance.

Collaboration and Communication:
- Work closely with data scientists, software engineers, and business stakeholders to understand their data requirements and provide tailored solutions.
- Communicate complex technical concepts clearly and effectively to diverse audiences.

Qualifications:

Education:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.

Experience:
- 5+ years of experience in data engineering or a related role.
- Proven experience in ontology development using BFO and CCO or similar ontological frameworks.
- Strong knowledge of semantic web technologies, including RDF, OWL, SPARQL, and SHACL.
- Proficiency in Python, SQL, and other programming languages used for data engineering.
- Experience with graph databases (e.g., TigerGraph, JanusGraph) and triple stores (e.g., GraphDB, Stardog) is a plus.

Desired Skills:
- Familiarity with machine learning and natural language processing techniques.
- Experience with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with Databricks technologies, including Spark, Delta Lake, Iceberg, Unity Catalog, UniForm, and Photon.
- Strong problem-solving and analytical skills.
- Excellent communication and interpersonal skills.
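The knowledge-graph work above revolves around RDF-style triples and SPARQL pattern matching. As a minimal, hedged sketch of that idea, the code below matches a SPARQL-like pattern (variables prefixed with "?") against an in-memory list of (subject, predicate, object) triples; a real system would use a triple store such as GraphDB or Stardog, and every name and triple here is hypothetical.

```python
def match(triples, pattern):
    """Return variable bindings for each triple matching the pattern.

    Pattern terms starting with '?' are variables; other terms must
    match the triple's term exactly (a basic graph pattern, as in SPARQL).
    """
    results = []
    for triple in triples:
        binding = {}
        for term, value in zip(pattern, triple):
            if term.startswith("?"):
                binding[term] = value  # bind variable to this term
            elif term != value:
                break  # constant mismatch: reject this triple
        else:
            results.append(binding)
    return results

# Invented toy graph.
triples = [
    ("Databricks", "isA", "Platform"),
    ("DeltaLake", "isA", "StorageLayer"),
    ("DeltaLake", "runsOn", "Databricks"),
]
print(match(triples, ("?s", "isA", "?o")))
```

This is the core of basic graph pattern evaluation; production engines add joins across multiple patterns, indexes, and OWL-based inference on top.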

Posted 2 months ago

6 - 11 years

4 - 9 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office


Posted 2 months ago

11 - 14 years

35 - 50 Lacs

Chennai

Work from Office

Role: MLOps Engineer
Location: PAN India

Keywords / Skillset: AWS SageMaker, Azure ML Studio, GCP Vertex AI, PySpark, Azure Databricks, MLflow, Kubeflow, Airflow, GitHub Actions, AWS CodePipeline, Kubernetes, AKS, Terraform, FastAPI

Responsibilities: model deployment, monitoring, and retraining; deployment, inference, monitoring, and retraining pipelines; drift detection (data drift and model drift); experiment tracking; MLOps architecture; REST API publishing.

Job Responsibilities:
- Research and implement MLOps tools, frameworks, and platforms for our data science projects.
- Work on a backlog of activities to raise MLOps maturity in the organization.
- Proactively introduce a modern, agile, and automated approach to data science.
- Conduct internal training and presentations about MLOps tools' benefits and usage.

Required experience and qualifications:
- Wide experience with Kubernetes.
- Experience in operationalization of data science projects (MLOps) using at least one of the popular frameworks or platforms (e.g., Kubeflow, AWS SageMaker, Google AI Platform, Azure Machine Learning, DataRobot, DKube).
- Good understanding of ML and AI concepts, and hands-on experience in ML model development.
- Proficiency in Python for both ML and automation tasks.
- Good knowledge of Bash and the Unix command-line toolkit.
- Experience implementing CI/CD/CT pipelines.
- Experience with cloud platforms (preferably AWS) would be an advantage.
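The drift detection the role above calls for is often implemented with the Population Stability Index (PSI), comparing the feature distribution seen at serving time against the training-time baseline. The sketch below is a hedged, stdlib-only illustration over pre-binned fractions; in a real monitoring pipeline this would run on production data inside a scheduled job, and the bin fractions and threshold here are made up (PSI > 0.2 is a common rule of thumb, not a standard).

```python
import math

def psi(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    Both inputs are lists of bin fractions summing to ~1; eps guards
    against log(0) for empty bins.
    """
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )

baseline = [0.25, 0.25, 0.25, 0.25]  # training-time bin fractions (invented)
current = [0.40, 0.30, 0.20, 0.10]   # serving-time bin fractions (invented)
drifted = psi(baseline, current) > 0.2
print(drifted)  # -> True
```

A retraining pipeline would typically trigger when this check fires across enough features or days.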

Posted 2 months ago

3 - 8 years

13 - 18 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Summary: We are looking for a skilled Azure Data Engineer to join our Data & Analytics team. You will be responsible for building and optimizing our data pipelines, designing and implementing data solutions on Microsoft Azure, and enabling data-driven decision-making across the organization.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and data processing systems using Azure Data Factory, Azure Synapse Analytics, and Azure Databricks.
- Integrate data from various structured and unstructured sources into a centralized data lake or data warehouse.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data flows for performance, reliability, and scalability.
- Implement and manage data security, privacy, and compliance policies.
- Monitor and troubleshoot data pipeline issues and ensure system reliability.
- Leverage DevOps practices for CI/CD pipelines using tools like Azure DevOps or GitHub Actions.
- Document data flows, architecture, and data models.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 4+ years of experience in data engineering or a similar role.
- Hands-on experience with Azure services such as Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage Gen2, Azure SQL Database or Azure SQL Managed Instance, and Azure Databricks (preferred).
- Proficiency in SQL, Python, and/or PySpark for data transformation.
- Experience with data modeling, ETL/ELT processes, and data integration.
- Strong understanding of data governance, security, and compliance in the cloud.
- Familiarity with version control systems (e.g., Git) and CI/CD practices.

Preferred Qualifications:
- Microsoft Certified: Azure Data Engineer Associate or equivalent certification.
- Experience with real-time data processing using Azure Stream Analytics or Apache Kafka.
- Knowledge of Power BI and data visualization best practices.
- Experience working in Agile or Scrum development environments.
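The pipeline work described above boils down to the transform step of an ETL/ELT flow: validate, clean, and normalize raw records before loading. As a hedged, stdlib-only sketch of that step, the function below drops records with no id and normalizes names; in the role itself this logic would live in an ADF data flow or a PySpark job, and the rows and field names here are invented.

```python
def transform(rows):
    """Clean raw rows: drop records missing an id, normalize names."""
    return [
        {"id": r["id"], "name": r["name"].strip().title()}
        for r in rows
        if r.get("id") is not None  # data-quality rule: id is required
    ]

# Hypothetical raw extract.
raw = [
    {"id": 1, "name": "  alice  "},
    {"id": None, "name": "bad row"},  # rejected: missing id
    {"id": 2, "name": "BOB"},
]
print(transform(raw))  # -> [{'id': 1, 'name': 'Alice'}, {'id': 2, 'name': 'Bob'}]
```

In a PySpark version the same rules would be expressed as `filter` and `withColumn` operations so they scale across a cluster.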

Posted 2 months ago

4 - 6 years

4 - 8 Lacs

Bengaluru

Work from Office

Role & responsibilities

Note:
- Job Type: Contract (6 months - 1 year)
- Job Location: Electronic City, Phase-1, Bangalore
- Mode of work: Work from Office

Job Description:
- Proficiency in Databricks, Apache Spark, and Delta Lake.
- Strong understanding of cloud platforms such as AWS, Azure, or GCP.
- Experience with SQL, Python, Scala, and/or R.
- Familiarity with data warehousing concepts and ETL processes.
- Excellent analytical and problem-solving skills with keen attention to detail.
- Databricks Associate certification.

Preferred candidate profile: Immediate to 30-day joiners only.

Posted 2 months ago

12 - 14 years

18 - 22 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Description: As a Data Engineer, you will design, develop, and maintain data solutions for data generation, collection, and processing. You will create data pipelines, ensure data quality, and implement ETL processes to migrate and deploy data across systems. Your day will involve working on various data projects and collaborating with cross-functional teams to drive data initiatives.

Roles & Responsibilities:
- Serve as a Subject Matter Expert (SME) and manage the team to deliver results.
- Take responsibility for team-level decisions and performance.
- Collaborate with multiple teams and contribute to strategic technical decisions.
- Provide scalable and effective solutions for both team-specific and cross-team challenges.
- Lead the design and development of data solutions and infrastructure.
- Implement and optimize data pipelines for efficient and scalable data processing.
- Ensure data quality, consistency, and integrity throughout the data lifecycle.
- Partner with stakeholders to gather requirements and deliver data-driven solutions.

Professional & Technical Skills (Must-Have):
- Proficiency in Microsoft Azure Databricks
- Strong understanding of cloud-based data engineering concepts
- Experience with big data technologies such as Hadoop and Spark
- Solid knowledge of data modeling and database design principles
- Hands-on experience with SQL and NoSQL databases

Experience Required: Minimum of 5 years of experience working with Microsoft Azure Databricks.

Educational Qualifications: 15 years of full-time education; Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.

Additional Information: This role is based in our Mumbai office. Candidates should be comfortable working in a collaborative, fast-paced environment.

Posted 2 months ago

12 - 15 years

18 - 30 Lacs

Pune, Bengaluru, Delhi / NCR

Hybrid

Role & responsibilities

Must-Have Skills:
- Expertise in the Databricks Unified Data Analytics Platform
- Strong hands-on experience with Spark/PySpark
- Proficiency in Python, Scala, or SQL
- Experience in cloud-based analytics and scalable data architecture
- Solid understanding of data modeling, ETL pipelines, and performance tuning

Posted 2 months ago

7 - 12 years

19 - 25 Lacs

Hyderabad

Hybrid

Data Engineer Consultant

Position Overview: We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have a strong background in programming, data management, and cloud infrastructure, with a focus on designing and implementing efficient data solutions. This role requires a minimum of 5+ years of experience and a deep understanding of Azure services and infrastructure and ETL/ELT solutions.

Key Responsibilities:
- Azure Infrastructure Management: Own and maintain all aspects of Azure infrastructure, recommending modifications to enhance reliability, availability, and scalability.
- Security Management: Manage security aspects of Azure infrastructure, including network, firewall, private endpoints, encryption, PIM, and permissions management using Azure RBAC and Databricks roles.
- Technical Troubleshooting: Diagnose and troubleshoot technical issues in a timely manner, identifying root causes and providing effective solutions.
- Infrastructure as Code: Create and maintain Azure Infrastructure as Code using Terraform and GitHub Actions.
- CI/CD Pipelines: Configure and maintain CI/CD pipelines using GitHub Actions for various Azure services such as ADF, Databricks, Storage, and Key Vault.
- Programming Expertise: Utilize expertise in programming languages such as Python to develop and maintain data engineering solutions.
- Generative AI and Language Models: Knowledge of Large Language Models (LLMs) and generative AI is a plus, enabling the integration of advanced AI capabilities into data workflows.
- Real-Time Data Streaming: Use Kafka for real-time data streaming and integration, ensuring efficient data flow and processing.
- Data Management: Proficiency in Snowflake for data wrangling and management, optimizing data structures for analysis.
- DBT Utilization: Build and maintain data marts and views using dbt, ensuring data is structured for optimal analysis.
- ETL/ELT Solutions: Design ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks, leveraging methodologies to acquire data from various structured or semi-structured source systems.
- Communication: Strong communication skills to explain technical issues and solutions clearly to the engineering lead and key stakeholders as required.

Qualifications:
- Minimum of 5+ years of experience designing ETL/ELT solutions using tools like Azure Data Factory and Azure Databricks, or Snowflake.
- Expertise in programming languages such as Python.
- Experience with Kafka for real-time data streaming and integration.
- Proficiency in Snowflake for data wrangling and management.
- Proven ability to use dbt to build and maintain data marts and views.
- Experience creating and maintaining Azure Infrastructure as Code using Terraform and GitHub Actions.
- Ability to configure, set up, and maintain GitHub for various code repositories.
- Experience creating and configuring CI/CD pipelines using GitHub Actions for various Azure services.
- In-depth understanding of managing security aspects of Azure infrastructure.
- Strong problem-solving skills and the ability to diagnose and troubleshoot technical issues.
- Excellent communication skills for explaining technical issues and solutions.

Posted 2 months ago

4 - 8 years

0 - 0 Lacs

Bengaluru

Work from Office

This position requires someone with good problem-solving skills, business understanding, and client presence. The candidate's overall professional experience should be at least 5 years and at most 15 years. The candidate must understand the use of data engineering tools for solving business problems and help clients in their data journey. Must have knowledge of emerging technologies used for data management, including data governance, data quality, security, data integration, processing, and provisioning. The candidate must possess the soft skills required to work with teams and lead medium to large teams, and should be comfortable taking leadership roles in client projects, pre-sales/consulting, solutioning, business development conversations, and execution of data engineering projects.

Role Description:
- Develop modern data warehouse solutions using Databricks and the Azure stack.
- Provide forward-thinking solutions in the data engineering and analytics space.
- Collaborate with DW/BI leads to understand new ETL pipeline development requirements.
- Triage issues to find gaps in existing pipelines and fix them.
- Work with the business to understand reporting-layer needs and develop data models to fulfill them.
- Drive technical discussions with the client architect and team members.
- Orchestrate data pipelines via the Airflow scheduler.

Skills and Qualifications:
- Bachelor's and/or Master's degree in Computer Science or equivalent experience.
- Good understanding of the Databricks Data & AI platform and Databricks Delta Lake architecture.
- Hands-on experience in SQL, Python, and Spark (PySpark).
- Experience building ETL / data warehouse transformation processes.
- Proficiency in RDBMS, complex SQL, PL/SQL, Unix shell scripting, performance tuning, and troubleshooting.
- Databricks Certified Data Engineer Associate/Professional certification (desirable).
- Comfortable working in a dynamic, fast-paced, innovative environment with several concurrent projects.
- Experience working in Agile methodology.
- Strong verbal and written communication skills.
- Strong analytical and problem-solving skills with high attention to detail.

Mandatory Skills: Azure Databricks, PySpark, Azure Data Factory, Azure Data Lake.
Job Location: Bangalore, Chennai, Gurgaon, Pune, Kolkata.
Required Skills: Azure Databricks, PySpark, Azure Data Factory.
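The orchestration duty mentioned above (scheduling pipelines via Airflow) is at heart a dependency-ordering problem: a DAG scheduler only runs a task once everything it depends on has finished. As a hedged, stdlib-only illustration, the sketch below resolves a made-up task graph with `graphlib`; real pipelines would be declared with `airflow.DAG` and operators, and every task name here is invented.

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline: two extracts feed a transform, which feeds the load.
# Each key's value is the set of tasks it depends on.
deps = {
    "transform": {"extract_sales", "extract_customers"},
    "load_warehouse": {"transform"},
}

# static_order() yields tasks so that every dependency runs first.
order = list(TopologicalSorter(deps).static_order())
print(order)
```

Airflow does the same resolution continuously at runtime, also handling retries, schedules, and parallel execution of independent tasks (here, the two extracts).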

Posted 2 months ago

8 - 13 years

17 - 22 Lacs

Bengaluru

Work from Office

Overview

We are seeking a highly skilled and experienced Azure OpenAI Architect to join our growing team. You will play a key role in designing, developing, and implementing Gen AI solutions across various domains, including chatbots. The ideal candidate will have experience with the latest natural language processing and generative AI technologies and the ability to produce diverse content such as text, audio, images, or video. You will be responsible for integrating general-purpose AI models into our systems and ensuring they serve a variety of purposes effectively.

Tasks and Responsibilities:
- Collaborate with cross-functional teams to design and implement Gen AI solutions that meet business requirements.
- Develop, train, test, and validate the AI system to ensure it meets the required standards and performs as intended.
- Design, develop, and deploy Gen AI solutions using advanced LLMs such as OpenAI models and open-source LLMs (Llama 2, Mistral, etc.), and frameworks like LangChain and Pandas.
- Leverage expertise in Transformer/neural network models and vector/graph databases to build robust and scalable AI systems.
- Integrate AI models into existing systems to enhance their capabilities.
- Create data pipelines to ingest, process, and prepare data for analysis and modeling using Azure services such as Azure AI Document Intelligence and Azure Databricks.
- Integrate speech-to-text functionality using Azure native services to create user-friendly interfaces for chatbots.
- Deploy and manage Azure services and resources using Azure DevOps or other deployment tools.
- Monitor and troubleshoot deployed solutions to ensure optimal performance and reliability.
- Ensure compliance with security and regulatory requirements related to AI solutions.
- Stay up to date with the latest Azure AI technologies and industry developments, and share knowledge and best practices with the team.

Qualifications:
- Overall 8+ years' combined experience in IT, with the most recent 5 years as an AI engineer.
- Bachelor's or Master's degree in computer science, information technology, or a related field.
- Experience designing, developing, and delivering successful Gen AI solutions.
- Experience with the Azure cloud platform and Azure AI services such as Azure AI Search, Azure OpenAI, Document Intelligence, Speech, and Vision.
- Experience with Azure infrastructure and solutioning.
- Familiarity with OpenAI models, open-source LLMs, and Gen AI frameworks like LangChain and Pandas.
- Solid understanding of Transformer/neural network architectures and their application in Gen AI.
- Hands-on experience with vector/graph databases and their use in semantic and vector search.
- Proficiency in programming languages like Python (essential).
- Relevant industry certifications, such as Microsoft Certified: Azure AI Engineer or Azure Solutions Architect, are a plus.
- Excellent problem-solving, analytical, and critical-thinking skills.
- Strong communication and collaboration skills to work effectively in a team environment.
- A passion for innovation and a desire to push the boundaries of what's possible with Gen AI.
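The vector-search experience the posting above asks for underpins retrieval for Gen AI chatbots: embed the user's query, then return the stored document whose embedding is most similar, typically by cosine similarity. The sketch below is a hedged, stdlib-only stand-in; production systems would use Azure AI Search or a vector database with real embedding models, and the two-dimensional "embeddings" and document names here are entirely invented.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

# Toy document "embeddings" (real ones would have hundreds of dimensions).
docs = {
    "reset_password": [0.9, 0.1],
    "vpn_setup": [0.1, 0.9],
}
query = [0.85, 0.2]  # embedding of a hypothetical "forgot my password" query
best = max(docs, key=lambda d: cosine(query, docs[d]))
print(best)  # -> reset_password
```

The retrieved document would then be passed to the LLM as context (the RAG pattern), which is what frameworks like LangChain wire together.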

Posted 2 months ago

2 - 7 years

6 - 16 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Exciting Azure Developer Job Opportunity at Infosys! We are looking for skilled Azure developers to join our dynamic team PAN India. If you have a passion for technology and a minimum of 2 to 9 years of hands-on experience in Azure development, this is your chance to make an impact. At Infosys, we value innovation, collaboration, and diversity. We believe that a diverse workforce drives creativity and fosters a richer company culture; therefore, we strongly encourage applications from all genders and backgrounds. Ready to take your career to the next level? Join us in shaping the future of technology. Visit our careers page for more details on how to apply.

Posted 2 months ago

5 - 10 years

10 - 20 Lacs

Bengaluru

Work from Office

Job Title: Senior Data Engineer
Location: Bengaluru, India
Experience: 5-10 years
Notice Period: Immediate

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines for efficient data processing.
- Build and optimize data storage solutions, ensuring high performance and reliability.
- Implement ETL processes to extract, transform, and load data from various sources.
- Work closely with data analysts and scientists to support their data needs.
- Optimize database structures and ensure data integrity.
- Develop and manage cloud-based data architectures (AWS, Azure, or Google Cloud).
- Ensure compliance with data governance and security standards.
- Monitor and troubleshoot data workflows to maintain system efficiency.

Required Skills & Qualifications:
- Strong proficiency in SQL, Python, and R for data processing.
- Experience with big data technologies like Hadoop, Spark, and Kafka.
- Hands-on expertise with ETL tools and data warehousing solutions.
- Deep understanding of database management systems (MySQL, PostgreSQL, MongoDB, etc.).
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Strong problem-solving and communication skills to collaborate with cross-functional teams.

Posted 2 months ago

8 - 12 years

25 - 30 Lacs

Mumbai

Work from Office

Job Opening: Azure Databricks Data Engineer (Immediate Joiners)
Location: Mumbai - Thane
Experience: 8-12 years
Notice Period: Immediate joiners preferred (within 1 week)

Key Requirements:
- Cloud & Analytics: 8-12 years of experience developing cloud-based analytics solutions, with a minimum of 4 years of hands-on experience in Azure and Databricks.
- Data Engineering Expertise: Proven experience in the architecture, design, and development of data platforms / data lakes using ETL/ELT approaches; experience with the Databricks Delta Lakehouse and real-time data workflows using (Py)Spark Structured Streaming.
- Programming & Tools: Proficient in Python, PySpark, and SQL; strong grasp of CI/CD methodologies for data pipelines; comfortable with security-first development principles.
- Data Modelling: Expertise in data modeling using RDBMS, preferably PostgreSQL.
- Soft Skills: Strong development, problem-solving, and solution architecture skills.
- Urgency: Immediate joiners needed, able to onboard within 1 week.

Interested candidates, please share your CV at ashwini.dabir@anveta.com
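A common building block of the (Py)Spark Structured Streaming work described above is the tumbling-window aggregation: grouping a stream of event timestamps into fixed, non-overlapping intervals and counting per interval. The sketch below is a hedged, stdlib-only stand-in for that idea; real code would use `spark.readStream` with the `window()` function, and the event times here are made up.

```python
from collections import Counter

def tumbling_counts(event_times, width=10):
    """Count events per non-overlapping [k*width, (k+1)*width) window.

    Each event is assigned to the window starting at the largest
    multiple of `width` not exceeding its timestamp.
    """
    return dict(Counter((t // width) * width for t in event_times))

# Hypothetical event timestamps (seconds since stream start).
print(tumbling_counts([1, 3, 12, 15, 27]))  # -> {0: 2, 10: 2, 20: 1}
```

Structured Streaming computes the same grouping incrementally and also handles late-arriving events via watermarks, which a batch sketch like this sidesteps.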

Posted 2 months ago

3 - 8 years

4 - 8 Lacs

Bengaluru

Work from Office

Diverse Lynx is looking for a DAAI-Wipro Azure Data Engineer to join our dynamic team and embark on a rewarding career journey.

Responsibilities:
- Ensure that data is cleansed, mapped, transformed, and otherwise optimised for storage and use according to business and technical requirements.
- Design solutions using Microsoft Azure services and other tools.
- Automate tasks and deploy production-standard code (with unit testing, continuous integration, versioning, etc.).
- Load transformed data into storage and reporting structures in destinations including the data warehouse, high-speed indexes, real-time reporting systems, and analytics applications.
- Build data pipelines to collectively bring together data.
- Other responsibilities include extracting data, troubleshooting, and maintaining the data warehouse.

Azure Data Engineer with Snowflake.

Posted 2 months ago

Apply

7 - 9 years

9 - 12 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Experience: 7+ Years
Required Qualification: B.Tech / MCA / Equivalent in Computer Science, IT, or Engineering

Primary Skills:
Azure Data Factory (ADF) - data pipeline creation, data flow management
Power BI - report & dashboard development, DAX, data modeling
SQL - strong experience with Azure SQL, SQL Server, query optimization
ETL Processes - design and implementation of scalable ETL pipelines
Cloud Platforms - Microsoft Azure (Azure Data Lake, Azure Blob Storage)
Programming - Python and SQL for data transformation
Data Warehousing Concepts - hands-on understanding of data architecture
Performance Tuning - optimize data pipelines for reliability and efficiency

Preferred Skills:
Azure Databricks - data engineering in distributed environments
Microsoft Fabric - exposure to an end-to-end analytics platform
Data Modeling - best practices in Power BI and enterprise data environments
Cloud Ecosystem Knowledge - broader understanding of Azure services

What We Need:
Educational Background: B.Tech / MCA / Equivalent in Computer Science, Information Technology, or a related field
7+ years of hands-on experience in Data Engineering / BI

Share Your Resume With: Current CTC, Expected CTC, Preferred Location
Contact no.: 9032956160
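As an illustration of the query-optimization skill listed above, here is a small self-contained sketch using SQLite as a stand-in for Azure SQL / SQL Server (table and index names are made up):

```python
import sqlite3

# Show how adding an index changes the query plan for a filtered aggregate.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (id INTEGER PRIMARY KEY, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales (region, amount) VALUES (?, ?)",
    [("north", 10.0), ("south", 20.0), ("north", 30.0)],
)

query = "SELECT SUM(amount) FROM sales WHERE region = ?"

# Without an index the filter requires a full table scan.
plan_before = conn.execute("EXPLAIN QUERY PLAN " + query, ("north",)).fetchall()

# A covering index on (region, amount) lets the engine seek instead of scan.
conn.execute("CREATE INDEX idx_sales_region ON sales (region, amount)")
plan_after = conn.execute("EXPLAIN QUERY PLAN " + query, ("north",)).fetchall()

print(plan_before[0][-1])  # plan detail before the index (a table scan)
print(plan_after[0][-1])   # plan detail after the index (an index search)
total = conn.execute(query, ("north",)).fetchone()[0]
print(total)
```

The same scan-versus-seek reasoning carries over to SQL Server execution plans, even though the tooling differs.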

Posted 2 months ago

Apply

5 - 8 years

10 - 15 Lacs

Bengaluru

Work from Office

Key Responsibilities: Develop and implement data pipelines using Azure Data Factory and Databricks. Work with stakeholders to gather requirements and translate them into technical solutions. Migrate data from Oracle to Azure Data Lake. Optimize data processing workflows for performance and scalability. Ensure data quality and integrity throughout the data lifecycle. Collaborate with data architects and other team members to design and implement data solutions. Optimize performance and cost efficiency for Databricks clusters, data pipelines, and storage systems. Monitor and manage cloud resources to ensure high availability, performance, and scalability. Prepare architecture diagrams, technical documentation, and runbooks for the deployed solutions.

Required Skills: Strong experience with Azure Data Services, including Azure Data Factory, Synapse Analytics, and Databricks. Proficiency in data transformation and ETL processes. Hands-on experience with Oracle to Azure Data Lake migrations is a plus. Strong problem-solving and analytical skills. Excellent communication and teamwork skills.
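A toy sketch of the extract-transform-load pattern this role centers on, with SQLite standing in for the Oracle source and a plain Python list standing in for the Azure Data Lake sink (all table, column, and function names are hypothetical):

```python
import sqlite3

# Toy ETL: extract from a relational source, cleanse/transform, load to a sink.
source = sqlite3.connect(":memory:")  # stands in for the Oracle source system
source.execute("CREATE TABLE customers (id INTEGER, name TEXT, country TEXT)")
source.executemany(
    "INSERT INTO customers VALUES (?, ?, ?)",
    [(1, " Ada ", "uk"), (2, "Linus", "fi"), (3, None, "us")],
)

def extract(conn):
    return conn.execute("SELECT id, name, country FROM customers").fetchall()

def transform(rows):
    # Cleanse: drop rows with missing names, trim whitespace, upper-case country.
    return [
        {"id": r[0], "name": r[1].strip(), "country": r[2].upper()}
        for r in rows
        if r[1] is not None
    ]

def load(records, sink):
    sink.extend(records)
    return len(records)

lake = []  # stands in for an ADLS container / Delta table
loaded = load(transform(extract(source)), lake)
print(loaded, lake)
```

In ADF/Databricks the same three stages map to a copy activity, a notebook or data-flow transformation, and a write to the lake.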

Posted 2 months ago

Apply

6 - 10 years

10 - 20 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Job Title: Azure ADF with Databricks and PySpark
Experience: 5 - 8 Years
Category: Software Development / Engineering
Main location: India, Karnataka - Bangalore/Hyderabad/Chennai

We are looking for an experienced Azure ADF with Databricks and PySpark developer to join our team. The ideal candidate should be passionate about coding and about developing scalable, high-performance applications. You will work closely with our front-end developers, designers, and other members of the team to deliver quality solutions that meet the needs of our clients.

Qualification: Bachelor's degree in Computer Science or a related field, or higher, with a minimum of 3 years of relevant experience.

Your future duties and responsibilities: 5+ years of experience designing, developing, implementing, and supporting complex enterprise projects. Experience in most of the following: languages (Python, PySpark, Spark SQL), databases, Azure cloud, ETL (Databricks, Azure Data Factory), and tools (source control/issue management: Azure DevOps, Git, Jenkins, JIRA), plus application health checks, innovation, and DevOps adoption. A solid understanding of designing and implementing applications for data integration in the Azure cloud, and strong knowledge of Agile methodology with experience working in a Scrum team. The ability to make judgments regarding the ongoing viability of a development phase or strategy based on the ability to deliver the agreed functionality within the approved budget and schedule. Experience in an Agile environment is an advantage.

Required qualifications to be successful in this role: Driven by collective success - you know that collaboration can transform a good idea into a great one, and you understand the power of a team that enjoys working together to create a shared vision. Act like an owner - you thrive when you're empowered to take the lead, go above and beyond, and deliver results. Understand that success is in the details - you notice things that others don't, and your critical thinking skills help inform your decision making. Embrace and champion change - you'll continuously evolve your thinking and the way you work in order to deliver your best. Put our clients first - you engage with purpose to find the right solutions and go the extra mile, because it's the right thing to do. Our values matter: trust, teamwork, and accountability.

Posted 2 months ago

Apply

9 - 12 years

0 - 0 Lacs

Bengaluru

Work from Office

Senior Data Engineer Job Summary: We are seeking an experienced and highly motivated Senior Azure Data Engineer to join a Data & Analytics team. The ideal candidate will be a hands-on technical leader responsible for designing, developing, implementing, and managing scalable, robust, and secure data solutions on the Microsoft Azure platform. This role involves leading a team of data engineers, setting technical direction, ensuring the quality and efficiency of data pipelines, and collaborating closely with data scientists, analysts, and business stakeholders to meet data requirements. Key Responsibilities: Lead, mentor, and provide technical guidance to a team of Azure Data Engineers. Design, architect, and implement end-to-end data solutions on Azure, including data ingestion, transformation, storage (lakes/warehouses), and serving layers. Oversee and actively participate in the development, testing, and deployment of robust ETL/ELT pipelines using key Azure services. Establish and enforce data engineering best practices, coding standards, data quality checks, and monitoring frameworks. Ensure data solutions are optimized for performance, cost, scalability, security, and reliability. Collaborate effectively with data scientists, analysts, and business stakeholders to understand requirements and deliver effective data solutions. Manage, monitor, and troubleshoot Azure data platform components and pipelines. Contribute to the strategic technical roadmap for the data platform. Qualifications & Experience: Experience: Minimum 6-8+ years of overall experience in data engineering roles. Minimum 3-4+ years of hands-on experience designing, implementing, and managing data solutions specifically on the Microsoft Azure cloud platform. Proven experience (1-2+ years) in a lead or senior engineering role, demonstrating mentorship and technical guidance capabilities. 
Education: Bachelor's degree in Computer Science, Engineering, Information Technology, or a related quantitative field (or equivalent practical experience).

Technical Skills:
Core Azure Data Services: Deep expertise in Azure Data Factory (ADF), Azure Synapse Analytics (SQL Pools, Spark Pools), Azure Databricks, and Azure Data Lake Storage (ADLS Gen2).
Data Processing & Programming: Strong proficiency with Spark (using PySpark or Scala) and expert-level SQL skills. Proficiency in Python is highly desired.
Data Architecture & Modelling: Solid understanding of data warehousing principles (e.g., Kimball), dimensional modelling, ETL/ELT patterns, and data lake design.
Databases: Experience with relational databases (e.g., Azure SQL Database); familiarity with NoSQL concepts/databases is beneficial.
Version Control: Proficiency with Git for code management.
Leadership & Soft Skills: Excellent leadership, mentoring, problem-solving, and communication skills, with the ability to collaborate effectively across various teams.

Required Skills (Azure component - proficiency):
1. Azure Synapse Analytics - High
2. Azure Data Factory - High
3. Azure SQL - High
4. ADLS Storage - High
5. Azure DevOps (CI/CD) - High
6. Azure Databricks - Medium-High
7. Azure Logic Apps - Medium-High
8. Azure Fabric - Good to have, not mandatory
9. Azure Functions - Good to have, not mandatory
10. Azure Purview - Good to have, not mandatory

Also required: good experience with data extraction patterns via ADF (APIs, files, databases); data masking in Synapse and RBAC; experience in data warehousing (Kimball modelling); and good communication and collaboration skills.
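As a minimal illustration of the Kimball-style dimensional modelling this listing asks for, here is a toy star schema in SQLite: one fact table joined to one dimension at query time (all table and column names are made up):

```python
import sqlite3

# Minimal Kimball-style star schema: a fact table keyed to a dimension,
# queried with the usual dimension-attribute grouping.
db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE dim_product (product_key INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_key INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'beverages'), (2, 'snacks');
    INSERT INTO fact_sales  VALUES (1, 5.0), (1, 7.5), (2, 3.0);
""")

# Facts carry the measures; dimensions carry the descriptive attributes
# that reports slice by.
rows = db.execute("""
    SELECT d.category, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product d ON d.product_key = f.product_key
    GROUP BY d.category
    ORDER BY d.category
""").fetchall()
print(rows)
```

A real warehouse adds surrogate keys, slowly changing dimensions, and conformed dimensions shared across fact tables; the join-and-group shape stays the same.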

Posted 2 months ago

Apply

7 - 12 years

9 - 14 Lacs

Indore, Hyderabad, Ahmedabad

Work from Office

Required Qualification: B.Tech / MCA / Equivalent in Computer Science, IT, or Engineering

Primary Skills:
Azure Data Factory (ADF) - data pipeline creation, data flow management
Power BI - report & dashboard development, DAX, data modeling
SQL - strong experience with Azure SQL, SQL Server, query optimization
ETL Processes - design and implementation of scalable ETL pipelines
Cloud Platforms - Microsoft Azure (Azure Data Lake, Azure Blob Storage)
Programming - Python and SQL for data transformation
Data Warehousing Concepts - hands-on understanding of data architecture
Performance Tuning - optimize data pipelines for reliability and efficiency

Preferred Skills:
Azure Databricks - data engineering in distributed environments
Microsoft Fabric - exposure to an end-to-end analytics platform
Data Modeling - best practices in Power BI and enterprise data environments
Cloud Ecosystem Knowledge - broader understanding of Azure services

What We Need:
Educational Background: B.Tech / MCA / Equivalent in Computer Science, Information Technology, or a related field
7+ years of hands-on experience in Data Engineering / BI

Posted 2 months ago

Apply

5 - 10 years

10 - 20 Lacs

Mohali

Remote

As a Senior Data Engineer , you will support the Global BI team for Isolation Valves in migrating to Microsoft Fabric . Your role focuses on data gathering, modeling, integration, and database design to enable efficient data management. You will develop and optimize scalable data models to support analytics and reporting needs. Leverage Microsoft Fabric and Azure technologies for high-performance data processing. In this Role, Your Responsibilities Will Be: Collaborate with cross-functional teams, including data analysts, data scientists, and business stakeholders, to understand their data requirements and deliver effective solutions. Leverage Fabric Lakehouse for data storage, governance, and processing to support Power BI and automation initiatives. Expertise in data modeling, with a strong focus on data warehouse and lakehouse design. Design and implement data models, warehouses, and databases using MS Fabric, Azure Synapse Analytics, Azure Data Lake Storage, and other Azure services. Develop ETL (Extract, Transform, Load) processes using SQL Server Integration Services (SSIS), Azure Synapse Pipelines, or similar tools to prepare data for analysis and reporting. Implement data quality checks and governance practices to ensure accuracy, consistency, and security of data assets. Monitor and optimize data pipelines and workflows for performance, scalability, and cost efficiency, utilizing Microsoft Fabric for real-time analytics and AI-powered workloads. Strong proficiency in Business Intelligence (BI) tools such as Power BI, Tableau, and other analytics platforms. Experience with data integration and ETL tools like Azure Data Factory. Proven expertise in Microsoft Fabric or similar data platforms. In-depth knowledge of the Azure Cloud Platform, particularly in data warehousing and storage solutions. Strong problem-solving skills with a track record of resolving complex technical challenges. 
Excellent communication skills, with the ability to convey technical concepts to both technical and non-technical stakeholders. Ability to work independently and collaboratively within a team environment. Microsoft certifications in data-related fields are preferred. DP-700 (Microsoft Certified: Fabric Data Engineer Associate) is a plus. Who You Are: You show a tremendous amount of initiative in tough situations; are exceptional at spotting and seizing opportunities. You observe situational and group dynamics and select best-fit approach. You make implementation plans that allocate resources precisely. You pursue everything with energy, drive, and the need to finish. For This Role, You Will Need: Experience: 5+ years in Data Warehousing with on-premises or cloud technologies. Analytical & Problem-Solving Skills: Strong analytical abilities with a proven track record of resolving complex data challenges. Communication Skills: Ability to effectively engage with internal customers across various functional areas. Database & SQL Expertise: Proficient in database management, SQL query optimization, and data mapping. Excel Proficiency: Strong knowledge of Excel, including formulas, filters, macros, pivots, and related operations. MS Fabric Expertise: Extensive experience with Fabric components, including Lakehouse, OneLake, Data Pipelines, Real-Time Analytics, Power BI Integration, and Semantic Models. Programming Skills: Proficiency in Python and SQL/Advanced SQL for data transformations/Debugging. Flexibility: Willingness to work flexible hours based on project requirements. Technical Documentation: Strong documentation skills for maintaining clear and structured records. Language Proficiency: Fluent in English. SQL & Data Modeling: Advanced SQL skills, including experience with complex queries, data modeling, and performance tuning. Medallion Architecture: Hands-on experience implementing Medallion Architecture for data processing. 
Database Experience: Working knowledge of Oracle, SAP, or other relational databases. Manufacturing Industry Experience: Prior experience in a manufacturing environment is strongly preferred. Learning Agility: Ability to quickly learn new business areas, software, and emerging technologies. Leadership & Time Management: Strong leadership and organizational skills, with the ability to prioritize, multitask, and meet deadlines. Confidentiality: Ability to handle sensitive and confidential information with discretion. Project Management: Capable of managing both short- and long-term projects effectively. Cross-Functional Collaboration: Ability to work across various organizational levels and relationships. Strategic & Tactical Thinking: Ability to balance strategic insights with hands-on execution. ERP Systems: Experience with Oracle, SAP, or other ERP systems is a plus. Travel Requirements: Willing to travel up to 20% as needed. Preferred Qualifications that Set You Apart: Education: BA/BS/B.E./B.Tech in Business, Information Systems, Technology, or a related field. Technical Background: Bachelor's degree or equivalent in Science, with a focus on MIS, Computer Science, Engineering, or a related discipline. Communication Skills: Strong interpersonal skills in English (spoken and written) to collaborate effectively with overseas teams. Database & SQL Expertise: Proficiency in Oracle PL/SQL. Azure Experience: Hands-on experience with Azure services, including Azure Synapse Analytics and Azure Data Lake. DevOps & Agile: Practical experience with Azure DevOps, along with knowledge of Agile and Scrum methodologies. Certifications: Agile certification is preferred.
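For readers unfamiliar with the Medallion Architecture mentioned above, here is a dependency-free toy of the bronze → silver → gold flow (in Fabric or Databricks each layer would be a Delta table; every name and record here is illustrative):

```python
# Toy medallion flow: raw events (bronze) -> cleansed records (silver)
# -> business aggregate (gold).

bronze = [  # raw ingested events, kept as-is, warts and all
    {"order_id": "1", "qty": "2", "sku": "A"},
    {"order_id": "2", "qty": "bad", "sku": "A"},
    {"order_id": "3", "qty": "5", "sku": "B"},
]

def to_silver(raw):
    # Cleanse and type-cast; drop rows that fail validation.
    clean = []
    for row in raw:
        try:
            clean.append({"order_id": int(row["order_id"]),
                          "qty": int(row["qty"]), "sku": row["sku"]})
        except ValueError:
            pass  # in practice: write to a quarantine table and alert
    return clean

def to_gold(silver):
    # Aggregate to the reporting grain: total units per SKU.
    gold = {}
    for row in silver:
        gold[row["sku"]] = gold.get(row["sku"], 0) + row["qty"]
    return gold

silver = to_silver(bronze)
gold = to_gold(silver)
print(gold)
```

The value of the pattern is that each layer has a clear contract: bronze is replayable raw history, silver is validated and conformed, and gold is shaped for a specific business consumer.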

Posted 2 months ago

Apply

4 - 7 years

12 - 19 Lacs

Gurugram, Mumbai (All Areas)

Hybrid

Minimum experience of working on projects as an Azure Data Engineer. B.Tech/B.E. degree in Computer Science or Information Technology. Experience with enterprise integration tools and ETL (extract, transform, load) tools such as Databricks, Azure Data Factory, and Talend/Informatica. Experience analyzing data using Python, Spark Streaming, SSIS/Informatica batch ETL, and database tools like SQL and MongoDB for processing data from different sources. Experience with platform automation tools (DB management, Azure, Jenkins, GitHub) is an added advantage. Design, operate, and integrate different systems to enable efficiencies in key areas of the business. Understand business requirements by interacting with business users and/or reverse-engineering existing data products. Good understanding and working knowledge of distributed databases and pipelines. Ability to analyze and identify the root cause of technical issues. Proven ability to use relevant data analytics approaches and tools to problem-solve and troubleshoot. Excellent documentation and communication skills.

Posted 2 months ago

Apply

10 - 15 years

15 - 18 Lacs

Hyderabad

Work from Office

Skilled in data modeling (ER/Studio, Erwin), MPP databases (Databricks, Snowflake), GitHub, CI/CD, metadata/lineage, agile/DevOps, SAP HANA/S4, and retail data (IRI, Nielsen). Mail: kowsalya.k@srsinfoway.com

Posted 2 months ago

Apply

11 - 14 years

35 - 40 Lacs

Hyderabad

Work from Office

What PepsiCo Data Management and Operations does: Maintain a predictable, transparent, global operating rhythm that ensures always-on access to high-quality data for stakeholders across the company. Responsible for day-to-day data collection, transportation, maintenance/curation, and access to the PepsiCo corporate data asset Work cross-functionally across the enterprise to centralize data and standardize it for use by business, data science or other stakeholders. Increase awareness about available data and democratize access to it across the company. As a Data Engineering Associate Manager, you will be the key technical expert overseeing PepsiCo's data product build & operations and drive a strong vision for how data engineering can proactively create a positive impact on the business. You'll be empowered to create & lead a strong team of data engineers who build data pipelines into various source systems, rest data on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. As a member of the data engineering team, you will help lead the development of very large and complex data applications into public cloud environments directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners and business users. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. Responsibilities Provide leadership and management to a team of data engineers, managing processes and their flow of work, vetting their designs, and mentoring them to realize their full potential. Act as a subject matter expert across different digital projects. 
Oversee work with internal clients and external partners to structure and store data into unified taxonomies and link them together with standard identifiers. Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products. Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality and performance. Responsible for implementing best practices around systems integration, security, performance, and data management. Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape. Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions. Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners. Develop and optimize procedures to productionalize data science models. Define and manage SLAs for data products and processes running in production. Support large-scale experimentation done by data scientists. Prototype new approaches and build solutions at scale. Research state-of-the-art methodologies. Create documentation for learnings and knowledge transfer. Create and audit reusable packages or libraries. Qualifications: B.Tech in Computer Science, Math, Physics, or other technical fields. 11+ years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture. 4+ years of experience with data lake infrastructure, data warehousing, and data analytics tools. 4+ years of experience in SQL optimization and performance tuning, plus development experience in programming languages like Python, PySpark, and Scala. 2+ years of cloud data engineering experience in Azure. Fluent with Azure cloud services. Azure certification is a plus.
Experience in Azure Log Analytics. Experience with the integration of multi-cloud services with on-premises technologies. Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines. Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations. Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets. Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake. Experience running and scaling applications on cloud infrastructure and containerized services like Kubernetes. Experience with version control systems like GitHub and deployment & CI tools. Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools. Experience with statistical/ML techniques is a plus. Experience building solutions in the retail or supply chain space is a plus. Understanding of metadata management, data lineage, and data glossaries is a plus. Working knowledge of agile development, including DevOps and DataOps concepts. Familiarity with business intelligence tools (such as Power BI).
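As a sketch of the expectation-style data quality checks that tools like Great Expectations or Deequ formalize (this is a hand-rolled toy, not the actual Great Expectations API; all rule names and data are made up):

```python
# Dependency-free sketch of expectation-style data quality checks:
# declare rules, evaluate them over the rows, report failure counts.

records = [
    {"id": 1, "email": "a@example.com", "age": 34},
    {"id": 2, "email": None, "age": 29},
    {"id": 3, "email": "c@example.com", "age": -4},
]

expectations = {
    "id_not_null": lambda r: r["id"] is not None,
    "email_not_null": lambda r: r["email"] is not None,
    "age_in_range": lambda r: 0 <= r["age"] <= 120,
}

def run_checks(rows, rules):
    """Return {rule_name: number_of_failing_rows}."""
    return {name: sum(not rule(r) for r in rows) for name, rule in rules.items()}

report = run_checks(records, expectations)
print(report)
```

Real frameworks add profiling, persisted validation results, and pipeline gates on top, but the core idea is exactly this: declarative rules evaluated against each batch of data.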

Posted 2 months ago

Apply