
27 ADLS Jobs

JobPe aggregates listings for easy application access, but you apply directly on the source job portal.

5.0 - 8.0 years

3 - 7 Lacs

Chennai, Tamil Nadu, India

On-site

Source: Foundit

Job Description: Inviting applications for an Azure Data Engineer. Experience: 5 to 8 years. Joining location: Chennai. Required technical skill set: ADB, ADF. 3+ years of relevant experience in PySpark and Azure Databricks. Proficiency in integrating, transforming, and consolidating data from various structured and unstructured data sources. Good experience in SQL or native SQL query languages. Strong experience implementing Databricks notebooks using Python. Good experience in Azure Data Factory, ADLS, storage services, serverless architecture, and Azure Functions.
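To illustrate the kind of work this posting describes, here is a minimal PySpark sketch of consolidating raw files from ADLS, as might run in a Databricks notebook. The storage paths and column names (order_id, order_ts, amount) are hypothetical placeholders, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Hypothetical ADLS Gen2 paths -- replace with your storage account/container.
RAW_PATH = "abfss://raw@mystorageacct.dfs.core.windows.net/sales/"
CURATED_PATH = "abfss://curated@mystorageacct.dfs.core.windows.net/sales_daily/"

spark = SparkSession.builder.appName("adls-consolidation").getOrCreate()

# Read raw CSV files landed in ADLS, inferring a schema for brevity.
raw = spark.read.option("header", True).option("inferSchema", True).csv(RAW_PATH)

# Consolidate: deduplicate on the business key and aggregate daily totals.
daily = (
    raw.dropDuplicates(["order_id"])
       .groupBy(F.to_date("order_ts").alias("order_date"))
       .agg(F.sum("amount").alias("total_amount"))
)

# Persist the curated output (a Delta write would be similar on Databricks).
daily.write.mode("overwrite").parquet(CURATED_PATH)
```

On Databricks the SparkSession is already provided as `spark`, so the builder line can be dropped in a notebook cell.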

Posted 1 week ago

Apply

5.0 - 10.0 years

15 - 16 Lacs

Bangalore Rural, Bengaluru

Work from Office

Source: Naukri

Experience in designing, building, and managing data solutions on Azure. Design, develop, and optimize big data pipelines and architectures on Azure. Implement ETL/ELT processes using Azure Data Factory, Databricks, and Spark. Required candidate profile: 5+ years of experience in data engineering and big data technologies. Hands-on experience with Azure services (Azure Data Factory, Azure Synapse, Azure SQL, ADLS, etc.). Databricks certification (mandatory).

Posted 1 week ago

Apply

6.0 - 8.0 years

7 - 11 Lacs

Gurugram

Work from Office

Source: Naukri

DISCOVER your opportunity. What will your essential responsibilities include? Possess excellent domain knowledge of data warehousing technologies, SQL, and data models to develop test strategies and approaches from a Quality Engineering perspective. In close coordination with project teams, help lead all efforts from a Quality Engineering perspective. Work with data engineers and data scientists to collect and prepare the necessary test data sets, ensuring the data adequately represents real-world scenarios and covers a diverse range of inputs. With an automation-first mindset, work towards testing of user interfaces such as Business Intelligence solutions and validation of functionalities, while constantly looking out for efficiency gains and process improvements. Triage and prioritize stories and epics with all stakeholders to ensure optimal deliveries. Engage with stakeholders such as business partners, product owners, development, and infrastructure teams to ensure alignment with the overall roadmap. Track the progress of testing activities, gather and track test metrics, and estimate and communicate improvement actions based on the metric results and experience. Automate processes such as data loads, user interfaces such as Business Intelligence solutions, and other validations of business KPIs. Adopt and implement best practices for documenting test plans, cases, and results in JIRA. Triage and prioritize defects with all stakeholders. Take leadership accountability for ensuring that every release to customers is fit for purpose and performant. Knowledge of Scaled Agile, Scrum, or Kanban methodology. You will report to the Lead, UAT. SHARE your talent. We're looking for someone who has these abilities and skills. Required skills and abilities: A minimum of a bachelor's or master's degree (preferred) in a relevant discipline. Relevant years of excellent testing background, including knowledge of and experience in automation. Insurance experience in data, underwriting, claims, or operations, including influencing, collaborating, and leading efforts across complex, disparate, and interrelated teams. Excellent experience with SQL Server, Azure Databricks notebooks, Power BI, ADLS, Cosmos DB, and SQL DW Analytics. A robust background in software development, with experience ingesting, transforming, and storing data from large datasets using PySpark in Azure Databricks, and strong knowledge of distributed computing concepts. Hands-on experience designing and developing ETL pipelines in PySpark in Azure Databricks with strong Python scripting. Desired skills and abilities: Experience with UAT/system integration testing in the insurance industry. Technical testing experience such as API testing and UI automation is a plus. Knowledge of and experience with testing cloud-based systems across different data staging layers.
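As an illustration of the automation-first data validation this role calls for, below is a minimal PySpark sketch of reconciliation checks between a staging source and a warehouse target. The table names and the order_id key are hypothetical placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dw-qe-checks").getOrCreate()

# Hypothetical source/target tables -- substitute your own.
source = spark.table("staging.orders")
target = spark.table("dw.fact_orders")

# Reconciliation check: row counts must match after the load.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Completeness check: business keys must never be null in the warehouse.
null_keys = target.filter("order_id IS NULL").count()
assert null_keys == 0, f"{null_keys} rows with NULL order_id"

print("All reconciliation checks passed.")
```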

Posted 2 weeks ago

Apply

3.0 - 7.0 years

9 - 16 Lacs

Remote, India

On-site

Source: Foundit

Job Role: CDP Data Engineer. Why MResult? Founded in 2004, MResult is a global digital solutions partner trusted by leading Fortune 500 companies in industries such as pharma & healthcare, retail, and BFSI. MResult's expertise in data and analytics, data engineering, machine learning, AI, and automation helps companies streamline operations and unlock business value. As part of our team, you will collaborate with top minds in the industry to deliver cutting-edge solutions that solve real-world challenges. What we offer: At MResult, you can leave your mark on projects at the world's most recognized brands, access opportunities to grow and upskill, and do your best work with the flexibility of hybrid work models. Great work is rewarded, and leaders are nurtured from within. Our values of Agility, Collaboration, Client Focus, Innovation, and Integrity are woven into our culture, guiding every decision. Website: https://mresult.com/ LinkedIn: https://www.linkedin.com/company/mresult/ What this role requires: In the role of CDP Data Engineer, you will be a key contributor to MResult's mission of empowering our clients with data-driven insights and innovative digital solutions. Each day brings exciting challenges and growth opportunities. Here is what you will do: Design, develop, and implement solutions using a Customer Data Platform (CDP) to manage and analyze customer data. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Integrate the CDP with various data sources and ensure seamless data flow and accuracy. Develop and maintain data pipelines, ensuring data is collected, processed, and stored efficiently. Create and manage customer profiles, segments, and audiences within the CDP. Implement data governance and security best practices to protect customer data. Monitor and optimize the performance of the CDP infrastructure. Provide technical support and troubleshooting for CDP-related issues. Stay updated with the latest trends and advancements in CDP technology and best practices. Key skills to succeed in this role: Overall experience of 3-6 years. Experience in Customer Insights data and the customer insights journey. Experience in ADLS, ADF, and Synapse is a must. Experience in Dataverse, the Power Platform, and Snowflake. Manage, Master, and Maximize with MResult. MResult is an equal-opportunity employer committed to building an inclusive environment free of discrimination and harassment. Take the next step in your career with MResult, where your ideas help shape the future.

Posted 3 weeks ago

Apply

6 - 11 years

15 - 25 Lacs

Hyderabad

Work from Office

Source: Naukri

Job Requirement. Required experience, skills & competencies: Strong hands-on experience implementing a data lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview. Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc. Strong programming and debugging skills in either Python or Scala/Java. Experience building REST services is good to have. Experience supporting BI and data science teams in consuming data in a secure and governed manner. Good understanding and experience of CI/CD with Git, Jenkins, and Azure DevOps. Experience setting up cloud-computing infrastructure solutions. Hands-on experience with and exposure to NoSQL databases and data modelling in Hive. 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP). B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

6 - 11 years

15 - 25 Lacs

Kota

Work from Office

Source: Naukri

Required experience, skills & competencies: Strong hands-on experience implementing a data lake with technologies like Data Factory (ADF), ADLS, Databricks, Azure Synapse Analytics, Event Hub & Stream Analytics, Cosmos DB, and Purview. Experience using big data technologies like Hadoop (CDH or HDP), Spark, Airflow, NiFi, Kafka, Hive, HBase or MongoDB, Neo4j, Elasticsearch, Impala, Sqoop, etc. Strong programming and debugging skills in either Python or Scala/Java. Experience building REST services is good to have. Experience supporting BI and data science teams in consuming data in a secure and governed manner. Good understanding and experience of CI/CD with Git, Jenkins, and Azure DevOps. Experience setting up cloud-computing infrastructure solutions. Hands-on experience with and exposure to NoSQL databases and data modelling in Hive. 9+ years of technical experience, with at least 2 years on MS Azure and 2 years on Hadoop (CDH/HDP). B.Tech/B.E. from a reputed institute preferred.

Posted 2 months ago

Apply

7 - 12 years

0 - 3 Lacs

Hyderabad

Hybrid

Source: Naukri

Hands-on programming experience using ADLS, ADF, Azure Functions, and Azure SQL
• Must have experience implementing CI/CD for ADLS, ADF, and Azure Functions
• Azure storage options: Azure Data Lake Storage, Azure Blob Storage, Azure VM
• Azure Data Factory: ADF components and basic terminology, Azure Portal basics and ADF creation, basic pipelines, triggers, Copy activity, data transformation, components and mappings with Data Flows, linked services, datasets, parametrization, integration with different sources, security, debugging, and troubleshooting
• Azure Git configurations
• Must have experience with either Python or PyScript
• Ability to read architecture diagrams, data models, and data flows
• Database design and development: SQL, stored procedures, performance tuning, DB design and optimization
• Good to have: Snowflake experience
• Experience using Azure DevOps, GitHub, Visual Studio, or equivalent
• Emphasis on code quality, integration testing, performance testing, and tuning
• Hands-on development and system validation and testing of data pipelines, flows, and services
• Development of technical system and process documentation
• Analyzing data to effectively coordinate the installation of new data or the modification of existing data
• Managing data pipelines through the software development lifecycle
• Monitoring data process performance post deployment until transitioned to operations
• Communicating key project data to team members and building cohesion among teams
• Developing and executing project plans
7+ years of experience; notice period 15 days max; hybrid, Hyderabad
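For illustration, a minimal sketch of triggering and monitoring a parameterized ADF pipeline from Python using the azure-mgmt-datafactory SDK; the subscription, resource group, factory, pipeline name, and parameter are all hypothetical placeholders.

```python
# pip install azure-identity azure-mgmt-datafactory
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Hypothetical names -- substitute your own subscription/factory details.
SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"
RESOURCE_GROUP = "rg-data"
FACTORY = "adf-demo"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Kick off a parameterized pipeline run (parametrization, as listed above).
run = client.pipelines.create_run(
    RESOURCE_GROUP, FACTORY, "CopySalesPipeline",
    parameters={"load_date": "2024-01-31"},
)

# Poll the run status for debugging and troubleshooting.
status = client.pipeline_runs.get(RESOURCE_GROUP, FACTORY, run.run_id)
print(status.status)  # e.g. InProgress, Succeeded, Failed
```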

Posted 2 months ago

Apply

7 - 9 years

9 - 11 Lacs

Bengaluru

Work from Office

Source: Naukri

(US) Data Engineer - AM - BLR - J48803. Responsibilities: Design and implement software systems with various Microsoft technologies and ensure compliance with all architecture requirements. Define, design, develop, and support the architecture of tax products/apps used across KPMG member firms by collaborating efficiently with technical and non-technical business stakeholders. Create and improve software products using design patterns, principles, refactoring, and development best practices. Focus on building cloud-native applications by revisiting on-premises apps and coordinating the cloud transformation journey. Provide excellent client-facing service, working with stakeholders across different geographical locations. Ability to work on multiple client activities in a very fast-paced environment. Skills: Strong technical skills in the technologies below. Database: Azure SQL. Cloud: Azure, Storage, IaaS, PaaS, Security, Networking, Azure Data Factory, ADLS, Synapse, Fabric. Coding: Python. Strong SQL programming and DevOps skills. Strong analytical skills and written and verbal communication skills are required. Working knowledge of software processes such as source control management, change management, defect tracking, and continuous integration. Required candidate profile: Candidate experience should be 7 to 9 years. Candidate degree should be: BE-Comp/IT, BE-Other, BTech-Comp/IT, BTech-Other, MBA, MCA.

Posted 2 months ago

Apply

3 - 5 years

9 - 13 Lacs

Kota

Work from Office

Source: Naukri

Job Description:
- Proven experience in MS Azure data services: ADF, ADLS, Azure Databricks, Delta Lake
- Experience in Python & PySpark development
- Proven experience in SQL development, preferably SQL Server, though other DBMSs may be fine
- Good communication (verbal & written) & documentation skills
- Self-starter, motivated, and able to work independently while being a good team player
- Good to have: knowledge of analytic models to consume unstructured and social data
- Hands-on experience in Azure Databricks with PySpark
- Strong data warehouse knowledge, strong SQL writing skills, writing stored procedures, Azure Data Factory & Azure Data Lake
- Building a data warehouse solution in the cloud, in particular Azure

Posted 2 months ago

Apply

3 - 5 years

9 - 13 Lacs

Ranchi

Work from Office

Source: Naukri

Job Description:
- Proven experience in MS Azure data services: ADF, ADLS, Azure Databricks, Delta Lake
- Experience in Python & PySpark development
- Proven experience in SQL development, preferably SQL Server, though other DBMSs may be fine
- Good communication (verbal & written) & documentation skills
- Self-starter, motivated, and able to work independently while being a good team player
- Good to have: knowledge of analytic models to consume unstructured and social data
- Hands-on experience in Azure Databricks with PySpark
- Strong data warehouse knowledge, strong SQL writing skills, writing stored procedures, Azure Data Factory & Azure Data Lake
- Building a data warehouse solution in the cloud, in particular Azure

Posted 2 months ago

Apply

6 - 10 years

9 - 12 Lacs

Trivandrum

Hybrid

Source: Naukri

Role & responsibilities: The Senior Data Engineer will be responsible for designing, implementing, and maintaining data solutions on the Microsoft Azure data platform and SQL Server (SSIS, SSAS, UC4/Automic), collaborating with various stakeholders and ensuring the efficient processing, storage, and retrieval of large volumes of data. Technical expertise and responsibilities: Design, build, and maintain scalable and reliable data pipelines. Design and build solutions in Azure Data Factory and Databricks to extract, transform, and load data between different source and target systems. Design and build solutions in SSIS. Analyze and understand the existing data landscape and provide recommendations and innovative ideas for rearchitecting, optimizing, and streamlining to bring efficiency and scalability. Collaborate and communicate effectively with onshore counterparts to address technical gaps, requirement challenges, and other complex scenarios. Monitor and troubleshoot data systems to ensure high performance and reliability. Be highly analytical and detail-oriented, with extensive familiarity with database management principles. Optimize data processes for speed and efficiency. Ensure the data architecture supports business requirements and data governance policies. Define and execute the data engineering strategy in alignment with the company's goals. Integrate data from various sources, ensuring data quality and consistency. Stay updated with emerging technologies and industry trends. Understand the big-picture business process, utilizing deep knowledge of the banking industry, and translate it into data requirements. Enable and run data migrations across different databases and servers. Perform thorough testing and validation to support the accuracy of data transformations and data verification used in machine learning models. Analyze data and systems to define data requirements. Be well versed in data structures and algorithms. Define data mappings, working with the business, digital, and data teams. Maintain, test, and validate the performance of data pipelines. Assemble large, complex data sets that meet functional and non-functional business requirements. Analyze and identify gaps in data needs and work with business and IT to align on them. Troubleshoot and resolve technical issues as they arise. Optimize data flow and collection for cross-functional teams. Work closely with onshore data counterparts, product owners, and business stakeholders to understand data needs and strategies. Collaborate with IT and DevOps teams to ensure the data infrastructure aligns with the overall IT architecture. Implement best practices for data security and privacy. Drive continuous improvement initiatives within the data engineering function. Understand the impact of data conversions as they pertain to servicing operations. Manage higher volumes and more complex cases with accuracy and efficiency. Required skills: Design and develop warehouse solutions using Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services. Proficiency in SSIS, SQL, and query optimization. Experience working in an onshore-offshore model managing challenging scenarios. Expertise in working with large amounts of data (structured and unstructured), building data pipelines for ETL workloads, and generating insights utilizing data science and analytics. Expertise in Azure and AWS cloud services and DevOps/CI/CD frameworks. Ability to work with ambiguity and vague requirements and transform them into deliverables. A good combination of technical and interpersonal skills, with strong written and verbal communication; detail-oriented, with the ability to work independently. Drive automation efforts across the data analytics team utilizing Infrastructure as Code (IaC) with Terraform, configuration management, and Continuous Integration/Continuous Delivery (CI/CD) tools such as Jenkins. Help build and define architecture frameworks, best practices, and processes. Collaborate on data warehouse architecture and technical design discussions. Expertise in Azure Data Factory and familiarity with building pipelines for ETL projects. Expertise in SQL and experience working with relational databases. Expertise in Python and ETL projects. Experience in Databricks is an added advantage. Expertise in the data lifecycle: data ingestion, transformation, loading, validation, and performance tuning.

Posted 2 months ago

Apply

4 - 9 years

10 - 20 Lacs

Bengaluru

Hybrid

Source: Naukri

Required Skills & Qualifications:
• Good communication skills and learning aptitude
• Good understanding of the Azure environment
• Hands-on experience in Azure Data Lake, Azure Data Factory, Azure Synapse, and Azure Databricks
• Must have hands-on Apache Spark and Scala/Python programming and experience working with Delta tables; experience in Databricks is an added advantage
• Strong SQL skills: developing SQL stored procedures, functions, dynamic SQL queries, and joins
• Hands-on experience ingesting data from various data sources, data types, and file types
• Knowledge of Azure DevOps; understanding of build and release pipelines
Good to have:
• Snowflake (added advantage)

Posted 2 months ago

Apply

8 - 13 years

20 - 35 Lacs

Bengaluru

Hybrid

Source: Naukri

Job Title: Azure Data Engineer. Location: Bangalore (hybrid). Experience: 8+ years. Minimum qualifications: Degree in Analytics, Business Administration, Data Science, or equivalent experience. Creative problem-solving skills. Exemplary attention to detail; it is critical that you understand the data before finalizing a query. Strong written and verbal communication skills. Ensures queries are accurate (written correctly and providing the intended outcome). Skilled at creating documentation to run and troubleshoot the tools and reports you create. Collaboration and networking ability. 8+ years of experience using SQL and creating reports in Power BI. Tools: SQL, Snowflake, Power BI, Microsoft Excel, Azure Databricks, ADLS, ADF, DAX.

Posted 2 months ago

Apply

10 - 15 years

12 - 17 Lacs

Mumbai

Work from Office

Source: Naukri

Job Purpose: To lead the delivery team of the Bank's Data Office. Take end-to-end ownership of project delivery. Work as a Project Manager and/or Scrum Master to maintain the backlog in the assigned project/vertical and plan for delivery. Experience: Overall experience between 10 and 15 years; the applicant must have 7+ years of professional experience technically leading engineering teams. Technical skills: Understanding of the Azure Data Factory (ADF) platform, especially Synapse and Azure Data Lake Storage (ADLS). Understanding of SQL queries and performance tuning. Experience and knowledge of leading teams and managing deliveries for a large team. Must have experience with Scrum processes and be able to work as a Product Owner or Scrum Master. Good to have knowledge of one or more of: Power BI, SSIS, Databricks, Python. Responsibility: Act as the end-to-end delivery lead of the assigned delivery track of the Bank's Data Office. Lead efforts on various Data Office initiatives such as onboarding, engineering, and presenting data for consumption for analytics, MIS, and dashboard purposes. Architect, design, and review the development of engineering pipelines, notebooks, and scripts for various projects. Support troubleshooting of any ADF issues. Qualifications: Bachelor's in Computer Science or equivalent. Should be certified in Azure fundamentals and Azure engineer courses (AZ-900 or DP-200/DP-201). Behavioral competencies: Natural leadership skills. Excellent problem-solving and time management skills. Strong analytical ability. Excellent communication skills. Process-oriented with a flexible execution mindset. Identify, track, and escalate risks in a timely manner.

Posted 2 months ago

Apply

5 - 8 years

8 - 10 Lacs

Bengaluru

Work from Office

Source: Naukri

Design, implement, manage, and optimize data pipelines in Azure Data Factory per customer business requirements. Design and develop Spark SQL/PySpark code in Databricks. Integrate different Azure services and external systems to implement data analytics solutions. Build CI/CD pipelines with Azure DevOps; GitLab experience. Design and develop code in Azure Logic Apps, Azure Functions, Azure SQL, Synapse, etc. Implement best practices in ADF, Databricks, other Azure data engineering services, and target databases to maximize job performance, ensure code reusability, and minimize implementation and maintenance cost. Ingest structured/semi-structured/unstructured data into ADLS/Blob Storage in batch, near-real-time, and real-time modes from different source systems, including RDBMSs, ERPs, file systems, storage services, APIs, event producers, and NoSQL DBs.
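A minimal sketch of the batch ingestion pattern this posting describes: reading from an RDBMS over JDBC and landing the data in ADLS. The JDBC endpoint, credentials, table, and columns are hypothetical; in practice the password would come from a secret scope or Key Vault, and the cluster would need the SQL Server JDBC driver.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("batch-ingest").getOrCreate()

# Hypothetical JDBC source and ADLS sink -- replace with real endpoints/secrets.
jdbc_url = "jdbc:sqlserver://myserver.database.windows.net:1433;database=erp"
sink = "abfss://bronze@mystorageacct.dfs.core.windows.net/erp/orders/"

# Batch-ingest a source table; the WHERE clause keeps the read incremental.
orders = (
    spark.read.format("jdbc")
    .option("url", jdbc_url)
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "<from-key-vault>")
    .load()
    .where("modified_date >= '2024-01-01'")
)

# Land the batch in the bronze zone, partitioned for downstream jobs.
orders.write.mode("append").partitionBy("modified_date").parquet(sink)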

Posted 3 months ago

Apply

5 - 8 years

12 - 18 Lacs

Bengaluru, Hyderabad, Gurgaon

Work from Office

Source: Naukri

Role & responsibilities: Design and build data pipelines using Spark SQL and PySpark in Azure Databricks. Design and build ETL pipelines using ADF. Build and maintain a Lakehouse architecture in ADLS/Databricks. Perform data preparation tasks, including data cleaning, normalization, deduplication, and type conversion. Work with the DevOps team to deploy solutions in production environments. Control data processes and take corrective action when errors are identified; corrective action may include executing a workaround process and then identifying the cause and solution for the data errors. Participate as a full member of the global Analytics team, providing solutions for and insights into data-related items. Collaborate with your Data Science and Business Intelligence colleagues across the world to share key learnings, leverage ideas and solutions, and propagate best practices. Lead projects that include other team members and participate in projects led by other team members. Apply change management tools, including training, communication, and documentation, to manage upgrades, changes, and data migrations. Must-have skills: Azure Databricks, Azure Data Factory, PySpark, Spark SQL, ADLS. Good-to-have skills: change management tools, DevOps.
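To illustrate the data preparation tasks listed above (cleaning, normalization, deduplication, type conversion), here is a minimal PySpark sketch; the paths and column names are hypothetical, and the Delta write assumes a Databricks environment where Delta Lake is available.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("data-prep").getOrCreate()

# Hypothetical raw customer feed with the usual quality problems.
raw = spark.read.parquet("abfss://raw@acct.dfs.core.windows.net/customers/")

clean = (
    raw.dropDuplicates(["customer_id"])                      # deduplication
       .withColumn("email", F.lower(F.trim("email")))        # normalization
       .withColumn("signup_date", F.to_date("signup_date"))  # type conversion
       .na.drop(subset=["customer_id"])                      # cleaning
)

# Write to the silver layer of the lakehouse (Delta is built in on Databricks).
clean.write.format("delta").mode("overwrite") \
     .save("abfss://silver@acct.dfs.core.windows.net/customers/")
```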

Posted 3 months ago

Apply

4 - 7 years

12 - 15 Lacs

Pune

Remote

Source: Naukri

The job involves designing backend systems, stream processors, and data pipelines using SQL, Azure, and DevOps. Responsibilities include optimizing processes, delivering insights, and leading code reviews while collaborating on Azure solutions. Required candidate profile: CS engineer with 5 years' experience as a Data Engineer. Proficient in Azure big data tools (Databricks, Synapse, HDInsight, ADLS) and cloud services (VM, Databricks, SQL DB).

Posted 3 months ago

Apply

6 - 9 years

15 - 25 Lacs

Noida

Hybrid

Source: Naukri

Immediate joiners preferred. Shift: 12:00 PM to 9:00 PM. Mode: Hybrid. Position summary: You will be responsible for the Azure big data platform, in collaboration with a vendor partner and the big data development team, with hands-on working experience in Azure data analytics using ADLS, Delta Lakes, Cosmos DB, Python, and Spark/Scala programming to facilitate a strong, robust, and flexible platform. Job responsibilities: Assist the big data development and data science teams by writing basic Python scripts, Spark, PySpark, etc. Assist the big data development and data science teams on data ingestion projects in the Azure environment. Configure and manage IaaS and PaaS. Experience with Azure cloud components: Azure Data Factory, Azure Data Lake, Azure Data Catalog, Azure Logic Apps & Function Apps, Azure Synapse Analytics, Azure Databricks, Azure Event Hub, Azure Functions, Azure SQL DB. Knowledge, skills, and abilities: Education: Bachelor's degree in Computer Science, Engineering, or a related discipline. Experience: 5 to 8 years of solution design and development experience; experience building data ingestion/transformation pipelines on the Azure cloud; experience with big data tools like Spark, Delta Lakes, ADLS, Azure Synapse/Databricks; proficient understanding of distributed computing principles. Knowledge and skills (general and technical): Spark, Scala, Python; Azure Cosmos DB, MongoDB; Azure Data Factory; ADLS Gen2; Azure Data Lake; Azure Data Catalog; Azure Logic Apps & Function Apps; Azure Synapse Analytics; Azure Databricks; Azure Event Hub; Azure Functions; Azure SQL DB.

Posted 3 months ago

Apply

2 - 5 years

4 - 8 Lacs

Pune

Work from Office

Source: Naukri

Capgemini Invent: Capgemini Invent is the digital innovation, consulting, and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science, and creative design to help CxOs envision and build what's next for their businesses. Your role: Should have developed or worked on at least one Gen AI project. Has data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP. Experience with cloud storage, cloud databases, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3. Good knowledge of cloud compute services and load balancing. Good knowledge of cloud identity management, authentication, and authorization. Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions. Experience using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc. Your profile: Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling. Able to contribute to architectural choices using various cloud services and solution methodologies. Expertise in programming using Python. Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud. Must understand networking, security, design principles, and best practices in the cloud. What you will love about working here: We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI. About Capgemini: Capgemini is a global business and technology transformation partner, helping organizations accelerate their dual transition to a digital and sustainable world while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over-55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions, leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud, and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.

Posted 3 months ago

Apply

9 - 12 years

15 - 20 Lacs

Bengaluru

Work from Office

Source: Naukri

Role: Sr. Azure Data Engineer (Data Engineer, Data Platforms, Azure). Total experience: 9-12 years; relevant: 4+ years in ADB, ADF, and Python, and 5 years in SQL. Primary, must-have skills (*): data modeling; SQL database / SQL scripting; Azure Data Lake Storage (ADLS); Azure Data Factory (ADF); Azure Databricks; PySpark or Spark SQL; Azure Synapse. Secondary (**): CI/CD. All single-starred items are must-haves. Should be able to lead a team and be a strong, hands-on technical resource.

Posted 3 months ago

Apply

10 - 17 years

20 - 35 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Work from Office

Source: Naukri

Role & responsibilities: Minimum 10 years of experience in the data analytics field. Minimum 6 years of experience running operations and support in a cloud data lakehouse environment. Experience with Azure Databricks. Experience in building and optimizing data pipelines, architectures, and data sets. Excellent experience in Scala or Python. Ability to troubleshoot and optimize complex queries on the Spark platform. Knowledgeable in structured and unstructured data design/modeling, data access, and data storage techniques. Experience with DevOps tools and environments. Technical/professional skills: Azure Databricks; Python/Scala/Java; Hive/HBase/Impala/Parquet; Sqoop, Kafka, Flume; SQL and RDBMS; Airflow; Jenkins/Bamboo; GitHub/Bitbucket; Nexus.

Posted 3 months ago

Apply

6 - 10 years

30 - 35 Lacs

Bengaluru

Work from Office

Source: Naukri

We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.
Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs for performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.
Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
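As a small illustration of the partitioning and caching strategies mentioned above, here is a hedged PySpark sketch; the dataset paths, partition count, and column names are hypothetical and would need tuning for real data volumes.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

events = spark.read.parquet("/data/events/")  # hypothetical large dataset

# Repartition on the aggregation key so the shuffle is balanced across tasks.
events = events.repartition(200, "customer_id")

# Cache a dataset that several downstream aggregations will reuse.
events.cache()

daily = events.groupBy("customer_id", F.to_date("event_ts").alias("d")).count()
totals = events.groupBy("customer_id").agg(F.sum("value").alias("total"))

# Write partitioned output so consumers can prune by date at read time.
daily.write.partitionBy("d").mode("overwrite").parquet("/data/daily/")
totals.write.mode("overwrite").parquet("/data/totals/")

events.unpersist()  # release the cached blocks once both writes finish
```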

Posted 3 months ago

Apply

8 - 10 years

15 - 25 Lacs

Bengaluru

Remote

Source: Naukri

Looking for freelance Data Architects. Pre-sales experience is a must. Hands-on experience with a distributed computing framework like Databricks and the Spark ecosystem (Spark Core, PySpark, Spark Streaming, Spark SQL).

Posted 3 months ago

Apply

6 - 9 years

12 - 22 Lacs

Bengaluru, Hyderabad

Hybrid

Source: Naukri

Strong Databricks experience. Experience creating and using machine learning algorithms and statistics: regression, clustering, decision trees, neural networks, etc. Strong understanding of supervised and unsupervised ML techniques. Experience with statistical computing languages: R, Python, etc. Basic knowledge of fetching data from databases (e.g., SQL) or any cloud resource such as AWS or ADLS. Basic knowledge of visualization tools like Power BI, Tableau, etc. Doing ad-hoc analysis and presenting results in a clear manner. Experience working with and creating data architectures. Strong written and verbal communication skills; comfortable communicating with senior levels of both business and technology leadership. Should be proactive in driving large analytic projects and programs to completion. Explore big data machine learning models. Worked on client projects.
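For illustration, a minimal scikit-learn sketch of the unsupervised clustering this role mentions, run on synthetic data standing in for features pulled from SQL or ADLS; all names and numbers here are hypothetical.

```python
# pip install scikit-learn numpy
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)

# Synthetic stand-in for customer features fetched from a database or ADLS.
spend = rng.gamma(2.0, 50.0, size=500)
visits = rng.poisson(8, size=500).astype(float)
X = np.column_stack([spend, visits])

# Standardize, then segment customers with k-means (unsupervised ML).
X_scaled = StandardScaler().fit_transform(X)
model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X_scaled)

for label in range(3):
    seg = X[model.labels_ == label]
    print(f"Segment {label}: n={len(seg)}, avg spend={seg[:, 0].mean():.1f}")
```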

Posted 3 months ago

Apply

7 - 10 years

22 - 25 Lacs

Pune

Work from Office

Source: Naukri

Design, develop, and deploy data pipelines using Databricks, including data ingestion, transformation, and loading (ETL) processes. Develop and maintain high-quality, scalable, and maintainable Databricks notebooks using Python. Work with Delta Lake and other advanced features. Leverage Unity Catalog for data governance, access control, and data discovery. Develop and optimize data pipelines for performance and cost-effectiveness. Integrate with various data sources, including but not limited to databases and cloud storage (Azure Blob Storage, ADLS, Synapse), and APIs. Experience working with Parquet files for data storage and processing. Experience with data integration from Azure Data Factory, Azure Data Lake, and other relevant Azure services. Perform data quality checks and validation to ensure data accuracy and integrity. Troubleshoot and resolve data pipeline issues effectively. Collaborate with data analysts, business analysts, and business stakeholders to understand their data needs and translate them into technical solutions. Participate in code reviews and contribute to best practices within the team.
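A minimal sketch of the Databricks pipeline pattern this posting describes: ingest Parquet, transform, run a quality gate, and load into a Delta table. The ADLS path and the Unity Catalog table name (main.finance.invoices) are hypothetical placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("delta-pipeline").getOrCreate()

# Hypothetical locations -- on Databricks these would be governed by Unity Catalog.
SRC = "abfss://landing@acct.dfs.core.windows.net/invoices/"
TGT = "main.finance.invoices"  # catalog.schema.table, per Unity Catalog naming

# Ingest Parquet files, apply transformations, and prepare the load.
df = (
    spark.read.parquet(SRC)
         .withColumn("ingested_at", F.current_timestamp())
         .filter(F.col("amount") > 0)  # basic data-quality rule
)

# A simple validation gate before the write.
bad = df.filter(F.col("invoice_id").isNull()).count()
if bad:
    raise ValueError(f"{bad} rows missing invoice_id; aborting load")

df.write.format("delta").mode("append").saveAsTable(TGT)
```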

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
