
39 Datalake Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

7.0 - 12.0 years

12 - 16 Lacs

Pune

Work from Office

Naukri logo

Job Summary: We are seeking an experienced and dynamic Analytical Team Lead with expertise in Qlik SaaS, Power BI, Data Lake solutions, and AI/ML technologies. The successful candidate will lead a team of data analysts and engineers, driving business intelligence (BI) initiatives, building scalable analytics platforms, and leveraging advanced AI/ML models to uncover actionable insights. This role requires a strong balance of technical expertise, strategic thinking, and leadership skills to align data strategies with organizational goals.

Key Responsibilities:

1. Leadership and Team Management:
- Lead, mentor, and manage a team of data analysts, BI developers, and data engineers.
- Promote collaboration and knowledge sharing within the team to foster a high-performance culture.
- Ensure alignment of analytics objectives with business priorities.

2. BI and Data Visualization:
- Design, implement, and manage scalable dashboards using Qlik SaaS and Power BI.
- Apply data visualization best practices to deliver clear, actionable insights.
- Collaborate with stakeholders to define KPIs and build intuitive reporting solutions.

3. Data Architecture and Integration:
- Oversee the design and optimization of Data Lake solutions for efficient data storage and retrieval.
- Ensure seamless integration of data pipelines across multiple systems and platforms.
- Partner with data engineering teams to maintain data quality, governance, and security.

4. AI/ML Implementation:
- Drive the adoption of AI/ML technologies to build predictive models, automate processes, and enhance decision-making (see the sketch after this posting).
- Collaborate with data scientists to develop and deploy machine learning models within the analytics ecosystem.

5. Agile Project Management:
- Lead analytics projects using agile methodologies, ensuring timely delivery and alignment with business goals.
- Act as a bridge between technical teams and business stakeholders, facilitating clear communication and expectation management.

6. Stakeholder Engagement:
- Work closely with business leaders to identify analytics needs and opportunities.
- Provide thought leadership on data-driven strategies and innovation.

Key Skills and Qualifications:

Technical Expertise:
1. Proficiency in Qlik SaaS and Power BI for BI development and data visualization.
2. Strong understanding of Data Lake architectures and big data technologies (Azure Data Lake, Google BigQuery).
3. Hands-on experience with AI/ML frameworks and libraries.
4. Knowledge of programming languages such as Python, R, or SQL.

Leadership and Communication:
1. Proven experience in leading and mentoring teams.
2. Strong stakeholder management and communication skills to translate complex data concepts into business value.

Project Management:
1. Experience with agile methodologies in analytics and software development projects.

Education:
- Bachelor's degree (12 + 4).

Preferred Qualifications:
1. Certification in Qlik, Power BI, or cloud platforms.
2. Experience in deploying analytics solutions in hybrid or cloud environments.
3. Familiarity with DevOps, CI/CD pipelines, and MLOps processes.
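For illustration only (not part of the posting): a minimal sketch of the kind of predictive-modeling workflow described above, assuming scikit-learn and pandas; the CSV file and all column names are hypothetical.

```python
# Minimal sketch: training a churn-style predictive model of the kind the
# posting describes. The CSV path and column names are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("customer_metrics.csv")  # hypothetical extract from the data lake
X = df[["monthly_spend", "tenure_months", "support_tickets"]]
y = df["churned"]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on held-out data before handing scores to the BI layer.
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```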

Posted 1 week ago

Apply

10.0 - 15.0 years

30 - 40 Lacs

Noida, Gurugram

Work from Office

Naukri logo

We're hiring for a Snowflake Data Architect with a leading IT services firm for Noida & Gurgaon.

Job Summary: We are seeking a Snowflake Data Architect to design, implement, and optimize scalable data solutions using Databricks and the Azure ecosystem. The ideal candidate will have deep expertise in big data architecture, data engineering, and cloud technologies, enabling them to create robust, high-performance data pipelines and analytics solutions.

Key Responsibilities:
- Design and develop scalable, secure, and high-performance data architectures using Snowflake, Databricks, Delta Lake, and Apache Spark.
- Architect ETL/ELT data pipelines to process structured and unstructured data efficiently (see the sketch after this posting).
- Implement data governance, security, and compliance frameworks across cloud-based data platforms.
- Optimize Spark jobs for performance, cost, and reliability.
- Collaborate with data engineers, analysts, and business teams to understand requirements and design appropriate solutions.
- Develop data lakehouse architectures leveraging Databricks and ADLS.
- Implement machine learning and AI workflows using Databricks ML and integration with ML frameworks.
- Define and enforce best practices for data modeling, metadata management, and data quality.
- Monitor and troubleshoot Databricks clusters, job failures, and performance bottlenecks.
- Stay updated with the latest Databricks features, Apache Spark advancements, and cloud innovations.

Required Qualifications:
- 10+ years of experience in data architecture, data engineering, or big data platforms.
- Hands-on experience with Snowflake is mandatory; experience with Databricks (including Delta Lake, Unity Catalog, DBSQL) is great to have as an addition.
- Will work in an individual-contributor role, with expertise in Apache Spark for large-scale data processing.
- Proficiency in Python, Scala, or SQL for data transformations.
- Experience with Azure and its data services (e.g., Azure Data Factory, Azure Synapse, Azure SQL Server).
- Knowledge of data lakehouse architectures, data warehousing, and ETL processes.
- Strong understanding of data security, IAM, and compliance best practices.
- Experience with CI/CD pipelines and Infrastructure as Code (Terraform, ARM templates, CloudFormation).
- Familiarity with MLflow, Feature Store, and MLOps concepts is a plus.
- Strong interpersonal and communication skills.

If interested, please share your profile at harjeet@beanhr.com
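For illustration only (not part of the posting): a minimal ELT sketch against Snowflake using the snowflake-connector-python package; the account, stage, and table names are hypothetical placeholders. The pattern loads raw files already staged in cloud storage, then transforms them inside the warehouse, which is the ELT approach the posting names.

```python
# Minimal ELT sketch with snowflake-connector-python.
# Account, warehouse, stage, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",        # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="RAW",
)
cur = conn.cursor()
try:
    # Load files already landed in the external stage, then transform in-warehouse.
    cur.execute("COPY INTO RAW.ORDERS FROM @landing_stage/orders/ "
                "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)")
    cur.execute("""
        CREATE OR REPLACE TABLE CURATED.DAILY_REVENUE AS
        SELECT order_date, SUM(amount) AS revenue
        FROM RAW.ORDERS
        GROUP BY order_date
    """)
finally:
    cur.close()
    conn.close()
```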

Posted 1 week ago

Apply

8.0 - 13.0 years

5 - 8 Lacs

Hyderabad

Hybrid

Naukri logo

Immediate openings: Snowflake (Pan India, Contract). Experience: 8+ Years. Skill: Snowflake. Notice Period: Immediate. Employment Type: Contract. Work Mode: WFO/Hybrid.

Job Description: Snowflake Data Warehouse Lead (India - Lead, 8 to 10 yrs exp):
- Lead the technical design and architecture of Snowflake platforms, ensuring alignment with customer requirements, industry best practices, and project objectives.
- Conduct code reviews, ensure adherence to coding standards and best practices, and drive continuous improvement in code quality and performance.
- Provide technical support, troubleshoot problems, and provide timely resolution of Incidents, Service Requests, and Minor Enhancements for Snowflake platforms and related services.
- Data lake and storage management: adding, updating, or deleting datasets in Snowflake; monitoring storage usage and handling capacity planning (see the sketch after this posting).
- Strong communication and presentation skills.
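For illustration only (not part of the posting): a minimal sketch of the storage-monitoring duty described above, using snowflake-connector-python and Snowflake's built-in ACCOUNT_USAGE views; connection parameters are hypothetical placeholders.

```python
# Minimal sketch: report the largest tables by active storage for capacity
# planning. Connection parameters are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account", user="ops_user", password="***", warehouse="OPS_WH"
)
cur = conn.cursor()
# TABLE_STORAGE_METRICS reports active, Time Travel, and Fail-safe bytes per table.
cur.execute("""
    SELECT table_catalog, table_schema, table_name,
           active_bytes / POWER(1024, 3)      AS active_gb,
           time_travel_bytes / POWER(1024, 3) AS time_travel_gb
    FROM SNOWFLAKE.ACCOUNT_USAGE.TABLE_STORAGE_METRICS
    ORDER BY active_bytes DESC
    LIMIT 20
""")
for row in cur.fetchall():
    print(row)
conn.close()
```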

Posted 1 week ago

Apply

5.0 - 8.0 years

5 - 8 Lacs

Bengaluru / Bangalore, Karnataka, India

On-site

Foundit logo

Senior data management/integration engineer:
- End-to-end data management and engineering experience, 10+ years.
- Very strong Python programming and PySpark development experience in a large-scale data ecosystem.
- Solid foundation in SQL data management.
- Familiarity with data lake / delta lake architectures.
- Requirements-analysis familiarity and experience interacting and engaging with business teams.
- Retail industry experience (Fortune 1000) around supply chain and operating with large data sets is a must-have.

Posted 1 week ago

Apply

8.0 - 14.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Foundit logo

Introduction: A Data and AI Technology Sales Engineer role (what we internally call a Brand Technical Specialist) within IBM's zStack brand means accelerating enterprises' success by improving their ability to understand their data. It means providing solutions that enable people across organizations, in multiple roles, to turn data into actionable insights without having to wait for IT. And it means solutioning and selling multi-award-winning software deployed on the IBM z/LinuxONE platform, and world-class design practices that enable business analysts to ask new questions. The answers to which are literally shaping the future and changing the world. Excellent onboarding and an industry-leading learning culture will set you up for positive impact and success, whilst ongoing development will advance your career through an upward trajectory. Our sales environment is collaborative and experiential. As part of a team, you'll be surrounded by bright minds and keen co-creators - always willing to help and be helped - as you apply passion to work that will compel our clients to invest in IBM's products and services.

Your role and responsibilities: Applying excellent communication and empathy, you'll act as a trusted strategic advisor to some of the world's most transformational enterprises and culturally influential brands, as they rely on your expertise and our technology to solve some of their hardest problems. With your focus on the front end of the solution lifecycle, you'll be a master at listening to stakeholders, grasping their business challenges and requirements, and forming more detailed definitions of the new architectural structures that will make up their best-fit, value-adding solutions. We're committed to success. In this role, your achievements will drive your career, team, and clients to thrive.

A typical day may involve:
- Understanding client needs and aligning them with IBM Z solutions.
- Creating effective end-to-end architecture using IBM Z.
- Ensuring architectural viability and conducting assessments.
- Identifying and resolving project risks and conflicts.

Your primary responsibilities will include:
- Client Strategy Design: Creating client strategies for Data & AI infrastructure around the IBM z and LinuxONE platform.
- Solution Definition: Defining IBM Data & AI solutions comprising functionalities such as Data Integration (ETL), Data Store (DB2, Oracle, MySQL), and Data Science (Watson Studio, Watson ML), leveraging the strengths of the IBM z and LinuxONE platform.
- Providing proofs of concept and simplifying complex topics to meet clients' business requirements in the area of data platform modernization and analytics.
- Credibility Building: Establishing credibility and trust to facilitate architecture and solution benefits, driving revenue and technical business objectives.

Required education: Bachelor's Degree

Required technical and professional expertise:
- Minimum 8-14 years of experience in Data and AI technologies, including infrastructure for Analytics & Advanced Analytics solutions such as Data Lake, Data Warehouse, Business Analytics, AI, GenAI & Data Fabric.
- Experiential selling, including co-creation and hands-on technical sales methods such as demos, custom demos, proofs of concept, Minimum Viable Products (MVPs), or other technical proofs.
- Deep brand (Data & AI) expertise to assist partners in delivering PoX (Custom Demo, PoC, MVP, etc.) to progress opportunities.
- Identifying partners with the right skills, expertise, and experience.
- Exceptional interpersonal and communication skills, and an ability to collaborate effectively with ecosystem partners, clients, and sales professionals.
- Understanding of Governance, Risk and Controls is a bonus.
- Experience with the AI landscape and technologies at work across Banking/Finance.
- Know-how, technical capability, and working experience with similar Data & AI products such as Cloudera, Teradata, Oracle, Informatica, SAS, etc.

Preferred technical and professional experience:
- Knowledge of IBM Z and how it fits into Digital Transformation (training on IBM's Z product will be provided).

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 9 Lacs

Bengaluru

Work from Office

Naukri logo

Employment Type: Full Time, Permanent. Working mode: Regular.

Job Description: Utilizes software engineering principles to deploy and maintain fully automated data transformation pipelines that combine a large variety of storage and computation technologies to handle a distribution of data types and volumes in support of data architecture design.

Key Responsibilities: A Data Engineer designs data products and data pipelines that are resilient to change, modular, flexible, scalable, reusable, and cost-effective.
- Design, develop, and maintain data pipelines and ETL processes using Microsoft Azure services (e.g., Azure Data Factory, Azure Synapse, Azure Databricks, Azure Fabric).
- Utilize Azure data storage accounts for organizing and maintaining data pipeline outputs (e.g., Azure Data Lake Storage Gen2 & Azure Blob Storage); see the sketch after this posting.
- Collaborate with data scientists, data analysts, data architects, and other stakeholders to understand data requirements and deliver high-quality data solutions.
- Optimize data pipelines in the Azure environment for performance, scalability, and reliability.
- Ensure data quality and integrity through data validation techniques and frameworks.
- Develop and maintain documentation for data processes, configurations, and best practices.
- Monitor and troubleshoot data pipeline issues to ensure timely resolution.
- Stay current with industry trends and emerging technologies to ensure our data solutions remain cutting-edge.
- Manage the CI/CD process for deploying and maintaining data solutions.
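For illustration only (not part of the posting): a minimal sketch of writing a pipeline output to Azure Data Lake Storage Gen2 with the azure-storage-file-datalake package; the account, container, and path names are hypothetical placeholders.

```python
# Minimal sketch: land a small pipeline output file in ADLS Gen2.
# Account, container, and file path are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",  # hypothetical account
    credential=DefaultAzureCredential(),
)
fs = service.get_file_system_client("curated")              # container / file system
file_client = fs.get_file_client("sales/2024/daily_summary.csv")

data = b"order_date,revenue\n2024-01-01,10500\n"
file_client.create_file()
file_client.append_data(data, offset=0, length=len(data))
file_client.flush_data(len(data))  # commit the appended bytes
```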

Posted 2 weeks ago

Apply

5.0 - 8.0 years

5 - 7 Lacs

Bengaluru

Work from Office

Naukri logo

Role & responsibilities:
- Coordinate with team members and Paris counterparts, and work independently.
- Responsible and accountable for delivering Functional Specifications, wireframe docs, RCA, source-to-target mapping, Test Strategy docs, and any other BA artifacts demanded by project delivery.
- Understand the business requirements and discuss them with business users.
- Write mapping documents from user stories.
- Follow project documentation standards.
- Very good hands-on SQL knowledge.
- Analyse production data and derive KPIs for business users.
- Well versed with Jira for project work.

Preferred candidate profile:
- 5+ years of experience in Java / data-based projects (data warehouse or data lake), preferably in the banking domain.
- Able to perform gap / root-cause analysis.
- Hands-on business analysis skills, with experience writing functional specs.
- Able to convert a business use case into a source-to-target mapping sheet and perform functional validation.
- Able to work independently.
- Able to debug production failures and provide root-cause solutions.
- Knowledge of SQL / RDBMS concepts.
- Good analytical/troubleshooting skills to cater to business requirements.
- Understanding of the Agile process is an added advantage.
- Effective team player with the ability to work autonomously and in a cross-cultural team environment.
- Effective verbal and written communication to work closely with all stakeholders.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad

Work from Office

Naukri logo

- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premise data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions using Azure Synapse or Azure SQL Data Warehouse; Spark on Azure is available in HDInsight and Databricks.
- Good customer communication.
- Good analytical skills.

Posted 2 weeks ago

Apply

7.0 - 10.0 years

2 - 6 Lacs

Pune

Work from Office

Naukri logo

Responsibilities:
- Design, develop, and deploy data pipelines using Databricks, including data ingestion, transformation, and loading (ETL) processes.
- Develop and maintain high-quality, scalable, and maintainable Databricks notebooks using Python.
- Work with Delta Lake and other advanced features (see the sketch after this posting).
- Leverage Unity Catalog for data governance, access control, and data discovery.
- Develop and optimize data pipelines for performance and cost-effectiveness.
- Integrate with various data sources, including but not limited to databases, cloud storage (Azure Blob Storage, ADLS, Synapse), and APIs.
- Experience working with Parquet files for data storage and processing.
- Experience with data integration from Azure Data Factory, Azure Data Lake, and other relevant Azure services.
- Perform data quality checks and validation to ensure data accuracy and integrity.
- Troubleshoot and resolve data pipeline issues effectively.
- Collaborate with data analysts, business analysts, and business stakeholders to understand their data needs and translate them into technical solutions.
- Participate in code reviews and contribute to best practices within the team.
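For illustration only (not part of the posting): a minimal sketch of a Databricks-style Delta Lake upsert in PySpark; the storage paths and column names are hypothetical. On Databricks the SparkSession and Delta support are preconfigured; locally you would need the delta-spark package.

```python
# Minimal sketch: merge (upsert) new order records into a Delta table.
# Paths and the join key are hypothetical.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Hypothetical incremental extract landed as Parquet in the raw zone.
updates = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/orders/")

target = DeltaTable.forPath(spark, "abfss://curated@mydatalake.dfs.core.windows.net/orders_delta")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert genuinely new rows
    .execute()
)
```

A MERGE keeps the curated table idempotent: re-running the same batch does not duplicate rows, which is why it is the usual pattern for incremental Delta pipelines.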

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Naukri logo

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description: Primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc. (see the sketch after this posting). Other ideal qualifications include:
- Very proficient in large-scale data operations using Databricks, and overall very comfortable using Python.
- Familiarity with AWS compute, storage, and IAM concepts.
- Experience working with an S3 data lake as the storage tier.
- Any ETL background (Talend, AWS Glue, etc.) is a plus but not required.
- Cloud warehouse experience (Snowflake, etc.) is a huge plus.
- Carefully evaluates alternative risks and solutions before taking action; optimizes the use of all available resources.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Skills:
- Hands-on experience with Databricks Spark SQL and the AWS cloud platform, especially S3, EMR, Databricks, Cloudera, etc.
- Experience with shell scripting.
- Exceptionally strong analytical and problem-solving skills.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Excellent collaboration and cross-functional leadership skills.
- Excellent communication skills, both written and verbal.
- Ability to manage multiple initiatives and priorities in a fast-paced, collaborative environment.
- Ability to leverage data assets to respond to complex questions that require timely answers.
- Working knowledge of migrating relational and dimensional databases to the AWS cloud platform.

Interested candidates can share their resume at sankarspstaffings@gmail.com with the below inline details.
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:
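For illustration only (not part of the posting): a minimal sketch of the Spark SQL over an S3 data lake pattern named above; the bucket and table names are hypothetical. On EMR or Databricks, S3 credentials and the SparkSession come from the cluster configuration.

```python
# Minimal sketch: register raw S3 data as a view, aggregate with Spark SQL,
# and write the result back to the curated zone. Bucket paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("s3-datalake-example").getOrCreate()

orders = spark.read.parquet("s3://my-datalake/raw/orders/")  # hypothetical bucket
orders.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date, COUNT(*) AS order_count, SUM(amount) AS revenue
    FROM orders
    GROUP BY order_date
""")
daily.write.mode("overwrite").parquet("s3://my-datalake/curated/daily_orders/")
```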

Posted 3 weeks ago

Apply

6.0 - 11.0 years

15 - 30 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Naukri logo

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 6 - 15 Yrs
Location: Pan India

Job Description: Candidate must be proficient in Databricks.
- Understands where to obtain the information needed to make appropriate decisions.
- Demonstrates the ability to break a problem down into manageable pieces and implement effective, timely solutions.
- Identifies the problem versus the symptoms; manages problems that require the involvement of others to solve; reaches sound decisions quickly.
- Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Roles & Responsibilities:
- Provides innovative and cost-effective solutions using Databricks.
- Optimizes the use of all available resources.
- Learns and adapts quickly to new technologies as per business need.
- Develops an Operations Excellence team, building tools and capabilities that development teams leverage to maintain high levels of performance, scalability, security, and availability.

Skills:
- 7-10 yrs of experience in Databricks Delta Lake.
- Hands-on experience with Azure.
- Experience with Python scripting.
- Relevant experience with ETL methods and with retrieving data from dimensional data models and data warehouses.
- Strong experience with relational databases and data access methods, especially SQL.
- Knowledge of Azure architecture and design.

Interested candidates can share their resume at sankarspstaffings@gmail.com with the below inline details.
Overall Exp:
Relevant Exp:
Current CTC:
Expected CTC:
Notice Period:

Posted 3 weeks ago

Apply

5.0 - 10.0 years

8 - 14 Lacs

Hyderabad

Work from Office

Naukri logo

Job Title: Azure Synapse Developer
Position Type: Permanent
Experience: 5+ Years
Location: Hyderabad (Work From Office / Hybrid)
Shift Timings: 2 PM to 11 PM
Mode of Interview: 3 rounds (Virtual/In-person)
Notice Period: Immediate to 15 days

Job Description: We are looking for an experienced Azure Synapse Developer to join our growing team. The ideal candidate should have a strong background in Azure Synapse Analytics, SSRS, and Azure Data Factory (ADF), with a solid understanding of data modeling, data movement, and integration. As an Azure Synapse Developer, you will work closely with cross-functional teams to design, implement, and manage data pipelines, ensuring the smooth flow of data across platforms. The candidate must have a deep understanding of SQL and ETL processes and, ideally, some exposure to Power BI for reporting and dashboard creation.

Key Responsibilities:
- Develop and maintain Azure Synapse Analytics solutions, ensuring scalability, security, and performance.
- Design and implement data models for efficient storage and retrieval of data in Azure Synapse.
- Utilize Azure Data Factory (ADF) for ETL processes, orchestrating data movement, and integrating data from various sources.
- Leverage SSIS/SSRS/SSAS to build, deploy, and maintain data integration and reporting solutions.
- Write and optimize SQL queries for data manipulation, extraction, and reporting (see the sketch after this posting).
- Collaborate with business analysts and other stakeholders to understand reporting needs and create actionable insights.
- Perform performance tuning on SQL queries, pipelines, and Synapse workloads to ensure high performance.
- Provide support for troubleshooting and resolving data integration and performance issues.
- Assist in setting up automated data processes and create reusable templates for data integration.
- Stay updated on Azure Synapse features and tools, recommending improvements to the data platform as appropriate.

Required Skills & Qualifications:
- 5+ years of experience as a Data Engineer or Azure Synapse Developer.
- Strong proficiency in Azure Synapse Analytics (Data Warehouse, Data Lake, and Analytics).
- Solid understanding of and experience in data modeling for large-scale data architectures.
- Expertise in SQL for writing complex queries, optimizing performance, and managing large datasets.
- Hands-on experience with Azure Data Factory (ADF) for data integration, ETL processes, and pipeline creation.
- SSRS (SQL Server Reporting Services) and SSIS (SQL Server Integration Services) expertise.
- Power BI knowledge (basic to intermediate) for reporting and data visualization.
- Familiarity with SSAS (SQL Server Analysis Services) and OLAP concepts is a plus.
- Experience in troubleshooting and optimizing complex data processing tasks.
- Strong communication and collaboration skills to work effectively in a team-oriented environment.
- Ability to quickly adapt to new tools and technologies in the Azure ecosystem.
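For illustration only (not part of the posting): a minimal sketch of querying a dedicated Synapse SQL pool from Python via pyodbc; the server, database, and table names are hypothetical placeholders.

```python
# Minimal sketch: run a reporting aggregate against a Synapse SQL pool.
# Endpoint, database, credentials, and table are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace.sql.azuresynapse.net;"  # hypothetical endpoint
    "DATABASE=sales_dw;"
    "UID=synapse_user;PWD=***"
)
cursor = conn.cursor()
cursor.execute("""
    SELECT region, SUM(amount) AS revenue
    FROM dbo.fact_orders
    GROUP BY region
    ORDER BY revenue DESC
""")
for region, revenue in cursor.fetchall():
    print(region, revenue)
conn.close()
```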

Posted 3 weeks ago

Apply

10 - 12 years

13 - 20 Lacs

Kolkata

Work from Office

Naukri logo

Key Responsibilities:
- Understand the factories, manufacturing processes, data availability, and avenues for improvement.
- Brainstorm, together with engineering, manufacturing, and quality, the problems that can be solved using the acquired data in the data lake platform.
- Define what data is required to create a solution, and work with connectivity engineers and users to collect the data.
- Create and maintain optimal data pipeline architecture (see the streaming sketch after this posting).
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery for greater scalability.
- Work on data preparation and data deep dives; help engineering, process, and quality teams understand process/machine behavior more closely using available data.
- Deploy and monitor the solution.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Work together with Data Architects and data modeling teams.

Skills/Competencies:
- Good knowledge of the business vertical, with prior experience solving different use cases in manufacturing or a similar industry.
- Ability to bring cross-industry learning to benefit use cases aimed at improving the manufacturing process.

Problem Scoping/Definition Skills:
- Experience in problem scoping, solving, and quantification.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
- Ability to foresee and identify all the right data required to solve the problem.

Data Wrangling Skills:
- Strong skills in data mining and data wrangling techniques for creating the required analytical dataset.
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
- Adaptive mindset to improvise on data challenges and employ techniques that drive desired outcomes.

Programming Skills:
- Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc.
- Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, and Spark SQL.
- Experience with object-oriented languages: Scala, Java, C++, etc.

Visualization Skills:
- Know-how of visualization tools such as Power BI or Tableau.
- Good storytelling skills to present data in a simple and meaningful manner.

Data Engineering Skills:
- Strong skills in data analysis techniques to generate findings and insights by means of exploratory data analysis.
- Good understanding of how to transform and connect data of various types and forms.
- Great numerical and analytical skills.
- Identify opportunities for data acquisition.
- Explore ways to enhance data quality and reliability.
- Build algorithms and prototypes.
- Reformulate existing frameworks to optimize their functioning.
- Good understanding of optimization techniques to make the system performant to requirements.
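For illustration only (not part of the posting): a minimal sketch of a streaming ingestion pipeline of the kind described above - machine telemetry read from Kafka and landed in a Delta table. The broker, topic, schema, and paths are hypothetical; it requires the spark-sql-kafka and delta-spark packages on the cluster.

```python
# Minimal sketch: stream machine telemetry from Kafka into a Delta table.
# Broker, topic, schema fields, and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import (DoubleType, StringType, StructField,
                               StructType, TimestampType)

spark = SparkSession.builder.appName("telemetry-ingest").getOrCreate()

schema = StructType([
    StructField("machine_id", StringType()),
    StructField("ts", TimestampType()),
    StructField("temperature", DoubleType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "machine-telemetry")          # hypothetical topic
    .load()
)

# Kafka delivers bytes; parse the JSON payload into typed columns.
telemetry = raw.select(from_json(col("value").cast("string"), schema).alias("m")).select("m.*")

query = (
    telemetry.writeStream.format("delta")
    .option("checkpointLocation", "/datalake/_checkpoints/telemetry")
    .start("/datalake/raw/telemetry")
)
```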

Posted 1 month ago

Apply

10 - 12 years

13 - 20 Lacs

Chennai

Work from Office

Naukri logo

Key Responsibilities:
- Understand the factories, manufacturing processes, data availability, and avenues for improvement.
- Brainstorm, together with engineering, manufacturing, and quality, the problems that can be solved using the acquired data in the data lake platform.
- Define what data is required to create a solution, and work with connectivity engineers and users to collect the data.
- Create and maintain optimal data pipeline architecture.
- Assemble large, complex data sets that meet functional and non-functional business requirements.
- Identify, design, and implement internal process improvements: automating manual processes and optimizing data delivery for greater scalability.
- Work on data preparation and data deep dives; help engineering, process, and quality teams understand process/machine behavior more closely using available data.
- Deploy and monitor the solution.
- Work with data and analytics experts to strive for greater functionality in our data systems.
- Work together with Data Architects and data modeling teams.

Skills/Competencies:
- Good knowledge of the business vertical, with prior experience solving different use cases in manufacturing or a similar industry.
- Ability to bring cross-industry learning to benefit use cases aimed at improving the manufacturing process.

Problem Scoping/Definition Skills:
- Experience in problem scoping, solving, and quantification.
- Strong analytic skills related to working with unstructured datasets.
- Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
- Working knowledge of message queuing, stream processing, and highly scalable 'big data' data stores.
- Ability to foresee and identify all the right data required to solve the problem.

Data Wrangling Skills:
- Strong skills in data mining and data wrangling techniques for creating the required analytical dataset.
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
- Adaptive mindset to improvise on data challenges and employ techniques that drive desired outcomes.

Programming Skills:
- Experience with big data tools: Spark, Delta, CDC, NiFi, Kafka, etc.
- Experience with relational SQL and NoSQL databases and query languages, including Oracle, Hive, and Spark SQL.
- Experience with object-oriented languages: Scala, Java, C++, etc.

Visualization Skills:
- Know-how of visualization tools such as Power BI or Tableau.
- Good storytelling skills to present data in a simple and meaningful manner.

Data Engineering Skills:
- Strong skills in data analysis techniques to generate findings and insights by means of exploratory data analysis.
- Good understanding of how to transform and connect data of various types and forms.
- Great numerical and analytical skills.
- Identify opportunities for data acquisition.
- Explore ways to enhance data quality and reliability.
- Build algorithms and prototypes.
- Reformulate existing frameworks to optimize their functioning.
- Good understanding of optimization techniques to make the system performant to requirements.

Posted 1 month ago

Apply

10 - 18 years

12 - 22 Lacs

Pune, Bengaluru

Hybrid

Naukri logo

Hi, we are hiring for the role of AWS Data Engineer with one of the leading organizations for Bangalore & Pune.

Experience: 10+ Years
Location: Bangalore & Pune
CTC: Best in the industry

Job Description - Technical Skills:
- PySpark coding skills
- Proficiency in AWS data engineering services
- Experience in designing data pipelines & data lakes

If interested, kindly share your resume at nupur.tyagi@mounttalent.com

Posted 1 month ago

Apply

3 - 7 years

5 - 9 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

About Emperen Technologies: Emperen Technologies is a leading consulting firm committed to delivering tangible results for clients through a relationship-driven approach. With successful implementations for Fortune 500 companies, non-profits, and startups, Emperen Technologies exemplifies a client-centric model that prioritizes values and scalable, flexible solutions. Emperen specializes in navigating complex technological landscapes, empowering clients to achieve growth and success.

Role Description: Emperen Technologies is seeking a highly skilled Senior Master Data Management (MDM) Engineer to join our team on a contract basis. This is a remote position where the Senior MDM Engineer will be responsible for a variety of key tasks including data engineering, data modeling, ETL processes, data warehousing, and data analytics. The role demands a strong understanding of MDM platforms, cloud technologies, and data integration, as well as the ability to work collaboratively in a dynamic environment.

Key Responsibilities:
- Design, implement, and manage Master Data Management (MDM) solutions to ensure data consistency and accuracy across the organization.
- Oversee the architecture and operation of data modeling, ETL processes, and data warehousing.
- Develop and execute data quality strategies to maintain high-quality data in line with business needs.
- Build and integrate data pipelines using Microsoft Azure, DevOps, and GitLab technologies.
- Implement data governance policies and ensure compliance with data security and privacy regulations.
- Collaborate with cross-functional teams to define and execute business and technical requirements.
- Analyze data to support business intelligence and decision-making processes.
- Provide ongoing support for data integration, ensuring smooth operation and optimal performance.
- Troubleshoot and resolve technical issues related to MDM, data integration, and related processes.
- Work on continuous improvements of the MDM platform and related data processes.

Qualifications - Required Skills & Experience:
- Proven experience in Master Data Management (MDM), with hands-on experience on platforms like Profisee MDM and Microsoft Master Data Services (MDS).
- Solid experience in Microsoft Azure cloud technologies.
- Expertise in DevOps processes and using GitLab for version control and deployment.
- Strong background in Data Warehousing, Azure Data Lakes, and Business Intelligence (BI) tools.
- Expertise in Data Governance, data architecture, data modeling, and data integration (particularly using REST APIs).
- Knowledge and experience in data quality, data security, and privacy best practices.
- Experience working with business stakeholders and technical teams to analyze business requirements and translate them into effective data solutions.
- Basic business analysis skills: ability to assess business needs, translate them into technical requirements, and ensure alignment between data management systems and business goals.

Preferred Qualifications:
- Experience with big data technologies and advanced analytics platforms.
- Familiarity with data integration tools such as Talend or Informatica.
- Knowledge of data visualization tools such as Power BI or Tableau.
- Certifications in relevant MDM, cloud technologies, or data management platforms are a plus.

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote

Posted 1 month ago

Apply

12 - 22 years

35 - 65 Lacs

Chennai

Hybrid

Naukri logo

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 - 24 Yrs
Location: Pan India

Job Description: Candidates should have a minimum of 2 years of hands-on experience as an Azure Databricks Architect.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

10 - 18 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Naukri logo

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Job Description:
- Experience in Synapse with PySpark
- Knowledge of Big Data pipelines / Data Engineering
- Working knowledge of the MSBI stack on Azure
- Working knowledge of Azure Data Factory, Azure Data Lake, and Azure Data Lake Storage
- Hands-on with visualization tools like Power BI
- Implement end-to-end data pipelines using Cosmos / Azure Data Factory
- Good analytical thinking and problem solving
- Good communication and coordination skills
- Able to work as an individual contributor
- Requirement analysis; create, maintain, and enhance Big Data pipelines
- Daily status reporting and interaction with leads
- Version control (ADO/Git), CI/CD
- Marketing campaign experience; Data Platform product telemetry
- Data validation and data quality checks of new streams
- Monitoring of data pipelines created in Azure Data Factory
- Updating the tech spec and wiki page for each pipeline implementation; updating ADO on a daily basis

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

10 - 20 years

35 - 55 Lacs

Hyderabad, Bengaluru, Mumbai (All Areas)

Hybrid

Naukri logo

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your updated profile if you are interested.

Relevant Experience: 8 Yrs - 18 Yrs
Location: Pan India

Job Description:
Mandatory Skill: Azure ADB with Azure Data Lake

- Lead the architecture design and implementation of advanced analytics solutions using Azure Databricks / Fabric. The ideal candidate will have a deep understanding of big data technologies, data engineering, and cloud computing, with a strong focus on Azure Databricks, along with strong SQL.
- Work closely with business stakeholders and other IT teams to understand requirements and deliver effective solutions.
- Oversee the end-to-end implementation of data solutions, ensuring alignment with business requirements and best practices.
- Lead the development of data pipelines and ETL processes using Azure Databricks, PySpark, and other relevant tools.
- Integrate Azure Databricks with other Azure services (e.g., Azure Data Lake, Azure Synapse, Azure Data Factory) and on-premise systems.
- Provide technical leadership and mentorship to the data engineering team, fostering a culture of continuous learning and improvement.
- Ensure proper documentation of architecture, processes, and data flows, while ensuring compliance with security and governance standards.
- Ensure best practices are followed in terms of code quality, data security, and scalability.
- Stay updated with the latest developments in Databricks and associated technologies to drive innovation.

Essential Skills:
- Strong experience with Azure Databricks, including cluster management, notebook development, and Delta Lake.
- Proficiency in big data technologies (e.g., Hadoop, Spark) and data processing frameworks (e.g., PySpark).
- Deep understanding of Azure services like Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Experience with ETL/ELT processes, data warehousing, and building data lakes.
- Strong SQL skills and familiarity with NoSQL databases.
- Experience with CI/CD pipelines and version control systems like Git.
- Knowledge of cloud security best practices.

Soft Skills:
- Excellent communication skills, with the ability to explain complex technical concepts to non-technical stakeholders.
- Strong problem-solving skills and a proactive approach to identifying and resolving issues.
- Leadership skills, with the ability to manage and mentor a team of data engineers.

Experience:
- Demonstrated expertise (8 years) in developing data ingestion and transformation pipelines using Databricks/Synapse notebooks and Azure Data Factory.
- Solid understanding and hands-on experience with Delta tables, Delta Lake, and Azure Data Lake Storage Gen2.
- Experience in efficiently using Auto Loader and Delta Live Tables for seamless data ingestion and transformation (see the sketch after this posting).
- Proficiency in building and optimizing query layers using Databricks SQL.
- Demonstrated experience integrating Databricks with Azure Synapse, ADLS Gen2, and Power BI for end-to-end analytics solutions.
- Prior experience in developing, optimizing, and deploying Power BI reports.
- Familiarity with modern CI/CD practices, especially in the context of Databricks and cloud-native solutions.

If interested, please forward your updated resume to sankarspstaffings@gmail.com / Sankar@spstaffing.in

With Regards,
Sankar G
Sr. Executive - IT Recruitment
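For illustration only (not part of the posting): a minimal sketch of Databricks Auto Loader (the cloudFiles source) ingesting JSON files from ADLS Gen2 into a Delta table; the container, path, and schema-location names are hypothetical, and the cloudFiles source is only available on a Databricks cluster.

```python
# Minimal sketch: incrementally ingest newly arriving JSON files with
# Auto Loader and land them in a Delta table. All paths are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

stream = (
    spark.readStream.format("cloudFiles")
    .option("cloudFiles.format", "json")
    .option("cloudFiles.schemaLocation",
            "abfss://meta@mydatalake.dfs.core.windows.net/schemas/events")
    .load("abfss://raw@mydatalake.dfs.core.windows.net/events/")
)

(
    stream.writeStream.format("delta")
    .option("checkpointLocation",
            "abfss://meta@mydatalake.dfs.core.windows.net/checkpoints/events")
    .trigger(availableNow=True)  # process all pending files, then stop
    .start("abfss://curated@mydatalake.dfs.core.windows.net/events_delta")
)
```

Auto Loader tracks already-processed files in the checkpoint, so reruns pick up only new arrivals instead of rescanning the whole landing path.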

Posted 1 month ago

Apply

11 - 20 years

20 - 35 Lacs

Hyderabad, Pune, Bengaluru

Hybrid

Naukri logo

Warm Greetings from SP Staffing Services Private Limited!! We have an urgent opening with our CMMI Level 5 client for the below position. Please send your update profile if you are interested. Relevant Experience: 11 - 20 Yrs Location- Pan India Job Description : - Minimum 2 Years hands on experience in Solution Architect ( AWS Databricks ) If interested please forward your updated resume to sankarspstaffings@gmail.com With Regards, Sankar G Sr. Executive - IT Recruitment

Posted 1 month ago

Apply

14 - 24 years

30 - 45 Lacs

Bengaluru, Hyderabad, Noida

Hybrid

Naukri logo

Dear candidate, we found your profile suitable for our current opening. Please go through the below JD for a better understanding of the role.

Role: Technical Architect / Senior Technical Architect
Exp: 12 - 25 years
Mode of work: Hybrid Model
Work Location: Hyderabad/Bangalore/Noida/Pune/Kolkata

Job Summary: We are seeking an experienced Azure Data Platform Architect to design and lead the development of modern, scalable, cloud-native data platforms leveraging Microsoft Azure services. The ideal candidate will have deep expertise in Azure's data ecosystem, strong data architecture skills, and hands-on experience in SQL and data analysis. Knowledge of AI/ML and Generative AI (GenAI) concepts is a plus, but the core focus is on building robust data pipelines, data lakes, real-time streaming architectures, and analytics capabilities.

Key Responsibilities:

Platform Architecture & Design:
- Design and architect end-to-end data platforms on Microsoft Azure, balancing performance, scalability, and cost optimization.
- Lead the design of data ingestion pipelines supporting batch, real-time, and event-driven data processing.
- Define data modeling strategies for raw and curated datasets in Data Lake environments.
- Ensure architecture aligns with enterprise data governance, security, and compliance standards.

Azure Data Platform Implementation:
- Build and optimize data pipelines and data engineering workflows using: Azure Data Factory, Azure Databricks, Azure Data Lake Storage Gen2, Azure Synapse Analytics, Azure Stream Analytics, Azure IoT Hub / telemetry ingestion (if applicable), Azure Logic Apps / Function Apps.
- Support event-driven processing and complex event stream analytics.
- Integrate with enterprise systems, APIs, and external data sources.

Data Analysis, Governance, and Security:
- Perform advanced SQL querying and data analysis to support business insights, model validation, and platform optimization.
- Implement robust data governance, data quality frameworks, and metadata management.
- Ensure secure data platform operations, including encryption, role-based access controls, and secrets management using Azure Key Vault (see the sketch after this posting).
- Set up monitoring, logging, and alerting for operational health and reliability.

Collaboration & Leadership:
- Act as the data architecture lead, working closely with engineering teams, data scientists, and business stakeholders.
- Provide best practices and technical leadership for data modeling, pipeline development, and analytics enablement.
- Mentor team members and guide project delivery with a focus on engineering excellence.

Required Skills & Experience:
- 12+ years of experience in data engineering, architecture, or related fields.
- 5+ years of experience designing and implementing Azure-based data platforms.
- Hands-on expertise with core Azure services: Azure Data Factory, Databricks, Synapse, Data Lake Storage Gen2; real-time streaming tools like Azure Stream Analytics; Azure Logic Apps / Function Apps.
- Strong SQL skills and experience performing data analysis directly on large datasets.
- Proficiency in Python, PySpark, or equivalent programming languages.
- Deep understanding of data modeling, data governance, and cloud security best practices.
- Experience in CI/CD pipelines and DevOps practices for data platforms.

Nice to Have:
- Familiarity with AI/ML pipelines and basic knowledge of Generative AI (GenAI) use cases such as AI-driven summarization, natural-language query over data, and advanced predictive analytics.
- Experience integrating Azure Machine Learning (Azure ML) or similar frameworks.
- Exposure to LLMs (Large Language Models) and modern GenAI platforms is a plus.
- Knowledge of data visualization tools like Power BI or equivalent.

Please check the below link for organisation details: https://www.tavant.com/

If interested, please drop your resume to dasari.gowri@tavant.com

Regards,
Dasari Krishna Gowri
Associate Manager - HR
www.tavant.com
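For illustration only (not part of the posting): a minimal sketch of the Key Vault secrets-management pattern named above, using the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are hypothetical placeholders.

```python
# Minimal sketch: fetch a pipeline credential from Azure Key Vault at runtime
# instead of hard-coding it. Vault URL and secret name are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

client = SecretClient(
    vault_url="https://my-data-platform-kv.vault.azure.net",  # hypothetical vault
    credential=DefaultAzureCredential(),
)

sql_password = client.get_secret("synapse-sql-password").value
print("Fetched secret of length:", len(sql_password))
```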

Posted 2 months ago

Apply

3 - 6 years

8 - 14 Lacs

Visakhapatnam

Work from Office

Naukri logo

- Ensure security standards are followed for all structured and unstructured data platforms (e.g., Azure/AWS Blob Storage, data lakes, data warehouses, etc.)
- Ensure security standards are followed and implemented for all data pipeline, Data Science, and BI projects conducted by the team
- Identify and drive implementation of database protection tools to detect and prevent unauthorized access to Worley's data platforms.
- Design, develop, test, customize, and troubleshoot database security systems and solutions, such as Database Activity Monitoring, Data Obfuscation, and data segregation/segmentation.
- Outline access, encryption, and logging requirements in data stores, and work with data solutions and data delivery teams to implement them.
- Build systemic, technical controls for data discovery, classification, and tagging of sensitive information in structured and unstructured data stores (see the sketch after this posting).
- Provide security expertise and consulting to data solution and delivery teams.
- Work alongside the Worley Security team to help remediate security events/incidents.
- Collaborate with the Worley Security team to ensure successful completion of our roadmaps and initiatives.
- Integrate security testing and controls into different phases of Data Delivery development lifecycles.
- Experience working in cloud data platforms
- Experience in Information Security
- Experience in database administration and database management.
- Experience in cloud technology built on Azure/AWS and/or Snowflake
- Knowledge of data architecture and database technologies
- Experience with data science and machine learning anomaly detection
- Experience working with vendors and developing security requirements and recommendations based on evaluation of technology.
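For illustration only (not part of the posting): a minimal sketch of the classification-and-tagging control described above, using the azure-storage-blob package to label a blob with a sensitivity tag; the account, container, and blob names are hypothetical placeholders.

```python
# Minimal sketch: tag a blob with a data-classification label so it can be
# discovered and governed later. Account and paths are hypothetical.
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient(
    account_url="https://mystorageacct.blob.core.windows.net",  # hypothetical account
    credential=DefaultAzureCredential(),
)
blob = service.get_blob_client(container="raw", blob="hr/employees.csv")

# Blob index tags are queryable across the account, which makes tagged
# sensitive data discoverable by governance tooling.
blob.set_blob_tags({"classification": "confidential", "owner": "data-security"})
print(blob.get_blob_tags())
```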

Posted 2 months ago

Apply

3 - 6 years

8 - 14 Lacs

Bareilly

Work from Office

Naukri logo

- Ensure security standards are followed for all structured and unstructured data platforms (e.g., Azure/AWS Blob Storage, data lakes, data warehouses, etc.)
- Ensure security standards are followed and implemented for all data pipeline, Data Science, and BI projects conducted by the team
- Identify and drive implementation of database protection tools to detect and prevent unauthorized access to Worley's data platforms.
- Design, develop, test, customize, and troubleshoot database security systems and solutions, such as Database Activity Monitoring, Data Obfuscation, and data segregation/segmentation.
- Outline access, encryption, and logging requirements in data stores, and work with data solutions and data delivery teams to implement them.
- Build systemic, technical controls for data discovery, classification, and tagging of sensitive information in structured and unstructured data stores.
- Provide security expertise and consulting to data solution and delivery teams.
- Work alongside the Worley Security team to help remediate security events/incidents.
- Collaborate with the Worley Security team to ensure successful completion of our roadmaps and initiatives.
- Integrate security testing and controls into different phases of Data Delivery development lifecycles.
- Experience working in cloud data platforms
- Experience in Information Security
- Experience in database administration and database management.
- Experience in cloud technology built on Azure/AWS and/or Snowflake
- Knowledge of data architecture and database technologies
- Experience with data science and machine learning anomaly detection
- Experience working with vendors and developing security requirements and recommendations based on evaluation of technology.

Posted 2 months ago

Apply

4 - 9 years

7 - 11 Lacs

Hubli

Work from Office

Naukri logo

- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premise data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions using Azure Synapse or Azure SQL Data Warehouse; Spark on Azure is available in HDInsight and Databricks.
- Good customer communication.
- Good analytical skills.

Posted 2 months ago

Apply

4 - 9 years

8 - 14 Lacs

Raipur

Work from Office

Naukri logo

- Strong Power BI experience
- Strong Kusto and Azure Data Lake experience (see the sketch after this posting)
- SQL Server
- Scripting language: Scope / U-SQL
- Should have experience in DAX
- Excellent visualization and formatting skills
- Good experience in data analysis and analytics
- Data modeling experience
- Should have strong knowledge of T-SQL
- Experience in writing complex queries
- ETL awareness
- Good to have: Kusto Query Language
- Good to have: knowledge of Azure Storage
- Excellent communication and presentation skills
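For illustration only (not part of the posting): a minimal sketch of running a Kusto (KQL) query from Python with the azure-kusto-data package; the cluster URL, database, and table names are hypothetical placeholders.

```python
# Minimal sketch: run a KQL aggregation of the kind a Power BI report would
# consume. Cluster, database, and table are hypothetical.
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

kcsb = KustoConnectionStringBuilder.with_az_cli_authentication(
    "https://mycluster.westeurope.kusto.windows.net"  # hypothetical cluster
)
client = KustoClient(kcsb)

query = """
Events
| where Timestamp > ago(7d)
| summarize Count = count() by bin(Timestamp, 1d)
| order by Timestamp asc
"""
response = client.execute("telemetry_db", query)
for row in response.primary_results[0]:
    print(row["Timestamp"], row["Count"])
```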

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
