
310 Data Lake Jobs - Page 6

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

10 - 20 Lacs

Nagpur, Pune

Work from Office

Naukri logo

We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Nagpur, Pune, Gurugram

Work from Office

We are looking for a skilled Data Engineer to design, build, and manage scalable data pipelines and ensure high-quality, secure, and reliable data infrastructure across our cloud and on-prem platforms.

Posted 2 weeks ago

Apply

5.0 - 8.0 years

15 - 27 Lacs

Chennai

Hybrid

Technical Skills:
- Strong proficiency in SQL for data manipulation and querying.
- Knowledge of Python scripting for data processing and automation.
- Experience with Reltio Integration Hub (RIH) and handling API-based integrations.
- Familiarity with data modelling, matching, and survivorship concepts and methodologies.
- Experience with D&B, ZoomInfo, and Salesforce connectors for data enrichment.
- Understanding of MDM workflow configurations and role-based data governance.
- Experience with AWS, Databricks, data lakes, and data warehouses.

Key Responsibilities:
- Implement and configure MDM solutions using Reltio, ensuring alignment with business requirements and best practices.
- Develop and maintain data models, workflows, and business rules within the MDM platform.
- Work on Reltio Workflow (DCR Workflow & Custom Workflow) to manage data approvals and role-based assignments.
- Support data integration efforts using Reltio Integration Hub (RIH) to facilitate data movement across multiple systems.
- Develop ETL pipelines using SQL, Python, and integration tools to extract, transform, and load data.
- Work with D&B, ZoomInfo, and Salesforce connectors for data enrichment and integration.
- Perform data analysis and profiling to identify data quality issues and recommend solutions for data cleansing and enrichment.
- Collaborate with stakeholders to define and document data governance policies, procedures, and standards.
- Optimize MDM workflows to enhance data stewardship and governance.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

12 - 17 Lacs

New Delhi, Chennai, Bengaluru

Hybrid

Your day at NTT DATA
We are seeking an experienced Data Architect to join our team in designing and delivering innovative data solutions to clients. The successful candidate will be responsible for architecting, developing, and implementing data management solutions and data architectures for various industries. This role requires strong technical expertise, excellent problem-solving skills, and the ability to work effectively with clients and internal teams to design and deploy scalable, secure, and efficient data solutions.

What you'll be doing
Experience and Leadership:
- Proven experience in data architecture, with a recent role as a Lead Data Solutions Architect or a similar senior position in the field.
- Proven experience leading architectural design and strategy for complex data solutions, then overseeing their delivery.
- Experience in consulting roles, delivering custom data architecture solutions across various industries.

Architectural Expertise:
- Strong expertise in designing and overseeing the delivery of data streaming and event-driven architectures, with a focus on Kafka and Confluent platforms.
- In-depth knowledge in architecting and implementing data lakes and lakehouse platforms, including experience with Databricks and Unity Catalog.
- Proficiency in conceptualising and applying Data Mesh and Data Fabric architectural patterns.
- Experience in developing data product strategies, with a strong inclination towards a product-led approach in data solution architecture.
- Extensive familiarity with cloud data architecture on platforms such as AWS, Azure, GCP, and Snowflake.
- Understanding of cloud platform infrastructure and its impact on data architecture.

Data Technology Skills:
- A solid understanding of big data technologies such as Apache Spark, and knowledge of Hadoop ecosystems.
- Knowledge of programming languages such as Python or R is beneficial.
- Exposure to ETL/ELT processes, SQL, and NoSQL databases is a nice-to-have, providing a well-rounded background.
- Experience with data visualization tools and DevOps principles/tools is advantageous.
- Familiarity with machine learning and AI concepts, particularly in how they integrate into data architectures.

Design and Lifecycle Management:
- Proven background in designing modern, scalable, and robust data architectures.
- Comprehensive grasp of the data architecture lifecycle, from concept to deployment and consumption.

Data Management and Governance:
- Strong knowledge of data management principles and best practices, including data governance frameworks.
- Experience with data security and compliance regulations (GDPR, CCPA, HIPAA, etc.).

Leadership and Communication:
- Exceptional leadership skills to manage and guide a team of architects and technical experts.
- Excellent communication and interpersonal skills, with a proven ability to influence architectural decisions with clients and guide best practices.

Project and Stakeholder Management:
- Experience with agile methodologies (e.g. SAFe, Scrum, Kanban) in the context of architectural projects.
- Ability to manage project budgets, timelines, and resources, maintaining focus on architectural deliverables.

Location: Delhi or Bangalore
Workplace type: Hybrid Working

Posted 2 weeks ago

Apply

6.0 - 10.0 years

7 - 14 Lacs

Bengaluru

Hybrid

Roles and Responsibilities
- Architect and incorporate an effective data framework enabling an end-to-end data solution.
- Understand business needs, use cases, and drivers for insights, and translate them into detailed technical specifications.
- Create epics, features, and user stories with clear acceptance criteria for execution and delivery by the data engineering team.
- Create scalable and robust data solution designs that incorporate governance, security, and compliance aspects.
- Develop and maintain logical and physical data models, and work closely with data engineers, data analysts, and data testers for their successful implementation.
- Analyze, assess, and design data integration strategies across various sources and platforms.
- Create project plans and timelines while monitoring and mitigating risks and controlling project progress.
- Conduct daily scrums with the team with a clear focus on meeting sprint goals and timely resolution of impediments.
- Act as a liaison between technical teams and business stakeholders.
- Guide and mentor the team on best practices for data solutions and delivery frameworks.
- Actively work with, facilitate, and support stakeholders/clients to complete User Acceptance Testing, and ensure strong adoption of the data products after launch.
- Define and measure KPIs/KRAs for features and ensure the data roadmap is verified through measurable outcomes.

Prerequisites
- 5 to 8 years of professional, hands-on experience building end-to-end data solutions on cloud-based data platforms, including 2+ years working in a Data Architect role.
- Proven hands-on experience building pipelines for data lakes, data lakehouses, data warehouses, and data visualization solutions.
- Sound understanding of modern data technologies like Databricks, Snowflake, Data Mesh, and Data Fabric.
- Experience managing the data life cycle in a fast-paced Agile/Scrum environment.
- Excellent spoken and written communication, receptive listening skills, and the ability to convey complex ideas clearly and concisely to technical and non-technical audiences.
- Ability to collaborate and work effectively with cross-functional teams, project stakeholders, and end users to produce quality deliverables within stipulated timelines.
- Ability to manage, coach, and mentor a team of data engineers, data testers, and data analysts.
- Strong process driver with expertise in the Agile/Scrum framework using tools like Azure DevOps, Jira, or Confluence.
- Exposure to Machine Learning, Gen AI, and modern AI-based solutions.

Experience
Technical Lead, Data Analytics, with 6+ years of overall experience, of which 2+ years is in data architecture.

Education
Engineering degree from a Tier 1 institute preferred.

Compensation
The compensation structure will be as per industry standards.

Posted 2 weeks ago

Apply

12.0 - 18.0 years

40 - 45 Lacs

Chennai, Bengaluru

Work from Office

Experience in Hadoop and GCP/AWS/Azure cloud ETL technologies such as Spark, PySpark/Scala, and Dataflow, plus ETL tools like Informatica, DataStage, OWB, or Talend. Experience in S3, Cloud Storage, Athena, Glue, Sqoop, Flume, Hive, Kafka, and Pub/Sub.

Required candidate profile: More than 10 years of experience in technical, solutioning, and analytical roles; 5+ years of experience building and managing data lakes, data warehouses, and data integration.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

11 - 15 Lacs

Hyderabad, Coimbatore

Work from Office

Azure + SQL + ADF + Databricks + design + architecture (mandatory). 10+ years of total experience in the data management area, with Azure cloud data platform experience. Architect with the Azure stack (ADLS, AALS, Azure Databricks, Azure Stream Analytics, Azure Data Factory, Cosmos DB, and Azure Synapse), with mandatory expertise in Azure Stream Analytics, Databricks, Azure Synapse, and Azure Cosmos DB. Must have worked on a large Azure data platform and dealt with high-volume Azure streaming analytics. Experience in designing cloud data platform architecture and designing large-scale environments. 5+ years of experience architecting and building cloud data lakes (specifically with Azure data analytics technologies and architecture) and enterprise analytics solutions, and optimising real-time 'big data' pipelines, architectures, and data sets.

Posted 2 weeks ago

Apply

1.0 - 3.0 years

3 - 6 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Locations: Pune/Bangalore/Hyderabad/Indore
Contract duration: 6 months

Responsibilities
- Be responsible for the development of conceptual, logical, and physical data models, and the implementation of RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms.
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational and dimensional are a must; NoSQL is optional) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
- Must have a payments background.

Skills
- Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional, and NoSQL data platform technologies, and ETL and data ingestion protocols).
- Experience with data warehouse, data lake, and enterprise big data platforms in multi-data-center contexts required.
- Good knowledge of metadata management, data modeling, and related tools (Erwin, ER/Studio, or others) required.
- Experience in team management, communication, and presentation.
- Experience with Erwin, Visio, or any other relevant tool.

Posted 2 weeks ago

Apply

6.0 - 11.0 years

19 - 27 Lacs

Haryana

Work from Office

About Company

Job Description
Key responsibilities:
1. Understand, implement, and automate ETL pipelines to better industry standards.
2. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, designing infrastructure for greater scalability, etc.
3. Develop, integrate, test, and maintain existing and new applications.
4. Design and create data pipelines (data lakes/data warehouses) for real-world energy analytical solutions.
5. Expert-level proficiency in Python (preferred) for automating everyday tasks.
6. Strong understanding of and experience with distributed computing frameworks, particularly Spark, Spark SQL, Kafka, Spark Streaming, Hive, Azure Databricks, etc.
7. Limited experience using other leading cloud platforms, preferably Azure.
8. Hands-on experience with Azure Data Factory, Logic Apps, Analysis Services, Azure Blob Storage, etc.
9. Ability to work in a team in an agile setting, familiarity with Jira, and a clear understanding of how Git works.
10. Must have 5-7 years of experience.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

14 - 18 Lacs

Bengaluru

Work from Office

As a Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include:
- Implementing and validating predictive models, as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques.
- Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements.
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants, and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours.
- Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
We are seeking a skilled Azure Data Engineer with 5+ years of experience, including 3+ years of hands-on experience with ADF/Databricks. The ideal candidate will have Databricks, Data Lake, and Python programming skills, along with experience deploying to Databricks and familiarity with Azure Data Factory.

Preferred technical and professional experience
- Good communication skills.
- 3+ years of experience with ADF/Databricks/Data Lake.
- Ability to communicate results to technical and non-technical audiences.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

6 - 8 Lacs

Chandigarh

Work from Office

Role Overview
We are seeking a talented ETL Engineer to design, implement, and maintain end-to-end data ingestion and transformation pipelines in Google Cloud Platform (GCP). This role will collaborate closely with data architects, analysts, and BI developers to ensure high-quality, performant data delivery into BigQuery and downstream Power BI reporting layers.

Key Responsibilities
Data Ingestion & Landing
- Architect and implement landing zones in Cloud Storage for raw data.
- Manage buckets/objects and handle diverse file formats (Parquet, Avro, CSV, JSON, ORC).
ETL Pipeline Development
- Build and orchestrate extraction, transformation, and loading workflows using Cloud Data Fusion.
- Leverage Data Fusion Wrangler for data cleansing, filtering, imputation, normalization, type conversion, splitting, joining, sorting, union, pivot/unpivot, and format adjustments.
Data Modeling
- Design and maintain fact and dimension tables using Star and Snowflake schemas.
- Collaborate on semantic layer definitions to support downstream reporting.
Load & Orchestration
- Load curated datasets into BigQuery across different zones (raw, staging, curated).
- Develop SQL-based orchestration and transformation within BigQuery (scheduled queries, scripting).
Performance & Quality
- Optimize ETL jobs for throughput, cost, and reliability.
- Implement monitoring, error handling, and data quality checks.
Collaboration & Documentation
- Work with data analysts and BI developers to understand requirements and ensure data readiness for Power BI.
- Maintain clear documentation of pipeline designs, data lineage, and operational runbooks.

Required Skills & Experience
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 3+ years of hands-on experience building ETL pipelines in GCP.
- Proficiency with Cloud Data Fusion, including Wrangler transformations.
- Strong command of SQL, including performance tuning in BigQuery.
- Experience managing Cloud Storage buckets and handling Parquet, Avro, CSV, JSON, and ORC formats.
- Solid understanding of dimensional modeling: fact vs. dimension tables, Star and Snowflake schemas.
- Familiarity with BigQuery data zones (raw, staging, curated) and dataset organization.
- Experience with scheduling and orchestration tools (Cloud Composer, Airflow, or BigQuery scheduled queries).
- Excellent problem-solving skills and attention to detail.

Preferred (Good to Have)
- Exposure to Power BI data modeling and DAX.
- Experience with other GCP services (Dataflow, Dataproc).
- Familiarity with Git, CI/CD pipelines, and infrastructure as code (Terraform).
- Knowledge of Python for custom transformations or orchestration scripts.
- Understanding of data governance best practices and metadata management.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

10 - 20 Lacs

Mumbai, Pune

Hybrid

Role: Cloud Data Architect
Experience: 8-12 years
Location: Mumbai / Pune

As a Cloud Data Architect, you will design, implement, and evangelize scalable, secure data architectures in a cloud environment. In addition to driving technical excellence in client delivery, you will collaborate with our sales and pre-sales teams to develop reusable assets, accelerators, and artifacts that support RFP responses and help win new business.

Role Summary
• Architect, design, and deploy end-to-end cloud data solutions while also serving as a technical advisor in sales engagements.
• Create accelerators, solution blueprints, and artifacts that can be leveraged for client proposals and RFP responses.
• Collaborate across multiple teams to ensure seamless integration of technical solutions with client delivery and business goals.

Key Skills / Technologies
• Must-Have:
  o Cloud platforms (AWS, Azure, or Google Cloud)
  o Data warehousing & data lakes (Redshift, BigQuery, Snowflake, etc.)
  o Big data technologies (Hadoop, Spark, Kafka)
  o SQL & NoSQL databases
  o ETL/ELT tools and pipelines
  o Data modeling & architecture design
• Good-to-Have:
  o Infrastructure-as-Code (Terraform, CloudFormation)
  o Containerization & orchestration (Docker, Kubernetes)
  o Programming languages (Python, Java, Scala)
  o Data governance and security best practices

Responsibilities
• Technical Architecture & Delivery:
  o Design and build cloud-based data platforms, including data lakes, data warehouses, and real-time data pipelines.
  o Ensure data quality, consistency, and security across all systems.
  o Work closely with cross-functional teams to integrate diverse data sources into a cohesive architecture.
• Sales & Pre-Sales Support:
  o Develop technical assets, accelerators, and reference architectures that support RFP responses and sales proposals.
  o Collaborate with sales teams to articulate technical solutions and demonstrate value to prospective clients.
  o Present technical roadmaps and participate in client meetings to support business development efforts.
• Cross-Functional Collaboration:
  o Serve as a liaison between technical delivery teams and sales teams, ensuring alignment of strategies and seamless client hand-offs.
  o Mentor team members and lead technical discussions on architecture best practices.

Required Qualifications
• Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
• 8+ years of experience in data architecture/design within cloud environments, with a track record of supporting pre-sales initiatives.
• Proven experience with cloud data platforms and big data technologies.
• Strong analytical, problem-solving, and communication skills.

Why Join Us
• Influence both our technical roadmap and sales strategy by shaping cutting-edge, cloud-first data solutions.
• Work with a dynamic, multi-disciplinary team and gain exposure to high-profile client engagements.
• Enjoy a culture of innovation, professional growth, and collaboration with competitive compensation and benefits.

Posted 2 weeks ago

Apply

4.0 - 7.0 years

7 - 11 Lacs

Bengaluru

Work from Office

With 4-9 years of experience: build ETL/ELT pipelines with Azure Data Factory, Azure Databricks, Spark, Azure Data Lake, Azure SQL Database, and Synapse. Minimum 3 years of hands-on development experience. Solid knowledge of data modelling, relational databases, BI, and data warehousing. Demonstrated expertise in SQL. Good to have: experience with CI/CD, cloud architectures, NoSQL databases, Azure Analysis Services, and Power BI. Working knowledge of or experience in Agile and DevOps. Good written and verbal communication skills in English, and the ability to work with geographically diverse teams via collaborative technologies.

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram

Work from Office

To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies

Posted 2 weeks ago

Apply

4.0 - 9.0 years

12 - 22 Lacs

Gurugram, Bengaluru

Work from Office

To Apply - Submit Details via Google Form - https://forms.gle/8SUxUV2cikzjvKzD9

As a Senior Consultant in our Consulting team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking experienced AWS Data Engineers to design, implement, and maintain robust data pipelines and analytics solutions using AWS services. The ideal candidate will have a strong background in AWS data services, big data technologies, and programming languages.

Role & responsibilities
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills:
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies

Posted 2 weeks ago

Apply

4.0 - 8.0 years

15 - 22 Lacs

Hyderabad

Work from Office

Senior Data Engineer - Cloud & Modern Data Architectures

Role Overview:
We are looking for a Senior Data Engineer with expertise in ETL/ELT, data engineering, data warehousing, data lakes, and Data Mesh and Data Fabric architectures. The ideal candidate should have hands-on experience with at least one or two cloud data platforms (AWS, GCP, Azure, Snowflake, or Databricks) and a strong foundation in building PoCs, mentoring freshers, and contributing to accelerators and IPs.

Must-Have:
- 5-8 years of experience in Data Engineering & Cloud Data Services.
- Hands-on with AWS (Redshift, Glue), GCP (BigQuery, Dataflow), Azure (Synapse, Data Factory), Snowflake, or Databricks.
- Strong SQL, Python, or Scala skills.
- Knowledge of Data Mesh & Data Fabric principles.

Nice-to-Have:
- Exposure to MLOps, AI integrations, and Terraform/Kubernetes for DataOps.
- Contributions to open-source, accelerators, or internal data frameworks.

Interested candidates, share your CV to dikshith.nalapatla@motivitylabs.com with the below details for a quick response:
- Total Experience:
- Relevant DE Experience:
- SQL Experience:
- SQL Rating out of 5:
- Python Experience:
- Do you have experience in any 2 clouds (yes/no):
- Mention the cloud experience you have (AWS, Azure, GCP):
- Current Role / Skillset:
- Current CTC (Fixed):
- Payroll Company (Name):
- Client Company (Name):
- Expected CTC:
- Official Notice Period (if negotiable, kindly mention up to how many days):
- Serving Notice (Yes / No):
- CTC of offer in hand:
- Last Working Day (in current organization):
- Location of the offer in hand:

Note: 5 days work from office.

Posted 2 weeks ago

Apply

9.0 - 10.0 years

12 - 14 Lacs

Hyderabad

Work from Office

Responsibilities:
* Design, develop & maintain data pipelines using Airflow/Data Flow/Data Lake
* Optimize performance & scalability of ETL processes with SQL & Python

Posted 2 weeks ago

Apply

6.0 - 10.0 years

20 - 30 Lacs

Pune, Ahmedabad, Mumbai (All Areas)

Hybrid

Must be an expert in SQL, Data Lake, Azure Data Factory, Azure Synapse, ETL, and Databricks. Must be an expert in data modeling, writing complex queries in SQL, stored procedures/PySpark, and notebooks for handling complex data transformation.

Required Candidate profile
- Ability to convert SQL code to PySpark
- Strong programming skills in languages such as SQL and Python
- Experience with data modelling, data warehousing & dimensional modelling concepts

Posted 3 weeks ago

Apply

8.0 - 12.0 years

25 - 30 Lacs

Chennai

Work from Office

Job Title: Manager, Data Engineer - Azure
Location: Chennai (On-site)
Experience: 8-12 years
Employment Type: Full-Time

About the Role
We are seeking a highly skilled Senior Azure Data Solutions Architect to design and implement scalable, secure, and efficient data solutions supporting enterprise-wide analytics and business intelligence initiatives. You will lead the architecture of modern data platforms, drive cloud migration, and collaborate with cross-functional teams to deliver robust Azure-based solutions.

Key Responsibilities
• Architect and implement end-to-end data solutions using Azure services (Data Factory, Databricks, Data Lake, Synapse, Cosmos DB).
• Design robust and scalable data models, including relational, dimensional, and NoSQL schemas.
• Develop and optimize ETL/ELT pipelines and data lakes using Azure Data Factory, Databricks, and open formats such as Delta and Iceberg.
• Integrate data governance, quality, and security best practices into all architecture designs.
• Support analytics and machine learning initiatives through structured data pipelines and platforms.
• Collaborate with data engineers, analysts, data scientists, and business stakeholders to align solutions with business needs.
• Drive CI/CD integration with Databricks using Azure DevOps and tools like DBT.
• Monitor system performance, troubleshoot issues, and optimize data infrastructure for efficiency and reliability.
• Stay current with Azure platform advancements and recommend improvements.

Required Skills & Experience
• Extensive hands-on experience with Azure services: Data Factory, Databricks, Data Lake, Azure SQL, Cosmos DB, Synapse.
• Expertise in data modeling and design (relational, dimensional, NoSQL).
• Proven experience with ETL/ELT processes, data lakes, and modern lakehouse architectures.
• Proficiency in Python, SQL, Scala, and/or Java.
• Strong knowledge of data governance, security, and compliance frameworks.
• Experience with CI/CD, Azure DevOps, and infrastructure as code (Terraform or ARM templates).
• Familiarity with BI and analytics tools such as Power BI or Tableau.
• Excellent communication, collaboration, and stakeholder management skills.
• Bachelor's degree in Computer Science, Engineering, Information Systems, or a related field.

Preferred Qualifications
• Experience in regulated industries (finance, healthcare, etc.).
• Familiarity with data cataloging, metadata management, and machine learning integration.
• Leadership experience guiding teams and presenting architectural strategies to leadership.

Why Join Us?
• Work on cutting-edge cloud data platforms in a collaborative, innovative environment.
• Lead strategic data initiatives that impact enterprise-wide decision-making.
• Competitive compensation and opportunities for professional growth.

Posted 3 weeks ago

Apply

8.0 - 13.0 years

11 - 15 Lacs

Hyderabad

Work from Office


Overview
As a Data Modeler & Functional Data Senior Analyst, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse to satisfy project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering and Data Architecture teams.

As a member of the Data Modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like Master Data, Finance, Revenue Management, Supply Chain, Manufacturing and Logistics. The primary responsibility of this role is to work with Data Product Owners, Data Management Owners, and Data Engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
Govern Data Design/Modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to existing, or new, applications/reporting.
Lead and support assigned project contractors (both on- and offshore), orienting new contractors to standards, best practices, and tools.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of enhancements or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management of business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
Analyze/profile source data and identify issues that impact accuracy, completeness, consistency, integrity, timeliness and validity.
Create Source-to-Target Mapping documents, including identifying and documenting data transformations.
Assume accountability and responsibility for assigned product delivery; be flexible and able to work with ambiguity, changing priorities, tight timelines and critical situations/issues.
Partner with the Data Governance team to standardize the classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.

Qualifications
BA or BS degree required in Data Science/Management/Engineering, Business Analytics, Information Systems, Software Engineering or a related technology discipline.
8+ years of overall technology experience, including at least 4+ years of Data Modeling and Systems Architecture/Integration.
3+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.
4+ years of experience developing Enterprise Data Models.
3+ years of functional experience with SAP Master Data Governance (MDG), including use of T-Codes to create/update records and query tables; extensive knowledge of all core Master Data tables, Reference Tables, and IDoc structures.
3+ years of experience with Customer & Supplier Master Data.
Strong SQL skills with the ability to understand and write complex queries.
Strong understanding of Data Lifecycle, Integration and Master Data Management principles.
Excellent verbal and written communication and collaboration skills.
Strong Excel skills for data analysis and manipulation.
Strong analytical and problem-solving skills.
Expertise in Data Modeling tools (ER/Studio, Erwin, IDM/ARDM models).
Experience with integration of multi-cloud services (Azure) with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Working knowledge of agile development, including DevOps and DataOps concepts.
Experience mapping disparate data sources into a common canonical model.

Differentiating Competencies
Experience with metadata management, data lineage and data glossaries.
Experience with Azure Data Factory, Databricks and Azure Machine Learning.
Familiarity with business intelligence tools (such as Power BI).
CPG industry experience.
Experience with Material, Location, Finance, Supply Chain, Logistics, Manufacturing & Revenue Management data.
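The Source-to-Target Mapping documents mentioned above pair each source field with a target field and a documented transformation. A minimal Python sketch of that idea follows; the field names and transforms are hypothetical, not PepsiCo's actual model.

```python
# Hypothetical source-to-target mapping for a customer master feed:
# source field -> (canonical target field, transformation).
STM = {
    "CUST_NO":   ("customer_id",   lambda v: v.strip()),
    "CUST_NAME": ("customer_name", lambda v: v.title()),
    "CRED_LIM":  ("credit_limit",  lambda v: float(v)),
}

def apply_mapping(source_record, mapping):
    """Transform one source record into the canonical target shape,
    with each transformation documented alongside its field mapping."""
    target = {}
    for src_field, (tgt_field, transform) in mapping.items():
        target[tgt_field] = transform(source_record[src_field])
    return target

row = apply_mapping(
    {"CUST_NO": " 0042 ", "CUST_NAME": "acme corp", "CRED_LIM": "5000"}, STM
)
print(row)  # {'customer_id': '0042', 'customer_name': 'Acme Corp', 'credit_limit': 5000.0}
```

Keeping the mapping as data rather than code buried in an ETL job is what makes an STM reviewable by data governance and reusable across pipelines.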

Posted 3 weeks ago

Apply

5.0 - 10.0 years

9 - 13 Lacs

Hyderabad

Work from Office


Overview
As an Analyst, Data Modeler, your focus will be to partner with D&A Data Foundation team members to create data models for global projects. This includes independently analysing project data needs, identifying data storage and integration needs/issues, and driving opportunities for data model reuse to satisfy project requirements. The role will advocate Enterprise Architecture, Data Design, and D&A standards and best practices. You will perform all aspects of data modeling, working closely with the Data Governance, Data Engineering and Data Architecture teams.

As a member of the Data Modeling team, you will create data models for very large and complex data applications in public cloud environments, directly impacting the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. The primary responsibilities of this role are to work with data product owners, data management owners, and data engineering teams to create physical and logical data models with an extensible philosophy to support future, unknown use cases with minimal rework. You'll be working in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems. You will establish data design patterns that drive flexible, scalable, and efficient data models to maximize value and reuse.

Responsibilities
Complete conceptual, logical and physical data models for any supported platform, including SQL Data Warehouse, EMR, Spark, Databricks, Snowflake, Azure Synapse or other cloud data warehousing technologies.
Govern data design/modeling documentation of metadata (business definitions of entities and attributes) and construction of database objects, for baseline and investment-funded projects, as assigned.
Provide and/or support data analysis, requirements gathering, solution development, and design reviews for enhancements to existing, or new, applications/reporting.
Support assigned project contractors (both on- and offshore), orienting new contractors to standards, best practices, and tools.
Contribute to project cost estimates, working with senior members of the team to evaluate the size and complexity of changes or new development.
Ensure physical and logical data models are designed with an extensible philosophy to support future, unknown use cases with minimal rework.
Develop a deep understanding of the business domain and enterprise technology inventory to craft a solution roadmap that achieves business objectives and maximizes reuse.
Partner with IT, data engineering and other teams to ensure the enterprise data model incorporates key dimensions needed for proper management of business and financial policies, security, local-market regulatory rules, and consumer privacy-by-design principles (PII management), all linked across fundamental identity foundations.
Drive collaborative reviews of design, code, data, and security feature implementation performed by data engineers to drive data product development.
Assist with data planning, sourcing, collection, profiling, and transformation.
Create Source-to-Target Mappings for ETL and BI developers.
Show expertise for data at all levels: low-latency, relational, and unstructured data stores; analytical stores and data lakes; data streaming (consumption/production); data in transit.
Develop reusable data models based on cloud-centric, code-first approaches to data management and cleansing.
Partner with the Data Governance team to standardize the classification of unstructured data into standard structures for data discovery and action by business customers and stakeholders.
Support data lineage and mapping of source system data to canonical data stores for research, analysis and productization.

Qualifications
Bachelor's degree required in Computer Science, Data Management/Analytics/Science, Information Systems, Software Engineering or a related technology discipline.
5+ years of overall technology experience, including at least 2+ years of data modeling and systems architecture.
Around 2+ years of experience with Data Lake Infrastructure, Data Warehousing, and Data Analytics tools.
2+ years of experience developing enterprise data models.
Experience building solutions in the retail or supply chain space.
Expertise in data modeling tools (ER/Studio, Erwin, IDM/ARDM models).
Experience with integration of multi-cloud services (Azure) with on-premises technologies.
Experience with data profiling and data quality tools like Apache Griffin, Deequ, and Great Expectations.
Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
Experience with at least one MPP database technology such as Redshift, Synapse, Teradata or Snowflake.
Experience with version control systems like GitHub and deployment & CI tools.
Experience with Azure Data Factory, Databricks and Azure Machine Learning is a plus.
Experience with metadata management, data lineage, and data glossaries is a plus.
Working knowledge of agile development, including DevOps and DataOps concepts.
Familiarity with business intelligence tools (such as Power BI).
Excellent verbal and written communication and collaboration skills.
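The data profiling mentioned in the responsibilities above typically starts with per-column statistics used to spot completeness and consistency issues before modeling. A minimal stdlib-only sketch, with invented sample values:

```python
def profile_column(values):
    """Basic profiling stats for one column: null rate, distinct count,
    and min/max string length - enough to flag obvious quality issues."""
    non_null = [v for v in values if v not in (None, "")]
    lengths = [len(str(v)) for v in non_null]
    return {
        "count": len(values),
        "null_rate": round(1 - len(non_null) / len(values), 3) if values else 0.0,
        "distinct": len(set(non_null)),
        "min_len": min(lengths, default=0),
        "max_len": max(lengths, default=0),
    }

# Illustrative country-code column with one None and one empty string.
stats = profile_column(["IN", "US", None, "IN", ""])
print(stats)  # 40% null/empty, 2 distinct values, all of length 2
```

Dedicated tools like Deequ or Great Expectations automate exactly these checks at scale; the underlying measures are the same.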

Posted 3 weeks ago

Apply

6.0 - 8.0 years

20 - 30 Lacs

Pune

Hybrid


SAP MDG/SAP ECC experience (T-codes, table structures etc.); Azure Data Lake/AWS/Databricks.

Leveraging SAP MDG/ECC experience, the candidate is able to deep dive and perform root cause analysis for assigned use cases, and is able to work with Azure Data Lake (via Databricks) using SQL/Python.
A data model (conceptual and physical) will need to be identified and built that provides an automated mechanism to monitor ongoing DQ issues. Multiple workshops may also be needed to work through various options and identify the one that is most efficient and effective.
Works with the business (Data Owners/Data Stewards) to profile data and expose patterns indicating data quality issues, and is able to identify impact to specific CDEs deemed important for each individual business.
Identifies financial impacts of data quality issues; can also identify business benefit (quantitative/qualitative) from a remediation standpoint, along with managing implementation timelines.
Schedules regular working groups with businesses that have identified DQ issues and ensures progress on RCA/remediation or on presenting in DGFs.
Identifies business DQ rules, on the basis of which KPIs/measures are stood up that feed into dashboarding/workflows for BAU monitoring; red flags are raised and investigated.
Understanding of the Data Quality value chain is needed, starting with Critical Data Element concepts, Data Quality Issues, and Data Quality KPIs/Measures. Also has experience owning and executing Data Quality Issue assessments to aid improvements to operational processes and BAU initiatives.
Highlights risks/hidden DQ issues to Lead/Manager for further guidance/escalation.
Supports designing, building and deployment of data quality dashboards via Power BI.
Determines escalation paths and constructs workflows and alerts which notify process and data owners of unresolved data quality issues.
Collaborates with IT & analytics teams to drive innovation (AI, ML, cognitive science etc.).
Works with business functions and projects to create data quality improvement plans.
Sets targets for data improvements/maturity; monitors and intervenes when sufficient progress is not being made.
Supports initiatives driving data clean-up of the existing data landscape.
Owns and develops relevant data quality work products as part of the DAS data change methodology.
Ensures data quality aspects are delivered as part of Gold- and Silver-data related change projects.
Supports the creation of business cases with insight into the cost of poor data.
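The rules-to-KPIs pattern described above, where each business DQ rule yields a pass-rate measure and breaches raise red flags for BAU monitoring, can be sketched as follows. The rule names, threshold, and sample records are assumptions for illustration, not actual SAP MDG content.

```python
# Hypothetical business DQ rules for material master records.
RULES = {
    "material_has_plant": lambda r: bool(r.get("plant")),
    "price_positive":     lambda r: r.get("price", 0) > 0,
}

def dq_kpis(records, rules, red_flag_below=0.95):
    """Score each rule as a pass rate across records; flag breaches
    below the threshold so they can be investigated or escalated."""
    kpis = {}
    for name, rule in rules.items():
        passed = sum(1 for r in records if rule(r))
        rate = passed / len(records)
        kpis[name] = {"pass_rate": round(rate, 2), "red_flag": rate < red_flag_below}
    return kpis

sample = [
    {"plant": "P100", "price": 12.5},
    {"plant": "",     "price": 9.0},   # missing plant
    {"plant": "P200", "price": 0},     # non-positive price
]
kpis = dq_kpis(sample, RULES)
```

In practice these pass rates would be written to a table feeding a Power BI dashboard, with the red flags driving the escalation workflow.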

Posted 3 weeks ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Bengaluru

Work from Office


Overall 10-15 years of experience, with at least 5 years of proven experience as a Data Architect or in a similar role. Proficiency in database management systems; experience with big data technologies (Hadoop, Spark) and data lake solutions such as Azure Data Lake.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

7 - 11 Lacs

Hyderabad

Work from Office


Data Platform Engineer

About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting edge of innovation, using technology and human genetic data to push beyond what’s known today.

What you will do
Roles & Responsibilities:
Work as a member of a Data Platform Engineering team that uses Cloud and Big Data technologies to design, develop, implement and maintain solutions supporting functional areas such as Manufacturing, Commercial, and Research and Development.
Work closely with the Enterprise Data Lake delivery and platform teams to ensure that applications are aligned with the overall architectural and development guidelines.
Research and evaluate technical solutions, including Databricks and AWS services, NoSQL databases, and Data Science packages, platforms and tools, with a focus on enterprise deployment capabilities such as security, scalability, reliability, maintainability, and cost management.
Assist in building and maintaining relationships with internal and external business stakeholders.
Develop a basic understanding of core business problems and identify opportunities to use advanced analytics.
Assist in reviewing 3rd-party providers for new feature/function/technical fit with the department's data management needs.
Work closely with the Enterprise Data Lake ecosystem leads to identify and evaluate emerging providers of data management & processing components that could be incorporated into the data platform.
Work with platform stakeholders to ensure effective cost observability and control mechanisms are in place for all aspects of data platform management.
Experience developing in an Agile development environment, and comfortable with Agile terminology and ceremonies.
Keen on embracing new responsibilities, facing challenges, and mastering new technologies.

What we expect of you
Basic Qualifications and Experience:
Master’s degree in a computer science or engineering field and 1 to 3 years of relevant experience OR
Bachelor’s degree in a computer science or engineering field and 3 to 5 years of relevant experience OR
Diploma and a minimum of 8+ years of relevant work experience.

Must-Have Skills:
Experience with Databricks (or Snowflake), including cluster setup, execution, and tuning.
Experience with common data processing libraries: Pandas, PySpark, SQLAlchemy.
Experience with UI frameworks (Angular.js or React.js).
Experience with data lake, data fabric and data mesh concepts.
Experience with data modeling, performance tuning, and relational databases.
Experience building ETL or ELT pipelines; hands-on experience with SQL/NoSQL.
Programming skills in one or more languages: SQL, Python, Java.
Experience with software engineering best practices, including but not limited to version control (Git, GitLab), CI/CD (GitLab, Jenkins etc.), automated unit testing, and DevOps.
Exposure to Jira or Jira Align.

Good-to-Have Skills:
Knowledge of the R language will be considered an advantage.
Experience with cloud technologies, AWS preferred.
Cloud certifications: AWS, Databricks, Microsoft.
Familiarity with the use of AI for development productivity, such as GitHub Copilot, Databricks Assistant, Amazon Q Developer or equivalent.
Knowledge of Agile and DevOps practices.
Skills in disaster recovery planning.
Familiarity with load testing tools (JMeter, Gatling).
Basic understanding of AI/ML for monitoring.
Knowledge of distributed systems and microservices.
Data visualization skills (Tableau, Power BI).
Strong communication and leadership skills.
Understanding of compliance and auditing requirements.

Soft Skills:
Excellent analytical and problem-solving skills.
Excellent written and verbal communication skills (English), translating technology content into business language at various levels.
Ability to work effectively with global, virtual teams.
High degree of initiative and self-motivation.
Ability to manage multiple priorities successfully.
Team-oriented, with a focus on achieving team goals.
Strong time and task management skills to estimate and successfully meet project timelines, with the ability to bring consistency and quality assurance across projects.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now for a career that defies imagination. Objects in your future are closer than they appear. Join us. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.
We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
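The listing above pairs ETL pipeline building with automated unit testing; one common practice is keeping each transform a small pure function so it can be tested in isolation. A minimal stdlib-only sketch, with invented manufacturing batch data:

```python
import csv
import io

# Illustrative raw extract; in practice this would come from a source file
# or table in the data lake.
RAW = "batch_id,yield\nB001,92.5\nB002,\nB003,88.0\n"

def extract(text):
    """Parse the raw CSV extract into a list of dict records."""
    return list(csv.DictReader(io.StringIO(text)))

def transform(rows):
    """Drop rows with a missing yield and cast types - a small, pure,
    unit-testable step of the kind these pipelines are composed from."""
    return [
        {"batch_id": r["batch_id"], "yield": float(r["yield"])}
        for r in rows
        if r["yield"]
    ]

clean = transform(extract(RAW))
print(clean)  # B002 is dropped; yields become floats
```

Because `transform` takes and returns plain data, a CI job (GitLab, Jenkins) can assert on its output without any database or cluster in the loop.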

Posted 3 weeks ago

Apply

2.0 - 5.0 years

1 - 5 Lacs

Hyderabad

Work from Office


Specialist IS Bus Sys Analyst

What you will do
Let’s do this. Let’s change the world. In this vital role you will serve as a Business Systems Analyst supporting data analytics and visualization efforts across Amgen’s Global Procurement Operations. You will work with Product Owners, Data Engineers, and Business SMEs to deliver reporting and dashboards that unlock insights across Source-to-Settle processes. This role is critical to building Amgen’s analytics foundation for procurement transformation.

Roles & Responsibilities:
Gather and document business reporting and analytics requirements across product teams.
Design and build dashboards in tools such as Tableau, Power BI, or SAP Analytics Cloud.
Translate procurement KPIs into actionable visualizations aligned with product team priorities.
Collaborate with EDF and source system teams (SAP, Ariba, Apex, LevelPath) to define and access data sources.
Perform data validation and user acceptance testing for new dashboards and enhancements.
Enable self-service analytics capabilities for business users.
Ensure compliance with Amgen’s data governance and visualization standards.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master’s degree and 4 to 6 years of relevant experience OR
Bachelor’s degree and 6 to 8 years of relevant experience OR
Diploma and 10 to 12 years of professional experience.

Preferred Qualifications:
Must-Have Skills:
Hands-on experience with Tableau, Power BI, or similar tools.
Solid understanding of procurement data and Source-to-Settle processes.
Ability to partner with cross-functional stakeholders to define KPIs and reporting needs.

Good-to-Have Skills:
Familiarity with data lake/data warehouse concepts (EDF preferred).
Basic SQL or data wrangling skills to support ETL processes.

Professional Certifications (Preferred):
Tableau or Power BI certification; Agile certifications are a plus.

Soft Skills:
Exceptional communication and stakeholder management skills.
Ability to collaborate across global teams and influence outcomes.
Strong organizational skills to manage multiple priorities.
Team-oriented and focused on delivering high-value solutions.
Self-starter with a proactive approach to resolving challenges and driving innovation.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.

Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law.

We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
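A procurement KPI of the kind this role would translate into dashboards, for example average Source-to-Settle cycle time, reduces to simple date arithmetic over purchase records. The record shape below is a hypothetical assumption for illustration, not Amgen's actual EDF schema.

```python
from datetime import date

# Illustrative Source-to-Settle records; still-open POs have settled=None.
POS = [
    {"po": "PO-1", "created": date(2024, 3, 1), "settled": date(2024, 3, 11)},
    {"po": "PO-2", "created": date(2024, 3, 5), "settled": date(2024, 3, 25)},
    {"po": "PO-3", "created": date(2024, 3, 8), "settled": None},
]

def avg_cycle_time_days(records):
    """Average created-to-settled cycle time in days over completed POs -
    the kind of measure surfaced on a Tableau/Power BI dashboard."""
    done = [(r["settled"] - r["created"]).days for r in records if r["settled"]]
    return sum(done) / len(done) if done else None

kpi = avg_cycle_time_days(POS)
print(kpi)  # (10 + 20) / 2 = 15.0 days; the open PO is excluded
```

Validating such a calculation against the source system, before it is wired into a dashboard, is exactly the data validation and UAT work the responsibilities above describe.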

Posted 3 weeks ago

Apply