
935 Databricks Jobs - Page 36

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5 - 10 years

9 - 13 Lacs

Pune

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in the development and maintenance of the data platform components, contributing to the overall success of the project.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data platform components.
- Contribute to the overall success of the project.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity (see the PySpark sketch after this listing).

Additional Information:
- A minimum of 5 years of experience with the Databricks Unified Data Analytics Platform is required.
- This position is based at our Pune office.
- 15 years of full-time education is required.
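The listing's emphasis on data munging is concrete enough to sketch. Below is a minimal PySpark example of the cleaning, transformation, and normalization it describes; the dataset, column names, and rules are hypothetical, not taken from the listing (on Databricks the `spark` session already exists, so the builder line is a no-op there).

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("munging-sketch").getOrCreate()

# Hypothetical raw extract with typical quality problems.
raw = spark.createDataFrame(
    [("  Alice ", "2024-01-05", "1200"), (None, "2024-01-06", "n/a")],
    ["name", "signup_date", "monthly_spend"],
)

cleaned = (
    raw.withColumn("name", F.trim("name"))                 # cleaning
    .dropna(subset=["name"])                               # drop unusable rows
    .withColumn("signup_date", F.to_date("signup_date"))   # transformation
    .withColumn("monthly_spend",
                F.col("monthly_spend").cast("double"))     # "n/a" becomes null
)

# Normalization: min-max scale spend into [0, 1] for downstream ML features.
b = cleaned.agg(F.min("monthly_spend").alias("lo"),
                F.max("monthly_spend").alias("hi")).first()
if b.lo is not None and b.hi != b.lo:
    cleaned = cleaned.withColumn(
        "spend_norm", (F.col("monthly_spend") - F.lit(b.lo)) / F.lit(b.hi - b.lo)
    )
cleaned.show()
```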

Posted 1 month ago

Apply

7 - 12 years

9 - 13 Lacs

Hyderabad

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in the development and maintenance of the data platform components, contributing to the overall success of the project.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data platform components.
- Contribute to the overall success of the project.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- A minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform is required.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5 - 10 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements in Hyderabad. You will play a crucial role in the development and implementation of innovative solutions.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the application development process.
- Conduct code reviews and ensure coding standards are met.
- Implement best practices for application development.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data analytics and data processing.
- Experience with cloud-based data platforms.
- Knowledge of data modeling and database design.
- Hands-on experience with data integration and ETL processes.

Additional Information:
- A minimum of 5 years of experience with the Databricks Unified Data Analytics Platform is required.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5 - 10 years

9 - 13 Lacs

Pune

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in shaping the data platform components.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Lead the data platform blueprint and design.
- Implement data platform components.
- Ensure seamless integration between systems and data models.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data platform architecture.
- Experience in data modeling and integration.
- Knowledge of cloud-based data solutions.
- Hands-on experience with data platform implementation.

Additional Information:
- A minimum of 5 years of experience with the Databricks Unified Data Analytics Platform is required.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

7 - 12 years

9 - 13 Lacs

Bengaluru

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in the development and maintenance of the data platform components, contributing to the overall success of the project.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data platform components.
- Contribute to the overall success of the project.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- A minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform is required.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

7 - 12 years

9 - 13 Lacs

Chennai

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in the development and maintenance of the data platform components, contributing to the overall success of the project.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data platform components.
- Contribute to the overall success of the project.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- A minimum of 7.5 years of experience with the Databricks Unified Data Analytics Platform is required.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5 - 10 years

9 - 13 Lacs

Chennai

Work from Office

Project Role: Data Platform Engineer
Project Role Description: Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Platform Engineer, you will assist with the data platform blueprint and design, collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. You will play a crucial role in the development and maintenance of the data platform components, contributing to the overall success of the organization.

Roles & Responsibilities:
- Act as an SME; collaborate with and manage the team to perform.
- Take responsibility for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Assist with the data platform blueprint and design.
- Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models.
- Develop and maintain data platform components.
- Contribute to the overall success of the organization.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization, to ensure data quality and integrity.

Additional Information:
- A minimum of 5 years of experience with the Databricks Unified Data Analytics Platform is required.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 1 month ago

Apply

5 - 10 years

5 - 9 Lacs

Hyderabad

Work from Office

Project Role: Application Developer
Project Role Description: Design, build, and configure applications to meet business process and application requirements.
Must-Have Skills: Databricks Unified Data Analytics Platform
Good-to-Have Skills: NA
Minimum Experience: 5 years
Educational Qualification: 15 years of full-time education

Summary: The Data Engineering Sr. Advisor demonstrates expertise in data engineering technologies with a focus on engineering, innovation, strategic influence, and a product mindset. This individual will be a key contributor on the team, designing, building, testing, and delivering large-scale software applications, systems, platforms, services, or technologies in the data engineering space, with the opportunity to work directly with partner IT and business teams, owning and driving major deliverables across all aspects of software delivery. The candidate will play a key role in automating processes on Databricks and AWS, collaborating with business and technology partners to gather requirements, develop, and implement solutions. The individual must have strong analytical and technical skills coupled with the ability to positively influence the delivery of data engineering products. The team demands an innovation-driven, cloud-first, self-service-first, and automation-first mindset coupled with technical excellence, and the applicant will work with internal and external stakeholders and customers to build solutions as part of Enterprise Data Engineering, demonstrating very strong technical and communication skills.

Delivery - Intermediate delivery skills, including the ability to deliver work at a steady, predictable pace to achieve commitments, decompose work assignments into small-batch releases, and contribute to tradeoff and negotiation discussions.
Domain Expertise - A demonstrated track record of domain expertise, including the ability to understand the technical concepts necessary to do the job effectively, willingness, cooperation, and concern for business issues, and in-depth knowledge of the immediate systems worked on.
Problem Solving - Proven problem-solving and debugging skills, allowing you to determine the source of issues in unfamiliar code or systems, recognize and solve repetitive problems rather than working around them, treat mistakes as learning opportunities, and break down large problems into smaller, more manageable ones.

Role & Responsibilities:
- Deliver business needs end to end, from requirements through development into production.
- Through a hands-on engineering approach in the Databricks environment, deliver data engineering toolchains, platform capabilities, and reusable patterns.
- Follow software engineering best practices with an automation-first approach and a continuous learning and improvement mindset.
- Ensure adherence to enterprise architecture direction and architectural standards.
- Collaborate in a high-performing team environment, with the ability to influence and be influenced by others.

Experience Required:
- More than 12 years of experience in software engineering, building data engineering pipelines, middleware and API development, and automation
- More than 3 years of experience in Databricks within an AWS environment
- Data engineering experience

Experience Desired:
- Expertise in Agile software development principles and patterns
- Expertise in building streaming, batch, and event-driven architectures and data pipelines (see the streaming sketch after this listing)

Primary Skills:
- Cloud-based security principles and protocols such as OAuth2, JWT, data encryption, hashing, and secret management
- Expertise in big data technologies such as Spark, Hadoop, Databricks, Snowflake, EMR, and Glue
- Good understanding of Kafka, Kafka Streams, Spark Structured Streaming, and configuration-driven data transformation and curation
- Expertise in building cloud-native microservices, containers, Kubernetes, and platform-as-a-service technologies such as OpenShift and Cloud Foundry
- Experience with multi-cloud software-as-a-service products such as Databricks and Snowflake
- Experience with Infrastructure-as-Code (IaC) tools such as Terraform and AWS CloudFormation
- Experience with messaging systems such as Apache ActiveMQ, WebSphere MQ, Apache Artemis, Kafka, and AWS SNS
- Experience with API and microservices stacks such as Spring Boot and Quarkus
- Expertise in cloud technologies such as AWS Glue, Lambda, S3, Elasticsearch, API Gateway, and CloudFront
- Experience with one or more programming and scripting languages (Python, Scala, JVM-based languages, or JavaScript) and the ability to pick up new languages
- Experience building CI/CD pipelines using Jenkins and GitHub Actions
- Strong expertise with source code management and its best practices
- Proficiency in self-testing of applications, unit testing, use of mock frameworks, and test-driven development (TDD)
- Knowledge of the Behavior-Driven Development (BDD) approach

Additional Skills:
- Ability to perform detailed analysis of business problems and technical environments
- Strong oral and written communication skills
- Ability to think strategically, implement iteratively, and estimate the financial impact of design/architecture alternatives
- Continuous focus on ongoing learning and development

Qualification: 15 years of full-time education
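Since the listing calls out Kafka, Spark Structured Streaming, and configuration-driven curation together, here is a minimal sketch of that pattern. The broker, topic, schema, and paths are placeholder assumptions, and the Kafka connector and Delta Lake are assumed to be available (both ship with the Databricks runtime).

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.appName("streaming-curation-sketch").getOrCreate()

# Assumed event schema for a hypothetical topic.
schema = StructType([
    StructField("claim_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "claims-events")               # placeholder topic
    .load()
    .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Curate and land the stream in a Delta table; the checkpoint makes it restartable.
(events.filter(F.col("amount") > 0)
 .writeStream.format("delta")
 .option("checkpointLocation", "/tmp/checkpoints/claims")
 .outputMode("append")
 .start("/tmp/delta/claims_curated"))
```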

Posted 1 month ago

Apply

4 years

15 - 23 Lacs

Pune

Work from Office

The Role: We are seeking an experienced Senior Software Data Engineer to join the Data Integrations Team, a critical component of the Addepar Platform team. The Addepar Platform is a comprehensive data fabric that provides a single source of truth for our product set, encompassing a centralized and self-describing repository, API-driven data services, an integration pipeline, analytics infrastructure, warehousing solutions, and operating tools. The Data Integrations team is responsible for the acquisition, conversion, cleansing, reconciliation, modeling, tooling, and infrastructure related to the integration of market and security master data from third-party data providers. This team plays a crucial role in our core business, enabling alignment across public and alternative investment data products and empowering clients to effectively manage their investment portfolios. As a Senior Software Data Engineer, you will collaborate closely with product counterparts in an agile environment to drive business outcomes, contributing to complex engineering projects using a modern and diverse technology stack including PySpark, Python, AWS, Terraform, Java, and Kubernetes.

What You'll Do:
- Partner with multi-functional teams to design, develop, and deploy scalable data solutions that meet business requirements.
- Build pipelines that support the ingestion, analysis, and enrichment of financial data by collaborating with business data analysts (see the sketch after this listing).
- Advocate for standard methodologies; find opportunities for automation and optimization in code and processes to increase the throughput and accuracy of data.
- Develop and maintain efficient process controls and accurate metrics that improve data quality and increase operational efficiency.
- Work in a fast-paced, dynamic environment to deliver high-quality results and drive continuous improvement.

Who You Are:
- Minimum of 5 years of professional software data engineering experience.
- A computer science degree or equivalent experience.
- Proficiency with at least one object-oriented programming language (Python or Java).
- Proficiency with PySpark, relational databases, SQL, and data pipelines.
- A rapid learner with strong problem-solving skills.
- Knowledge of financial concepts (e.g., stocks, bonds) is helpful but not necessary.
- Experience in data modeling and visualization is a plus.
- Passion for the world of FinTech and for solving previously intractable problems at the heart of investment management is a plus.
- Experience with any public cloud is highly desired (AWS preferred).
- Experience with data lakes or data platforms like Databricks is highly preferred.

Important note: this role requires working from our Pune office 3 days a week (hybrid work model).
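The ingestion-plus-reconciliation work this team describes can be pictured with a short PySpark sketch: join a third-party feed against a security master and enforce a simple process control. All paths, column names, and the 1% threshold are illustrative assumptions, not Addepar specifics.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("ingest-enrich-sketch").getOrCreate()

# Hypothetical third-party price feed and internal security master.
prices = spark.read.json("s3://feed-bucket/prices/2024-06-01/")
master = spark.read.parquet("s3://warehouse/security_master/")

# Enrich: attach internal identifiers, flag securities that fail to match.
enriched = prices.join(master, on="cusip", how="left").withColumn(
    "matched", F.col("security_id").isNotNull()
)

# Process control: fail the run if reconciliation misses exceed 1%.
total = enriched.count()
unmatched = enriched.filter(~F.col("matched")).count()
if total and unmatched / total > 0.01:
    raise ValueError(f"Reconciliation breach: {unmatched}/{total} rows unmatched")

enriched.write.mode("overwrite").parquet("s3://warehouse/prices_enriched/")
```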

Posted 1 month ago

Apply

4 - 8 years

12 - 22 Lacs

Kochi, Gurugram, Bengaluru

Hybrid

Project Role: Azure Data Engineer
Work Experience: 4 to 8 years
Work Location: Bangalore / Gurugram / Kochi
Work Mode: Hybrid
Must-Have Skills: Azure data engineering, SQL, Spark/PySpark

Job Overview: Responsible for the on-time completion of projects or components of large, complex projects for clients in the life sciences field. Identifies and elevates potential new business opportunities and assists in the sales process.

Skills Required:
- Experience developing Azure components such as Azure Data Factory, Azure Databricks, Logic Apps, and Functions.
- Develop efficient, smart data pipelines for migrating various sources onto Azure Data Lake (see the sketch after this listing).
- Proficiency in working with Delta Lake and Parquet file formats.
- Design, implement, and maintain CI/CD pipelines; deploy and merge code.
- Expertise in programming with SQL, PySpark, and Python.
- Creation of databases on Azure Data Lake following data warehousing best practices.
- Build smart metadata databases and solutions, with parameterization and configurations.
- Develop Azure frameworks and automated systems for deployment and monitoring.
- Hands-on experience with continuous delivery and continuous integration of CI/CD pipelines, and with CI/CD infrastructure and process troubleshooting.
- Extensive experience with version control systems like Git and their use in release management, branching, merging, and integration strategies.

Essential Functions:
- Participates in or leads teams in the design, development, and delivery of consulting projects or components of larger, complex projects.
- Reviews and analyzes client requirements or problems and assists in the development of proposals for cost-effective solutions that ensure profitability and high client satisfaction.
- Provides direction and guidance to Analysts, Consultants, and, where relevant, Statistical Services staff assigned to the engagement.
- Develops detailed documentation and specifications.
- Performs qualitative and/or quantitative analyses to assist in the identification of client issues and the development of client-specific solutions.
- Designs, structures, and delivers client reports and presentations appropriate to the characteristics and needs of the audience. May deliver some findings to clients.

Qualifications:
- Bachelor's degree required; Master's degree in Business Administration preferred.
- 4-8 years of related experience in consulting and/or the life sciences industry required.
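One core step behind several of these bullets (Parquet sources, Delta Lake targets, parameterized loads) looks roughly like the PySpark sketch below; the storage account, container names, and table name are placeholders, and Delta Lake support is assumed (standard on Azure Databricks).

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("parquet-to-delta-sketch").getOrCreate()

# Placeholder ADLS Gen2 paths; in a real pipeline these would arrive as
# parameters from a metadata database or ADF pipeline variables.
src = "abfss://landing@mystorageacct.dfs.core.windows.net/sales/2024/"
dst = "abfss://curated@mystorageacct.dfs.core.windows.net/delta/sales/"

df = spark.read.parquet(src)
(df.write.format("delta")
   .mode("append")
   .option("mergeSchema", "true")   # tolerate additive schema drift
   .save(dst))

# Expose the curated data to SQL consumers.
spark.sql(f"CREATE TABLE IF NOT EXISTS sales_curated USING DELTA LOCATION '{dst}'")
```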

Posted 1 month ago

Apply

7 - 10 years

15 - 20 Lacs

Mumbai

Work from Office

Position Overview: The Databricks Data Engineering Lead role is ideal for a highly skilled Databricks Data Engineer who will architect and lead the implementation of scalable, high-performance data pipelines and platforms using the Databricks Lakehouse ecosystem. The role involves managing a team of data engineers, establishing best practices, and collaborating with cross-functional stakeholders to unlock advanced analytics, AI/ML, and real-time decision-making capabilities.

Key Responsibilities:
- Lead the design and development of modern data pipelines, data lakes, and lakehouse architectures using Databricks and Apache Spark.
- Manage and mentor a team of data engineers, providing technical leadership and fostering a culture of excellence.
- Architect scalable ETL/ELT workflows to process structured and unstructured data from various sources (cloud, on-prem, streaming).
- Build and maintain Delta Lake tables and optimize performance for analytics, machine learning, and BI use cases (see the maintenance sketch after this listing).
- Collaborate with data scientists, analysts, and business teams to deliver high-quality, trusted, and timely data products.
- Ensure best practices in data quality, governance, lineage, and security, including the use of Unity Catalog and access controls.
- Integrate Databricks with cloud platforms (AWS, Azure, or GCP) and data tools (Snowflake, Kafka, Tableau, Power BI, etc.).
- Implement CI/CD pipelines for data workflows using tools such as GitHub, Azure DevOps, or Jenkins.
- Stay current with Databricks innovations and provide recommendations on platform strategy and architecture improvements.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 7+ years of experience in data engineering, including 3+ years working with Databricks and Apache Spark; proven leadership experience in managing and mentoring data engineering teams.
- Skills: Proficiency in PySpark and SQL, with experience in Delta Lake, Databricks Workflows, and MLflow; strong understanding of data modeling, distributed computing, and performance tuning; familiarity with one or more major cloud platforms (Azure, AWS, GCP) and cloud-native services; experience implementing data governance and security in large-scale environments; experience with real-time data processing using Structured Streaming or Kafka; knowledge of data privacy, security frameworks, and compliance standards (e.g., PCI DSS, GDPR); exposure to machine learning pipelines, notebooks, and MLOps practices.
- Certifications: Databricks Certified Data Engineer or equivalent certification.
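"Build and maintain Delta Lake tables and optimize performance" typically reduces to maintenance commands like these. The sketch below uses Databricks SQL (OPTIMIZE/ZORDER and VACUUM are Databricks-specific commands); the table and columns are hypothetical.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-maintenance-sketch").getOrCreate()

# Hypothetical Delta table serving BI and ML consumers.
spark.sql("""
    CREATE TABLE IF NOT EXISTS events (
        event_id STRING, user_id STRING, event_date DATE, payload STRING
    ) USING DELTA
    PARTITIONED BY (event_date)
""")

# Compact small files and co-locate rows that BI queries scan together.
spark.sql("OPTIMIZE events ZORDER BY (user_id)")

# Drop data files no longer referenced by the table (default 7-day retention).
spark.sql("VACUUM events")
```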

Posted 1 month ago

Apply

7 - 10 years

17 - 22 Lacs

Mumbai

Work from Office

Position Overview: The Microsoft Cloud Data Engineering Lead role is ideal for an experienced Microsoft Cloud Data Engineer who will architect, build, and optimize data platforms using Microsoft Azure technologies. The role requires deep technical expertise in Azure data services, strong leadership capabilities, and a passion for building scalable, secure, and high-performance data ecosystems.

Key Responsibilities:
- Lead the design, development, and deployment of enterprise-scale data pipelines and architectures on Microsoft Azure.
- Manage and mentor a team of data engineers, promoting best practices in cloud engineering, data modeling, and DevOps.
- Architect and maintain data platforms using Azure Data Lake Storage, Azure Synapse Analytics, Azure Data Factory, Azure Databricks, and Azure SQL/SQL MI.
- Develop robust ETL/ELT workflows for structured and unstructured data using Azure Data Factory and related tools.
- Collaborate with data scientists, analysts, and business units to deliver data solutions supporting advanced analytics, BI, and operational use cases.
- Implement data governance, quality, and security frameworks, leveraging tools such as Azure Purview and Azure Key Vault.
- Drive automation and infrastructure-as-code practices using Bicep, ARM templates, or Terraform with Azure DevOps or GitHub Actions.
- Ensure performance optimization and cost efficiency across data pipelines and cloud environments.
- Stay current with Microsoft cloud advancements and help shape cloud strategy and data architecture roadmaps.

Qualifications:
- Education: Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field.
- Experience: 7+ years of experience in data engineering, including 3+ years working with Microsoft Azure; proven leadership experience in managing and mentoring data engineering teams.
- Skills: Expert knowledge of Azure Data Lake, Synapse Analytics, Data Factory, Databricks, and Azure SQL-based technologies; proficiency in SQL, Python, and/or Spark for data transformation and analysis; strong understanding of data governance, security, compliance (e.g., GDPR, PCI DSS), and privacy in cloud environments; experience leading data engineering teams or cloud data projects from design to delivery; familiarity with CI/CD pipelines, infrastructure as code, and DevOps practices within the Azure ecosystem; familiarity with Power BI and the integration of data pipelines with BI/reporting tools.
- Certifications: Microsoft Certified: Azure Data Engineer Associate or Azure Solutions Architect Expert.

Posted 1 month ago

Apply

8 - 12 years

13 - 17 Lacs

Mumbai, Bengaluru, Delhi / NCR

Work from Office

Max NP: 15 days.

Data Engineer - Azure:
a. Data engineering and SQL
b. Python
c. PySpark
d. Azure Data Lake and ADF
e. Databricks
f. CI/CD
g. Strong communication

Minimum Qualifications / Job Requirements:
- Bachelor's degree in CS.
- 8 years of hands-on experience in designing and developing distributed data pipelines.
- 5 years of hands-on experience with Azure data service technologies.
- 5 years of hands-on experience in Python, SQL, object-oriented programming, ETL, and unit testing.
- Experience with data integration via APIs, web services, and queues.
- Experience with Azure DevOps and CI/CD, as well as agile tools and processes including Jira and Confluence.
- Excellent communication skills.

Location: Chennai, Hyderabad, Kolkata, Pune, Ahmedabad, Bengaluru, Mumbai, Delhi, India

Posted 1 month ago

Apply

9 - 12 years

30 - 35 Lacs

Hyderabad

Work from Office

What you will do: Let's do this. Let's change the world. In this vital role, we are seeking a strategic and hands-on Specialist Software Engineer / AI Engineer (Search) to lead the design, development, and deployment of AI-powered search and knowledge discovery solutions across our pharmaceutical enterprise. You'll manage a team of engineers and work closely with data scientists, oncologists, and domain experts to build intelligent systems that help users across R&D, medical, and commercial functions find relevant, actionable information quickly and accurately.

Responsibilities:
- Architect and lead the development of scalable, intelligent search systems leveraging NLP, embeddings, LLMs, and vector search (see the sketch after this listing).
- Own the end-to-end lifecycle of search solutions, from ingestion and indexing to ranking, relevancy tuning, and UI integration.
- Build systems that surface scientific literature, clinical trial data, regulatory content, and real-world evidence using semantic and contextual search.
- Integrate AI models that improve search precision, query understanding, and result summarization (e.g., generative answers via LLMs).
- Partner with platform teams to deploy search solutions on scalable infrastructure (e.g., Kubernetes, cloud-native services, Databricks, Snowflake).
- Apply generative AI to search engines, integrating generative AI capabilities and vision models to enrich content quality and user engagement.
- Build and own the next generation of content knowledge platforms and other algorithms/systems that create high-quality, unique experiences.
- Design and implement advanced AI models for entity matching and data deduplication, and apply generative AI to tasks such as content summarization, deduping, and metadata quality.
- Research and develop advanced AI algorithms, including vision models for visual content analysis.
- Implement KPI measurement frameworks to evaluate the quality and performance of delivered models, including those utilizing generative AI.
- Develop and maintain deep learning models for data quality checks, visual similarity scoring, and content tagging.
- Continually research current and emerging technologies and propose changes where needed.
- Implement GenAI solutions, utilize ML infrastructure, and contribute to data preparation, optimization, and performance enhancements.
- Manage and mentor a cross-functional engineering team focused on AI, ML, and search infrastructure, fostering a collaborative, high-performance engineering culture with a focus on innovation and delivery.
- Work with domain experts, data stewards, oncologists, and product managers to align search capabilities with business and scientific needs.

Basic Qualifications:
- Degree in computer science and engineering preferred, with 9-12 years of software development experience.
- Proficiency in Spark, Kafka, Snowflake, Delta Lake, Hadoop, Databricks, MongoDB, S3 buckets, ELT and ETL, API integrations, and Java.
- Proven experience building search systems with technologies like Elasticsearch, Solr, OpenSearch, or vector databases (e.g., Pinecone, FAISS).
- Hands-on experience with various AI models, GCP search engines, and GCP cloud services.
- Strong understanding of NLP, embeddings, transformers, and LLM-based search applications.
- Proficiency in AI/ML programming with Python, GraphQL, Java crawlers, JavaScript, SQL/NoSQL, Databricks/RDS, data engineering, S3 buckets, and DynamoDB.
- Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills.
- Experience deploying ML services and search infrastructure in cloud environments (AWS, Azure, or GCP).

Preferred Qualifications:
- Experience in AI/ML, Java, REST APIs, Python, React, GraphQL, NLMS, full-stack applications, and Solr search.
- Experience with FastAPI (Python).
- Experience with design patterns, data structures, data modeling, and data algorithms.
- Knowledge of ontologies and taxonomies such as MeSH, SNOMED CT, UMLS, or MedDRA.
- Familiarity with MLOps, CI/CD for ML, and monitoring of AI models in production.
- Experience with the AWS/Azure platforms, building and deploying code.
- Experience with PostgreSQL/MongoDB databases, vector databases for large language models, Databricks or RDS, and S3 buckets.
- Experience with Google Cloud Search and Google Cloud Storage.
- Experience with popular large language models.
- Experience with the LangChain or LlamaIndex frameworks for language models.
- Experience with prompt engineering and model fine-tuning.
- Knowledge of NLP techniques for text analysis and sentiment analysis.
- Experience with generative AI or retrieval-augmented generation (RAG) frameworks in a pharma/biotech setting.
- Experience with Agile software development methodologies.
- Experience with end-to-end testing as part of test-driven development.

Good-to-Have Skills:
- Willingness to work on full-stack applications.
- Experience working with biomedical or scientific data (e.g., PubMed, clinical trial registries, internal regulatory databases).

Soft Skills:
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, remote teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.
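To make the embeddings-and-vector-search requirement concrete, here is a toy semantic-search sketch. The `embed_fn` below is a deterministic stand-in for a real embedding model (an LLM embeddings endpoint or sentence encoder would replace it), so the similarity scores here are not semantically meaningful; the index-and-rank shape is the point.

```python
import hashlib
import numpy as np

def embed_fn(text: str) -> np.ndarray:
    """Stand-in embedding: a deterministic pseudo-vector from the text hash."""
    seed = int.from_bytes(hashlib.sha256(text.encode()).digest()[:4], "big")
    return np.random.default_rng(seed).standard_normal(16)

# Hypothetical corpus of enterprise documents.
corpus = {
    "doc1": "Phase III trial results for an oncology therapy",
    "doc2": "Manufacturing change-control procedure",
}
index = {doc_id: embed_fn(text) for doc_id, text in corpus.items()}

def search(query: str, k: int = 1) -> list[tuple[str, float]]:
    """Rank documents by cosine similarity to the query embedding."""
    q = embed_fn(query)
    scores = {
        doc_id: float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))
        for doc_id, v in index.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:k]

print(search("oncology trial data"))
```

In production this in-memory dictionary would be replaced by a vector database such as the Pinecone or FAISS options the listing names.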

Posted 1 month ago

Apply

4 - 9 years

10 - 16 Lacs

Bengaluru

Remote

Role: Senior Data Analyst
Interested candidates, please share your resume with hr@hish.ca
Location: Remote (Bangalore, Chennai, Pune, Gurgaon)
Job Type: Full time
Pay: 10-16 LPA (based on experience)
Start Date: ASAP
Company: https://hish.ca/ (Canada-based company)

Job Description:
- Degree in Computer Science, Information Technology, or equivalent STEM fields.
- Minimum of 5 years of experience in an analytics domain.
- Proven experience working with senior business stakeholders (essential).
- Strong understanding of and experience with cloud platforms, preferably Databricks or Azure (essential).
- Hands-on experience with SQL and Python or PySpark, with the ability to write production-level code (essential).
- Proficiency with Power BI for analytics and data visualization (essential).
- Advanced knowledge of data wrangling and data transformation techniques for cleaning and translating raw data into usable forms.
- Familiarity with data governance, master data, and metadata management.
- Knowledge of Power Apps, Power Automate, and UiPath is a significant plus but not mandatory.
- Experience in manufacturing or supply chain is highly desirable.
- Knowledge of SAP S/4 and an understanding of the relevant data tables is a significant plus.
- Willingness to take need-based calls in the US time zone.

Posted 1 month ago

Apply

3 - 8 years

4 - 9 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Role & Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines and ETL/ELT processes.
- Develop and optimize data models for analytics and operational purposes in cloud-based data warehouses (e.g., Snowflake, Redshift, BigQuery).
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and deliver reliable datasets.
- Implement data quality checks, monitoring, and alerting for pipelines.
- Work with structured and unstructured data across various sources (APIs, databases, streaming).
- Ensure data security, compliance, and governance practices are followed.
- Write clean, efficient, and testable code using Python, SQL, or Scala.
- Support the development of data catalogs and documentation.
- Participate in code reviews and contribute to best practices in data engineering.

Preferred Candidate Profile:
- 3-9 years of hands-on experience in data engineering or a similar role.
- Strong proficiency in SQL, Python, and PySpark.
- Experience with a data pipeline orchestration tool such as Apache Airflow, Prefect, or Luigi (any one; see the Airflow sketch after this listing).
- Familiarity with a cloud platform such as AWS, Azure, or GCP (e.g., S3, Lambda, Glue, BigQuery, Dataflow; any one).
- Experience with a big data tool such as Spark, Kafka, Hive, or Hadoop (any one).
- Strong understanding of relational and non-relational databases.
- Exposure to CI/CD practices and tools (e.g., Git, Jenkins, Docker).
- Excellent problem-solving and communication skills.
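For the orchestration-tool bullet above, a minimal Apache Airflow (2.x) DAG shows the shape of such a pipeline; the DAG id, schedule, and task bodies are illustrative placeholders.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

# Placeholder task bodies; real extract/transform/load logic goes here.
def extract():
    print("pull data from the source API")

def transform():
    print("clean and model the data")

def load():
    print("write results to the warehouse")

with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",               # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # linear dependency: extract, then transform, then load
```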

Posted 1 month ago

Apply

10 - 20 years

30 - 35 Lacs

Navi Mumbai

Work from Office

Job Title: Big Data Developer - Project Support & Mentorship
Location: Mumbai
Employment Type: Full-Time/Contract
Department: Engineering & Delivery

Position Overview: We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
- Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions.
- Support ongoing client projects, addressing technical challenges and ensuring smooth delivery.
- Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution.
- Review code and provide feedback to junior engineers to maintain high-quality, scalable solutions.
- Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka.
- Lead by example in object-oriented development, particularly using Scala and Java.
- Translate complex requirements into clear, actionable technical tasks for the team.
- Contribute to the development of ETL processes for integrating data from various sources.
- Document technical approaches, best practices, and workflows for knowledge sharing within the team.

Required Skills and Qualifications:
- 8+ years of professional experience in Big Data development and engineering.
- Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka.
- Solid object-oriented development experience with Scala and Java.
- Strong SQL skills with experience working with large data sets.
- Practical experience designing, installing, configuring, and supporting Big Data clusters.
- Deep understanding of ETL processes and data integration strategies.
- Proven experience mentoring or supporting junior engineers in a team setting.
- Strong problem-solving, troubleshooting, and analytical skills.
- Excellent communication and interpersonal skills.

Preferred Qualifications:
- Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.).
- Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc).
- Exposure to Agile or DevOps practices in Big Data project environments.

What We Offer:
- Opportunity to work on challenging, high-impact Big Data projects.
- A leadership role in shaping and mentoring the next generation of engineers.
- A supportive and collaborative team culture.
- A flexible working environment.
- Competitive compensation and professional growth opportunities.

Posted 1 month ago

Apply

6 - 11 years

25 - 40 Lacs

Pune

Hybrid

Role Definition: Data Scientists focus on researching and developing AI algorithms and models. They analyse data, build predictive models, and apply machine learning techniques to solve complex problems.

Skills - Proficient:
- Languages/frameworks: FastAPI, Azure UI Search API (React)
- Databases and ETL: Cosmos DB (API for MongoDB), Data Factory, Databricks
- Proficiency in Python and R
- Cloud: Azure cloud basics (Azure DevOps)
- GitLab: GitLab pipelines
- Ansible and REX: REX deployment
- Data science: prompt engineering and modern testing
- Data mining and cleaning
- ML (supervised/unsupervised learning)
- NLP techniques; knowledge of deep learning techniques including RNNs and transformers
- End-to-end AI solution delivery
- AI integration and deployment
- AI frameworks (PyTorch)
- MLOps frameworks
- Model deployment processes
- Data pipeline monitoring

Skills - Expert (in addition to the proficient skills):
- Languages/frameworks: Azure OpenAI
- Data science: OpenAI GPT family of models (4o/4/3), embeddings and vector search
- Databases and ETL: Azure Storage Account
- Expertise in machine learning algorithms (supervised, unsupervised, reinforcement learning)
- Proficiency in deep learning frameworks (TensorFlow, PyTorch)
- Strong mathematical foundation (linear algebra, calculus, probability, statistics)
- Research methodology and experimental design
- Proficiency in data analysis tools (Pandas, NumPy, SQL)
- Strong statistical and probabilistic modelling skills
- Data visualization skills (Matplotlib, Seaborn, Tableau)
- Knowledge of big data technologies (Spark, Hive)
- Experience with AI-driven analytics and decision-making systems

Posted 1 month ago

Apply

5 - 10 years

20 - 35 Lacs

Noida, Bengaluru, Mumbai (All Areas)

Hybrid

Job Description / Skill Set:
- 5+ years of related experience with a Bachelor's degree; consulting experience preferred.
- 5+ years of hands-on experience in data engineering/ETL using Databricks on AWS/Azure cloud infrastructure and functions.
- 3+ years of experience with Power BI and data warehousing, performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Experience with AWS (e.g., S3, Athena, Glue, Lambda) preferred.
- Deep understanding of data warehousing concepts (dimensional/star schema, SCD2, Data Vault, denormalized, OBT) and of implementing highly performant data ingestion pipelines from multiple sources.
- Strong proficiency in Python and SQL.
- Deep understanding of Databricks platform features (Delta Lake, Databricks SQL, MLflow).
- Experience with CI/CD on Databricks using tools such as Bitbucket, GitHub Actions, and the Databricks CLI.
- Integrating end-to-end Databricks pipelines to take data from source systems to target data repositories, ensuring the quality and consistency of data is always maintained (see the merge sketch after this listing).
- Working within an Agile delivery / DevOps methodology to deliver proofs of concept and production implementations in iterative sprints.
- Experience with Delta Lake, Unity Catalog, Delta Sharing, Delta Live Tables (DLT), and MLflow.
- Basic working knowledge of API- or stream-based data extraction processes such as the Salesforce API and Bulk API.
- Understanding of data management principles (quality, governance, security, privacy, lifecycle management, cataloguing).
- Excellent problem-solving and analytical skills.
- Able to work independently.
- Excellent oral and written communication skills.

Nice to have:
- Databricks certifications and AWS Solutions Architect certification.
- Experience building data pipelines from various business applications such as Salesforce, Marketo, NetSuite, and Workday.
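The "source systems to target data repositories" bullet usually centers on a Delta Lake MERGE, the upsert core of such a pipeline. A minimal sketch, assuming a Databricks/delta-spark environment and an existing dim_account Delta table; a full SCD2 load would add effective-date and current-flag handling around this.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-merge-sketch").getOrCreate()

# Hypothetical incremental batch from a source system (e.g., a Salesforce extract).
updates = spark.createDataFrame(
    [("acct-1", "Enterprise"), ("acct-2", "SMB")],
    ["account_id", "segment"],
)

# Upsert into the target table: update matching keys, insert new ones.
target = DeltaTable.forName(spark, "dim_account")
(target.alias("t")
 .merge(updates.alias("s"), "t.account_id = s.account_id")
 .whenMatchedUpdateAll()
 .whenNotMatchedInsertAll()
 .execute())
```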

Posted 1 month ago

Apply

5 - 10 years

9 - 12 Lacs

Chennai, Bengaluru

Hybrid

A Senior Data Engineer specializing in Python, SQL, dbt, and Databricks.

Job Title: Senior Data Engineer
Shift: 2.30 PM to 11.30 PM
Location: Chennai/Bengaluru (hybrid)
Certification is an added advantage.

Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using dbt and Databricks.
- Optimize SQL queries for efficient data processing and analytics.
- Implement data modeling best practices to support business intelligence and reporting needs.
- Ensure data integrity, security, and governance across various platforms.
- Collaborate with data scientists, analysts, and business teams to understand data requirements.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.
- Work with cloud-based data warehouses and big data technologies to manage large datasets.
- Automate data workflows using Python and integrate with various APIs.

Qualifications:
- 5+ years of experience in data engineering or a related field.
- Strong proficiency in Python for data processing and automation.
- Expertise in SQL for querying and managing relational databases.
- Hands-on experience with dbt for data transformation and modeling.
- Familiarity with Databricks and Apache Spark for big data processing.
- Experience with cloud platforms like AWS, Azure, or GCP.
- Knowledge of CI/CD pipelines for data deployment.
- Strong problem-solving skills and ability to optimize data workflows.

Brief note from the client: the hire will work on multiple projects to ingest data and create production-grade data pipelines. The current need is to ingest data from Verint Cloud using its APIs into Databricks (see the ingestion sketch after this listing), and to create data pipelines (using Python, SQL, and dbt) that build data products for financial reporting. The exact projects may change over time, depending on the timing of the new hire. The client is looking for a senior data engineer who is technically proficient and requires minimal hand-holding; he or she would be expected to troubleshoot and solve problems independently and produce high-quality data products. The candidate would also need a good workday overlap with the client's team, working until around 2 PM US Central Time.
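The Verint ingestion task mentioned in the client note would follow the usual API-to-Delta pattern sketched below. The endpoint, auth header, response shape, and target table are all placeholder assumptions; the real Verint Cloud APIs will differ.

```python
import requests
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("api-ingest-sketch").getOrCreate()

# Placeholder endpoint and token, not the real Verint API.
resp = requests.get(
    "https://api.example.com/v1/interactions",
    headers={"Authorization": "Bearer <token>"},
    params={"date": "2024-06-01"},
    timeout=30,
)
resp.raise_for_status()
records = resp.json()["items"]   # assumed response shape

# Land the raw payload in Delta for dbt models to transform downstream;
# assumes a 'raw' schema/database already exists.
df = spark.createDataFrame(records)
df.write.format("delta").mode("append").saveAsTable("raw.interactions")
```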

Posted 1 month ago

Apply

6 - 11 years

16 - 27 Lacs

Kolkata, Ahmedabad, Bengaluru

Work from Office

Role & Responsibilities:
1. Strong experience in Azure data engineering
2. Experience in Python/PySpark
3. Experience in ADF and Databricks

Posted 1 month ago

Apply

4 - 7 years

15 - 30 Lacs

Hyderabad

Work from Office

- Minimum of 4-9 years of experience in ETL development using IICS-CDI (Cloud Data Integration), including experience with the IICS Cloud Console and PowerCenter Designer.
- Design, develop, and implement ETL solutions using IICS-CDI to extract, transform, and load data from various sources into the data warehouse.
- Strong understanding of data warehousing concepts, ETL frameworks, and best practices.
- Solid experience with SQL and database technologies such as SQL Server and Oracle (preferably Azure Synapse).
- Familiarity with data modeling and data integration techniques.
- Work with cross-functional teams to understand business requirements and translate them into technical specifications for ETL development.
- Develop and maintain ETL mappings, workflows, and schedules using IICS-CDI.
- Ensure data quality, integrity, and consistency by performing data validation, cleansing, and enrichment activities.
- Monitor ETL jobs to ensure successful completion; identify and resolve errors and performance issues.

Nice to have:
- Experience with Informatica Cloud Secure Agents.
- Familiarity with cloud technologies such as Azure, AWS, or Google Cloud Platform.
- Experience with scripting languages such as Python or shell scripting.
- Excellent analytical and problem-solving skills, with keen attention to detail.
- Ability to work independently and in a team-oriented, collaborative environment.
- Strong communication and interpersonal skills.
- Familiarity with Agile development methodologies.

Preferred Certifications:
- Informatica Cloud Data and Application Integration R38, Professional Certification
- AZ-900: Microsoft Azure Fundamentals
- DP-900: Microsoft Azure Data Fundamentals
- DP-203: Data Engineering on Microsoft Azure

Posted 1 month ago

Apply

3 - 6 years

20 - 25 Lacs

Hyderabad

Work from Office

Overview: As a member of the Platform Engineering team, you will be the key techno-functional expert leading and overseeing PepsiCo's platforms and operations, and will drive a strong vision for how platform engineering can proactively create a positive impact on the business. You'll be an empowered member of a team of platform engineers who build platform products for platform and cost optimization, build tools for platform ops and data ops on the PepsiCo Data Lake, and enable exploration and access for analytics, visualization, machine learning, and product development efforts across the company. You will help manage the platform governance team, which builds frameworks to guardrail the platforms of very large and complex data applications in public cloud environments, and you will directly impact the design, architecture, and implementation of PepsiCo's flagship data products around topics like revenue management, supply chain, manufacturing, and logistics. You will work closely with process owners, product owners, and business users in a hybrid environment with in-house, on-premises data sources as well as cloud and remote systems.

Responsibilities:
- Actively contribute to the cost optimization of platforms and services.
- Manage and scale Azure data platforms to support new product launches, and drive platform stability and observability across data products.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data platforms, covering cost and performance.
- Implement best practices around systems integration, security, performance, and platform management.
- Empower the business by creating value through the increased adoption of data, data science, and the business intelligence landscape.
- Collaborate with internal clients (data science and product teams) to drive solutioning and POC discussions.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects and strategic internal and external partners.
- Develop and optimize procedures to productionalize data science models.
- Define and manage SLAs for platforms and processes running in production.
- Support large-scale experimentation done by data scientists.
- Prototype new approaches and build solutions at scale.
- Research state-of-the-art methodologies.
- Create documentation for learnings and knowledge transfer.
- Create and audit reusable packages or libraries.

Qualifications:
- 2+ years of overall technology experience that includes at least 4+ years of hands-on software development, program management, and data engineering.
- 1+ years of experience with data lake infrastructure, data warehousing, and data analytics tools.
- 1+ years of experience in Databricks optimization and performance tuning.
- Experience managing multiple teams and coordinating with different stakeholders to implement the team's vision.
- Fluency with Azure cloud services; Azure certification is a plus.
- Experience integrating multi-cloud services with on-premises technologies.
- Experience with data modeling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience building and operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one MPP database technology such as Redshift, Synapse, or Snowflake.
- Experience with version control systems like GitHub and deployment & CI tools.
- Experience with Azure Data Factory and Azure Databricks.
- Experience with statistical/ML techniques is a plus.
- Experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- Working knowledge of agile development, including DevOps and DataOps concepts.
- Familiarity with business intelligence tools (such as Power BI).

Posted 1 month ago

Apply

7 - 11 years

50 - 60 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Role: Resident Solution Architect
Location: Remote

The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. The role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, Scala/Spark, and PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- Experience with IoT/event-driven/microservices architectures in the cloud, including private and public cloud architectures, their pros and cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the industry technology stack for data management, ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Experience in creating tables, partitioning, bucketing, and loading and aggregating data using Spark SQL/Scala (a minimal sketch follows this listing)
- Ability to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and of defining conceptual, logical, and physical data models
- Proficient-level experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs; experience with MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near-real-time data warehouses, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in software development, data engineering, or data analytics using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience

Good to have (advanced technical certifications):
- Azure Solutions Architect Expert
- AWS Certified Data Analytics; DASCA Big Data Engineering and Analytics
- AWS Certified Cloud Practitioner; Solutions Architect - Professional
- Google Cloud Certified

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
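
For illustration only (not part of the posting): a minimal PySpark sketch of the table-creation, partitioning, and Spark SQL aggregation skills named above. The storage path, table name (sales_raw), and columns (event_date, amount) are all hypothetical, and Delta Lake support is assumed to be available, as it is by default on Databricks.

# Minimal sketch, hypothetical names throughout: ingest raw files,
# write a partitioned Delta table, then aggregate it with Spark SQL.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("partitioning-sketch").getOrCreate()

# Read raw JSON from a (hypothetical) ADLS landing zone
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/sales/")

# Persist as a Delta table partitioned by a low-cardinality date column
(raw.write.format("delta")
    .mode("overwrite")
    .partitionBy("event_date")
    .saveAsTable("sales_raw"))

# Aggregate with Spark SQL for a downstream BI layer
spark.sql("""
    SELECT event_date, SUM(amount) AS daily_revenue
    FROM sales_raw
    GROUP BY event_date
    ORDER BY event_date
""").show()

Partitioning by a low-cardinality column such as event_date keeps file pruning effective; bucketing (bucketBy) applies to non-Delta Spark tables, as Delta tables do not support it.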

Posted 1 month ago

Apply

5 - 10 years

20 - 25 Lacs

Bengaluru

Work from Office


About The Role

Job Title: Transformation Principal Change Analyst
Corporate Title: AVP
Location: Bangalore, India

Role Description
We are looking for an experienced Change Manager to lead a variety of regional/global change initiatives. Utilizing the tenets of PMI, you will lead cross-functional initiatives that transform the way we run our operations. If you like to solve complex problems, have a "gets things done" attitude, and are looking for a highly visible, dynamic role where your voice is heard and your experience is appreciated, come talk to us.

What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
- Best-in-class leave policy
- Gender-neutral parental leave
- 100% reimbursement under the childcare assistance benefit (gender neutral)
- Sponsorship for industry-relevant certifications and education
- Employee Assistance Program for you and your family members
- Comprehensive hospitalization insurance for you and your dependents
- Accident and term life insurance
- Complimentary health screening for those 35 years and above

Your key responsibilities
- Responsible for change management planning, execution, and reporting, adhering to governance standards and ensuring transparency around progress status; using data to tell the story, maintain risk management controls and monitor and communicate initiative risks
- Collaborate with other departments as required to execute on timelines to meet the strategic goals
- As part of the larger team, accountable for the delivery and adoption of the global change portfolio, including but not limited to business case development/analysis, reporting, measurement and reporting of adoption success measures, and continuous improvement
- As required, using data to tell the story, participate in Working Group and Steering Committee meetings to achieve the right level of decision making and progress/transparency, establishing strong partnerships and collaborative relationships with various stakeholder groups to remove constraints to success and carry learnings forward to future projects
- As required, develop and document end-to-end roles and responsibilities, including process flows, operating procedures, and required controls; gather and document business requirements (user stories), including liaising with end users and performing analysis of gathered data
- Heavily involved in the product development journey

Your skills and experience
- Overall experience of at least 7-10 years leading complex change programs/projects, communicating and driving transformation initiatives using the tenets of PMI in a highly matrixed environment
- Banking/finance/regulated-industry experience, of which at least 2 years in the change/transformation space or associated with change/transformation initiatives is a plus
- Knowledge of client lifecycle processes and procedures, and experience with KYC data structures/data flows, is preferred
- Experience working with management reporting is preferred
- Bachelor's degree

How we'll support you
- Training and development to help you excel in your career
- Coaching and support from experts in your team
- A culture of continuous learning to aid progression
- A range of flexible benefits that you can tailor to suit your needs

About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair and inclusive work environment.

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies