Posted: 1 month ago | Platform: Naukri

Work Mode: Hybrid

Job Type: Full Time

Job Description

Role & responsibilities

Functional Skills

  • Identifying, designing, and implementing internal process improvements, such as redesigning infrastructure for greater scalability, improving data delivery, and automating manual procedures.
  • Building analytical tools that leverage the data flow to provide practical insight into key business performance indicators such as operational efficiency and customer acquisition.
  • Collaborating with stakeholders, including the Executive, Product, Data, and Design teams, to resolve data-related technical challenges and support their data infrastructure needs.
  • Staying up to date with technology developments and industry standards to deliver higher-quality results.

Technical Skills:

  • Analyze large datasets to derive actionable insights and support decision-making processes.
  • Develop and maintain data pipelines using PySpark and other data processing tools (a minimal pipeline sketch follows this list).
  • Write efficient SQL queries to extract, transform, and load data from various sources.
  • Implement data models and schemas to organize and optimize data storage and retrieval.
  • Perform data normalization and denormalization to ensure data integrity and accessibility.
  • Collaborate with data engineers to centralize and manage data assets.
  • Ensure data quality through validation and cleansing processes.
  • Utilize CI/CD pipelines to streamline data deployment and maintain continuous integration.
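
As an illustration of the PySpark and SQL responsibilities listed above, here is a minimal batch-pipeline sketch. The file paths, table name, and column names (customer_id, txn_ts, amount) are hypothetical placeholders, not details from the posting.

    # Minimal PySpark extract-transform-load sketch (hypothetical paths and columns).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("customer_txn_pipeline").getOrCreate()

    # Extract: load raw transactions from a hypothetical landing zone.
    raw = (
        spark.read
        .option("header", True)
        .option("inferSchema", True)
        .csv("/data/landing/transactions.csv")
    )

    # Transform: use Spark SQL to clean and aggregate per customer per day.
    raw.createOrReplaceTempView("transactions")
    daily_summary = spark.sql("""
        SELECT customer_id,
               CAST(txn_ts AS DATE) AS txn_date,
               COUNT(*)             AS txn_count,
               SUM(amount)          AS total_amount
        FROM transactions
        WHERE amount IS NOT NULL
        GROUP BY customer_id, CAST(txn_ts AS DATE)
    """)

    # Load: write a partitioned, query-friendly copy for downstream analytics.
    (
        daily_summary.write
        .mode("overwrite")
        .partitionBy("txn_date")
        .parquet("/data/curated/daily_customer_summary")
    )

    spark.stop()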

Preferred candidate profile

Qualifications:

  • Minimum 4 years of proven experience in data analytics and working with large datasets.
  • Proficiency in Python, including libraries such as Pandas and NumPy for data manipulation (a brief example follows this list).
  • Strong SQL skills for querying and managing databases.
  • Experience with PySpark for large-scale data processing.
  • Basic understanding of Hadoop and its ecosystem.
  • Familiarity with data engineering concepts and best practices.
  • Knowledge of data modeling, including schemas, normalization, and denormalization techniques.
  • Understanding of data centralization, cardinality, and data quality principles.
  • Experience with CI/CD pipelines and tools is a plus.
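
As a brief example of the Pandas/NumPy data manipulation mentioned in the qualifications, the sketch below cleanses a small, made-up loan dataset and derives a reporting field; the column names and delinquency buckets are assumptions for illustration only.

    # Small Pandas/NumPy cleansing and feature-derivation sketch (made-up data).
    import numpy as np
    import pandas as pd

    # Hypothetical loan-level dataset.
    loans = pd.DataFrame({
        "loan_id": [101, 102, 103, 104],
        "principal": [25000.0, 12000.0, np.nan, 40000.0],
        "days_past_due": [0, 45, 10, 95],
    })

    # Basic cleansing: fill a missing principal with the column median.
    loans["principal"] = loans["principal"].fillna(loans["principal"].median())

    # Derive a delinquency bucket with NumPy for downstream reporting.
    loans["delinquency_bucket"] = np.select(
        [loans["days_past_due"] == 0,
         loans["days_past_due"] <= 30,
         loans["days_past_due"] <= 90],
        ["current", "1-30", "31-90"],
        default="90+",
    )

    print(loans.groupby("delinquency_bucket")["principal"].sum())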

Banking

  • Deep understanding of banking operations, financial products, and regulatory frameworks
  • Experience with data modeling, ETL processes, and statistical analysis
  • Prior experience in retail or corporate banking analytics
  • Analyze banking data including customer transactions, loan performance, and financial statements
  • Support credit risk analysis and fraud detection initiatives (a simple screening sketch follows this list)
  • Maintain and optimize banking databases and data pipelines
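
As a simple, hedged illustration of the fraud-detection support mentioned above, the sketch below flags transactions that are far above a customer's average amount; the in-memory sample data, column names, and 3x threshold are assumptions, not details from the posting.

    # Naive transaction-screening sketch using a PySpark window function
    # (sample data and the 3x-average threshold are assumptions).
    from pyspark.sql import SparkSession, Window, functions as F

    spark = SparkSession.builder.appName("txn_screening").getOrCreate()

    txns = spark.createDataFrame(
        [(1, "2024-01-02", 120.0), (1, "2024-01-05", 95.0), (1, "2024-01-09", 900.0),
         (2, "2024-01-03", 40.0), (2, "2024-01-07", 45.0)],
        ["customer_id", "txn_date", "amount"],
    )

    # Compare each transaction with the customer's overall average amount.
    per_customer = Window.partitionBy("customer_id")
    flagged = (
        txns.withColumn("avg_amount", F.avg("amount").over(per_customer))
            .withColumn("suspicious", F.col("amount") > 3 * F.col("avg_amount"))
    )

    flagged.show()
    spark.stop()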

KPMG Assurance and Consulting Services LLP