Posted: 3 days ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Contractual

Job Description

About the Company


The company balances innovation with an open, friendly culture, backed by a long-established parent company known for its ethical reputation. It guides customers from what's now to what's next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society.


About the Client


Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations.

Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines.

Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering, reflecting its strategic commitment to driving innovation and value for clients across industries.


JD:

  • 4+ years of relevant IT experience in the BI/DW domain, with a minimum of 2 years of hands-on experience on the Azure modern data platform, including Data Factory, Databricks, Synapse (Azure SQL DW), and Azure Data Lake
  • Meaningful experience in data analysis and transformation using Python/R/Scala on Azure Databricks or Apache Spark (see the first sketch after this list)
  • Well versed in NoSQL data store concepts
  • Good knowledge of distributed processing using Databricks (preferred) or Apache Spark
  • Ability to debug using tools like the Ganglia UI, with expertise in optimizing Spark jobs (see the tuning sketch after this list)
  • Ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
  • Expertise in creating data structures optimized for storage and various query patterns, e.g. Parquet and Delta Lake
  • Meaningful experience in at least one database technology in each segment (see the JDBC sketch after this list), such as:
    o Traditional RDBMS (MS SQL Server, Oracle)
    o MPP (Teradata, Netezza)
    o NoSQL (MongoDB, Cassandra, Neo4j, Cosmos DB, Gremlin)
  • Understanding of information security principles to ensure compliant handling and management of data
  • Experience with traditional data warehousing / ETL tools (Informatica, IBM DataStage, Microsoft SSIS)
  • Effective communication skills
  • Proficiency in working with large and complex code bases (GitHub, Gitflow, fork/pull model)
  • Working experience with Agile methodologies (Scrum, XP, Kanban)
  • Data Modelling: one to three years
  • Developer / Software Engineer: one to three years
  • PSP Defined SCU in Data Engineering_Data Engineer
  • Location: Pune
  • Grade: 6 years
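
As a rough illustration of the Databricks/Spark transformation work described above, a minimal PySpark sketch might look like the following. The storage paths, account name, and column names are hypothetical, chosen only to show the pattern of reading mixed structured and semi-structured inputs from Azure Data Lake and writing a Delta table.

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("orders-etl").getOrCreate()

    # Semi-structured JSON and structured CSV from ADLS Gen2 (paths are hypothetical)
    orders = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/orders/")
    customers = (
        spark.read.option("header", True).option("inferSchema", True)
        .csv("abfss://raw@examplelake.dfs.core.windows.net/customers/")
    )

    # Flatten a nested struct field and derive a date column for partitioning
    enriched = (
        orders
        .withColumn("city", F.col("address.city"))        # pull a field out of a nested struct
        .withColumn("order_date", F.to_date("order_ts"))
        .join(customers, on="customer_id", how="left")
    )

    # Delta Lake output, partitioned for common date-range query patterns
    (
        enriched.write.format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("abfss://curated@examplelake.dfs.core.windows.net/orders_enriched/")
    )

Partitioning the Delta output by order_date is one way to serve date-range query patterns; the right partition column always depends on actual access patterns.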
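
For the Spark-job optimization bullet, two common techniques are broadcasting a small dimension table to avoid a shuffle-heavy sort-merge join, and right-sizing shuffle parallelism for a small aggregate output. A sketch, again with hypothetical table and column names:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("spark-tuning").getOrCreate()

    facts = spark.read.format("delta").load("/mnt/curated/orders_enriched")   # large fact table
    countries = spark.read.format("delta").load("/mnt/curated/dim_country")   # small lookup table

    # 1) Broadcast the small dimension so the join avoids a full shuffle
    joined = facts.join(F.broadcast(countries), on="country_code", how="left")

    # 2) Shrink shuffle parallelism when the aggregate output is small
    spark.conf.set("spark.sql.shuffle.partitions", "64")
    summary = joined.groupBy("country_name").agg(F.sum("amount").alias("total_amount"))

    summary.explain()   # look for BroadcastHashJoin in the physical plan

Inspecting the physical plan with explain(), alongside cluster monitoring tools such as the Ganglia UI, is the usual way to confirm an optimization actually took effect.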
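
For the traditional RDBMS segment, a typical ingestion pattern is a partitioned JDBC read into Spark for downstream transformation. The server, database, table, credentials, and partition bounds below are placeholders; in practice credentials would come from a Databricks secret scope or Azure Key Vault.

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("jdbc-ingest").getOrCreate()

    transactions = (
        spark.read.format("jdbc")
        .option("url", "jdbc:sqlserver://example.database.windows.net:1433;database=sales")
        .option("dbtable", "dbo.transactions")
        .option("user", "etl_user")                 # hypothetical; use a secret store in practice
        .option("password", "<from-secret-store>")
        .option("partitionColumn", "txn_id")        # split the read across parallel connections
        .option("lowerBound", "1")
        .option("upperBound", "10000000")
        .option("numPartitions", "8")
        .load()
    )

    transactions.show(5)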
