Posted: 1 day ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Full Time

Job Description

  • Primary skill: Databricks
  • Diagnose client problem areas, design innovative solutions, and facilitate deployment ensuring client satisfaction
  • Own parts of proposal documents, contribute to solution design, and prepare effort estimates aligned with client budgets
  • Conduct solution/product demonstrations, Proofs of Concept (POCs), and Proof of Technology workshops
  • Design, develop, and maintain scalable data pipelines and ETL processes using Databricks and Apache Spark (a minimal illustrative sketch follows this list)
  • Implement data transformations, cleansing, and enrichment for large datasets across Azure, AWS, or GCP cloud environments
  • Optimize Spark jobs for performance and cost efficiency in distributed computing environments (a tuning sketch also follows this list)
  • Ensure compliance with data security, governance, and industry standards
  • Troubleshoot and resolve complex data processing, integration, and performance bottlenecks
  • Develop reusable components and maintain version control using Git or similar tools
  • Document technical designs, workflows, and best practices for team enablement
  • Actively lead small to mid-sized projects, mentor junior engineers, and contribute to organizational initiatives
  • Collaborate with cross-functional teams including data engineers, analysts, and business stakeholders to deliver optimized solutions
  • Provide client interfacing, project management, and team leadership to ensure high-quality delivery
  • Ability to develop value-creating strategies and models that enable clients to innovate and grow
  • Strong logical thinking, problem-solving, and collaboration skills
  • Understanding of financial processes, pricing models, and industry domain knowledge
  • Good knowledge of software configuration management systems
  • Awareness of latest technologies and industry trends
  • Knowledge of one or two industry domains preferred
  • Required technical expertise: Databricks, Apache Spark, Python/Scala, SQL, ETL concepts, data warehousing, data modeling
  • Experience with cloud platforms: Azure Data Lake, AWS S3, GCP
  • Familiarity with CI/CD pipelines, Git, and DevOps practices for data solutions
  • Eligibility: 10+ years in Data Engineering / Consulting with strong Databricks expertise
  • Work model: WFO (work from office)
  • Availability: Immediate joiner preferred
  • Locations: Bangalore & Chennai
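
To give a concrete feel for the pipeline responsibilities above, here is a minimal PySpark sketch of the kind of ETL work the role describes: read raw files from cloud object storage, cleanse and enrich them, and write a curated Delta table. It is an illustration only, not part of the posting; the storage paths, dataset, and column names are hypothetical placeholders.

```python
# Minimal ETL sketch (illustrative; paths, dataset, and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: raw CSV files landed in cloud object storage (placeholder ADLS path;
# an S3 or GCS URI would work the same way).
raw = (
    spark.read
    .option("header", True)
    .csv("abfss://raw@examplelake.dfs.core.windows.net/sales/")
)

# Transform: basic cleansing and enrichment.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("order_year", F.year("order_date"))
)

# Load: write a partitioned Delta table for downstream analytics.
(
    clean.write
         .format("delta")
         .mode("overwrite")
         .partitionBy("order_year")
         .save("abfss://curated@examplelake.dfs.core.windows.net/sales/")
)
```

In a real Databricks job, logic like this would typically run as a notebook or Workflows task, with schema enforcement and data-quality checks layered on top.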

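The "optimize Spark jobs" responsibility usually comes down to a handful of recurring moves; the sketch below illustrates a few of them: broadcasting a small dimension table, caching a reused DataFrame, enabling adaptive query execution, and controlling output file counts. Table paths and column names are again hypothetical placeholders, not details from this posting.

```python
# Common Spark tuning moves (illustrative; paths and columns are hypothetical).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-sketch").getOrCreate()

# Let adaptive query execution right-size shuffle partitions at runtime.
spark.conf.set("spark.sql.adaptive.enabled", "true")

orders = spark.read.format("delta").load("/mnt/curated/orders")    # large fact table
regions = spark.read.format("delta").load("/mnt/curated/regions")  # small dimension

# Broadcast the small dimension to avoid a shuffle-heavy sort-merge join.
enriched = orders.join(F.broadcast(regions), on="region_id", how="left")

# Cache only data that is reused by several downstream actions.
enriched.cache()

# Coalesce before writing to avoid producing thousands of tiny output files.
(
    enriched.coalesce(64)
            .write.format("delta")
            .mode("overwrite")
            .save("/mnt/curated/orders_enriched")
)
```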