Posted: 2 days ago | Platform: LinkedIn

Work Mode: On-site

Job Type: Contractual

Job Description

• Lead the end-to-end migration of data pipelines and workloads from Teradata to Databricks.

• Analyze existing Teradata workloads to identify optimization opportunities and migration strategies.

• Design and implement scalable, efficient, and cost-effective data pipelines using PySpark/Scala on Databricks (a minimal sketch follows this list).
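
As a rough illustration only, not part of the posting itself: a single-table Teradata-to-Delta copy on Databricks might look like the PySpark below, where the JDBC URL, credentials, and table names are all hypothetical placeholders.

from pyspark.sql import SparkSession

# On Databricks a `spark` session already exists; getOrCreate() reuses it.
spark = SparkSession.builder.appName("teradata-to-delta").getOrCreate()

# Pull one source table from Teradata over JDBC (requires the Teradata
# JDBC driver on the cluster). Host, credentials, and table are placeholders.
src_df = (
    spark.read.format("jdbc")
    .option("url", "jdbc:teradata://td-host/DATABASE=sales")  # hypothetical host
    .option("driver", "com.teradata.jdbc.TeraDriver")
    .option("dbtable", "sales.orders")  # hypothetical source table
    .option("user", "etl_user")         # hypothetical credentials
    .option("password", "***")
    .load()
)

# Land the data as a managed Delta table (Delta is the default table
# format on Databricks); the target schema/table is also a placeholder.
src_df.write.format("delta").mode("overwrite").saveAsTable("bronze.orders")

A production migration would add incremental loads, schema and row-count validation against the Teradata source, and orchestration, but the read-then-write shape stays the same.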

Skills Required:

• Strong experience with Databricks, Apache Spark, and Delta Lake.

• Strong problem-solving and communication skills.

• Proficiency in PySpark and SQL; Scala is a plus.

• Strong understanding of Spark and Databricks performance tuning techniques (see the brief sketch after this list).

• Understanding of Teradata utilities, BTEQ scripts, and stored procedures.

• Experience with cloud platforms (Azure, AWS, or GCP), preferably Azure Databricks.

• Familiarity with data warehousing concepts, data modeling, and big data best practices.

• Experience with version control and CI/CD tools.
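
Purely as an illustration of the tuning techniques named above (none of this comes from the posting; it reuses the `spark` session from the earlier sketch, and table and column names are hypothetical), a few common Spark/Databricks moves:

from pyspark.sql.functions import broadcast

# Let Adaptive Query Execution re-plan joins and shuffle partition
# counts at runtime (enabled by default on recent Databricks runtimes).
spark.conf.set("spark.sql.adaptive.enabled", "true")

# Compact small files and cluster rows on a frequent filter column so
# selective reads of the Delta table skip more data.
spark.sql("OPTIMIZE bronze.orders ZORDER BY (order_date)")

# Broadcast a small dimension table to turn a shuffle join into a
# map-side join and avoid shuffling the large fact table.
orders = spark.table("bronze.orders")
customers = spark.table("bronze.customers")  # hypothetical dimension table
joined = orders.join(broadcast(customers), "customer_id")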


Location -

Relevant Experience -

Mandatory skills -

Desired skills -

Domain (Industry) -

Total Experience (Ex. 5-7 Years) -

