
Experience: 5 - 10 years

Salary: 11 - 20 Lacs

Posted: 6 hours ago | Platform: Naukri


Work Mode

Work from Office

Job Type

Full Time

Job Description

Mandatory Skills: AWS, Python, PySpark, SQL, Databricks

Role & Responsibilities

  • Design, develop, and maintain robust and scalable data pipelines using AWS services and Databricks.
  • Implement data processing solutions using PySpark and SQL to handle large volumes of data efficiently.
  • Collaborate with cross-functional teams to gather requirements and deliver data solutions that meet business needs.
  • Ensure data quality and integrity through rigorous testing and validation processes.
  • Optimize data workflows for performance and cost-efficiency.
  • Document data processes and provide support for data-related issues.

Preferred Candidate Profile

  • AWS Services: Proficiency in AWS services such as S3, EC2, Lambda, and Redshift.
  • Programming: Strong experience in Python for data manipulation and scripting.
  • Big Data Processing: Hands-on experience with PySpark for distributed data processing.
  • SQL: Expertise in writing complex SQL queries for data extraction and transformation.
  • Databricks: Experience in developing and managing workflows in a Databricks environment.


Sparshcorp Support Solutions

IT Services and Consulting

51-200 Employees

8 Jobs

Key People

  • Nitin Sharma, Founder & CEO
  • Priya Verma, Chief Operations Officer
