SQL, Python, Spark, Databricks

7 - 11 years

7 - 11 Lacs

Hyderabad / Secunderabad, Telangana, India

Posted: 1 day ago | Platform: Foundit


Skills Required

Airflow / Prefect, Designing & Building Frameworks, Snowflake, AWS / Azure / GCP, Leading Teams

Work Mode

On-site

Job Type

Full Time

Job Description

Key Responsibilities:

- Develop and optimize data pipelines using Spark and Databricks.
- Write complex SQL queries to analyze and manipulate large datasets.
- Implement Python-based scripts for data processing and automation.
- Design and maintain ETL workflows for structured and unstructured data.
- Collaborate with cross-functional teams to ensure high-performance data architectures.
- Ensure data quality, governance, and security within the pipelines.

Mandatory Skills:

- Strong proficiency in SQL, Python, Spark, and Databricks.
- Hands-on experience with distributed computing frameworks.

Good-to-Have Skills (Optional):

- Experience with Airflow / Prefect for workflow orchestration.
- Knowledge of Snowflake for cloud data warehousing.
- Experience designing and building frameworks for data processing and/or data quality.
- Experience with AWS / Azure / GCP cloud environments.
- Experience with data modeling.
- Exposure to Kafka for real-time data streaming.
- Experience with NoSQL databases.
- Exposure to data visualization tools such as Power BI, Google Looker, or Tableau.

Preferred Qualifications:

- Bachelor's/Master's degree in Computer Science, Engineering, or a related field.
- Strong analytical and problem-solving skills.
- Effective communication and teamwork abilities.

Tavant

