Posted: 2 days ago
Platform: Hybrid
Full Time
We are looking for a skilled Data Engineer with hands-on experience in Airflow, Python, AWS, and Big Data technologies such as Spark to join our dynamic team.

Key Responsibilities
- Design and implement data pipelines and workflows using Apache Airflow
- Develop robust and scalable data processing applications using Python
- Leverage AWS services (S3, EMR, Lambda, Glue, Redshift, etc.) for data engineering and ETL pipelines
- Work with Big Data technologies such as Apache Spark to process large-scale datasets
- Optimize and monitor data pipelines for performance, reliability, and scalability
- Collaborate with Data Scientists, Analysts, and Business teams to understand data needs and deliver solutions
- Ensure data quality, consistency, and governance across all data pipelines
- Document processes, pipelines, and best practices

Mandatory Skills
- Apache Airflow: workflow orchestration and scheduling
- Python: strong programming skills for data engineering
- AWS: hands-on experience with core AWS data services
- Big Data technologies, particularly Apache Spark

Location: Hyderabad (Hybrid)
Please share your resume at +91 9361912009
Kryon Knowledge Works