Posted: 3 weeks ago
Platform: Hybrid
Full Time
Role & responsibilities

- Data Pipeline Development: Design, build, and maintain scalable ETL/ELT data pipelines using PySpark and Python. Process structured and unstructured data from various sources (e.g., APIs, files, databases).
- Data Integration: Integrate data from internal and external sources into data lakes or warehouses (such as AWS S3, Azure Data Lake, or Hadoop HDFS).
- Database Management: Write optimized SQL queries for data extraction, transformation, and aggregation. Ensure data quality and integrity during ingestion and processing.

Preferred candidate profile

- 8-12 years of hands-on experience in Data Engineering or ETL development
- Strong proficiency in Python, PySpark, and SQL
- Experience in building and optimizing ETL pipelines for large-scale data
- Excellent problem-solving and communication skills
- Ability to work independently and in a team-oriented environment
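As a rough illustration of the extract-transform-load work the role describes, here is a minimal sketch in plain Python. All names and data in it are hypothetical, and sqlite3 stands in for the PySpark and data-lake/warehouse stack named above, purely so the example is self-contained.

```python
import sqlite3

# Extract: pretend these raw records arrived from an API or a file
# (hypothetical sample data for illustration only).
raw_records = [
    {"region": " south", "amount": "120.50"},
    {"region": "south ", "amount": "79.50"},
    {"region": "north",  "amount": "200.00"},
]

def transform(record):
    # Transform: normalise types and trim whitespace before loading,
    # a small example of the data-quality checks mentioned in the role.
    return (record["region"].strip(), float(record["amount"]))

# Load: sqlite3 in-memory DB stands in for a warehouse or data lake.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    (transform(r) for r in raw_records),
)

# Aggregate with SQL over the loaded data.
rows = conn.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # → [('north', 200.0), ('south', 200.0)]
```

In a real PySpark pipeline the same extract/transform/load shape would apply, with DataFrames and a cluster-backed store in place of the in-memory table.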
Consulting Krew
Hyderabad, Telangana, India
Experience: Not specified
Salary: Not disclosed