Posted: 3 months ago | Work from Office | Full Time
Job Title: Data Engineer

Job Summary:
We are looking for a skilled Data Engineer to join our team and contribute to building scalable, high-performance data pipelines and infrastructure. The ideal candidate will have extensive experience with SQL, PySpark, AWS Glue, and Amazon Redshift. You will be responsible for designing, developing, and optimizing data workflows, enabling actionable insights from large datasets.

Key Responsibilities:
- Develop, test, and maintain robust data pipelines using PySpark and AWS Glue (a minimal sketch of such a pipeline appears after this listing).
- Design and optimize data models for Amazon Redshift to ensure high-performance analytics.
- Write complex SQL queries for data extraction, transformation, and analysis.
- Implement ETL workflows to process large datasets across multiple systems.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver solutions.
- Monitor and maintain data pipeline performance, reliability, and scalability.
- Ensure data quality and integrity through validation and error-handling mechanisms.
- Use Infrastructure as Code (IaC) to deploy and manage data infrastructure.
- Document data workflows, processes, and systems for operational and troubleshooting purposes.

Required Skills & Qualifications:
- Proficiency in SQL for complex querying and performance optimization.
- Hands-on experience with PySpark for big data processing and transformations.
- Expertise in AWS Glue for ETL development and data catalog management.
- Strong knowledge of Amazon Redshift, including schema design, performance tuning, and workload management.
- Familiarity with data lake architecture and related AWS services (e.g., S3, Athena, Lambda).
- Understanding of distributed computing principles and big data frameworks.
- Experience with version control systems such as Git.
- Strong problem-solving and analytical skills.

Preferred Skills:
- Familiarity with data pipeline orchestration tools (e.g., Apache Airflow, Step Functions).
- Experience with real-time data processing and streaming frameworks.
- Knowledge of AWS infrastructure and best practices for cost optimization.
- Exposure to machine learning workflows and tools.
- Familiarity with Agile methodologies and CI/CD pipelines.

If interested, please contact Mariam at 8971971804.
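For reference, below is a minimal sketch of the kind of PySpark/AWS Glue job this role involves: reading a table registered in the Glue Data Catalog, applying a simple transformation, and loading the curated rows into Amazon Redshift. The database, table, connection, and temp-directory names are hypothetical placeholders, not part of the posting.

import sys

from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

# Standard Glue job arguments; TempDir serves as the S3 staging area for the Redshift load.
args = getResolvedOptions(sys.argv, ["JOB_NAME", "TempDir"])

sc = SparkContext()
glue_context = GlueContext(sc)
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read raw events from the Glue Data Catalog (hypothetical database and table names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="raw_db", table_name="events"
)

# PySpark transformations: drop rows without a key, then deduplicate on it.
cleaned = (
    raw.toDF()
    .filter("event_id IS NOT NULL")
    .dropDuplicates(["event_id"])
)

# Load the curated data into Redshift through a Glue connection (hypothetical names).
glue_context.write_dynamic_frame.from_jdbc_conf(
    frame=DynamicFrame.fromDF(cleaned, glue_context, "cleaned"),
    catalog_connection="redshift-connection",
    connection_options={"dbtable": "analytics.events", "database": "dev"},
    redshift_tmp_dir=args["TempDir"],
)

job.commit()

Writing through a Glue connection with an S3 temp directory lets Redshift ingest the data in bulk via COPY rather than row-by-row inserts, which is typically the preferred pattern at the data volumes the posting describes.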
Alp Consulting