Posted: 1 week ago
On-site
Full Time
Job Title: Data Engineer – PySpark, Python, SQL, Git, AWS (Glue, Lambda, Step Functions, S3, Athena)

Job Description:
We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics.

Responsibilities:
- Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.
- Collaborate with analysts to understand data requirements and ensure data availability and quality.
- Write and optimize SQL queries for data extraction, transformation, and loading.
- Use Git for version control, ensuring proper documentation and tracking of code changes.
- Design, implement, and manage scalable data lakes on AWS, using S3 and other relevant services for efficient data storage and retrieval.
- Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
- Create interactive dashboards and data visualizations using Amazon QuickSight.
- Automate workflows using AWS services such as EventBridge and Step Functions.
- Monitor and optimize data processing workflows for performance and scalability.
- Troubleshoot data-related issues and provide timely resolution.
- Stay up to date with industry best practices and emerging technologies in data engineering.

Qualifications:
- Bachelor's degree in Computer Science, Data Science, or a related field; a Master's degree is a plus.
- Strong proficiency in PySpark and Python for data processing and analysis.
- Proficiency in SQL for data manipulation and querying.
- Experience with version control systems, preferably Git.
- Familiarity with AWS services, including S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, and CodeCommit.
- Excellent problem-solving skills and attention to detail.
- Strong communication and collaboration skills to work effectively within a team.
- Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.

Preferred Skills:
- Knowledge of data warehousing concepts and data modeling.
- Familiarity with big data technologies like Hadoop and Spark.
- AWS certifications related to data engineering.

Join our team and contribute to our mission of turning data into actionable insights. If you're a motivated data engineer with expertise in PySpark, Python, SQL, Git, and AWS, we want to hear from you. Apply now to be part of our innovative and dynamic data engineering team.
EXL
Gurgaon, Haryana, India
Salary: Not disclosed