AWS Data Engineer

Experience

0 years

Salary

0 Lacs

Posted: 3 days ago | Platform: LinkedIn


Work Mode

On-site

Job Type

Full Time

Job Description

At EXL, we go beyond capabilities to focus on collaboration and character, tailoring solutions to your unique needs, culture, goals, and technology environment. We specialize in transformation, data science, and change management to enhance efficiency, improve customer relationships, and drive revenue growth. Our expertise in analytics, digital interventions, and operations management helps you outperform the competition with sustainable models at scale. As your business evolution partner, we help you leverage data for better business decisions and intelligence-driven operations. For more information, visit www.exlservice.com.


Job Title: Data Engineer – PySpark, Python, SQL, Git, AWS services (Glue, Lambda, Step Functions, S3, Athena)


Role Description

We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics needs.


Responsibilities:

1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.

2. Collaborate with analysts to understand data requirements and ensure data availability and quality.

3. Write and optimize SQL queries for data extraction, transformation, and loading.

4. Utilize Git for version control, ensuring proper documentation and tracking of code changes.

5. Design, implement, and manage scalable data lakes on AWS, using S3 and other relevant services for efficient data storage and retrieval.

6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.

7. Create interactive dashboards and data visualizations using Amazon QuickSight.

8. Automate workflows using AWS services such as EventBridge and Step Functions.

9. Monitor and optimize data processing workflows for performance and scalability.

10. Troubleshoot data-related issues and provide timely resolution.

11. Stay up-to-date with industry best practices and emerging technologies in data engineering.
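The extract–transform–load pattern in the responsibilities above can be sketched in miniature. A production pipeline for this role would use PySpark on AWS Glue reading from S3, but the same shape fits in a stdlib-only Python sketch; the table and column names here are hypothetical placeholders, not part of the role:

```python
import sqlite3

# An in-memory database stands in for the data lake/warehouse; in production
# the source would be S3 and the engine PySpark/Glue, per the role description.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE raw_orders (customer_id TEXT, order_date TEXT, amount REAL)"
)

# Extract: raw rows, including a malformed record to be filtered out.
rows = [
    ("c1", "2024-01-01", 10.0),
    ("c1", "2024-01-01", 5.5),
    ("c2", "2024-01-01", None),   # bad row: missing amount
    ("c2", "2024-01-02", 7.25),
]
conn.executemany("INSERT INTO raw_orders VALUES (?, ?, ?)", rows)

# Transform + Load: aggregate daily totals per customer, dropping bad rows.
conn.execute("""
    CREATE TABLE daily_totals AS
    SELECT customer_id, order_date, SUM(amount) AS daily_total
    FROM raw_orders
    WHERE amount IS NOT NULL
    GROUP BY customer_id, order_date
""")

totals = {
    (c, d): t
    for c, d, t in conn.execute(
        "SELECT customer_id, order_date, daily_total FROM daily_totals"
    )
}
print(totals[("c1", "2024-01-01")])  # 15.5
```

The filter-then-aggregate SQL is the same kind of query item 3 asks candidates to write and optimize; only the engine changes at scale.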

Qualifications:

1. Strong proficiency in PySpark and Python for data processing and analysis.

2. Proficiency in SQL for data manipulation and querying.

3. Experience with version control systems, preferably Git.

4. Hands-on experience with AWS services: S3, Redshift, Glue, Step Functions, EventBridge, CloudWatch, Lambda, QuickSight, DynamoDB, Athena, CodeCommit.

5. Experience with Databricks.

6. Excellent problem-solving skills and attention to detail.

7. Strong communication and collaboration skills to work effectively within a team.

8. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.

Preferred Skills:

1. Familiarity with big data technologies like Hadoop and Spark.

2. AWS certifications related to data engineering.

EXL

Business Process Management / Analytics

New York
