Posted: 3 days ago | On-site | Full Time
At EXL, we go beyond capabilities to focus on collaboration and character, tailoring solutions to your unique needs, culture, goals, and technology environments. We specialize in transformation, data science, and change management to enhance efficiency, improve customer relationships, and drive revenue growth. Our expertise in analytics, digital interventions, and operations management helps you outperform the competition with sustainable models at scale. As your business evolution partner, we help you leverage data for better business decisions and intelligence-driven operations. For more information, visit www.exlservice.com.
Job Title: Data Engineer - PySpark, Python, SQL, Git, AWS services (Glue, Lambda, Step Functions, S3, Athena)
We are seeking a talented Data Engineer with expertise in PySpark, Python, SQL, Git, and AWS to join our dynamic team. The ideal candidate will have a strong background in data engineering, data processing, and cloud technologies. You will play a crucial role in designing, developing, and maintaining our data infrastructure to support our analytics initiatives.
Key Responsibilities:
1. Develop and maintain ETL pipelines using PySpark and AWS Glue to process and transform large volumes of data efficiently.
2. Collaborate with analysts to understand data requirements and ensure data availability and quality.
3. Write and optimize SQL queries for data extraction, transformation, and loading.
4. Utilize Git for version control, ensuring proper documentation and tracking of code changes.
5. Design, implement, and manage scalable data lakes on AWS, including S3, or other relevant services for efficient data storage and retrieval.
6. Develop and optimize high-performance, scalable databases using Amazon DynamoDB.
7. Use Amazon QuickSight to create interactive dashboards and data visualizations.
8. Automate workflows using AWS services such as EventBridge and Step Functions.
9. Monitor and optimize data processing workflows for performance and scalability.
10. Troubleshoot data-related issues and provide timely resolution.
11. Stay up-to-date with industry best practices and emerging technologies in data engineering.
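The extract-transform-load pattern in item 1 can be sketched at small scale. The snippet below is a minimal, illustrative pass using only the Python standard library, with in-memory SQLite standing in for a Glue/Athena backend; the table, columns, and sample rows are hypothetical, not part of the role's actual stack.

```python
import sqlite3

# Hypothetical raw order events; in production these would land in S3
# and be processed by a PySpark/Glue job rather than held in memory.
raw_events = [
    ("o-1", "widget", 3, 4.50),
    ("o-2", "widget", 1, 4.50),
    ("o-3", "gadget", 2, 9.99),
]

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE orders (order_id TEXT, product TEXT, qty INTEGER, unit_price REAL)"
)
conn.executemany("INSERT INTO orders VALUES (?, ?, ?, ?)", raw_events)

# Transform: aggregate revenue per product -- the kind of query that
# would run on Athena over a partitioned data lake in this role.
rows = conn.execute(
    """
    SELECT product, SUM(qty * unit_price) AS revenue
    FROM orders
    GROUP BY product
    ORDER BY revenue DESC
    """
).fetchall()

for product, revenue in rows:
    print(f"{product}: {revenue:.2f}")
# prints:
# gadget: 19.98
# widget: 18.00
```

The same shape scales up by swapping the in-memory table for PySpark DataFrames reading from S3 and writing partitioned Parquet back to the lake.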
Qualifications:
1. Strong proficiency in PySpark and Python for data processing and analysis.
2. Proficiency in SQL for data manipulation and querying.
3. Experience with version control systems, preferably Git.
4. Excellent problem-solving skills and attention to detail.
5. Strong communication and collaboration skills to work effectively within a team.
6. Ability to manage multiple tasks and prioritize effectively in a fast-paced environment.
Preferred:
1. Familiarity with big data technologies like Hadoop and Spark.
2. AWS certifications related to data engineering.
Salary: Not disclosed