Work from Office
Full Time
Design, develop, and maintain data pipelines and ETL workflows using PySpark and SQL.
Work with large-scale datasets to ensure data accuracy, consistency, and quality.
Implement and manage data solutions on AWS Cloud services (e.g., S3, Glue, EMR, Redshift, Lambda).
Collaborate with business analysts and other stakeholders to understand requirements and deliver robust solutions.
Optimize queries and data processes for performance and scalability.
Troubleshoot and resolve issues in existing data workflows and pipelines.
Proven 4+ years of experience in the following:
Building production-grade data pipelines using PySpark.
Strong proficiency in SQL (query optimization, data modeling, performance tuning).
Hands-on experience with AWS cloud services for data engineering.
Solid understanding of data warehousing concepts and experience with tools like Redshift, Snowflake, or similar (nice to have).
Experience with version control (Git), CI/CD, and Agile methodologies.
Strong problem-solving skills and the ability to work independently as well as in a team.
Exposure to the pharma/healthcare domain.
Axtria