Job Description
Project Role: Custom Software Engineer
Project Role Description: Develop custom software solutions to design, code, and enhance components across systems or applications. Use modern frameworks and agile practices to deliver scalable, high-performing solutions tailored to specific business needs.
Must have skills: PySpark
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education
Summary: We are seeking a highly skilled and experienced Data Engineer with at least 5 years of experience in data engineering or a related field to develop solutions using PySpark and data warehousing technologies. This role requires expertise in both development and design, with the ability to guide a team and architect scalable, efficient data solutions.

Roles & Responsibilities:
- Design and implement ETL processes using PySpark, Docker, and Kubernetes: Design, develop, and optimize ETL pipelines for data ingestion, transformation, and loading into the data warehouse.
- Design and maintain data warehouse solutions: Develop and maintain data warehouse schemas, tables, and views to support analytics and reporting needs.
- Provide technical guidance and mentorship to junior developers: Guide and support team members in their technical development, ensuring adherence to best practices and coding standards.
- Collaborate with stakeholders to understand data requirements: Work closely with business users and other stakeholders to understand their needs and ensure data solutions meet their requirements.
- Troubleshoot and resolve issues: Diagnose and resolve issues related to data processing, data quality, and data warehouse performance.
- Contribute to the development of data governance policies: Help define and implement data governance policies to ensure data quality, security, and compliance.
Professional & Technical Skills:
- At least 5 years of experience in data engineering or a related field.
- Extensive experience with PySpark: Proficiency in Python programming, Spark architecture, Spark SQL, Spark DataFrames, Docker, and Kubernetes.
- Strong knowledge of data warehousing principles: Experience with data modeling, data warehousing architectures, and common data warehouse platforms (e.g., Snowflake, Redshift, BigQuery).
- SQL proficiency: Strong SQL skills, including experience with relational databases and data modeling.
- Experience with cloud platforms: Familiarity with cloud data warehousing services (e.g., AWS, Azure, Google Cloud).
- Proven ability to deliver data solutions: Experience in designing, implementing, and maintaining data solutions.
Additional Information:
This position is based at our Pune office. 15 years of full-time education is required.