Job Description: Azure Data Engineer
Work Location: Hybrid - Gurugram / Pune / Bangalore
Experience: 5 to 8 years
Apply now: aditya.rao@estrel.ai
Include: Resume | CTC | ECTC | Notice (only immediate joiners considered) | LinkedIn URL

Key Responsibilities:
- Design, build, and maintain scalable data pipelines and solutions using Azure Data Engineering tools.
- Work with large-scale datasets and develop efficient data processing architectures.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
- Implement data governance, security, and quality frameworks as part of the solution architecture.

Technical Skills Required:
- 4+ years of hands-on experience with Azure Data Engineering tools such as Event Hub, Azure Data Factory, Cosmos DB, Synapse, Azure SQL Database, Databricks, and Azure Data Explorer.
- 3+ years of experience working with Python / PySpark, Spark, Scala, Hive, and Impala.
- Strong SQL and coding skills.
- Familiarity with additional Azure services such as Azure Data Lake Analytics, U-SQL, and Azure SQL Data Warehouse.
- Solid understanding of Modern Data Warehouse architectures, Lambda architecture, and data warehousing principles.

Other Requirements:
- Proficiency in scripting languages (e.g., Shell).
- Strong analytical and organizational abilities.
- Ability to work effectively both independently and in a team environment.
- Experience working in Agile delivery models.
- Awareness of software development best practices.
- Excellent written and verbal communication skills.
- Azure Data Engineer certification is a plus.
Job Title: AWS Data Engineer
Experience Required: 5+ years
Interested? Send your resume to: aditya.rao@estrel.ai
Kindly include: Updated Resume | Current CTC | Expected CTC | Notice Period / Availability (looking only for immediate joiners) | LinkedIn Profile

Job Overview:
We are seeking a skilled and experienced Data Engineer with a minimum of 5 years of experience in Python-based data engineering solutions, real-time data processing, and AWS Cloud technologies. The ideal candidate will have hands-on expertise in designing, building, and maintaining scalable data pipelines, implementing best practices, and working within CI/CD environments.

Key Responsibilities:
- Design and implement scalable, robust data pipelines using Python and frameworks such as Pytest and PySpark.
- Work extensively with AWS cloud services such as AWS CDK, S3, Lambda, DynamoDB, EventBridge, Kinesis, CloudWatch, AWS Glue, and Lake Formation.
- Implement data governance and data security protocols, including handling of sensitive data and encryption practices.
- Develop microservices and APIs using FastAPI, GraphQL, and Pydantic.
- Design and maintain solutions for real-time streaming and event-driven architectures.
- Follow SDLC best practices, ensuring code quality through TDD (Test-Driven Development) and robust documentation.
- Use GitLab for version control and manage deployment pipelines with CI/CD.
- Collaborate with cross-functional teams to align data architecture and services with business objectives.

Required Skills:
- Proficiency in Python 3.6+
- Experience with Python frameworks: Pytest, PySpark
- Strong knowledge of AWS tools and services
- Experience with FastAPI, GraphQL, and Pydantic
- Expertise in real-time data processing, eventing, and microservices
- Good understanding of Data Governance, Security, and Lake Formation
- Familiarity with GitLab, CI/CD pipelines, and TDD
- Strong problem-solving and analytical skills
- Excellent communication and team collaboration skills

Preferred Qualifications:
- AWS Certification(s) (e.g., AWS Certified Data Analytics - Specialty, AWS Certified Solutions Architect)
- Experience with DataZone, data cataloging, or metadata management tools
- Experience in high-compliance industries (e.g., finance, healthcare) is a plus