Posted: 2 days ago
On-site, Full Time
• AWS
• PySpark, Python
• SQL
• DevOps – CI/CD
• Good communication skills
• Technical Leadership: Demonstrate leadership and the ability to guide business and technology teams in adopting best practices and standards.
• Design & Development: Design, develop, and maintain a robust, scalable, high-performance data estate.
• Architecture: Architect robust data solutions that meet business requirements for scalability, performance, and security.
• Quality: Ensure the quality of deliverables through rigorous reviews and adherence to standards.
• Agile Methodologies: Actively participate in agile processes, including planning, stand-ups, retrospectives, and backlog refinement.
• Collaboration: Work closely with system architects, data engineers, data scientists, data analysts, cloud engineers, and other business stakeholders to determine an optimal, future-proof solution and architecture.
• Innovation: Stay updated with the latest industry trends and technologies, and drive continuous improvement initiatives within the development team.
• Documentation: Create and maintain technical documentation, including design documents and architecture guides.
• Optimize data pipelines for performance and efficiency.
• Work with Databricks clusters and configuration management tools.
• Use appropriate tools in the cloud data lake development and deployment.
• Provide technical expertise and ownership in the diagnosis and resolution of issues.
• Ensure all cloud solutions exhibit a high level of cost efficiency, performance, security, scalability, and reliability.
• Manage cloud data lake development and deployment on AWS/Databricks.
• Manage and create workspaces, configure cloud resources, view usage data, and manage account identities, settings, and subscriptions in Databricks.
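The SQL and cloud-data-lake work described above can be illustrated with a minimal, self-contained sketch. Here the standard-library sqlite3 module stands in for a warehouse engine such as Athena or Redshift; the shipments table and its columns are invented purely for illustration.

```python
import sqlite3

# Toy stand-in for a warehouse engine (Athena/Redshift in the posting's stack).
# Table and column names are hypothetical, chosen for a logistics flavour.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE shipments (id INTEGER, region TEXT, weight_kg REAL)")
conn.executemany(
    "INSERT INTO shipments VALUES (?, ?, ?)",
    [(1, "EU", 120.0), (2, "EU", 80.0), (3, "US", 200.0)],
)

# A typical analytical query: total shipped weight per region.
rows = conn.execute(
    "SELECT region, SUM(weight_kg) FROM shipments GROUP BY region ORDER BY region"
).fetchall()
print(rows)  # [('EU', 200.0), ('US', 200.0)]
```

The same GROUP BY pattern carries over directly to Spark SQL on Databricks; only the engine and scale change.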
• Experience and proficiency with the Databricks platform: Delta Lake storage and Spark (PySpark, Spark SQL). Must be well versed in the Databricks Lakehouse and Unity Catalog concepts and their implementation in enterprise environments.
• Familiarity with the medallion architecture data design pattern for organizing data in a Lakehouse.
• Experience and proficiency with AWS data services: S3, Glue, Athena, Redshift, etc.
• Proficiency in SQL and experience with relational databases.
• Proficiency in at least one programming language (e.g., Python, Java) for data processing and scripting.
• Experience with DevOps practices: AWS DevOps for CI/CD, Terraform/CDK for infrastructure as code.
• Good understanding of data principles and cloud data lake design and development, including data ingestion, data modeling, and data distribution.
• Jira: Proficient in using Jira for managing projects and tracking progress.
• Strong communication and interpersonal skills.
• Collaborate with data stewards, data owners, and IT teams for effective implementation.
• Understanding of business processes and terminology, preferably in Logistics.
• Experience with Scrum and Agile methodologies.
• Bachelor’s degree in information technology or a related field. Equivalent experience may be considered.
• Overall experience of 8-12 years in Data Engineering.
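The medallion architecture named in the requirements can be sketched as a toy in plain Python. In a real Lakehouse each layer would be a Delta table managed with PySpark under Unity Catalog; the record shape and the cleaning rule below are assumptions for illustration only.

```python
# Bronze layer: raw ingested records, kept as-is (note the malformed row).
bronze = [
    {"order_id": "1", "qty": "2", "region": "EU"},
    {"order_id": "2", "qty": "oops", "region": "EU"},  # malformed quantity
    {"order_id": "3", "qty": "5", "region": "US"},
]

# Silver layer: cleaned and typed; malformed rows are dropped.
silver = []
for row in bronze:
    try:
        silver.append({
            "order_id": int(row["order_id"]),
            "qty": int(row["qty"]),
            "region": row["region"],
        })
    except ValueError:
        pass  # in production, bad rows would land in a quarantine table

# Gold layer: business-level aggregate, ready for analytics.
gold = {}
for row in silver:
    gold[row["region"]] = gold.get(row["region"], 0) + row["qty"]

print(gold)  # {'EU': 2, 'US': 5}
```

The point of the pattern is that each layer adds guarantees: bronze preserves raw fidelity, silver enforces schema and quality, and gold serves curated business views.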
Aiprus Software Private Limited
Salary: Not disclosed