Posted: 2 weeks ago | On-site | Full Time
Role: Azure Databricks Data Engineer

Design: You will design and support the business's database and table schemas for new and existing data sources in the Lakehouse, and create and support the ETL processes that move data into the Lakehouse. Through your work you will ensure that:
- The data platform scales to large volumes of data ingestion and processing without service degradation.
- Processes are built for monitoring and optimizing performance.
- Chaos Engineering practices and measures are in place so the end-to-end infrastructure functions as expected even when individual components fail.

Collaboration: You will work closely with Product Owners, application engineers, and other data consumers within the business to gather and deliver high-quality data for business cases. You will also partner with other disciplines, departments, and teams across the business to arrive at simple, functional, and elegant solutions that balance data needs across the business.

Analytics: You will analyze business requirements quickly and thoroughly and translate the results into sound technical data designs. You will document data solutions and develop and maintain technical specification documentation for all reports and processes.
Skills you MUST have:
- 6+ years of proven professional data development experience
- 3+ years developing with Azure Databricks or Hadoop/HDFS
- 3+ years of experience with PySpark/Spark
- 3+ years of experience with SQL
- 3+ years of experience developing with Python
- Full understanding of ETL and data warehousing concepts
- Data modeling and query optimization skills, with implementation experience of Data Vault, Star Schema, and Medallion architecture
- Experience with CI/CD
- Experience with version control software
- Strong understanding of Agile principles (Scrum)
- Experience with Azure
- Experience with Databricks Delta Tables, Delta Lake, and Delta Live Tables

Bonus points for experience in the following:
- Relational data modeling
- Python library development
- Structured Streaming (Spark or otherwise)
- Kafka and/or Azure Event Hubs
- GitHub SaaS / GitHub Actions
- Snowflake
- Exposure to BI tooling (Tableau, Power BI, Cognos, etc.)

Mandatory skills: Azure Databricks, PySpark, Azure Data Factory
Location: Andhra Pradesh, India | Experience: Not specified | Salary: Not disclosed