Posted: 1 week ago
Platform: Hybrid
Full Time
Data Engineer

You will be working with agile, cross-functional software development teams developing cutting-edge software to solve a significant problem in the Provider Data Management space. This hire will have experience building large-scale, complex data systems involving multiple cross-functional data sets and teams. The ideal candidate will be excited about working on new product development, is comfortable pushing the envelope and challenging the status quo, sets high standards for themselves and the team, and works well with ambiguity.

What you will do:
- Build data pipelines to assemble large, complex sets of data that meet functional and non-functional business requirements.
- Work closely with the data architect, SMEs, and other technology partners to develop and execute the data architecture and product roadmap.
- Build analytical tools that utilize the data pipeline, providing actionable insight into key business performance, including operational efficiency and business metrics.
- Work with stakeholders, including the leadership, product, and customer teams, to support their data infrastructure needs while assisting with data-related technical issues.
- Act as a subject matter expert for other team members on technical guidance, solution design, and best practices within the customer organization.
- Keep current on big data and data visualization technology trends; evaluate, build proofs of concept, and make recommendations on cloud technologies.

What you bring:
- 2+ years of data engineering experience working with large data sets (preferably terabyte scale).
- Experience building data pipelines using ETL tools such as Glue, ADF, Notebooks, stored procedures, SQL/Python constructs, or similar.
- Deep experience with industry-standard RDBMSs such as Postgres, SQL Server, Oracle, or MySQL, and with analytical cloud databases such as BigQuery, Redshift, Snowflake, or similar.
- Advanced SQL expertise and solid programming experience with Python and/or Spark.
- Experience with orchestration tools such as Airflow, including building complex dependency workflows.
- Experience developing and implementing Data Warehouse or Data Lake architectures, OLAP technologies, and data modeling with star/snowflake schemas to enable analytics and reporting.
- Strong problem-solving capabilities, troubleshooting data issues, and experience stabilizing big data systems.
- Excellent communication and presentation skills, as you'll be regularly interacting with stakeholders and engineering leadership.
- Bachelor's or Master's degree in a quantitative discipline such as Computer Science, Computer Engineering, Analytics, Mathematics, Statistics, Information Systems, or another scientific field.

Bonus points:
- Hands-on experience with cloud data migration, and experience working with analytic platforms such as Fabric or Databricks on the cloud.
- Certification in one of the cloud platforms (AWS/GCP/Azure).
- Experience with, or demonstrated understanding of, real-time data streaming tools such as Kafka, Kinesis, or similar.
HealthEdge
Bengaluru
7.0 - 13.0 Lacs P.A.