Posted: 2 months ago
Work from Office
Full Time
- Build and maintain one or more data lakes to support scalable ingestion, manipulation, and reporting of data.
- Manipulate data to produce and maintain new data elements using repeatable, automated processes.
- Design and implement custom Airflow operators to grow data ingestion capabilities.
- Demonstrate knowledge of industry trends, our infrastructure, technologies, tools, and systems.
- Measure and communicate the value of data platforms and tools.
- Display a sense of ownership over assigned work, requiring minimal direction and driving it to completion in a sometimes fuzzy and uncharted environment.
- Build, operate, and maintain highly scalable and reliable data pipelines.
- Build data warehouse solutions that provide end-to-end management and traceability of patient data, and that enable and optimize internal processes and product features.
- Implement processes and systems to monitor data quality, ensuring production data is always accurate and available for the key stakeholders and business processes that depend on it.
- Build and develop tools to support the use of ML and other analytical models to improve understanding of patient behavior, provider prescribing, the patient experience on treatment, treatment patterns, and more.
- Collaborate with internal stakeholders to develop business domain concepts and data modeling approaches to problems the organization faces in the analytics arena.
- Maintain and optimize existing data platform services and capabilities, identifying potential enhancements, performance improvements, and design improvements.
- Write and maintain unit/integration tests and systems documentation.

Desired Skills and Experience:

- Master's or Bachelor's degree in Computer Science, Information Systems, MIS, a related field, or equivalent work experience required.
- 5+ years of overall experience building and sustaining data engineering and big data solutions, preferably in the healthcare industry.
- Extremely strong skills, with at least 3 years of experience, in at least one programming or scripting language (Python, Go, Java).
- Has built and deployed into production large-scale batch and real-time data pipelines using Airflow.
- Deep experience with GCP/AWS big data platforms and services (Snowflake, BigQuery, S3, Google Cloud Storage buckets, Parquet/Avro/ORC, Elasticsearch).
- Our current stack includes Airflow, Python, BigQuery, Snowflake, Kafka, Elasticsearch, and Redis; overlap and experience with this stack is preferred and a plus.
- 3+ years of experience building enterprise data solutions using industry-standard guiding principles and practices.
- 2+ years of working knowledge of relational/non-relational databases.
- 1+ years of experience in a data engineering model that follows DevOps principles and standards for CI/CD processes.
- Working knowledge of data privacy regulations and compliance requirements for handling PHI and PII data (e.g., HIPAA, GDPR, CCPA) is a plus.

Organizational Skills Required:

- Experience with Agile development methodologies, preferably both Scrum and Kanban.
- Ability to multi-task, prioritize assignments, and work well under deadlines in a changing environment with cross-functional agile teams.
- Experience with test-driven development is a plus.
- Strong communication skills for working with stakeholders from various backgrounds.
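For illustration only: the data-quality monitoring responsibility described above might start as small as a per-column null-rate check. The function names, row shape, and 5% threshold below are hypothetical sketches, not part of this posting or any particular library:

```python
# Illustrative sketch of a minimal data-quality check: flag a column whose
# null rate exceeds a threshold. Names and threshold are hypothetical.

def null_rate(rows, column):
    """Fraction of rows where `column` is missing or None."""
    if not rows:
        return 0.0
    return sum(r.get(column) is None for r in rows) / len(rows)

def check_quality(rows, column, max_null_rate=0.05):
    """True if the column's null rate is within the allowed threshold."""
    return null_rate(rows, column) <= max_null_rate

rows = [
    {"patient_id": 1, "treatment": "A"},
    {"patient_id": 2, "treatment": None},
    {"patient_id": 3, "treatment": "B"},
]
print(null_rate(rows, "treatment"))      # one of three rows is missing
print(check_quality(rows, "treatment"))  # False at a 5% threshold
```

In production such a check would typically run as a scheduled pipeline task (e.g. an Airflow task) against warehouse tables rather than in-memory rows.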