Job Requirements

Role Summary
ConcertAI is a fast-growing healthcare research organization, leading the market in oncology healthcare data analytics. Our dynamic, fun, and highly experienced team is looking for a Statistical Analyst to join us. As a Statistical Analyst in the Real-World Evidence (RWE) business unit, you will be responsible for statistical analyses conducted using ConcertAI's industry-leading and cutting-edge healthcare data resources. Our team operates in a cross-functional environment with representatives from other functions such as our Epidemiology/HEOR, Data Curation, Data Products, and Data Science teams.

As a Statistical Analyst on our team, you will report to a Manager on the Biostatistics team and provide high-quality analyses and summaries for studies supporting regulatory submissions and health economics and outcomes research (HEOR) studies. Your responsibilities will involve preparing, linking, and manipulating data, as well as performing statistical analyses for research projects dedicated to improving our understanding of the patient journey and treatment outcomes in the oncology space and making a meaningful impact on patients' lives. Statistical Analysts also participate in data quality control and review of results. The Statistical Analyst will contribute to, and support, corporate goals to progress the company's portfolio of products.

Responsibilities
- Collaborates with Project Managers, Principal Investigators, and other scientific staff to design appropriate study analyses based on project scope and client objectives.
- Reviews and revises study protocols for accuracy, consistency, thoroughness, and quality of statistical methods and presentation.
- Drafts and reviews Statistical Analysis Plans (SAPs) to define study cohort eligibility criteria, study measures, and statistical methodologies.
- Creates data structures by determining patient or disease cohorts, establishing study samples, and structuring data files according to research objectives and study design.
- Prepares analysis-ready data by loading, extracting, and transforming data across several databases, as well as searching schemas, cleaning outbound files, and merging data tables.
- Executes quality control checks of data for anomalies, frequency, and distribution of data points for accuracy and consistency; determines root causes of errors, recommends solutions, and resolves data issues through queries and programming scripts.
- Performs statistical analyses in accordance with SAPs and generates analytic reports, tables, graphics, and slides.
- Contributes to methodology and results sections of study deliverables such as protocols, summary reports, abstracts, and manuscripts to ensure accuracy of the programming and statistical descriptions.
- Interfaces with Scientific Management and the Data Curation team to clarify data requests, extract data sets, and review case report forms, as well as the Data Operations team to assemble and clean data sets.
- Joins client meetings and contributes to the discussion of findings as the statistical lead on assigned projects.
- Manages task timelines and communicates status updates with project team members regarding project requirements, deadlines, and priorities.
- Follows company policy and procedures regarding quality control, data security, and the ethical conduct of research involving human subjects, as well as the provisions of the HIPAA security and privacy rules.
- Participates in other projects as assigned, including statistical support roles and contributing to internal initiatives.

Work Experience Requirements
- Master's degree and up to two years of related programming and statistical experience, or Bachelor's degree and up to five years of related programming and statistical experience, with an area of study in a quantitative science such as Statistics, Biostatistics, Analytics, Biometrics, Operations Research, Engineering, or Data Science.
- Background in scientific research study design and methodology, data analysis, and statistical programming using patient-level datasets.
- Expertise in SQL is required.
- Expertise with R is required.
- Experience using healthcare data, such as claims, electronic medical records, or patient-reported outcomes, is required.
- Experience with GitHub is preferred.
- Experience applying statistical methodologies and advanced mathematical concepts such as ANOVA, linear regression, mixed models, time-to-event analyses, correlation analysis, sampling theory, analysis of categorical data, and appropriate transformations and permutations.
- Experience integrating and processing complex data (e.g., extracting, transforming, loading, scrubbing).
- Ability to proactively collaborate on multiple projects and deadlines, establish priorities for work activity, and solve practical problems.
- Exceptional verbal and written communication skills with a proven ability to clearly and convincingly present information to a wide range of internal and external audiences.
- Aptitude for understanding and applying best practices from documents such as safety rules, operating and maintenance instructions, procedure manuals, and correspondence.
- Familiarity with basic productivity software (e.g., Microsoft Excel, Microsoft Word, web conferencing applications).
- Detail-oriented, highly motivated, results-driven, and flexible to work in a scaling environment.

Particular consideration will be given to applicants with the following qualifications:
- Research history within the oncology space related to one or more specific solid tumor types, or to hematological malignancies.
- Working knowledge of external control arms or other use cases of real-world evidence to support regulatory decision-making.
- Understanding of FDA regulatory requirements, ICH guidelines, and GCP.
- Publication track record preferred.
Job Requirements

Job Title: Senior Software Engineer - Backend

Role Summary
We are looking for a strong hands-on technology leader who will build and enhance our next-generation data analytics product, processing large volumes of real-world healthcare datasets to provide real-world insights and optimize clinical trial design using advanced analytics capabilities. The ideal candidate will serve as the technical point person, working closely with product managers, data scientists, and the development team to advance the product technology and business roadmap.

Responsibilities
- Ensures the implementation of agreed architecture and infrastructure.
- Designs and develops production-ready APIs, services, and algorithms that are built to scale, high-performing, and compliant with security standards.
- Drives and upholds high engineering standards and practices, bringing consistency to the codebases you encounter and ensuring software is adequately reviewed, tested, and integrated.
- Works closely with the product management, engineering, DevOps, data science, and data engineering teams to build, support, and deliver new product releases.
- Drives optimization, unit testing, and tooling to improve the quality of solutions.
- Evaluates and selects the appropriate tech stack and suggests integration methods and business impact.
- Resolves project challenges involving scope, design, and technical problems when they arise.
- Addresses technical concerns, ideas, and suggestions.

Requirements
- Strong SQL and Python programming skills.
- Expertise in designing and developing scalable, high-performing, microservices-based architectures for data analytics products/applications involving large volumes of data processing and embedded ML models.
- Experience using distributed computing, optimization techniques, and multiprocessing design principles in Python.
- Experience building search analytics, preferably with Elasticsearch.
- Problem-solving aptitude, with a willingness to work in a fast-paced product development environment and a hands-on mentality to do whatever it takes to deliver a successful product.
- Experience in web application development and deployment.
- Proficiency with APIs, containerization, and orchestration using cloud technologies in AWS is a plus.
- Experience with advanced analytics and modern machine learning techniques is a plus.
- Experience with NoSQL and data processing with PySpark is a plus.
- Experience with Docker for deployment and local development is a plus.

Qualification
- Degree in computer science, information technology, or a related field.
- Minimum of 3 to 10 years of hands-on experience building and productionizing high-performing, scalable services involving high volumes of data processing within a software product development environment, preferably in the life sciences industry.
- Self-motivated with a passion for learning, analyzing technology tradeoffs, and shipping product.
Job Requirements
- Work with a team to develop advanced analytic techniques to interrogate, visualize, interpret, and contextualize data, and develop novel solutions to healthcare-specific problems.
- Implement a variety of analytics, from data processing and QA to exploratory analytics and complex predictive models.
- Understand client/product needs and translate them into tactical initiatives with defined goals and timelines.
- Implement models using high-level software packages (scikit-learn, TensorFlow, PySpark, Databricks).
- Collaborate on software projects, providing analytical guidance and contributing to the codebase.
- Devise modeling and measuring techniques, and utilize mathematics, statistical methods, engineering methods, operational mathematics techniques (linear programming, game theory, probability theory, symbolic language, etc.), and other principles and laws of scientific and economic disciplines to investigate complex issues, identify and solve problems, and aid better decision making.
- Plan, design, coordinate, and control the progress of project work to meet client objectives; prepare and present reports to clients.
- Solve highly specialized technical objectives or problems without a pre-defined approach, where creative, imaginative solutions are required.
- Synthesize raw data into digestible and actionable information.
- Identify specific research areas that merit investigation, develop new hypotheses and approaches for studies, and evaluate the feasibility of such endeavors.
- Initiate, formulate, plan, execute, and control studies designed to identify, analyze, and report on healthcare-related issues.
- Advise management on the selection of an appropriate study design, analysis, and interpretation of study results.

Work Experience
- BS/MS in mathematics, physics, statistics, engineering, or a similar discipline; Ph.D. preferred.
- Minimum of 5 years of analytics/data science experience.
- Solid experience writing SQL queries.
- Strong Python programming abilities (pandas, scikit-learn, numpy/scipy, PySpark).
- Knowledge of statistical methods: regression, ANOVA, EDA, PCA, etc.
- Basic visualization skills (matplotlib/seaborn/plotly/etc.); PowerBI experience highly preferred.
- Experience manipulating data sets through commercial and open-source software (e.g., Redshift, Snowflake, Spark, Python, R, Databricks).
- Working knowledge of medical claims data (ICD-10 codes, HCPCS, CPT, etc.).
- Experience utilizing a range of analytics involving standard data in the pharmaceutical industry, e.g., claims data from Symphony, IQVIA, Truven, Allscripts, etc.
- Must be comfortable conversing with end-users.
- Excellent analytical, verbal, and communication skills.
- Ability to thrive in a fast-paced, innovative environment.
- Advanced Excel skills, including VLOOKUP, pivot tables, charts, graphing, and macros.
- Excellent documentation skills.
- Excellent planning, organizational, and time management skills.
- Ability to lead meetings and give presentations.
Job Requirements
We are looking for an energetic, self-motivated, and exceptional Data Engineer to work on extraordinary enterprise products based on AI and Big Data engineering. You will work with a stellar team of Architects, Data Scientists/AI Specialists, Data Engineers, Integration Specialists, and UX developers.

Work Experience
- Advanced working SQL knowledge and experience working with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
- Experience building and optimizing 'big data' data pipelines, architectures, and data sets.
- Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement.
- Strong analytic skills related to working with unstructured datasets.
- Experience supporting and working with cross-functional teams in a dynamic environment.
- 6+ years of experience in a Data Engineer role, with a graduate degree in Computer Science, Statistics, Information Systems, or another quantitative field.
- Experience using the following software/tools:
  - Relational SQL and NoSQL databases, including Postgres and RDS (MSSQL).
  - PySpark programming.
Job Requirements

Responsibilities
- Primary owner of testing activities from planning through test execution.
- Maintain and improve mobile and web test suites for patient-facing applications.
- Work closely with all members of the Product and DevOps teams (designers, technical architects, integration analysts, AWS engineers, site reliability engineers, analysts, and digital and operational leadership) to execute on the product vision.
- Maintain high standards of software quality within the team by establishing good practices and habits.
- From an understanding of new product requirements, acceptance criteria, and wireframes, create new test cases and integrate them into existing regression suites.
- Create a high-visibility traceability matrix for new feature development.
- Assist the product teams with issue management and quality trends reporting.
- Assist with the creation of reports, manuals, and other documentation on the operation and maintenance of products.
- Assist with the analysis and resolution of application problems.
- Assess opportunities for application and process improvement.
- Provide second-level support for issue investigation and resolution.
- Serve as the advocate for the user experience.
- Own and drive the frontend and backend automation of our application platform, considering the future product roadmap.
- Support production, test, and product release activities such as document/process creation and the review and approval of batch records and associated documents, as well as product release and transactions.
- Drive and uphold high engineering standards and practices, bringing consistency to the codebases you encounter and ensuring software is adequately reviewed, tested, and integrated.

Requirements
- 5+ years minimum of experience as a software Automation Quality Assurance analyst.
- 5 years minimum experience with a test automation tool; experience with tools like Selenium or MABL would be a huge plus.
- 5 years minimum experience with a test management tool.
- 5 years minimum experience with an Agile SDLC.
- Experience creating and managing a product traceability matrix.
- Experience with Jira for issue documentation and management.
- Experience with test-driven development.
- Experience with various testing strategies and methodologies and a strong understanding of when to apply those different strategies.
- Strong judgment, with the ability to distill and communicate the true impacts of issues.
- Strong attention to detail.
- Analytical mindset with an interest in how digital solutions work.
- Ability to break complex and complicated issues down to root causes.
- Comfortable articulating and fighting for application quality during high-pressure release cycles.
- Understanding of data standardization practices in the pharma domain, healthcare data integration, and HIPAA-related security is a plus.
- Experience with Medidata EDC would be a huge plus.

Qualification
- Degree in computer science, information technology, or a related field.
- Minimum of 5+ years of hands-on experience in test qualification and productionizing high-performing, scalable services involving high volumes of data processing within a software product development environment, preferably in the life sciences industry.
- Self-motivated with a passion for learning, analyzing technology tradeoffs, and shipping product.
Job Requirements

Job Title: Software Engineer - LLM

About ConcertAI
ConcertAI is the leading provider of precision oncology solutions for biopharma and healthcare, leveraging the largest collection of research-grade Real-world Data and the only broadly deployed oncology-specific AI solutions. Our mission is to improve translational sciences; accelerate therapeutic clinical development; and provide new capabilities for post-approval studies to accelerate needed new medical innovations to patients and to improve patient outcomes. ConcertAI has emerged as one of the highest-growth technology companies in Real-world Data and AI, backed by industry-leading private equity companies: SymphonyAI, Declaration Partners, Maverick Ventures, and Alliance|Bernstein.

Role Summary
ConcertAI is seeking a hands-on engineer to build and enhance its next-generation Clinical AI agentic platform and analytics products, processing large volumes of real-world healthcare data to generate insights and optimize clinical trial design. The ideal candidate should have experience with LLMs, RAG, and tools like LangChain and LangGraph for developing interactive agents. Serving as the technical point person, they will collaborate closely with product managers, data scientists, and the development team to drive the product's technology and business roadmap forward.

Responsibilities
In this role, you'll help develop and maintain the conversation layer, using a combination of software and prompting skills to enable AI-driven clinical conversations.
- Design complex prompts within the conversation planning layer to support advanced agentic model workflows.
- Collaborate with subject matter experts to define standard outcomes and continuously refine prompt designs for maximum efficiency and effectiveness.
- Write, configure, and refine prompts to facilitate engaging, oncology curator-oriented interactions.
- Apply advanced prompting techniques to enhance language model performance, ensuring clinical safety and an improved trial experience.
- Conduct experiments and analyze model outputs to refine and optimize prompt strategies.
- Design and develop scalable, high-performing, and security-compliant APIs, services, and algorithms for production use.
- Maintain high engineering standards by ensuring code consistency, thorough testing, and seamless integration across software components.

Qualification
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- Proficiency in Python and SQL, with hands-on experience using interactive notebooks (e.g., Jupyter).
- Proven experience in machine learning research and development, particularly with LLMs.
- Professional experience with LLM prompting (e.g., ChatGPT, Claude, LLaMA).
- Experience with LangChain and LangGraph.
- Experience in machine learning R&D, particularly in designing, testing, and optimizing LLM-based solutions.
- Ability to assess model outputs, refine prompt designs, and improve overall system performance.
- Exposure to working with databases and building simple RESTful APIs.
- Knowledge of modern web frameworks (e.g., Flask, Django, or Spring Boot).
- Strong analytical skills and a problem-solving mindset.
- Eagerness to learn new technologies and adapt in a fast-paced environment.
- Understanding of healthcare workflows and regulatory requirements.
- Awareness of data privacy and security best practices, especially in regulated environments.
Job Requirements
- Detailed Product Requirements Definitions (PRDs).
- Product release plan.
- Ability to work with B2B and partner ecosystems; gather requirements in a structured way.
- Uncover and understand internal and external user needs and translate them into requirements for curation technology product development.
- Query the ConcertAI database per the requirements of different CancerLinQ (CLQ) products.
- Support the smooth rollout of different CLQ products through understanding of the requirements, creation of synthetic data, implementation, and validation.
- Identify opportunities for process improvements on informatics- and product-related functional tasks that could lead to efficiency and scalability.
- Work closely with internal teams, including developers, engineers, architects, quality assurance analysts, data scientists, and curation experts.
- Develop and track timelines, long-term roadmaps, and owners for product development and successful implementation delivery goals and improved processes.
- Identify and document implementation risks and dependencies for internal and external timelines and successfully communicate them in lay terms to key stakeholders.
- Ensure requirements are fully understood by ICs and that implementation plans match expectations across projects and teams.
- Define detailed technical data requirements within an agile development framework.
- Create and manage the backlog for multiple workstreams in Jira and requirements in Confluence, running planning meetings with various teams and ICs to prioritize and assign tasks.
- Understand, research, and follow trends in the industry and in general to identify and suggest areas for platform growth and development.
- Monitor and contribute to the development of trainings, documentation, and communication for internal knowledge sharing across data products and pipelines, specifically for the curation team and users of the platform.
- Identify and lead the implementation of process improvements across workstreams, including leading retrospective discussions and proactive brainstorming sessions for individual and team growth.
- Lead the generation and maintenance of release notes.
- Complete other assigned tasks.

Work Experience Requirements
- Bachelor's degree required; Master's degree strongly preferred.
- Proven experience building products with healthcare technology, oncology preferred.
- Strong grasp of SQL or other coding languages.
- Experience using healthcare data, such as claims, electronic medical records, or patient-reported outcomes, is required.
- Experience integrating and processing complex data (e.g., extracting, transforming, loading, scrubbing).
- Strong understanding of emerging trends in Generative AI.
- Growth mindset of diving into new areas and quickly developing a strong point of view.
- Problem-solving approach and a can-do attitude.
- Strategic, critical, but creative thinker with strong business sense.
- Ability to work within cross-functional team environments.
- Comfortable with Jira, Confluence, and other collaboration platforms.
- Familiarity with agile development and concepts.
- Experience and comfort in learning new data processing, data visualization, and analytic tools.
- Enjoys staking out new areas of development and ideating new applications and solutions down to the details based on internal, user, and client feedback.
- Strong cross-functional skills and a collaborative mindset.
- Exceptional communication and listening skills and strong attention to detail.
- Proven ability to effectively manage multiple complex projects with competing priorities across an organization.
- Experience both shaping and improving existing processes while navigating multiple stakeholder inputs with high EQ and thoughtfulness.
- Leveling based on experience and background.

Preferred Requirements
- Skilled in design thinking and UX/UI best practices.
- Expertise in building decision frameworks that help make objective investment calls.
- Experience defining epics and writing user stories.
Job Requirements

Role Summary
We are looking for an energetic, self-motivated, and exceptional Database/ETL Tester to work on extraordinary enterprise products based on AI and Big Data engineering. You will work with a stellar team of Architects, Data Scientists/AI Specialists, Data Engineers, and Integration Specialists.

Experience Level: 5+ years

Responsibilities
- Responsible for the testing of SQL Server transaction processing and Business Intelligence solutions in an Agile development environment.
- Ensure data is consistent and accurate across all layers.
- Independently investigate data anomalies or inaccuracy patterns to solve data problems and ensure that all data is up to date.
- Develop and maintain test plans and create reusable test cases.
- Perform data analysis and creation of test data.
- Perform manual and automated execution of test scripts.
- Analyze and document defects found during test execution using JIRA.
- Assist with writing and performing data audits.

Work Experience Requirements
- Hands-on experience testing database elements modeled with SQL Server.
- Experience in an Azure environment (Azure SQL, ADF, Blob Storage, etc.) is preferred.
- Strong in validating data completeness and correctness between data feeds and the data lake.
- Hands-on DB testing experience with SQL scripting.
- Strong knowledge of ETL processes.
- Exposure to AWS is a plus (S3, RDS-MSSQL, PostgreSQL, etc.).
Job Requirements

Job Title: Sr DevOps Engineer

About Concerto HealthAI
Concerto HealthAI is a leading provider of precision health clinical and commercial real-world data and AI solutions to the global pharma community and across the broader healthcare landscape. Our mission is to accelerate drug development, clinical trials, and HEOR analytics in order to dramatically improve patient outcomes. Concerto HealthAI is part of a select set of companies backed by SymphonyAI Group (SAI), the largest AI-focused investment firm in the world, which was founded and is led by Dr. Romesh Wadhwani, a highly successful serial tech entrepreneur, investor, and philanthropist.

Role Summary
The Sr DevOps Engineer is responsible for the overall performance and availability of the Eureka Platform. The position is responsible for the availability, performance, user experience, resource usage, security, and monitoring of ConcertAI's production environments globally. The position creates functional strategies and specific objectives for the sub-function and develops budgets, policies, and procedures to support the functional infrastructure.

Responsibilities
- Design, build, and implement enterprise-class cloud solutions for development, staging, and production environments; identify design gaps in existing and proposed architectures and recommend changes or enhancements.
- Contribute to, collaborate on, and improve, together with other Cloud team members, a broad spectrum of security programs, such as Threat & Vulnerability Management, Security Incident Response, Data Protection, and SOC engineering.
- Research next-generation tools and technologies; work with other teams to operationalize them and enhance the effectiveness of preventive and detective security controls.
- Build standards, patterns, and tools that help engineers in other squads make effective use of infrastructure.
- Consult with other squads in a Site Reliability Engineer capacity to help them make use of infrastructure effectively.
- Configure and manage our software-defined networking capabilities, including VPCs, firewalls, and routing.
- Configure and maintain core shared services like Kubernetes clusters, databases, and CI/CD systems.

Requirements
- 5+ years of AWS DevOps experience, including experience with enterprise cloud architecture and working as part of a cross-functional team to implement solutions.
- Must have hands-on experience with various AWS technologies such as EKS, ECS Fargate, cluster setup and deployments, load balancers, EC2 instances, Redshift and RDS databases, Secrets Manager, etc.
- Must have worked on cross-account configurations for S3 and ECR access, along with configuring Elasticsearch services and building Lambdas.
- Solid hands-on experience with Python 3 and database schema access.
- Terraform- and Ansible-based deployment and configuration automation.
- Thorough understanding of Kubernetes, Docker, and image repositories.
- Prior experience with a cloud-based SaaS product is required.
- Experience with assessment, development, implementation, optimization, and documentation of a comprehensive and broad set of security technologies and processes (data protection, cryptography, key management, identity and access management (IAM), and network security) within SaaS, PaaS, IaaS, and other cloud environments.
- Experience with deployment orchestration, automation, and security configuration management (Jenkins, Puppet, Chef, etc.) preferred.
- Familiarity with cloud security and alert management options such as NSG configuration, Security Alerts, and logging (e.g., Azure OMS).
- Experience architecting solutions within Amazon Web Services (AWS)-based cloud environments and other virtualized and software-defined system and network platforms.
- Significant experience in developing and deploying SIEM (Splunk), IAM, PKI, IDS/IPS, and host monitoring.
- Familiarity and experience with standards frameworks: ISO, NIST, ITIL, PCI, HIPAA, EU GDPR, etc.

Learn More About Concerto HealthAI
Concerto HealthAI is transforming how healthcare is delivered and is dedicated to improving patient outcomes in oncology by offering innovative solutions for how data and intelligence are used to solve healthcare problems. We are creating something special in our culture by building a collaborative, engaged, patient-focused team approach to our mission. Our high-performance teams are looking to add great talent to the mix, and we are hiring for the right mix of new skills and a diverse mindset. Learn more about Concerto HealthAI at www.concertohealthai.com, or follow us on LinkedIn.
Job Requirements
- Be an expert on data standards and clinical terminologies.
- Understand the inclusion and exclusion criteria of a clinical trial.
- Update and maintain the knowledge graph.
- Validate the Precision360 database in terms of data quality, fill rate, and data standardization.
- Perform clinical validation of Precision360 product suites.
- Collaborate with data engineers, NLP data scientists, and the pipeline team.
- Support innovation of the clinical trial optimization and digital trial solution software suite.

Work Experience
- Clinical knowledge, particularly in Oncology, and understanding of clinical workflows.
- Extensive experience with, and understanding of, standard vocabularies such as SNOMED, LOINC, ICD-O-3, ICD-10, NCIT, UMLS, and others.
- 2+ years' experience with clinical data, electronic medical record data, and health data standards & terminologies.
- Basic understanding of inclusion and exclusion criteria in a study protocol, with the ability to translate them into codable concepts.
- Database experience with relational databases or AWS services.
- Strong communication, project management, and technical leadership skills with an enthusiasm for working in an interdisciplinary environment.
- Candidates with some prior technical experience in RWD, EMR, databases, SQL, etc. are preferred.
Job Requirements
- Work on a day-to-day basis with the ConcertAI Data Operations, Informatics, and Software Engineering teams to continuously monitor and improve data quality, completeness, and usability.
- Develop and adapt quality-based reporting metrics to measure data quality and robustness within and across all ConcertAI data partners.
- Experience working in the healthcare domain; should have worked on healthcare data in the past, preferably EMR/EHR and/or claims data.
- Verify incoming EHR data from different data providers every month, checking data quality and data integrity.
- Statistically validate incoming data both before and after it is loaded to the database.
- Knowledge of basic statistics for performing the data validation described above.

Work Experience
- Experience working with or inside Life Sciences and/or Pharma on real-world data analytics projects.
- Working knowledge of medical code sets, such as HCPCS, ICD, CPT, SNOMED, LOINC, and NDC.
- Demonstrated ability to work with a wide variety of data structures, coding schemes, and data sources.
- Working knowledge and experience with databases (SQL preferred) and statistical modeling tools, such as SAS, S, or R.
- Experience: 3+ years.
Job Requirements
Our engineering team is looking for a Data Engineer who is very proficient in Python, has a very good understanding of AWS cloud computing and ETL pipelines, and has demonstrated proficiency with SQL and relational database concepts. In this role you will be a mid- to senior-level individual contributor guiding our migration efforts, serving as a senior data engineer working closely with the data architects to evaluate best-fit solutions and processes for our team. You will work with the rest of the team as we move away from legacy tech and introduce new tools and ETL pipeline solutions. You will collaborate with subject matter experts, data architects, informaticists, and data scientists to evolve our current cloud-based ETL to the next generation.

Responsibilities
- Independently prototypes/develops data solutions of high complexity to meet the needs of the organization and business customers.
- Designs proof-of-concept solutions utilizing an advanced understanding of multiple coding languages to meet technical and business requirements, with an ability to perform iterative solution testing to ensure specifications are met.
- Designs and develops data solutions that enable effective self-service data consumption, and can describe their value to the customer.
- Collaborates with stakeholders in defining metrics that are impactful to the business; prioritizes efforts based on customer value.
- Has an in-depth understanding of Agile techniques and can set expectations for deliverables of high complexity.
- Can assist in the creation of roadmaps for data solutions and can turn vague ideas or problems into data product solutions.
- Influences strategic thinking across the team and the broader organization.
- Maintains proof-of-concept and prototype data solutions, and handles any assessment of their viability and scalability, with own team or in partnership with IT.
- Working with IT, assists in building robust systems focused on long-term and ongoing maintenance and support.
- Ensures data solutions include the deliverables required to achieve high-quality data.
- Displays a strong understanding of complex multi-tier, multi-platform systems, and applies principles of metadata, lineage, business definitions, compliance, and data security to project work.
- Has an in-depth understanding of Business Intelligence tools, including visualization and user experience techniques.
- Works with IT to help scale prototypes.
- Demonstrates a comprehensive understanding of new technologies as needed to progress initiatives.

Requirements / Work Experience
- Expertise in Python programming, with demonstrated real-world experience building out data tools in a Python environment.
- Expertise in AWS services, with demonstrated real-world experience building out data tools in an AWS environment.
- Bachelor's Degree in Computer Science, Computer Engineering, or a related discipline preferred; Master's in the same or related disciplines strongly preferred.
- 3+ years' experience coding for data management, data warehousing, or other data environments, including, but not limited to, Python and Spark; experience with SAS is preferred.
- 2+ years' experience as a developer working in an AWS cloud computing environment.
- 2+ years' experience using Git or Bitbucket.
- Experience with Redshift, RDS, and DynamoDB is preferred.
- Strong written and oral communication skills required.
- Experience in the healthcare industry with healthcare data analytics products.
- Experience with healthcare vocabularies and data standards (OMOP, FHIR) is a plus.
Job Requirements

Key Responsibilities

Governance and Harmonization
- Define and establish lean agile best practices driving value addition, team efficiency, collaboration, and visibility across teams and the product.
- Establish common tools and usage to drive the software and data release process across multiple teams, from backlog management to value-based prioritization to sprint execution to deployments to release closure and documentation generation.
- Drive the Scrum of Scrums for the overall SaaS and data platform.

Agile Delivery Leadership
- Champion Scrum, Kanban, and SAFe principles while adapting them to suit product maturity and regulatory requirements.
- Facilitate sprint planning, backlog refinement, daily stand-ups, sprint reviews, and retrospectives with technical and business stakeholders.
- Drive incremental delivery and ensure teams deliver working, high-quality software on predictable timelines.
- Remove delivery blockers proactively, from technical dependencies to cross-team alignment issues.

Technical Engagement
- Understand system architecture, the AI model lifecycle, MLOps pipelines, and data flows to better facilitate technical discussions.
- Collaborate with Tech Leads, Data Scientists, ML Engineers, and DevOps teams to ensure sprint commitments are technically achievable.
- Track code quality, automated test coverage, CI/CD health, and cloud infrastructure readiness as part of delivery metrics.
- Ensure data privacy, security, and regulatory compliance are integrated into delivery workflows.

Technical Stakeholder & Product Alignment
- Partner with Product Owners, Clinical SMEs, Data Science, Data Engineering, and Customer Success to ensure backlog priorities align with business outcomes and are value-driven.
- Balance innovation speed with healthcare/pharma compliance requirements (HIPAA, GDPR, FDA, GxP).
- Ensure transparent and regular communication of progress, risks, and dependencies to leadership and stakeholders.

Metrics and Continuous Improvement
- Establish and monitor KPIs such as sprint predictability and sprint metrics such as burn rate, lead time, defect leakage, and deployment frequency.
- Drive retrospective outcomes into actionable improvements for team efficiency and product quality.
- Introduce process automation, backlog grooming discipline, and release readiness checklists to optimize delivery.

Work Experience

Required Qualifications
- 8+ years in software delivery roles, with 5+ years as a Scrum Master or Agile Delivery Lead.
- Extensive admin experience setting up and using Jira to drive the software and data release process across multiple teams, from backlog management to value-based prioritization to sprint execution to deployments to release closure and documentation generation.
- Built or scaled scrum practices from the ground up with successful adoption across multiple product/scrum teams.
- Proven track record in SaaS product delivery, preferably with AI/ML-powered platforms.
- Strong technical foundation in cloud-native architectures (AWS/Azure/GCP), APIs, microservices, and data engineering workflows.
- Excellent servant leadership, facilitation, and conflict resolution skills.
- Proactive thinker with the ability to influence stakeholders based on business objectives and value delivery.
- Strong analytical ability to interpret technical and business metrics for decision-making.
- Familiarity with generative AI, NLP, and predictive analytics in healthcare contexts.
- Familiarity with ML model development, MLOps tools (MLflow, Kubeflow, SageMaker, Vertex AI), and data governance.
- Experience in healthcare and/or pharma software, with a strong understanding of EHR/EMR health records and HIPAA, GDPR, GxP, and FDA 21 CFR Part 11 compliance.
- Understanding of clinical trial systems, RWE (Real World Evidence) platforms, or drug discovery pipelines.