Center for Clinical Artificial Intelligence (CCAI)

The Center for Clinical Artificial Intelligence (CCAI) focuses on developing, implementing, and evaluating high-performance clinical decision support (CDS) tools powered by artificial intelligence (AI), including machine learning (ML). AI has the potential to support, enable, and improve medical decision-making, making it faster, more accurate, and more economical. In particular, AI-enabled prediction, monitoring, and alerting will power the next generation of CDS.

Goal

The goal of the center is to 1) develop AI tools for unmet clinical needs; 2) demonstrate the internal and external validity of those tools; 3) ensure that the tools are fair (algorithmic fairness); 4) obtain FDA approval; and 5) monitor for degradation in performance (algorithmic robustness). We are advancing both the science and the engineering of AI-enabled CDS to improve health.

  • 1) Develop AI solutions. We will identify unmet clinical needs that can be addressed by using AI for pattern recognition or prediction. Each AI solution will generate predictions that are actionable and have the potential to impact patient care. Specifically, we will focus on AI-enabled CDS that is used at the point of care and involves in-person patient interactions. Moreover, we will focus on AI with a human in the loop rather than on autonomous AI that removes the human from the loop.
  • 2) Demonstrate validity. Validation of an AI tool will typically involve multiple iterations. The initial iteration will use retrospective data to evaluate statistical validity, clinical utility, and economic utility. Statistical validity addresses the question: does the AI model perform well on metrics of discrimination and calibration? (These metrics are illustrated in the first sketch after this list.) Clinical utility addresses the question: can the AI model improve clinical care and patient outcomes? Economic utility addresses the question: can the AI model produce cost savings and increase efficiency?
  • 3) Ensure fairness. The AI tools should be non-discriminatory (algorithmic fairness) with respect to sensitive attributes such as age, sex, race, and socioeconomic status; a simple subgroup audit of this kind is shown in the second sketch after this list. Ensuring fairness is vital because AI tools play an increasingly important role in health-related decisions, where the potential for harm is correspondingly greater.
  • 4) Obtain FDA approval. The Software as a Medical Device (SaMD) regulatory framework that is evolving at the FDA will enable rapid approval of AI that is designed to aid clinical decision-making. Obtaining FDA approval is critical for real-world deployment of AI tools.
  • 5) Monitor for degradation in performance. AI tools that are clinically deployed need to be evaluated and monitored for degradation in performance (algorithmic robustness), including degradation over time, across geographical locations, and across populations that differ in disease severity or in the prevalence of the outcome; the third sketch after this list illustrates this kind of monitoring.
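
First sketch: as a concrete illustration of the statistical-validity metrics named in goal 2, the following Python code computes discrimination (AUROC) and calibration (Brier score and a reliability curve) on held-out predictions. The data are simulated and the variable names are assumptions for illustration only; this is not code from a CCAI project.

    import numpy as np
    from sklearn.metrics import roc_auc_score, brier_score_loss
    from sklearn.calibration import calibration_curve

    rng = np.random.default_rng(0)

    # Simulated held-out data: y_true are observed outcomes, y_prob are the
    # model's predicted risks (stand-ins for a real retrospective cohort).
    y_true = rng.integers(0, 2, size=1000)
    y_prob = np.clip(0.6 * y_true + rng.normal(0.3, 0.2, size=1000), 0, 1)

    # Discrimination: can the model rank cases above non-cases?
    auroc = roc_auc_score(y_true, y_prob)

    # Calibration: do predicted risks match observed event rates?
    brier = brier_score_loss(y_true, y_prob)
    obs_rate, pred_risk = calibration_curve(y_true, y_prob, n_bins=10)

    print(f"AUROC: {auroc:.3f}   Brier score: {brier:.3f}")
    for p, o in zip(pred_risk, obs_rate):
        print(f"predicted risk {p:.2f} -> observed event rate {o:.2f}")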
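
Second sketch: the fairness check in goal 3 can be sketched as a subgroup audit, in which the same performance metrics are recomputed within each level of a sensitive attribute and then compared. The attribute ("sex"), the 0.5 decision threshold, and the simulated data are assumptions for illustration.

    import numpy as np
    from sklearn.metrics import roc_auc_score, recall_score

    rng = np.random.default_rng(1)
    n = 2000

    # Simulated cohort with one sensitive attribute (illustrative only).
    sex = rng.choice(["female", "male"], size=n)
    y_true = rng.integers(0, 2, size=n)
    y_prob = np.clip(0.5 * y_true + rng.normal(0.3, 0.25, size=n), 0, 1)
    y_pred = (y_prob >= 0.5).astype(int)

    # Compare discrimination (AUROC) and sensitivity (TPR, an
    # equal-opportunity-style check) across subgroups.
    for group in np.unique(sex):
        mask = sex == group
        auroc = roc_auc_score(y_true[mask], y_prob[mask])
        tpr = recall_score(y_true[mask], y_pred[mask])
        print(f"{group:>6}: AUROC={auroc:.3f}  TPR={tpr:.3f}")

Large gaps between subgroups on either metric would prompt further investigation before or during deployment.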
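
Third sketch: the monitoring in goal 5 amounts to recomputing a performance metric on successive slices of post-deployment data and alerting when it drops below the validated level. The monthly slicing, the baseline AUROC, and the tolerance below are illustrative assumptions.

    import numpy as np
    import pandas as pd
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(2)
    n = 3000

    # Simulated post-deployment predictions, tagged by calendar month.
    df = pd.DataFrame({
        "month": rng.integers(1, 13, size=n),
        "y_true": rng.integers(0, 2, size=n),
    })
    df["y_prob"] = np.clip(0.5 * df["y_true"] + rng.normal(0.3, 0.25, size=n), 0, 1)

    baseline_auroc = 0.80   # AUROC established in the validation study (assumed)
    tolerance = 0.05        # alert if monthly AUROC falls more than this below baseline

    for month, g in df.groupby("month"):
        auroc = roc_auc_score(g["y_true"], g["y_prob"])
        status = "ALERT" if auroc < baseline_auroc - tolerance else "ok"
        print(f"month {month:>2}: AUROC={auroc:.3f} [{status}]")

The same loop can be run over geographical sites or clinical subpopulations instead of months to check robustness across settings.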

Current projects at the CCAI include: