A Blog by Jonathan Low


Jan 3, 2021

AI's Role In Covid: Savior or Saboteur?

The concern is that regulation of AI use in the Covid context is lax, that datasets are small, and that there is substantial bias. As a result, premature application may lead to misdiagnosis. JL

The Lancet comments:

As 2020 draws to a close, one thing is certain: the COVID-19 pandemic has had an irreversible effect on the world. The effect on digital health is no exception. The pandemic has forced health-care providers and governments around the world to accelerate the development of artificial intelligence (AI) tools and scale up their use in medicine, even before they are proven to work. An untested AI algorithm has even received emergency authorisation from the US Food and Drug Administration. But will the use of untested AI systems help or hinder patients with COVID-19?
The lax regulatory landscape for COVID-19 AI algorithms has raised substantial concern among medical researchers. A living systematic review published in the BMJ highlights that COVID-19 AI models are poorly reported and trained on small or low-quality datasets with a high risk of bias. Gary Collins, Professor of Medical Statistics at the University of Oxford and co-author of the BMJ review, told The Lancet Digital Health, “full and transparent reporting of all key details of the development and evaluation of prediction models for COVID-19 is vital. Failure to report important details not only contributes to research waste, but more importantly can lead to a poorly developed and evaluated model being used that could cause more harm than benefit in clinical decision making.”
To support transparent and reproducible reporting, source code and deidentified patient datasets for COVID-19 AI algorithms should be open and accessible to the research community. One such study, published in The Lancet Digital Health, reports a new AI COVID-19 screening test, named CURIAL AI, which uses routinely collected clinical data for patients presenting to hospital. In the hope that AI can help keep patients and health-care workers safe, Andrew Soltan and colleagues state that the AI test could allow exclusion of patients who do not have COVID-19 and ensure that patients with COVID-19 receive treatments rapidly. This is one of the largest AI studies to date, with clinical data from more than a hundred thousand cases in the UK. In prospective validation, the AI screening test was accurate and delivered results faster than gold-standard PCR tests.
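The editorial does not describe the model itself, but the rule-out logic it refers to can be illustrated with a minimal sketch: train a classifier on routine clinical features, then set the decision threshold for high sensitivity so that a negative prediction can safely exclude patients. Everything below, from the synthetic data to the choice of classifier, is a hypothetical illustration using scikit-learn, not the authors' CURIAL AI implementation.

import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-ins for routinely collected clinical features
# (e.g. blood tests and vital signs); entirely hypothetical.
n = 5000
X = rng.normal(size=(n, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.5, size=n) > 1).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier().fit(X_train, y_train)

# For a rule-out screen, pick the threshold that keeps ~95% of true
# positives above it, so a negative call rarely misses a real case.
probs = model.predict_proba(X_test)[:, 1]
threshold = np.quantile(probs[y_test == 1], 0.05)
ruled_out = probs < threshold
print(f"Ruled out {ruled_out.mean():.0%} of presentations at ~95% sensitivity")

The key design choice is that the threshold is tuned for sensitivity rather than overall accuracy: a test used to rule patients out must keep false negatives rare, even at the cost of more false positives being passed on to confirmatory PCR.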
However, like other COVID-19 AI models, CURIAL AI requires validation across geographically and ethnically diverse populations to assess its real-world performance. Soltan emphasised: “We also do not yet know if the AI model would generalise to patient cohorts in different countries, where patients may come to hospital with a different spectrum of medical problems.”
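Continuing the hypothetical sketch above (it reuses model, rng, probs, and y_test from that code), external validation amounts to freezing the trained model and measuring its discrimination on an independent cohort whose case mix differs from the development data. Again, all data here are synthetic.

from sklearn.metrics import roc_auc_score

# An independent cohort from a different site, with a shifted case mix.
X_external = rng.normal(loc=0.3, size=(1000, 6))
y_external = (X_external[:, 0] + 0.5 * X_external[:, 1]
              + rng.normal(scale=1.5, size=1000) > 1).astype(int)

auc_internal = roc_auc_score(y_test, probs)
auc_external = roc_auc_score(y_external, model.predict_proba(X_external)[:, 1])
print(f"AUROC internal: {auc_internal:.2f}, external: {auc_external:.2f}")

# A large drop on the external cohort would suggest the model has learned
# site-specific patterns rather than generalisable clinical signal.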
Even if preliminary models, like CURIAL AI, are proven to accurately diagnose disease in a wide range of populations, do they add clinical value to health-care systems? Last month, X, the Alphabet subsidiary, announced that although it was able to develop an AI to identify features of electroencephalography data that might be useful for diagnosing depression and anxiety, it found that experts were not convinced of the clinical value of the diagnostic aid. How AI tools for diagnosing health conditions can improve medical care is not always well understood by those developing the AI. Therefore, COVID-19 AI models must be developed in close collaboration with health-care workers, to understand how the output of these models could be applied in patient care.
As we enter flu season, AI tools like CURIAL AI face an increasingly challenging task: helping clinicians differentiate between two respiratory infections with similar symptoms. If AI tools cannot be proven to discern one pneumonia from another, premature use of these technologies could increase misdiagnosis and sabotage clinical care for patients. Mistakes like this, if allowed to scale, will slow future adoption of potentially life-saving technologies and compromise clinician and patient trust in AI. Clinical trials are essential to assess the true accuracy of AI tools for COVID-19 and to establish how AI can support patients in the real world.
Soltan and colleagues are now planning clinical trials to deploy CURIAL AI within existing clinical pathways at hospitals in the UK. The Lancet Digital Health strongly encourages researchers conducting clinical trials of AI interventions to follow the new SPIRIT-AI and CONSORT-AI extension guidelines. In our previous Editorial, we described the importance of these guidelines in supporting accurate and transparent evaluation of AI.
AI could be the saviour of the COVID-19 pandemic in the coming year; we just need to prove it.
