• Media type: E-Article
  • Title: Audio-based AI classifiers show no evidence of improved COVID-19 screening over simple symptoms checkers
  • Contributor: Coppock, Harry [Author]; Nicholson, George [Author]; Kiskin, Ivan [Author]; Koutra, Vasiliki [Author]; Baker, Kieran [Author]; Budd, Jobie [Author]; Payne, Richard [Author]; Karoune, Emma [Author]; Hurley, David [Author]; Titcomb, Alexander [Author]; Egglestone, Sabrina [Author]; Tendero Cañadas, Ana [Author]; Butler, Lorraine [Author]; Jersakova, Radka [Author]; Mellor, Jonathon [Author]; Patel, Selina [Author]; Thornley, Tracey [Author]; Diggle, Peter [Author]; Richardson, Sylvia [Author]; Packham, Josef [Author]; Schuller, Björn W. [Author]; Pigoli, Davide [Author]; Gilmour, Steven [Author]; Roberts, Stephen [Author]
  • Published: Augsburg University Publication Server (OPUS), 2024
  • Language: English
  • DOI: https://doi.org/10.1038/s42256-023-00773-8
  • Footnote: This data source also contains holdings records that do not lead to a full text.
  • Description: Recent work has reported that respiratory audio-trained AI classifiers can accurately predict SARS-CoV-2 infection status. However, it has not yet been determined whether such model performance is driven by latent audio biomarkers with true causal links to SARS-CoV-2 infection or by confounding effects, such as recruitment bias, present in observational studies. Here we undertake a large-scale study of audio-based AI classifiers as part of the UK government’s pandemic response. We collect a dataset of audio recordings from 67,842 individuals, with linked metadata, of whom 23,514 had positive polymerase chain reaction tests for SARS-CoV-2. In an unadjusted analysis, similar to that in previous works, AI classifiers predict SARS-CoV-2 infection status with high accuracy (ROC–AUC = 0.846 [0.838–0.854]). However, after matching on measured confounders, such as self-reported symptoms, performance is much weaker (ROC–AUC = 0.619 [0.594–0.644]). Upon quantifying the utility of audio-based classifiers in practical settings, we find them to be outperformed by predictions on the basis of user-reported symptoms. We make best-practice recommendations for handling recruitment bias, and for assessing audio-based classifiers by their utility in relevant practical settings. Our work provides insights into the value of AI audio analysis and the importance of study design and treatment of confounders in AI-enabled diagnostics.
  • Access State: Open Access
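
The description above contrasts an unadjusted ROC-AUC with a much lower ROC-AUC after matching on measured confounders such as self-reported symptoms. As a minimal illustrative sketch of why that gap can arise, the following Python example builds a synthetic dataset in which a confounder (symptom status) drives both infection and an "audio" feature, then compares the overall ROC-AUC with a symptom-stratified one. All data, features, and the simple stratified evaluation are hypothetical and are not the authors' pipeline or matching procedure.

```python
# Illustrative sketch only: contrasts ROC-AUC on an unadjusted test set with a
# symptom-stratified ROC-AUC, using synthetic data. Hypothetical throughout;
# not the study's dataset, model, or matching scheme.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 20_000

# Hypothetical confounder: symptomatic users are more likely to be infected.
symptomatic = rng.binomial(1, 0.4, n)
infected = rng.binomial(1, np.where(symptomatic == 1, 0.6, 0.1))

# Hypothetical "audio feature": driven mostly by symptoms, only weakly by
# infection itself, so the confounder inflates apparent performance.
audio_feature = 1.5 * symptomatic + 0.2 * infected + rng.normal(0, 1, n)
X = audio_feature.reshape(-1, 1)

X_tr, X_te, y_tr, y_te, s_tr, s_te = train_test_split(
    X, infected, symptomatic, test_size=0.5, random_state=0
)

clf = LogisticRegression().fit(X_tr, y_tr)
scores = clf.predict_proba(X_te)[:, 1]

# Unadjusted evaluation: all test-set recordings pooled together.
auc_unadjusted = roc_auc_score(y_te, scores)

# Crude confounder-adjusted evaluation: score within each symptom stratum so
# the classifier cannot profit from the symptom-infection association, then
# average the stratum-level AUCs.
aucs = []
for stratum in (0, 1):
    mask = s_te == stratum
    if len(np.unique(y_te[mask])) == 2:
        aucs.append(roc_auc_score(y_te[mask], scores[mask]))
auc_stratified = float(np.mean(aucs))

print(f"Unadjusted ROC-AUC:         {auc_unadjusted:.3f}")
print(f"Symptom-stratified ROC-AUC: {auc_stratified:.3f}")
```

Running this sketch typically shows the pooled ROC-AUC well above the stratified one, mirroring the qualitative pattern described in the abstract: once the symptom confounder is controlled for, the "audio" signal alone carries far less discriminative information.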