JAMA research: AI model explanations don’t meaningfully mitigate bias
In a research study published this month in JAMA, computer scientists and clinicians from the University of Michigan examined the use of artificial intelligence to help diagnose hospitalized patients.
Specifically, they were curious about how diagnostic accuracy is affected when clinicians have insights into how the AI models they’re using work – and how they may be biased or limited.
In theory, image-based AI model explanations could help providers spot algorithms that are systematically biased – and therefore inaccurate. But the researchers found that such explanations “did not help clinicians recognize systematically biased AI models.”
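For context, an “image-based model explanation” is typically a heatmap overlaid on the radiograph, highlighting the regions that most influenced the model’s prediction. The sketch below shows one generic way such a heatmap can be computed – gradient saliency for a PyTorch image classifier – and is purely illustrative; it is not necessarily the explanation method used in the study.

```python
import torch

def gradient_saliency(model, image, target_class):
    """Compute a simple gradient-based saliency map: how strongly each
    pixel of the input influences the score for target_class.
    Generic sketch for any PyTorch image classifier; not the study's method."""
    model.eval()
    image = image.detach().clone().requires_grad_(True)  # shape (1, C, H, W)
    score = model(image)[0, target_class]                # class logit/score
    score.backward()                                     # gradients w.r.t. pixels
    # Take the max absolute gradient across channels -> one value per pixel.
    saliency = image.grad.abs().max(dim=1).values.squeeze(0)
    return saliency  # shape (H, W); larger values = more influential regions
```

Heatmaps like this are what clinicians in the study saw alongside the model’s diagnostic predictions.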
In their efforts to assess how systematically biased AI affects diagnostic accuracy – and whether image-based model explanations could mitigate errors – the researchers designed a randomized clinical vignette survey study across 13 U.S. states involving hospitalist physicians, nurse practitioners, and physician assistants.
These clinicians were shown nine clinical vignettes of patients hospitalized with acute respiratory failure, including their presenting symptoms, physical examination, laboratory results, and chest radiographs.
They were then asked to “determine the likelihood of pneumonia, heart failure or chronic obstructive pulmonary disease as the underlying cause(s) of each patient’s acute respiratory failure,” according to researchers.
Clinicians were first shown two vignettes without AI model input. They were then randomized to see six vignettes with AI model input, either with or without AI model explanations. Of those six vignettes, three included standard-model predictions and three included systematically biased model predictions.
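As a purely illustrative sketch of that design – written in Python with invented names, not materials from the study – each participant’s vignette sequence could be assembled roughly like this:

```python
import random

def assign_vignettes(vignette_ids, seed=None):
    """Illustrative sketch of the per-clinician sequence described in the
    article: 2 baseline vignettes without AI input, then 6 with AI input
    (3 standard predictions, 3 systematically biased), with the clinician
    randomized to see model explanations or not. Names are hypothetical."""
    rng = random.Random(seed)
    ids = list(vignette_ids)
    rng.shuffle(ids)

    baseline = ids[:2]   # shown without AI model input
    with_ai = ids[2:8]   # shown with AI model input
    # (The article details 8 of the 9 vignettes; any remainder is ignored here.)

    # Randomize whether this clinician sees model explanations at all.
    show_explanations = rng.choice([True, False])

    # Half standard predictions, half systematically biased, in random order.
    conditions = ["standard"] * 3 + ["biased"] * 3
    rng.shuffle(conditions)

    sequence = [{"vignette": v, "ai_input": None} for v in baseline]
    sequence += [
        {"vignette": v, "ai_input": cond, "explanation": show_explanations}
        for v, cond in zip(with_ai, conditions)
    ]
    return sequence

# Example: build one clinician's sequence from nine candidate vignettes.
print(assign_vignettes(range(1, 10), seed=42))
```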
Among the study’s findings: “Diagnostic accuracy significantly increased by 4.4% when clinicians reviewed a patient clinical vignette with standard AI model predictions and model explanations compared with baseline accuracy.”
Conversely, accuracy decreased by more than 11% when clinicians were shown systematically biased AI model predictions, and model explanations did not protect against the negative effects of those inaccurate predictions.
As the researchers concluded, standard AI models improved diagnostic accuracy while systematically biased models reduced it, “and commonly used image-based AI model explanations did not mitigate this harmful effect.”
Mike Miliard is executive editor of Healthcare IT News.
Email the writer: mike.miliard@himssmedia.com
Healthcare IT News is a HIMSS publication.