AI HAS been used in radiology for some time now, and can bring invaluable impartiality to diagnostics.
In one study, when human doctors evaluated radiographs of someone complaining of knee pain, they took into account demographic characteristics of the patient - whether consciously or not.
Women, the less well-off, and people of colour were taken less seriously than rich, white men.
But Dr AI can, in theory, be trained to ignore these factors and simply report what it sees - and to do so far more consistently.
However, in a recent study, Dr AI predicted "unrelated and implausible traits" from knee X-rays, such as whether patients abstained from eating refried beans or avoided drinking beer - and was surprisingly accurate.
So, do the knees know, or is something else going on here?
It turns out that AI algorithms often rely on confounding variables - such as differences in X-ray equipment or clinic location - rather than on medically meaningful features when making predictions.
And once you eliminate one variable, AI fills the gap with another it previously ignored.
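This shortcut effect can be sketched with synthetic data. Everything below is illustrative and not from the study: we invent two clinics whose equipment leaves a subtle brightness signature in the images, and an unrelated trait whose prevalence happens to differ between the two clinics' populations. A classifier that only sees brightness still "predicts" the trait well above chance:

```python
# Hypothetical sketch of shortcut learning via a confounder (not the study's data).
import numpy as np

rng = np.random.default_rng(0)
n = 2000

# Two clinics; their differing X-ray equipment shifts image brightness.
clinic = rng.integers(0, 2, n)
brightness = clinic * 2.0 + rng.normal(0.0, 1.0, n)  # confounded image feature

# An "unrelated and implausible" trait (say, avoiding beer) has nothing to do
# with knees, but its prevalence differs between the two clinics' populations.
p_trait = np.where(clinic == 1, 0.9, 0.1)
trait = rng.random(n) < p_trait

# A trivial classifier: threshold the confounded brightness feature.
pred = brightness > brightness.mean()
accuracy = (pred == trait).mean()
print(f"accuracy from the confounded feature alone: {accuracy:.2f}")
```

Because brightness tracks the clinic, and the clinic tracks the trait, the accuracy lands well above the 0.5 chance level despite the feature carrying no genuine information about the trait itself - which is exactly why such predictions can look impressive while being medically meaningless.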
"These models can see patterns humans cannot, but not all patterns they identify are meaningful or reliable," cautioned the lead author, Dr Peter Schilling.
"It's crucial to recognise these risks to prevent misleading conclusions and ensure scientific integrity."
The above article was sent to subscribers in Pharmacy Daily's issue from 28 Jan 25
