AROUND one in 10 Australians has used ChatGPT to ask medical questions, according to a survey of about 2,000 Australians conducted in mid-2024.
Researchers from the University of Sydney asked participants how often they had used ChatGPT for health information in the preceding six months, what types of questions they asked, and how much they trusted the responses.
Almost 85% of participants knew about ChatGPT and 10% had already used it to obtain health-related information, while a further 39% were considering doing so in the next six months.
Questions were most frequently about a specific health condition, finding out what symptoms mean, finding actions to take, and understanding medical terms, the researchers wrote in the Medical Journal of Australia.
Those who had asked ChatGPT health-related questions rated their trust in the tool as moderate, with a mean score of 3.1 out of 5.
They also found that 61% of these users had asked at least one higher-risk question - that is, a question about taking an action that would typically require a doctor's input, rather than a request for general health information.
People who face barriers to healthcare access, such as those with limited English or low health literacy, were more likely to use ChatGPT, the researchers found.
"The types of health questions that pose a higher risk for the community will change as AI evolves, and identifying them will require further investigation," wrote the authors, led by behavioural scientist Dr Julie Ayre.
"Generative AI tools could be a further problem for health services and clinicians, adding to the already large volume of medical misinformation," they said, adding that there is an "urgent need to equip our community with the knowledge and skills to use generative AI tools safely, in order to ensure equity of access andbenefit".
The full paper is HERE.
MEANWHILE, a new study from the University of South Australia investigating people's trust in AI decision-making shows that people are more likely to trust AI when the stakes are low, and less likely to trust it in high-stakes situations.
Evaluating responses from nearly 2,000 participants across 20 countries, the researchers found that statistical literacy affects trust in AI differently depending on the stakes involved.
People who understood that AI algorithms work through pattern-based prediction, but also carry risks and biases, were more sceptical of AI in high-stakes situations, such as employment, health or medical decisions.
However, they were less sceptical in low-stakes situations, such as restaurant recommendations or music selection.
Those with poor statistical literacy or little familiarity with AI, on the other hand, were just as likely to trust algorithms for trivial choices as they were for critical decisions.
UniSA's Dr Florence Gabriel said there should be a concentrated effort to promote statistical and AI literacy among the general population so that people can better judge when to trust algorithmic decisions.
"An AI-generated algorithm is only as good as the data and coding that it's based on," Dr Gabriel said.
"We only need to look at the recent banning of DeepSeek to grasp how algorithms can produce biased or risky data depending on the content that it was built upon.
"People need to know more about how algorithms work, and we need to find ways to deliver this in clear, simple ways that are relevant to the user's needs and concerns," she concluded. KB