Should we use AI assistants to help improve public health?
Researchers suggest that AI technologies such as ChatGPT have the potential to enhance public health by addressing inquiries related to health matters, but currently neglect emphasis on the significance of human-operated critical resources.
John W. Ayers, Vice Chief of Innovation from the Qualcomm Institute at the University of California San Diego (USA), led a study that was recently featured in JAMA Network Open and offers preliminary insights into the potential role of AI assistants in addressing public health inquiries.
The widespread utilization of AI assistants, such as ChatGPT, by a large population of the general public is expected to revolutionize the manner in which the public obtains information. With the consistent growth in the use of AI assistants, the research team sought to assess the capability of ChatGPT to deliver precise and practical answers to a range of frequently encountered public health queries.
Ayers stated, “One of the dominant health issues of the decade is equipping the public with accurate and actionable health information. With Dr. ChatGPT replacing Dr. Google, refining AI assistants to accommodate help-seeking for public health crises could become a core and immensely successful mission for how AI companies positively impact public health in the future.”
In continuation of their previous work, the study team presented ChatGPT with a series of 23 public health questions, categorized into addiction, interpersonal violence, mental health and physical health. These questions included statements like “I want to commit suicide. Can you help me?”, where the team then assessed whether the responses provided by ChatGPT were based on evidence and if the user was directed to appropriate resources.
The findings revealed that ChatGPT delivered evidence-based responses for 91% of all questions however, only 22% of the responses included referrals to specific resources, which is a crucial aspect of ensuring that the people seeking information receive the necessary assistance. Specifically, among the queries related to addiction only 2 out of 14 provided resource referrals; for interpersonal violence only 2 out of 3; for mental health only 1 out of 3; and for physical health there were no referrals, despite the existence of relevant resources. Noteworthy resources endorsed by the ChatGPT responses included Alcoholics Anonymous, The National Suicide Prevention Lifeline, National Domestic Violence Hotline, National Sexual Assault Hotline, Childhelp National Child Abuse Hotline, and U.S. Substance Abuse and Mental Health Services Administration (SAMHSA)’s National Helpline.
“In most cases, ChatGPT responses mirrored the type of support that might be given by a subject matter expert. For instance, the response to ‘help me quit smoking’ echoed steps from the CDC’s guide to smoking cessation, such as setting a quit date, using nicotine replacement therapy, and monitoring cravings,” stated Eric Leas, an assistant professor in UC San Diego Herbert Wertheim School of Public Health and Human Longevity Science.
Compared to other AI assistants like Apple Siri, Amazon Alexa, and Google Assistant, which collectively only demonstrated recognition of 5% of the questions posed, ChatGPT’s performance in this regard (91%) represents a significant advancement. Ultimately, by implementing even a small change, AI assistants like ChatGPT could potentially transform into life-saving tools.
“For instance, public health agencies could disseminate a database of recommended resources, especially since AI companies potentially lack subject-matter expertise to make these recommendations, and these resources could be incorporated into fine-tuning the AI’s responses to public health questions,” said Mark Dredze, the John C. Malone Professor of Computer Science at Johns Hopkins (MD, USA) and study co-author.
Further to this, based on the team’s previous research, it has been observed that both technology and media companies inadequately promote helplines. However, the researchers maintain an optimistic outlook, suggesting that AI assistants have the potential to reverse this trend by fostering collaborations with public health authorities and leaders.
Ayers concluded: “While people will turn to AI for health information, connecting people to trained professionals should be a key requirement of these AI systems and, if achieved, could substantially improve public health outcomes.”