ChatGPT attempts to bust cancer myths
ChatGPT’s cancer knowledge has recently been put to the test: its responses to common cancer myths and misconceptions were compared against information from the National Cancer Institute for accuracy and reliability.
A recent study published in the Journal of the National Cancer Institute Cancer Spectrum assessed the medical accuracy of ChatGPT by comparing its responses to the answers given on the National Cancer Institute’s (NCI’s) ‘Common Cancer Myths and Misconceptions’ web page. Contrary to general opinion, ChatGPT may provide more reliable medical information than expected, at least when it comes to cancer. As AI chatbots continue to grow in popularity as sources of medical information for patients, establishing the accuracy of their outputs is important for public health safety.
The saying ‘don’t google your symptoms’ exists for a reason. Grabbing your phone to investigate your prognosis when that odd sharp pain comes back one time too many is one of the worst things you can do. Google is not a health expert: it can be an unreliable source of information, and symptoms overlap across a range of diseases, which may lead to wildly incorrect self-diagnoses.
Approximately 56–79% of internet users in the US obtain health information online, as reported by the International Telecommunication Union. This number will only grow, particularly with the recent public attention focused on ChatGPT and other AI systems. It is therefore of paramount importance to determine the reliability and accuracy of such AI platforms.
The team from the University of Utah (UT, USA), led by Skyler Johnson from the Huntsman Cancer Institute (UT, USA), evaluated ChatGPT’s responses to questions from the NCI’s ‘Common Cancer Myths and Misconceptions’ web page, such as ‘will eating sugar make my cancer worse?’ and ‘are there herbal products that can cure cancer?’. ChatGPT’s responses demonstrated 97% accuracy and agreement with the NCI’s answers.
The accuracy of the answers given by the NCI and ChatGPT to each question was scored by five scientific reviewers specializing in cancer treatment and misinformation. The reviewers were blinded to the source of each answer.
However, the team caution against taking a blanket approach to trusting cancer information from ChatGPT and other AI chatbots based on these results, and note several crucial study limitations. For example, although accurate, ChatGPT’s wording was often vague, ambiguous or indirect, and in some cases confusing, meaning its responses could be misinterpreted.
“This could lead to some bad decisions by cancer patients,” Johnson warned. The team suggested caution when advising patients about whether they should use chatbots for information about cancer.
At present, ChatGPT’s outputs are restricted to the data it was trained on prior to 2021, so the information it provides may be scientifically outdated, particularly given that novel research is published all the time. Medical misinformation can harm all patients, not just those with cancer.
“I recognize and understand how difficult it can feel for cancer patients and caregivers to access accurate information,” Johnson acknowledged. “These sources need to be studied so that we can help cancer patients navigate the murky waters that exist in the online information environment as they try to seek answers about their diagnoses.”
The team plan to carry out further research, investigating how much patients rely on AI chatbots for cancer information, what questions they ask and whether the chatbots give reliable responses to more diverse and uncommon questions about cancer.