Today’s daily dose of ChatGPT: breast cancer screening advice edition

Written by Emma Hall (Digital Editor)

Researchers have assessed the reliability and accuracy of ChatGPT's breast cancer screening advice.

As if you haven’t already heard enough about ChatGPT, I’m here to tell you there is more you need to know… So turn that eye roll into eager anticipation, prop your feet up and get comfortable, I promise you will want to catch up on the latest gossip. So without further ado, let’s get down to business.

The new study involved a team of researchers from the University of Maryland School of Medicine (MD, USA) investigating the reliability and accuracy of breast cancer screening information provided by ChatGPT.

The team generated a list of 25 questions about breast cancer screening advice and posed each question to the AI chatbot three times, since ChatGPT is trained to vary its responses slightly depending on wording and context.

The results, published in the journal Radiology, were assessed by three radiologists fellowship-trained in mammography. The radiologists judged ChatGPT's answers appropriate for 22 of the 25 questions posed (88%). Of the remaining three questions, two produced inconsistent responses that differed considerably from one another, while one response provided out-of-date information.

Among the reliable and accurate answers, ChatGPT provided correct information regarding breast cancer symptoms, high-risk groups and queries about the cost and recommended frequency of mammograms. This advice was presented in a clear and concise manner.

However, the results were not all roses. Compared with a standard Google search, the information ChatGPT provided was noticeably narrower and more simplified. While an internet search would return a more holistic and extensive response drawn from multiple sources, ChatGPT based its advice solely on breast cancer screening guidance from the American Cancer Society, failing to mention alternative recommendations from sources such as the US Preventive Services Task Force (USPSTF) and the Centers for Disease Control and Prevention (CDC).

The outdated advice concerned mammograms and COVID-19 immunization: ChatGPT gave misleading guidance recommending that individuals wait 4–6 weeks after receiving a COVID-19 vaccine before having a mammogram. This guidance was updated by both the CDC and the USPSTF in February 2022; both now state that this waiting period is unnecessary.

Paul Yi, assistant professor of Diagnostic Radiology and Nuclear Medicine at UMSOM, urged patients to be cautious when seeking medical information from chatbots such as ChatGPT: “We’ve seen in our experience that ChatGPT sometimes makes up fake journal articles or health consortiums to support its claims. Consumers should be aware that these are new, unproven technologies, and should still rely on their doctor, rather than ChatGPT, for advice.”

As individuals increasingly rely on large language models such as ChatGPT for medical advice, it is essential to assess the reliability and accuracy of the information these models provide in order to safeguard patient safety.

In the future, the team plans to evaluate the medical information and advice provided by ChatGPT in other medical fields, such as lung cancer screening. They aim to suggest refinements so that ChatGPT can provide more accurate, higher-quality medical advice that can be easily understood.