Heart Scan Report Explanations Simplified with AI

Written by Mireia Cuevas Crespo (Reporter)

Generative AI shows promise in simplifying complex heart scan data for patients, a new study by NYU Langone Health (NY, US) has revealed.

Published in the Journal of the American College of Cardiology: Cardiovascular Imaging, the study examined the accuracy of AI-generated, simplified interpretations of echocardiography reports.

The Challenge of Medical Terminology

Echocardiograms use sound waves to produce images of the heart and its function. The resulting reports often contain technical measurements and medical terminology that patients struggle to understand. The research team set out to address this problem by using AI to transform complex reports into patient-friendly explanations.

Study Methodology and Results

The team used GPT-4, OpenAI’s generative AI model, to rewrite 100 physician-written echocardiography reports in plain language. Five board-certified echocardiographers evaluated the output and judged that 73% of the AI-generated explanations could be provided to patients without further modification. In terms of accuracy, 84% of the explanations were rated “all true,” while 76% contained all necessary information. Only 9% lacked key specifics, and none were deemed “potentially dangerous.”
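The paper does not publish the team’s exact prompt or pipeline, but the general approach can be illustrated with a minimal sketch using OpenAI’s Python client. The prompt wording, model identifier, and sample report below are illustrative assumptions, not the study’s actual materials:

```python
# Minimal sketch: asking GPT-4 for a patient-friendly rewrite of an
# echocardiogram report. Assumes the `openai` package (v1+) is installed
# and OPENAI_API_KEY is set in the environment. The prompt and sample
# report are hypothetical, not taken from the NYU Langone study.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Hypothetical sample report text for illustration only
report_text = (
    "LVEF 55-60%. Mild mitral regurgitation. No pericardial effusion. "
    "Normal right ventricular size and function."
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {
            "role": "system",
            "content": (
                "Rewrite the following echocardiogram report in plain, "
                "patient-friendly language. Do not add findings that are "
                "not in the report."
            ),
        },
        {"role": "user", "content": report_text},
    ],
)

print(response.choices[0].message.content)
```

As the study’s own error rates make clear, any output from a pipeline like this would still require clinician review before being shown to a patient.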

Layperson Perspective

The researchers also surveyed non-clinical participants to gain a layperson’s perspective. The results were overwhelmingly positive, with 97% of participants finding the AI-generated reports easier to understand than the doctor-written ones. Notably, participants reported less distress after reading the simplified explanations, highlighting the importance of clear communication in healthcare.

Expert Opinion

Dr. Lior Jankelson, the study’s corresponding author, highlighted AI’s potential to support doctors in simplifying complex heart scan reports. He stated:

“Our study, the first to evaluate GPT-4 in this way, shows that generative AI models can be effective in helping clinicians to explain echocardiogram results to patients. Fast, accurate explanations may lessen patient worry and reduce the sometimes overwhelming volume of patient messages to clinicians.”

Challenges and Future Plans

Despite the positive outcomes, 16% of the AI-generated explanations contained incorrect information. Some included “AI hallucinations,” fabricated statements with no basis in the original report. This underscores the current need for human oversight of AI-generated content.

The NYU Langone Health study contributes to ongoing efforts to bridge the gap between medical expertise and patient understanding. The research team plans to use advanced AI tools in clinical practice to assess their impact on patient anxiety, satisfaction, and clinician effort.