Explainable AI: Revealing the brain’s secrets in dementia

Written by Harry Salt (Digital Editor)

A team of researchers has leveraged explainable AI to create a more nuanced method for diagnosing and understanding dementia through structural brain scans.

The study, recently published in npj Digital Medicine, uses convolutional neural networks (CNNs) trained on magnetic resonance imaging (MRI) data to distinguish between dementia patients and healthy controls. Crucially, the researchers employed a technique known as layerwise relevance propagation (LRP) to generate individual-level explanations of how the model arrives at its conclusions. This explainability addresses a long-standing limitation of deep learning methods: their opacity.
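To give a concrete sense of the setup, here is a minimal sketch of the kind of 3D convolutional classifier involved. It assumes PyTorch; the layer sizes, input resolution, and label order are illustrative assumptions, not the architecture reported in the paper.

```python
# Minimal sketch of a 3D CNN for binary classification of structural MRI
# volumes (dementia vs. control). Layer sizes and input resolution are
# illustrative assumptions, not the architecture used in the study.
import torch
import torch.nn as nn

class DementiaCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # 1 channel: a T1-weighted scan
            nn.ReLU(),
            nn.MaxPool3d(2),                             # halve each spatial dimension
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(16 * 16 * 16 * 16, 2),             # assumes 64^3 input volumes
        )

    def forward(self, x):
        return self.classifier(self.features(x))

model = DementiaCNN()
scan = torch.randn(1, 1, 64, 64, 64)   # one dummy 64^3 MRI volume
logits = model(scan)                   # scores for [control, dementia]
```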

Traditional methods of diagnosing dementia typically involve identifying general areas in the brain that differ between patients and healthy individuals, based on large group comparisons. However, these approaches often lack the granularity needed for precise diagnosis and are limited in their ability to predict individual outcomes. By contrast, the AI method developed in this study provides spatially rich, personalized explanations of brain abnormalities, enhancing our understanding of the biological underpinnings of dementia in each patient.

To address the common issue of interpretability with deep learning models, often described as “black boxes,” the researchers implemented the aforementioned LRP technique. LRP works by tracing the decision-making process of the network backward, from the output layer to the input layer. This backward tracing reveals how much each voxel (the 3D equivalent of a pixel) in the MRI scan contributes to the model’s final decision, producing a “relevance map.” These maps visually represent the areas and features in the brain that the model identifies as significant for diagnosing dementia, making the AI’s decision-making process transparent and understandable. This not only helps validate the AI’s accuracy but also gives medical professionals a deeper understanding of the specific brain abnormalities behind an individual patient’s diagnosis.
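In concrete terms, LRP redistributes each neuron’s relevance to its inputs in proportion to how much each input contributed to that neuron’s activation. Below is a minimal sketch of the widely used epsilon rule for a single fully connected layer, written in NumPy; the study’s exact propagation rules are not assumed here.

```python
# Epsilon-rule LRP for one fully connected layer: the relevance R of each
# output neuron is redistributed to the inputs in proportion to each input's
# contribution a_j * w_jk to that neuron's pre-activation.
import numpy as np

def lrp_epsilon(a, w, r_out, eps=1e-6):
    """a: input activations (n_in,); w: weights (n_in, n_out);
    r_out: relevance of the outputs (n_out,). Returns input relevance."""
    z = a @ w                        # pre-activations, shape (n_out,)
    z = z + eps * np.sign(z)         # stabilizer to avoid division by ~0
    s = r_out / z                    # relevance per unit of pre-activation
    return a * (w @ s)               # redistribute relevance back to inputs

# Toy example: 3 inputs, 2 outputs; relevance starts at the output layer.
a = np.array([0.5, 1.0, 0.0])
w = np.array([[0.2, -0.1],
              [0.4,  0.3],
              [0.1,  0.5]])
r_in = lrp_epsilon(a, w, r_out=np.array([1.0, 0.0]))
print(r_in)  # the inactive input (a=0) gets zero relevance; total is conserved
```

Applied layer by layer from the output back to the input, this kind of rule yields the per-voxel relevance map described above, with the total relevance approximately conserved at each step.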

The study found that the AI-generated insights are consistent with existing knowledge of dementia-related brain changes, such as atrophy in specific regions. More importantly, when applied to a longitudinal dataset of patients with mild cognitive impairment, a condition that often precedes dementia, the AI model successfully predicted which patients would progress to dementia. The model not only made accurate predictions but also produced detailed maps showing where brain changes were occurring, offering a valuable tool for early intervention.

The pivotal implication of this research lies in its development of explainable deep learning algorithms, since the analytical power of such models has long been undermined by the difficulty of understanding how they reach conclusions from their input data. These findings underscore the potential of explainable AI to revolutionize precision medicine by moving beyond generic models to highly personalized assessments. This approach could significantly improve the accuracy of diagnoses, offer insights into the progression of cognitive decline, and ultimately lead to more effective treatments tailored to individual patients’ needs.