AIRDetect: AI Works 1000x Faster Than Humans to Diagnose Blindness-Causing Diseases

Scientists at Moorfields Eye Hospital (London, UK) have developed an AI tool that can process complex images of the eye in just one second. This could help doctors rapidly detect eye diseases that have high risks of blindness, potentially saving years that would otherwise be spent on diagnoses.
Inherited Retinal Diseases Carry a High Risk of Blindness
Inherited retinal diseases (IRD) are a group of around 80 diseases that affect roughly 1 in 3,000 people, totalling an estimated 5.5 million people worldwide.
Whilst each form of IRD may develop differently, most involve damage to the photoreceptors that are critical for vision. This damage is thought to be driven by an accumulation of reactive molecules within retinal cells, which can result from mitochondrial dysfunction, chronic inflammation, and impaired antioxidant defences, all of which contribute to irreversible damage to the retina.
IRDs are currently the leading cause of blindness among the working-age population of England and Wales. Environmental and socioeconomic factors can further exacerbate this accumulation of reactive molecules, particularly in working-class populations, who may face greater exposure to pollutants and occupational blue light, as well as increased oxidative stress from chronic stress or poor nutrition. Moreover, limited access to preventative healthcare and early interventions may also contribute to more severe disease progression.
FAF: Potentially Useful, but a FAF to Use
One of the suspected culprits behind IRD is lipofuscin, a waste pigment that accumulates in retinal cells and is autofluorescent, meaning it naturally emits light when exposed to specific wavelengths. Fundus autofluorescence (FAF) is an advanced imaging technique that takes advantage of this property.
FAF captures detailed images by illuminating the retina with blue or green light, which excites autofluorescent molecules like lipofuscin. The emitted light is then recorded, creating a high-contrast map of metabolic activity in the retina.
Retinal cells with higher lipofuscin accumulation give out more light and appear brighter, while areas with less lipofuscin appear darker. These variations highlight regions that may be more affected by disease-related damage.
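To make the brightness mapping concrete, here is a minimal, purely illustrative Python sketch (not from the study): it treats a FAF image as a grid of pixel intensities and flags regions that are unusually bright or dark relative to the rest of the image. The synthetic image, thresholds, and variable names are all assumptions chosen for demonstration.

```python
import numpy as np

# Purely illustrative: a synthetic "FAF image" as a 2-D array of pixel intensities.
rng = np.random.default_rng(0)
faf = rng.normal(loc=100, scale=15, size=(256, 256))
faf[100:140, 100:140] += 60   # an artificially bright (hyper-autofluorescent) patch
faf[30:60, 180:220] -= 50     # an artificially dark (hypo-autofluorescent) patch

# Standardise intensities so each pixel is compared against the image's own mean.
z = (faf - faf.mean()) / faf.std()

hyper = z > 2.0    # unusually bright: consistent with lipofuscin build-up
hypo = z < -2.0    # unusually dark: consistent with loss of autofluorescent material

print(f"hyper-autofluorescent pixels: {hyper.sum()}")
print(f"hypo-autofluorescent pixels: {hypo.sum()}")
```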
Clinicians use FAF to assess retinal health, monitor IRD severity, and detect specific patterns of autofluorescence linked to different IRDs. This makes FAF a valuable diagnostic tool for ophthalmologists.
Nonetheless, there are some drawbacks. Analyzing FAF images requires specialists to identify and annotate features, and this process is too time-consuming and costly to perform routinely in clinics. Doctors may also interpret images differently, causing uncertainty and inconsistent diagnostic decision-making.
Because of this, experts in the field are looking to AI for a solution. Automated tools could significantly cut the time spent on image analysis by automating feature annotation, while simultaneously standardizing IRD diagnoses so that patient care is consistent across clinics.
AI Leads the Way for Diagnosing Inherited Retinal Diseases
A new study, published in Ophthalmology Science, has highlighted AI’s potential for automating FAF image analysis.
Led by senior author William Woof, the research team developed a specialized deep-learning model that could (1) identify patterns of five IRD features and the genetic mutations that cause them, and (2) quantify disease progression by analyzing retinal images to detect and measure specific features indicative of disease severity.
While AI has previously been used to study IRDs, past models could typically detect only one type of IRD, neglecting the dozens of other possible forms of the disease.
The new model, referred to as AIRDetect, is a convolutional neural network (CNN) built on the 'no new U-Net' framework (nnU-Net), a segmentation pipeline that configures itself automatically based on the training data.
nnU-Net performs pixel-wise segmentation of FAF images: every pixel is assigned to a feature class, so each pixel contributes to the feature identification and analysis process. The design is based on the original U-Net architecture, in which a 'contracting' pathway and an 'expanding' pathway make up the iconic 'U' shape.
In the contracting phase, convolutional layers extract key aspects of the image using 'kernels', small filters slid across the image, much like applying a filter in a photo-editing app. Between these layers the spatial dimensions are progressively reduced, pushing the model to focus on the most relevant, larger-scale features. At the bottom of the 'U', the image representation is at its lowest resolution.
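As a rough illustration of the kernel idea, the toy Python snippet below slides a hand-written 3×3 edge-highlighting kernel across a random stand-in image, exactly as a single convolutional filter would; the kernel values and the naive convolution routine are illustrative assumptions, not part of AIRDetect.

```python
import numpy as np

# A 3x3 'kernel' that highlights edges, much like an edge-detect filter
# in a photo-editing app.
kernel = np.array([[-1, -1, -1],
                   [-1,  8, -1],
                   [-1, -1, -1]], dtype=float)

def convolve2d(image: np.ndarray, k: np.ndarray) -> np.ndarray:
    """Naive 'valid' convolution: each output pixel is a weighted sum of its neighbourhood."""
    kh, kw = k.shape
    out = np.zeros((image.shape[0] - kh + 1, image.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * k)
    return out

image = np.random.default_rng(1).random((64, 64))  # stand-in for a greyscale FAF image
edges = convolve2d(image, kernel)
print(edges.shape)  # (62, 62): the spatial size shrinks slightly with a 'valid' convolution
```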
During expansion, the representation is progressively upsampled to increase its resolution, reconstructing the original image dimensions; 'skip connections' carry fine spatial detail across from the matching level of the contracting path so that boundaries stay sharp. The output layer then produces a segmentation map consisting of channels that each correspond to a different feature; for example, one channel might map the blood vessels at the back of the eye.
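For readers who want to see what such a network looks like in code, below is a heavily simplified U-Net-style sketch in PyTorch. It is a toy illustration only: the layer widths, depth, and five-channel output head are assumptions chosen for demonstration, whereas the study's nnU-Net configures these choices automatically from the training data.

```python
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Toy U-Net: contracting path, bottleneck, expanding path with skip connections."""

    def __init__(self, in_channels: int = 1, n_features: int = 5):
        super().__init__()
        # Contracting path: convolutions extract features, pooling halves the resolution.
        self.enc1 = self._block(in_channels, 16)
        self.enc2 = self._block(16, 32)
        self.pool = nn.MaxPool2d(2)
        # Bottom of the 'U': lowest spatial resolution, most abstract features.
        self.bottleneck = self._block(32, 64)
        # Expanding path: upsampling restores resolution; skip connections
        # concatenate fine detail carried across from the contracting path.
        self.up2 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec2 = self._block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2)
        self.dec1 = self._block(32, 16)
        # Output: one channel per feature (e.g. one channel might map retinal vessels).
        self.head = nn.Conv2d(16, n_features, kernel_size=1)

    @staticmethod
    def _block(in_ch: int, out_ch: int) -> nn.Sequential:
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                      # full resolution
        e2 = self.enc2(self.pool(e1))          # 1/2 resolution
        b = self.bottleneck(self.pool(e2))     # 1/4 resolution
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)                   # per-pixel scores, one channel per feature

faf = torch.randn(1, 1, 256, 256)              # one synthetic greyscale FAF image
print(TinyUNet()(faf).shape)                   # torch.Size([1, 5, 256, 256])
```

The skip connections (the torch.cat calls) are the defining design choice of the 'U' shape: they let the expanding path reuse fine detail from the contracting path rather than reconstructing it from the low-resolution bottleneck alone.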
AIRDetect Can Work Substantially Quicker than Human Experts
AIRDetect was trained on a dataset containing over 900 manually annotated images, taken from 700 patients at Moorfields Eye Hospital and Royal Liverpool Hospital in the UK.
The AI then took on the challenge of segmenting 45,749 images. At a rate of roughly one second per image, the entire dataset could be processed in well under a day. By comparison, a human expert spends roughly 5 to 30 minutes per image, hundreds to nearly two thousand times longer than the AI; annotating all 45,749 images manually would add up to an estimated 2 to 12 years of full-time work.
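A quick back-of-the-envelope calculation using the article's own figures shows how the two timescales compare; the 1,900 working hours per year is an assumed figure used only to convert hours into years.

```python
# Back-of-the-envelope check of the timings above, using the article's own figures.
n_images = 45_749

ai_hours = n_images * 1 / 3600          # ~1 second per image
human_hours_fast = n_images * 5 / 60    # 5 minutes per image
human_hours_slow = n_images * 30 / 60   # 30 minutes per image

work_year = 1_900                       # assumed full-time working hours per year
print(f"AI:    ~{ai_hours:.0f} hours")                     # roughly 13 hours
print(f"Human: ~{human_hours_fast / work_year:.0f} to "
      f"~{human_hours_slow / work_year:.0f} years")        # roughly 2 to 12 years
```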
To validate their model, the group measured the overlap between AIRDetect's annotations and those produced by medical specialists, finding 65-86% agreement between the AI's outputs and the human annotations.
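Overlap of this kind is typically scored with the Dice similarity coefficient, the standard agreement measure for segmentation masks; whether the authors used exactly this metric is an assumption here, but a minimal sketch looks like this.

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity: 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: the AI's mask vs. a human grader's mask for one retinal feature.
ai_mask    = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
human_mask = np.array([[0, 1, 1], [1, 1, 0], [0, 0, 0]])
print(f"Dice: {dice_score(ai_mask, human_mask):.2f}")  # 0.86
```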
They also reported that AIRDetect could successfully identify known links between genetic and physical characteristics, while also revealing associations that had not been reported before.
Does This Study Define the Future of IRD?
AIRDetect's development signals an exciting change in the way that life-changing retinal diseases are diagnosed. Although FAF imaging is a powerful tool, there are significant barriers to its application: the time needed to manually annotate images, the expense of specialist resources, and variations in clinicians' interpretations that impede diagnostic accuracy.
AIRDetect can extract FAF image features automatically and on a far quicker timescale, working through nearly 46,000 images in the time a human expert could annotate only a tiny fraction of them.
At the same time, this could significantly cut costs by requiring less input from highly trained experts, while also ensuring that image analysis is standardized and diagnoses are as accurate as possible.
It will be interesting to see how this study will shape the future of ocular diagnostics.