EUCAST-GPT-expert: New AI Model for Personalized Antibiotic Treatment

Written by Abigail Hodder (Reporter)

A research group at the University of Zurich (Switzerland) has, for the first time, used a large language model to help detect antimicrobial resistance (AMR). The model, called “EUCAST-GPT-expert”, was customized from a foundational model to comply with a set of European guidelines for laboratory AMR testing. The AI could identify three types of resistance mechanisms based on a combination of images and quantitative test results. This study shows how clinics could collaborate with AI to fast-track treatment regimens, administering only those antibiotics to which the infecting bacteria are likely to respond.

AMR: The Clock Is Ticking

Antibiotic resistance represents a significant threat to global health. By 2050, antimicrobial-resistant infections are projected to cause more deaths each year than cancer.

New strategies are needed to circumvent this growing problem, and fast.

Ideally, pharmaceutical companies could push new classes of antibiotics, with innovative biological targets, through the drug development pipeline. However, this process is costly, time-consuming, and comes with the risk that compounds they invest in could translate poorly to clinical practice.

Alternatively, one current approach to tackling this crisis is to slow down the development of resistance with careful selection of treatment regimens.

Increasing Demand from Laboratories Puts Healthcare Systems Under Strain

By analyzing patient samples, clinicians can identify genes or phenotypes in bacteria that confer resistance to particular classes of antibiotics. They can then use this information to choose a treatment course the bacteria are still susceptible to, preventing persistent strains from surviving and developing new resistance mechanisms.

For example, the Kirby-Bauer disk diffusion test is frequently used in laboratories to determine the sensitivity of bacteria to common antibiotics, such as β-lactams (including cephalosporins and carbapenems).

In this technique, disks soaked in antibiotic solutions are placed on agar plates coated with bacteria; if the bacteria are susceptible, a clear ‘inhibitory zone’ forms around the disk. Scientists can then measure the diameter of these zones to gauge the level of resistance to different antibiotics.

However, interpreting results from this test can be slow, delaying the start of treatment and potentially putting patients at risk. Additionally, the growing need for precise diagnostic procedures could place unprecedented demands on pathology laboratories.

A lack of standardization further complicates the process, since pathologists may interpret results slightly differently, leading to variability in accuracy across clinics.

For these reasons, while laboratory testing is a necessity for slowing down AMR, finding optimal treatment plans for patients remains a challenge, particularly in areas with limited access to healthcare or insufficient resourcing.

Addressing Challenges to Resistance Testing With AI

AI, however, may provide a means of tackling this issue. Using photos (or numerical values) obtained from common laboratory tests, AI models could learn to recognize features associated with different resistance mechanisms, helping pathologists identify appropriate courses of antibiotics quickly and with more consistent accuracy across clinics.

A research group at the University of Zurich set out with this goal in mind.

For such a task, Christian G. Giske and his team at the Department of Laboratory Medicine focused on customizing a foundational model called GPT-4 (Generative Pre-trained Transformer 4), a type of large language model (LLM).

Initially, foundational models such as GPT-4 undergo an unsupervised training period, known as pre-training, in which the algorithm is exposed to large amounts of data without any provided ‘instructions’, allowing it to identify and learn general patterns in language.

The group took this pre-trained GPT model and began to train it specifically for their desired outcome, i.e., detecting bacterial resistance.

Firstly, GPT-4 underwent a process known as knowledge acquisition, in which the foundational model is provided with more specific training information; in this instance, the model was given the reference values that relate the diameter of the inhibitory zone in the Kirby-Bauer disk diffusion test to a bacterium’s susceptibility to each antibiotic. From these values, a sample can be classified as ‘resistant’, ‘susceptible’, or an intermediate between the two.
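As a rough illustration of this mapping, the sketch below shows how a measured zone diameter might be sorted into one of the three categories. The antibiotic names are real, but the breakpoint numbers are invented placeholders for illustration only, not actual EUCAST values.

```python
# Minimal sketch of zone-diameter interpretation in the Kirby-Bauer test.
# The breakpoints below are hypothetical placeholders, not real EUCAST values.

HYPOTHETICAL_BREAKPOINTS = {
    # antibiotic: (susceptible if zone >= first value, resistant if zone < second value)
    "ceftazidime": (22, 19),
    "meropenem": (24, 18),
}

def classify_zone(antibiotic: str, zone_diameter_mm: float) -> str:
    """Map an inhibitory-zone diameter to 'susceptible', 'intermediate', or 'resistant'."""
    s_cutoff, r_cutoff = HYPOTHETICAL_BREAKPOINTS[antibiotic]
    if zone_diameter_mm >= s_cutoff:
        return "susceptible"
    if zone_diameter_mm < r_cutoff:
        return "resistant"
    return "intermediate"

print(classify_zone("meropenem", 20))  # -> "intermediate"
```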

Another core component of model customization involved equipping GPT-4 with knowledge of EUCAST (European Committee on Antimicrobial Susceptibility Testing) protocols and expert rules for resistance testing. These guidelines provide detailed instructions for performing laboratory tests to identify resistance mechanisms and for interpreting the results accurately.

N.B.: Due to the text-heavy nature of these guidelines, an AI model focused on understanding language (i.e., a large language model) was ideal for this study.

The model (later named “EUCAST-GPT-expert” by the group) was further refined to avoid obvious mistakes, such as misclassifying a bacterial species’ intrinsic resistance mechanisms as those acquired through genetic mutations.
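To give a sense of what such a guard rule might look like, here is a simplified sketch. The lookup table and function are illustrative assumptions, not the group’s actual implementation or the full EUCAST intrinsic-resistance tables.

```python
# Illustrative sketch of an "expert rule" guard against reporting intrinsic
# resistance as an acquired mechanism. The table is a simplified example,
# not the EUCAST intrinsic-resistance tables.

INTRINSIC_RESISTANCE = {
    # species: antibiotics the species is naturally (intrinsically) resistant to
    "Klebsiella pneumoniae": {"ampicillin"},
    "Pseudomonas aeruginosa": {"ampicillin", "ertapenem"},
}

def is_acquired_resistance(species: str, antibiotic: str, test_resistant: bool) -> bool:
    """Flag resistance as 'acquired' only if it is not already intrinsic to the species."""
    if not test_resistant:
        return False
    return antibiotic not in INTRINSIC_RESISTANCE.get(species, set())

# Ampicillin resistance in K. pneumoniae is intrinsic, so it is not flagged as acquired.
print(is_acquired_resistance("Klebsiella pneumoniae", "ampicillin", True))  # False
```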

Putting EUCAST-GPT-expert To The Test

Once training was complete, the team compared the performance of EUCAST-GPT-expert with that of human specialists. Both the AI model and the human experts were tasked with assigning bacterial samples, based on images and test results, to one of four categories: carbapenem resistance, β-lactam resistance, cephalosporin resistance, or no resistance.

EUCAST-GPT-expert displayed a sensitivity on par with that of the human experts, with mean scores of 97.4% and 96.8% respectively in the classification task. This indicates that the AI and the human experts produced similarly few false negatives, i.e., instances where resistance mechanisms in bacterial samples went undetected.

On the other hand, the human professionals demonstrated a higher specificity in AMR classification, with a mean score of 98.2% compared to the AI’s 84.8%. This means the humans were better at correctly ruling out resistance mechanisms that were not actually present, resulting in fewer false positives.
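For readers less familiar with these metrics, the short sketch below shows how sensitivity and specificity are derived from the four possible outcomes of a classification test. The counts used are arbitrary illustrative numbers, not figures from the study.

```python
# Sensitivity and specificity from a confusion matrix (illustrative counts only).

def sensitivity(tp: int, fn: int) -> float:
    """Proportion of truly resistant samples that were flagged as resistant."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """Proportion of non-resistant samples that were correctly left unflagged."""
    return tn / (tn + fp)

# Arbitrary example: 95 true positives, 3 false negatives, 80 true negatives, 12 false positives.
print(f"sensitivity = {sensitivity(95, 3):.1%}")   # ~96.9%
print(f"specificity = {specificity(80, 12):.1%}")  # ~87.0%
```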

Implications for the Future

Nonetheless, while human specialists may classify AMR types more accurately, AI is undeniably faster. Balancing this trade-off between accuracy and efficiency will be essential when considering any future implementation of AI in clinical workflows. However, a collaborative approach could leverage the strengths of both, combining precision with speedy analysis for optimal laboratory testing.

Indeed, this work could be seen as a benchmark for future studies; with further refinement, AI models could learn to recognize a broader range of resistance mechanisms from laboratory data, and potentially with higher accuracy than seen in this study.

By working synergistically with human clinicians, AI models could facilitate faster turnaround times, allowing pathologists to quickly and effectively strategize treatment plans for patients with bacterial infections. Ideally, more precise antibiotic selection will help slow the progression of resistance over time, buying scientists time in the war against antibiotic resistance.