Wait…since when was AI psychic?
AI has gained mind-reading powers of a sort, in the form of an algorithm termed a ‘semantic decoder.’ This so-called decoder can translate an individual’s recorded brain activity into a stream of meaningful text.
Sounds ominous, right? But no, you will not be able to read random people’s minds on the street by firing invisible glowing waves into their brains, or uncover the heavily guarded dinner location where your partner actually wants to eat despite their ‘I don’t mind, wherever’ protests. Translating an individual’s brain activity into meaningful text, however, is no longer an impossible and mystical feat.
Developed in a not-so-supernatural laboratory at the University of Texas at Austin (TX, USA), the semantic decoder is an AI system that can translate brain activity into a continuous flow of text. This novel technology could help individuals who are conscious but cannot speak, or have difficulty doing so, to communicate intelligibly once more, including those affected by strokes or neurological diseases such as amyotrophic lateral sclerosis, also known as Lou Gehrig’s disease.
The decoder depends on a transformer model, a deep-learning neural network akin to those powering AI chatbots such as Google’s Bard and OpenAI’s ChatGPT. A key innovation that sets this decoder apart from its predecessors is its noninvasiveness: it requires no surgical implants. Other noninvasive decoders, meanwhile, have been limited to small sets of words or phrases.
In this study, functional magnetic resonance imaging (fMRI) was used to record participants’ brain activity while they listened to hours of podcasts in the scanner; these recordings were then used to train the decoder.
Published in Nature Neuroscience, the results demonstrate that, after training on each participant, the decoder could produce representative text from that participant’s brain activity while they listened to new podcast stories (perceived speech), imagined telling a story (imagined speech) or watched silent videos.
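The published method is considerably more sophisticated than this, but the core idea of learning a mapping from brain recordings into a space shared with text, then decoding by similarity, can be sketched with toy data. Everything below (the dimensions, the simulated linear relationship between scans and sentence embeddings, and the candidate sentences) is invented for illustration; this is not the authors’ code.

```python
import numpy as np

# Toy sketch: learn a linear map from simulated fMRI-like voxel
# responses to a sentence-embedding space, then decode a new "scan"
# by picking the candidate sentence whose embedding is closest to
# the predicted one. All data here is synthetic.

rng = np.random.default_rng(0)
n_voxels, embed_dim, n_train = 200, 16, 300

# Hypothetical training data: each scan is paired with the embedding
# of the sentence the participant was hearing at that moment.
true_map = rng.normal(size=(n_voxels, embed_dim))
train_embeddings = rng.normal(size=(n_train, embed_dim))
train_scans = (train_embeddings @ true_map.T
               + 0.1 * rng.normal(size=(n_train, n_voxels)))

# Ridge regression (closed form) from voxel space to embedding space.
lam = 1.0
X, Y = train_scans, train_embeddings
W = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

def decode(scan, candidates, candidate_embeddings):
    """Return the candidate sentence whose embedding best matches the scan."""
    pred = scan @ W
    sims = candidate_embeddings @ pred / (
        np.linalg.norm(candidate_embeddings, axis=1)
        * np.linalg.norm(pred) + 1e-9)
    return candidates[int(np.argmax(sims))]

# A new scan generated from a held-out sentence embedding.
candidates = ["leave me alone", "the dog ran home", "it started to rain"]
cand_emb = rng.normal(size=(3, embed_dim))
scan = cand_emb[0] @ true_map.T + 0.1 * rng.normal(size=n_voxels)
print(decode(scan, candidates, cand_emb))
```

The real system decodes continuous language rather than choosing among fixed candidates, which is what makes the result a leap beyond earlier word-list decoders.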
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Alex Huth, an assistant professor of neuroscience and computer science at the University of Texas at Austin, explained. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
Although the decoder does not produce an exact word-for-word transcript, it accurately captures the gist of participants’ thoughts. Approximately 50% of the text generated by the decoder closely matched the intended original words, sometimes precisely. For example, when one participant listened to the words “I didn’t know whether to scream, cry or run away. Instead I said ‘Leave me alone!’”, the decoder produced “Started to scream and cry, and then she just said, ‘I told you to leave me alone.’”
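The study’s own evaluation metrics are more rigorous than this, but a simple word-overlap score, invented here purely for illustration, shows how the example above can match in gist while differing in wording:

```python
import re

def gist_overlap(heard, decoded):
    """Fraction of content words in `heard` that also appear in `decoded`.
    A toy metric, not the one used in the study; the stop-word list is
    arbitrary."""
    stop = {"i", "to", "or", "and", "then", "she", "just",
            "me", "you", "the", "a"}
    wa = {w for w in re.findall(r"[a-z]+", heard.lower()) if w not in stop}
    wb = {w for w in re.findall(r"[a-z]+", decoded.lower()) if w not in stop}
    return len(wa & wb) / len(wa)

heard = ("I didn't know whether to scream, cry or run away. "
         "Instead I said 'Leave me alone!'")
decoded = ("Started to scream and cry, and then she just said, "
           "'I told you to leave me alone.'")
print(round(gist_overlap(heard, decoded), 2))  # prints 0.42
```

Even this crude score finds that key content words (“scream”, “cry”, “said”, “leave”, “alone”) survive the decoding, which is what “grasping the gist” means in practice.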
The researchers emphasize that this technology cannot be used on a subject who elects not to comply, as cooperation is required both to train and to apply the decoder. Participants must lie in the fMRI scanner for up to 15 hours and pay close attention to the stories. For participants who resisted, for example by thinking about other stories or imagining animals, the transcripts were unintelligible. The same was true for participants on whom the decoder had not been trained.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” declared Jerry Tang, a doctoral student in computer science at the University of Texas at Austin, addressing concerns over possible abuse of this novel technology. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
At present, the decoder cannot be used outside the laboratory because of the sheer amount of fMRI scanner time required; however, it could potentially be applied to other, more portable brain-imaging systems, such as functional near-infrared spectroscopy.
“I think right now, while the technology is in such an early state, it’s important to be proactive by enacting policies that protect people and their privacy,” Tang commented. “Regulating what these devices can be used for is also very important.”