"Mind-Reading"

AI Shows Promise For Assistive Technologies

Artificial intelligence scanning a person’s thoughts and translating them into text might seem like the stuff of dystopian fiction, but the results of one study at the University of Texas show promise for new kinds of assistive technology.

About the Study

In the study, which was published in Nature Neuroscience, researchers at the University of Texas at Austin were able to record and decode study participants’ brain activity with a non-invasive decoder. The researchers then used an artificial intelligence algorithm similar to the one that underlies ChatGPT to translate that activity into text.
 
The decoder employed functional magnetic resonance imaging (fMRI) to reconstruct continuous language from cortical semantic representations. It generated word sequences that recovered the meaning of three different types of data: perceived speech, imagined speech, and silent videos. A large language model (LLM) was then used to translate the brain activity into continuous text.
 
A brain activity decoder is nothing new. In a 2021 study published in the New England Journal of Medicine, researchers decoded sentences from cortical activity with a success rate of nearly 75 percent, and individual words with an accuracy of 47.1 percent. In a study published in Nature that same year, a brain-computer interface decoded attempted handwriting movements from neural activity in the motor cortex, essentially allowing the paralyzed participant to write simply by thinking about writing. The participant achieved a typing speed of 90 characters per minute with 94.1 percent accuracy.
 
The technologies in both of these studies, however, relied on neurological implants. The technology used in the University of Texas study allowed participants’ imagined speech to be turned into text without the use of surgical implants, a first for the field.
 
Jerry Tang, the graduate student who led the research, said in an April 25 press briefing,
 
“Currently, language decoding is done using implanted devices that require neurosurgery, and our study is the first to decode continuous language, meaning more than single words or sentences, from non-invasive brain recordings, which we collect using functional MRI.”
 
The Process

In the first part of the study, three volunteers lay in an fMRI scanner for 16 hours, listening to podcasts. At the same time, the scanner recorded the participants’ brain activity. Researchers measured blood-flow changes in the participants’ brains and aligned this information with the details of the stories the participants were listening to.
 
An LLM algorithm can provide information about how words relate to one another. By combining this ability with the information gathered from the fMRI scan, researchers were able to develop a map of how each of the participants’ brains responded to different words and phrases.
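
To make that concrete, here is a minimal sketch, in Python, of what such an encoding model might look like. Everything in it is an assumption for illustration, not the study’s actual code: the random stand-in data, the ridge-regression choice, and names like encoding_model are all hypothetical. The point is simply the shape of the idea: learn a map from language-model features of the words a participant hears to the brain response those words evoke.

```python
# A minimal sketch of an encoding model, assuming ridge regression over
# LLM-derived word features. All data here is random stand-in data, and
# the variable names are illustrative, not taken from the study.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
n_volumes, n_features, n_voxels = 1000, 256, 500

# X: one row per fMRI volume; columns are language-model features of the
# words heard around that time point (e.g., pooled word embeddings).
X = rng.standard_normal((n_volumes, n_features))
# Y: the measured brain response, one column per voxel.
Y = rng.standard_normal((n_volumes, n_voxels))

# Learn a regularized linear map from word features to brain response.
encoding_model = Ridge(alpha=1.0).fit(X, Y)

# Given features for a candidate phrase, predict the brain activity it
# should evoke; decoding later compares such predictions to a recording.
predicted_response = encoding_model.predict(X[:1])
```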

In the next part of the study, participants either watched a silent film, listened to a story, or imagined telling a story. Using the patterns encoded in the first part of the study, along with the algorithms that predict how a sentence is likely to be constructed based on the other words in that sentence, the researchers attempted to decode participants’ brain activity and render it in text form.
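
That decoding step can be pictured as a generate-and-score loop. The toy sketch below is not the researchers’ code; the helper names (propose_words, predict_response, similarity) are assumptions standing in for the language model, the trained encoding model, and a comparison against the observed recording. It shows the general shape: the language model proposes candidate next words, the encoding model predicts the brain response each candidate would evoke, and a beam search keeps the word sequences whose predicted responses best match the recording.

```python
# Toy sketch of decoding as a generate-and-score beam search. The helper
# names (propose_words, predict_response, similarity) are illustrative
# assumptions, not the study's actual interfaces.
import heapq

def decode(observed_response, propose_words, predict_response,
           similarity, steps, beam_width=5):
    """Beam search over candidate word sequences."""
    beam = [("", 0.0)]  # (sequence so far, cumulative score)
    for _ in range(steps):
        candidates = []
        for seq, score in beam:
            for word in propose_words(seq):  # language model proposes words
                new_seq = (seq + " " + word).strip()
                predicted = predict_response(new_seq)  # encoding model
                s = score + similarity(predicted, observed_response)
                candidates.append((new_seq, s))
        # Keep only the sequences whose predicted responses best match
        # the actual recording.
        beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    return beam[0][0]  # highest-scoring word sequence
```

Because fMRI is too slow to pin down individual words, scoring whole candidate sequences against the recording in this way tends to recover the gist of a passage rather than an exact transcript, which matches the accuracy results described below.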

How Accurate Were the Results?
 
The decoder was able to get the gist of what study participants were thinking. It was also fairly accurate in describing what participants were viewing in the film.
 
At the same time, many of the sentences ultimately produced were inaccurate. For example:
 
Actual stimulus:
 
I didn’t know whether to scream cry or run away instead I said leave me alone I don’t need your help adam disappeared and I cleaned up alone crying
 
Decoded stimulus:
 
Started to scream and cry and then she just said I told you to leave me alone you can’t hurt me anymore I’m sorry and then he stormed off I thought he had left I started to cry
 
The researchers also found that participants could easily deceive the technology. In one part of the experiment, participants listened to a recorded story while imagining a different story. The decoder couldn’t determine which words the participants were hearing. 
 
In addition, the encoded maps of brain activity differed from person to person. A map created for one individual’s thoughts couldn’t be used to decode the thoughts of another, and researchers were unable to create a single map that worked for all participants.

Francisco Pereira, a neuroscientist with the US National Institute of Mental Health, said that, given the monumental task of determining how the brain creates meaning from language, “It’s impressive to see someone pull it off.”
 
At the same time, this technology still has a way to go before it can accurately read and render an individual’s thoughts word for word.

Medical Applications

Nonetheless, decoding brain activity and re-encoding it as text without relying on invasive implants is a huge leap forward. And the results of the study are promising for the development of assistive technology for people with communication difficulties.

Says Tang, “Eventually, we hope that this technology can help people who have lost the ability to speak due to injuries like strokes, or diseases like ALS.”

What About Privacy Concerns?

With any new technology, it’s important to consider potential abuses. Mental privacy is sacrosanct, and when it comes to mind-reading technology, concerns about the protection of mental privacy are well founded.

But several findings of this study are encouraging in this regard.

First, as noted, both training the decoder and applying it to the data required the cooperation of the test subjects. Second, the decoder was easy for test subjects to deceive. Third, the maps created from participants’ brain activity were unique to each participant, and researchers were unable to create a “one size fits all” brain activity map that worked for all of the test subjects.
 
And then there’s the fact that fMRI machines are large, difficult to use, and not portable.
 
Still, ethicists agree that there’s a danger in the misuse of the results of this type of technology, especially in a legal context. Similar issues have already arisen with polygraph results, for example. And because the technology is, as yet, unable to tell truth from fiction, its use as legal evidence poses particular problems for individuals with unwanted, intrusive thoughts that they would never act upon.
 
Says Gabriel Lázaro-Muñoz, a bioethicist at Harvard Medical School, “I think it’s a big wake-up call for policymakers and the public.”