An AI-based decoder now exists that can translate brain activity into text, allowing a person's thoughts to be read non-invasively. While it is, technically, a mind-reading machine, there is still some way to go before we reach the dystopian vision of Brave New World; given the speed of development, though, it may not be long before courts install lie detectors in witness boxes and the police interview you with the help of a truth machine.
Dr Alexander Huth, of the University of Texas at Austin's Neuroscience Department, said: “We were kind of shocked that it works as well as it does. I’ve been working on this for 15 years … so it was shocking and exciting when it finally did work.”
The decoder reconstructs speech while people listen to a story or silently imagine one. This is not entirely new, because surgical implants have allowed it before, but the current scheme uses only fMRI scan data. It raises the prospect of new ways to restore speech in patients struggling to communicate after a stroke or with motor neurone disease, and may even allow patients in certain types of coma to communicate what they are thinking.

The Austin team's device overcomes a fundamental limitation of fMRI: the inherent time lag, which previously made tracking activity in real time impossible. fMRI measures the blood-flow response to brain activity, which lags by around 10 seconds, making any reading a noisy and sluggish proxy. Generative large language models (LLMs) provided a new solution: they can represent the semantic meaning of speech as numbers, allowing scientists to map neural activity to strings of words with a particular meaning, rather than attempting to identify activity word by word.

Under the new system, the text closely or precisely matches the intended meaning of the original words about 50% of the time, giving output at the level of ideas, semantics and meaning. For example:
1. A participant’s words “I don’t have my driver’s licence yet” were decoded as “She has not even started to learn to drive yet”.
2. A participant’s words “I didn’t know whether to scream, cry or run away. Instead, I said: ‘Leave me alone!’” were decoded as “Started to scream and cry, and then she just said: ‘I told you to leave me alone.’”
3. Participants asked to watch four short, silent videos produced outputs that accurately described some of the content.
Like humans, however, the machine struggles with certain aspects of language, including pronouns, although the researchers do not know why. The decoding is also unique to each individual: a personalised decoder produces gibberish when used on someone else.
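To make the mapping idea above more concrete, here is a minimal sketch of the general approach: an encoding model learns to predict fMRI responses from the semantic features of word sequences, and decoding then becomes a search for the candidate sequence whose predicted response best matches the measured one. Everything here is illustrative rather than the Austin team's actual code: the embed() function is a stand-in for a real language-model embedding, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(0)
EMBED_DIM, N_VOXELS = 64, 200


def embed(text: str) -> np.ndarray:
    """Stand-in for an LLM sentence embedding: a deterministic pseudo-random vector per text."""
    seed = abs(hash(text)) % (2**32)
    return np.random.default_rng(seed).standard_normal(EMBED_DIM)


def fit_encoding_model(embeddings: np.ndarray, responses: np.ndarray, alpha: float = 1.0) -> np.ndarray:
    """Ridge regression mapping semantic features -> voxel responses (closed form)."""
    X, Y = embeddings, responses
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ Y)


def decode(measured: np.ndarray, candidates: list[str], W: np.ndarray) -> str:
    """Pick the candidate whose *predicted* response correlates best with the scan."""
    def corr(a, b):
        a, b = a - a.mean(), b - b.mean()
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
    scores = [corr(embed(c) @ W, measured) for c in candidates]
    return candidates[int(np.argmax(scores))]


# Toy demo: train the encoding model on simulated (sentence, response) pairs.
train_sentences = [f"training sentence {i}" for i in range(300)]
X_train = np.stack([embed(s) for s in train_sentences])
true_W = rng.standard_normal((EMBED_DIM, N_VOXELS))
Y_train = X_train @ true_W + 0.1 * rng.standard_normal((300, N_VOXELS))
W = fit_encoding_model(X_train, Y_train)

# Simulate a scan taken while the participant hears the target sentence, then decode it.
target = "I don't have my driver's licence yet"
measured_response = embed(target) @ true_W
candidates = [target,
              "She has not even started to learn to drive yet",
              "The weather was pleasant that afternoon"]
print(decode(measured_response, candidates, W))
```

The point of the sketch is the direction of the comparison: rather than reading words directly out of the scan, the system asks which of many candidate word strings would have produced a scan most like the one actually observed.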
There are serious concerns that the system could be misused by bad actors, although it may also open up new understanding of dreaming, or help us work out how humans generate new ideas.
In a similar development, a team from the University of Oregon has built a system that can read people’s thoughts via brain scans and reconstruct the faces they were visualising in their heads, although it is still somewhat inaccurate and borderline creepy. Following training and personalisation, the system managed to reconstruct faces seen by participants, based on activity from two separate regions of the brain: the angular gyrus (ANG), which is involved in a number of processes related to language, number processing, spatial awareness and the formation of vivid memories; and the occipitotemporal cortex (OTC), which processes visual cues. It is expected that, as the system is perfected, brain activity will be used to reconstruct crime scenes from memories and to develop accurate pictures of criminals seen by witnesses.
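For readers curious how a picture can come out of a brain scan at all, here is a minimal sketch of the general technique usually used for this kind of work: learn a linear mapping from voxel activity (for example, ANG/OTC patterns) to a low-dimensional face code such as PCA "eigenface" coordinates, then invert the PCA step to get an image back. This is an illustration with simulated data, not the Oregon team's pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
IMG_PIXELS, N_COMPONENTS, N_VOXELS, N_TRIALS = 32 * 32, 20, 150, 400

# Simulated face images and the voxel responses they evoke (stand-ins for real data).
faces = rng.standard_normal((N_TRIALS, IMG_PIXELS))
mean_face = faces.mean(axis=0)

# PCA via SVD: each face becomes a short vector of component scores ("eigenface" code).
U, S, Vt = np.linalg.svd(faces - mean_face, full_matrices=False)
components = Vt[:N_COMPONENTS]                      # (N_COMPONENTS, IMG_PIXELS)
face_codes = (faces - mean_face) @ components.T     # (N_TRIALS, N_COMPONENTS)

# Simulated brain data: voxel responses are a noisy linear function of the face code.
code_to_voxels = rng.standard_normal((N_COMPONENTS, N_VOXELS))
voxels = face_codes @ code_to_voxels + 0.1 * rng.standard_normal((N_TRIALS, N_VOXELS))

# Train a ridge regression from voxel patterns back to face codes (the "decoder").
alpha = 1.0
W = np.linalg.solve(voxels.T @ voxels + alpha * np.eye(N_VOXELS), voxels.T @ face_codes)

# Reconstruct one trial: predict its face code from the voxels, then map back to pixels.
predicted_code = voxels[0] @ W
reconstruction = mean_face + predicted_code @ components
print("reconstruction error:", float(np.mean((reconstruction - faces[0]) ** 2)))
```

In practice the quality of the result depends on how much of the face code the chosen brain regions actually carry, which is why current reconstructions remain imperfect.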

