
Meta develops AI for ‘reading minds’
Meta has published research on a ‘brain decoder’ that uses artificial intelligence to convert thoughts into speech.
According to the researchers, the method relies on non-invasive recordings of brain activity: electroencephalography (EEG) and magnetoencephalography (MEG), both of which use external sensors.
Using these, the team collected more than 150 hours of recordings from 169 healthy volunteers to train the algorithm. The developers noted that EEG and MEG are less reliable than implanted brain sensors, so they had to collect more data to improve the accuracy of the AI model.
The team recorded participants’ brain responses to audiobooks and individual phrases in English and Dutch. The algorithm then extracted the relevant words from the text and built a ‘dictionary’ that enabled the reverse process: decoding thoughts and converting them into text.

According to the researchers, the algorithm achieved 73% accuracy using a set of 793 words commonly used in everyday life.
Researchers believe their development will help not only millions of people who have lost the ability to speak and write, but also advance the study of the human brain.
In the future, the researchers plan to expand the algorithm’s initial vocabulary so that it can identify words more accurately.
Earlier, in August, Meta unveiled a chatbot with 175 billion parameters. Less than a week after its release, the virtual assistant was accused of antisemitism and of criticizing Facebook.
In July, Meta researchers developed the AI algorithm Sphere for fact-checking on Wikipedia.
In the same month, the tech giant introduced the AI model NLLB-200 for online translation. The algorithm supports 200 languages, including low-resource ones.