AI learns to turn human thoughts into text

Researchers at the University of Texas at Austin have developed a non-invasive AI system that translates human brain activity into a stream of text. The study results were published in the journal Nature Neuroscience.

To collect brain activity data, participants lay in an fMRI scanner while listening to several hours of podcasts. The researchers then built a decoder algorithm that works on the same principle as chatbots such as ChatGPT or Bard.

In the end, the trained AI system was able to generate a stream of text while a participant listened to recordings or imagined telling a new story.

The produced text is not an exact transcript. According to the researchers, the algorithm is more likely to capture general thoughts and ideas.

According to the press release, the trained system yields text that closely or exactly matches the intended meaning of the participant’s original words in about half the cases.

For example, when a participant heard the words “I still haven’t obtained a driver’s license” during the experiment, the decoder rendered the thought as “she hasn’t even started learning to drive yet”.

“For a non-invasive method, this is a real leap forward compared with what was done previously, which was typically individual words or short phrases,” said Alexander Huth, one of the study’s leaders.

In his view, the decoding model can sustain lengthy sessions and handle complex ideas.

Participants were also asked to watch four silent videos while in the scanner. As a result, the AI system was able to accurately recognise “specific events” from the clips, the researchers said.

The brain activity decoder cannot be used outside the laboratory because it requires an fMRI scanner. But the researchers believe the algorithm will become useful once more portable brain-imaging systems are available.

The developers also believe the technology will benefit patients who are physically unable to communicate after a stroke, paralysis, or a degenerative disease.

The study’s lead authors intend to patent the technology.

In March, Japanese researchers taught AI to recreate images from brain activity data using Stable Diffusion.

