Scientists have made new improvements to a “brain decoder” that uses artificial intelligence (AI) to convert thoughts into text, according to Live Science.

The new “converter algorithm” can quickly adapt an existing decoder, trained on one person’s brain, to another person’s brain, a new study reports. The findings could one day help people with aphasia, a brain disorder that affects a person’s ability to communicate, the scientists said.

Specifically, the brain decoder uses machine learning to turn a person’s thoughts into text, based on their brain’s responses to stories they hear. However, previous versions of the decoder required participants to listen to stories inside a magnetic resonance imaging (MRI) machine for many hours, and those decoders worked only for the people they were trained on.

“People with aphasia often have trouble understanding language as well as producing language,” said study co-author Alexander Huth, a neuroscientist at the University of Texas at Austin (UT Austin). “So if that’s the case, we may not be able to build models for their brains by watching how their brains respond to stories they hear.”

In the new study, published February 6 in the journal Current Biology, Huth and co-author Jerry Tang, a graduate student at UT Austin, investigated how they could overcome this limitation. “In this study, we wondered, can we do things differently?” he said. “Can we actually transfer a decoder that we built for one person’s brain to another person’s brain?”

How the research was done

The scientists divided the participants into two groups. The researchers first trained the brain decoder on a few participants in Group 1, collecting functional magnetic resonance imaging (fMRI) data while those participants listened to 10 hours of radio stories.

They then trained two converter algorithms on the Group 1 and Group 2 participants: one was trained using data collected while participants in both groups spent 70 minutes listening to radio stories, and the other using data collected while they spent 70 minutes watching silent Pixar short films unrelated to the radio stories.

Using a technique called functional alignment, the researchers mapped how the brains of participants in both groups responded to the same audio or film stories. They then used this information to train the decoder to work with the brains of Group 2 participants, without having to collect many hours of training data.
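To make the idea of functional alignment more concrete, here is a minimal, hypothetical sketch in Python. It is not the authors’ code or pipeline; it only illustrates one common way such an alignment can be done, by learning a linear (ridge regression) mapping between two people’s brain responses to the same stimulus and then reusing a decoder trained on the first person. All array names and the `trained_decoder` object are assumptions for illustration.

```python
# Conceptual sketch of cross-person functional alignment (not the study's actual code).
# Assumed hypothetical inputs: fMRI responses recorded while two people experience
# the same stimulus, with time points as the shared samples:
#   resp_person1: array of shape (n_timepoints, n_voxels_person1)
#   resp_person2: array of shape (n_timepoints, n_voxels_person2)

import numpy as np
from sklearn.linear_model import Ridge

def fit_alignment(resp_person2, resp_person1, alpha=1.0):
    """Learn a linear map from person 2's voxel space into person 1's voxel space."""
    model = Ridge(alpha=alpha)
    model.fit(resp_person2, resp_person1)  # shared time points act as training samples
    return model

def decode_new_person(alignment_model, new_resp_person2, trained_decoder):
    """Project person 2's new brain responses into person 1's space,
    then apply a decoder that was trained only on person 1 (hypothetical API)."""
    projected = alignment_model.predict(new_resp_person2)
    return trained_decoder.predict(projected)
```

The design point this sketch captures is the one described above: once responses to a shared stimulus let you map one brain’s activity into another’s response space, the expensive, many-hour decoder training does not have to be repeated for the new person.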

The team then tested the decoders using a short story that none of the participants had heard before. Although the decoder’s predictions were somewhat more accurate for the original Group 1 participants than for the participants decoded via the converter algorithms, the words it predicted from each participant’s brain scans were still semantically related to the words in the test story.

For example, one section of the test story featured someone discussing a job they didn’t like, saying, “I’m a waitress at an ice cream parlor. Well, I don’t know … I don’t know where I want to be, but I know it’s not here.” The decoder using the converter algorithm trained on the film data predicted: “I was at a job I thought was boring. I had to take orders and I didn’t like it, and I worked every day.” It is not an exact match, Huth said, because the decoder does not read out the exact sounds people heard, but the ideas are related.

Future impact

“The surprising and cool thing was that we can do this even without using language data,” Huth told Live Science. “So we can have data that we collect just while someone is watching silent videos, and then we can use that to build this language decoder for their brain.”

The team’s next step is to test the converter algorithm on participants with aphasia and to “help them say what they’re thinking of saying,” Huth said.