While artificial intelligence has recently gained a foothold in technology, it is also gaining traction in science.
Artificial intelligence is being studied by scientists from a range of fields.
A peer-reviewed study published Monday in the journal Nature Neuroscience, for example, detailed how it may be applied to brain activity.
According to the study, scientists created a noninvasive artificial intelligence system that can turn people’s brain activity into a stream of text.
Artificial intelligence & neuroscience
Artificial intelligence can help neuroscience by making the assessment of large-scale datasets more efficient and precise.
It offers the potential to generate more realistic models of neuronal systems and processes.
It can also help with the creation of novel diagnostic and therapeutic tools for neurological conditions.
The system is referred to as a semantic decoder.
It might help those who have lost their physical capacity to communicate due to a stroke, paralysis, or other degenerative disorders.
Researchers at the University of Texas at Austin devised the approach using a transformer model.
The concept of the transformer is similar to that of OpenAI’s ChatGPT and Google’s Bard.
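The article does not describe the model’s internals, but the defining operation of transformer architectures like those behind ChatGPT and Bard is attention: each token’s representation is updated as a similarity-weighted mix of the others. A toy numpy sketch of that single operation (arbitrary sizes, no relation to the study’s actual model):

```python
# Toy illustration of scaled dot-product attention, the core operation of
# transformer models. All sizes here are arbitrary stand-ins.
import numpy as np

def attention(Q, K, V):
    """Each query mixes the rows of V, weighted by query-key similarity."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                   # query-to-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over keys
    return weights @ V                              # weighted mix of values

rng = np.random.default_rng(1)
X = rng.normal(size=(4, 8))   # 4 token embeddings, 8 dimensions each
out = attention(X, X, X)      # self-attention: tokens attend to each other
print(out.shape)              # one updated 8-dim vector per token
```

In a full transformer this operation is stacked in many layers with learned projections; the sketch shows only the mechanism itself.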
Participants in the study trained the decoder by listening to hours of podcasts inside an fMRI machine, a large piece of equipment used to track brain activity.
The semantic decoder does not require a surgical implant.
By using machine learning to analyze the patterns of brain activity involved in language processing, artificial intelligence may help neuroscience develop ways to turn thoughts into text.
An AI system can recognize certain words or phrases a person is thinking about by monitoring those activity patterns, then use that knowledge to produce corresponding text output.
This technology has the potential to transform communication for those who are unable to talk or type, such as those with severe paralysis or communication problems.
More research is needed, however, to improve the precision and dependability of these systems, as well as to address the ethical and privacy concerns associated with accessing and interpreting people’s thoughts.
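The decoding idea described above can be illustrated with a toy example: a linear “encoding model” predicts a brain response from a sentence, and the decoder picks whichever candidate sentence best explains the observed scan. Everything below (the dimensions, the `embed` function, the candidate sentences) is a made-up stand-in for illustration, not the study’s actual method:

```python
# Illustrative sketch only: a toy "semantic decoder" that scores candidate
# sentences against an observed brain-activity vector. The real system uses
# a transformer language model and fMRI data; every name and dimension here
# is hypothetical.
import numpy as np

rng = np.random.default_rng(0)

VOXELS = 50   # pretend fMRI features
EMBED = 16    # pretend sentence-embedding size

# Hypothetical linear encoding model: maps a sentence embedding to a
# predicted brain response (in practice, fit on hours of training data).
W = rng.normal(size=(VOXELS, EMBED))

def embed(sentence: str) -> np.ndarray:
    """Toy deterministic 'embedding': bucket words into a fixed-size vector."""
    v = np.zeros(EMBED)
    for word in sentence.lower().split():
        v[sum(map(ord, word)) % EMBED] += 1.0
    return v / max(np.linalg.norm(v), 1e-9)

def predict_response(sentence: str) -> np.ndarray:
    """Predicted brain response for a sentence, via the encoding model."""
    return W @ embed(sentence)

def decode(observed: np.ndarray, candidates: list[str]) -> str:
    """Pick the candidate whose predicted response best matches the scan."""
    def score(s):
        p = predict_response(s)
        return p @ observed / (np.linalg.norm(p) * np.linalg.norm(observed))
    return max(candidates, key=score)

# Simulate a noisy scan of someone thinking one sentence, then decode it.
true_thought = "she has not started to learn to drive"
observed = predict_response(true_thought) + rng.normal(scale=0.05, size=VOXELS)

candidates = [
    "she has not started to learn to drive",
    "the weather is nice today",
    "he cooked dinner for his family",
]
print(decode(observed, candidates))  # the matching candidate wins
```

The real system searches over candidate continuations proposed by a language model rather than a fixed list, which is why its output paraphrases the gist of a thought instead of transcribing it word for word.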
As users listen to or anticipate hearing a new narrative, the AI system generates a stream of text.
The text produced may not be an exact transcript, but the researchers intended it to convey the key ideas.
According to a press release, the trained system produces text that closely matches the intended meaning of the participant’s original thought roughly half of the time.
For example, when a research participant heard the words “I don’t have my driver’s license yet,” the decoder rendered the thought as “She hasn’t even begun to learn to drive yet.”
The absence of implants
Alexander Huth, one of the study’s lead researchers, stated:
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences.”
“We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
Unlike previous decoding systems in development, the semantic decoder does not require surgical implants, making it noninvasive.
Participants are also not required to use only terms from a predefined list.
The researchers also addressed concerns about the technology’s possible abuse.
The researchers observed that decoding worked only when participants had willingly cooperated in training the decoder.
Results from individuals on whom the decoder had not been trained were incomprehensible.
Furthermore, participants who used the decoder but actively resisted it produced unusable results.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” said researcher Jerry Tang.
“We want to make sure people only use these types of technologies when they want to and that it helps them.”
Due to the size of the equipment and the time required, fMRI scanning is currently practical only in the laboratory.
According to the researchers, the findings might be applied to other, more portable brain-imaging technologies, such as functional near-infrared spectroscopy (fNIRS).
“fNIRS measures where there’s more or less blood flow in the brain at different points in time, which, it turns out, is exactly the same kind of signal that fMRI is measuring,” said Huth.
“So, our exact kind of approach should translate to fNIRS.”