Scientists Have Developed An AI Decoder That Can Translate Brain Activity Into Words
The artificial intelligence could one day be used to restore communicative abilities to individuals who are physically unable to speak.
Using a newly developed artificial intelligence tool known as a “semantic decoder,” researchers from the University of Texas at Austin can now translate a person’s brain activity into text.
Per a report from SciTechDaily, the semantic decoder AI system was trained by having participants listen to hours of podcasts while their brains were scanned using functional magnetic resonance imaging (fMRI) technology.
From subsequent brain scans alone, the AI is then able to generate text.
Notably, this method differs from other in-development language decoding systems because it is noninvasive and requires no surgery. Participants in similar studies were also frequently required to use words from a prescribed list, whereas the new semantic decoder has no such limitation.
The research, recently published in the journal Nature Neuroscience, was led by Jerry Tang, a doctoral student in computer science, and Alex Huth, an assistant professor of neuroscience and computer science at UT Austin.
“For a noninvasive method, this is a real leap forward compared to what’s been done before, which is typically single words or short sentences,” Huth said. “We’re getting the model to decode continuous language for extended periods of time with complicated ideas.”
But while the AI system can decode a person’s thoughts with reasonable accuracy, the language it outputs is not exact. Think of it less as “mind reading” and more as “mind interpreting.” And this is by design.
Rather than capture a word-for-word transcript of participants’ thoughts, researchers designed the AI to summarize them, producing a transcript that captures the main point.
In one instance, a participant heard a speaker say, “That night I went upstairs to what had been our bedroom and not knowing what else to do I turned out the lights and lay down on the floor.” The decoder then put out text that read, “We got back to my dorm room I had no idea where my bed was I just assumed I would sleep on it but instead I lay down on the floor.”
Generally speaking, though, the AI captured the gist of what participants were thinking a significant portion of the time.
But as with all things related to AI, this new technology raises plenty of ethical concerns, along with fears about the future implications of potential “mind-reading” technology.
In an article for the journal Nature, bioethicist Gabriel Lázaro-Muñoz of Harvard Medical School in Boston said, “I’m not calling for panic, but the development of sophisticated, noninvasive technologies like this one seems to be closer on the horizon than we expected. I think it’s a big wake-up call for policymakers and the public.”
Meanwhile, others have asserted that the time for worry is not yet here, focusing instead on the positive potential of the AI semantic decoder’s capability to restore communicative function to individuals physically incapable of speech.
“I just don’t think it’s time to start worrying,” Dartmouth College philosopher of science Adina Roskies said. “There are lots of other ways the government can tell what we’re thinking.”
Still, the ethical concerns about this new technology were not lost on Tang and Huth.
“We take very seriously the concerns that it could be used for bad purposes and have worked to avoid that,” Tang said. “We want to make sure people only use these types of technologies when they want to and that it helps them.”
Tang reiterated this stance during a press conference, drawing comparisons to polygraph tests. “The polygraph is not accurate but has had negative consequences,” he said. “Nobody’s brain should be decoded without their cooperation.”
Tang and Huth also urged policymakers to proactively consider and address the potential uses and misuses of mind-reading technologies.
“I think right now, while the technology is in such an early state, it’s important to be proactive by enacting policies that protect people and their privacy,” Tang said. “Regulating what these devices can be used for is also very important.”