April 29, 2025
Brain-computer interface restores natural speech after paralysis
At a Glance
- Researchers developed a brain-computer interface that quickly translates brain activity into audible words.
- Such devices could allow people who've lost the ability to speak from paralysis or disease to engage in natural conversations.

Brain injury from conditions like stroke can cause paralysis, including loss of the ability to speak. Scientists have been developing brain-computer interfaces, or BCIs, that can translate brain activity into written or audible words to restore communication. But earlier devices had a notable delay between a person thinking what they wanted to say and the computer delivering the words. Even brief time lags can disrupt the flow of a conversation, leaving people feeling frustrated or isolated.
An NIH-funded team led by Dr. Edward F. Chang at the University of California, San Francisco, and Dr. Gopala Anumanchipalli at the University of California, Berkeley, set out to develop an improved brain-to-voice neuroprosthesis. The ideal device would stream audible speech without delay while a person silently attempted to speak.
The researchers implanted an array of electrodes in a 47-year-old woman with paralysis, placing the array over the brain area where speech is encoded. She hadn't been able to speak or make any vocal sounds for 18 years following a stroke. The team then used a deep learning system they designed to translate the woman's thoughts into spoken words. Results appeared in Nature Neuroscience on March 31, 2025.
To train the system, the team recorded the woman鈥檚 brain activity as she silently attempted to speak a series of sentences. The sentences included more than 1,000 different words taken from social media and movie transcripts. Altogether, she made more than 23,000 silent attempts to speak more than 12,000 sentences.
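To picture how such a system is trained, the sketch below pairs recorded neural activity with the text of each attempted sentence and fits a generic sequence model. This is a minimal illustration only, not the authors' code: the names (SpeechDecoder, the batch fields yielded by the loader) are hypothetical, and the CTC-style loss is a common stand-in for unaligned neural-to-text decoding rather than the specific architecture described in the paper.

```python
# Illustrative training sketch (hypothetical names; not the published implementation).
# Each example pairs neural activity recorded during a silent speech attempt
# with the sentence the participant attempted to say.
import torch
from torch import nn

class SpeechDecoder(nn.Module):
    """Toy stand-in for the deep learning decoder: neural features -> token logits."""
    def __init__(self, n_channels: int, n_tokens: int, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(n_channels, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_tokens)

    def forward(self, neural_features):            # (batch, time, channels)
        h, _ = self.rnn(neural_features)
        return self.out(h)                          # (batch, time, n_tokens)

def train(model, loader, n_epochs=10, blank_id=0):
    # A CTC-style loss lets unaligned windows of brain activity map to a word/phoneme sequence.
    loss_fn = nn.CTCLoss(blank=blank_id, zero_infinity=True)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(n_epochs):
        for feats, feat_lens, targets, target_lens in loader:
            log_probs = model(feats).log_softmax(-1).transpose(0, 1)  # (time, batch, tokens)
            loss = loss_fn(log_probs, targets, feat_lens, target_lens)
            opt.zero_grad()
            loss.backward()
            opt.step()
```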
Video: A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Credit: Berkeley Engineering
The system was trained to decode words and turn them into speech in increments of 80 milliseconds (0.08 seconds). For comparison, people typically speak at about two to three words per second, or roughly 150 words per minute. The system then delivered audible words using the woman's own voice, captured from a recording made before her stroke.
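The streaming behavior described above can be pictured as a loop that decodes each 80-millisecond chunk of neural features as soon as it arrives and synthesizes it into audio immediately, rather than waiting for the end of a sentence. The sketch below is an assumption-laden illustration: the decoder and vocoder interfaces (decode_incremental, synthesize) are hypothetical placeholders, not the published system, and the 200 Hz feature rate is made up for the example.

```python
# Illustrative streaming loop (hypothetical interfaces; not the published system).
# Neural features arrive continuously; every 80 ms chunk is decoded and
# synthesized into audio in the participant's own (pre-recorded) voice.
from collections import deque

CHUNK_MS = 80  # decoding increment reported in the article

def stream_speech(feature_stream, decoder, vocoder, feature_rate_hz=200):
    """feature_stream yields one neural feature vector per sample (assumed 200 Hz here)."""
    samples_per_chunk = int(feature_rate_hz * CHUNK_MS / 1000)
    buffer = deque()
    for features in feature_stream:
        buffer.append(features)
        if len(buffer) == samples_per_chunk:
            chunk = list(buffer)
            buffer.clear()
            units = decoder.decode_incremental(chunk)  # speech units for this 80 ms window
            audio = vocoder.synthesize(units)          # waveform in the participant's voice
            yield audio                                # play back right away; no end-of-sentence wait
```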
The system was able to decode the full vocabulary set at a rate of 47.5 words per minute. It could decode a simpler set of 50 words even more rapidly, at 90.9 words per minute. That鈥檚 much faster than an earlier device the researchers had developed, which decoded about 15 words per minute with a 50-word vocabulary. The new device had a more than 99% success rate in decoding and synthesizing speech in less than 80 milliseconds. It took less than a quarter of a second to translate speech-related brain activity into audible speech.
The researchers found that the system wasn't limited to trained words or sentences. It could decode novel words and new sentences to produce fluent speech. The device could also produce speech indefinitely, without interruption.
"Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses," Anumanchipalli says. "Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis."
"This new technology has tremendous potential for improving quality of life for people living with severe paralysis affecting speech," Chang says. "It is exciting that the latest AI advances are greatly accelerating BCIs for practical real-world use in the near future."
The results show that such devices can allow those unable to speak to join in more natural conversation again. But further study is needed to test the device in more people. The researchers also want to continue improving the system. For example, they want to allow changes in tone, pitch, and volume to produce speech reflecting a person's emotional state.
by Kendall K. Morgan, Ph.D.
Related Links
- Brain-Computer Interface Helps Paralyzed Man Speak
- Device Allows Paralyzed Man to Communicate with Words
- How the Brain Produces Speech
- Scientists Translate Brain Activity into Music
- Brain Decoder Turns a Person's Brain Activity into Words
- System Turns Imagined Handwriting into Text
- Scientists Create Speech Using Brain Signals
References: A streaming brain-to-voice neuroprosthesis to restore naturalistic communication. Littlejohn KT, Cho CJ, Liu JR, Silva AB, Yu B, Anderson VR, Kurtz-Miott CM, Brosler S, Kashyap AP, Hallinan IP, Shah A, Tu-Chan A, Ganguly K, Moses DA, Chang EF, Anumanchipalli GK. Nat Neurosci. 2025 Apr;28(4):902-912. doi: 10.1038/s41593-025-01905-6. Epub 2025 Mar 31. PMID: 40164740.
Funding: NIH's National Institute on Deafness and Other Communication Disorders (NIDCD); Japan Science and Technology Agency's Moonshot Research and Development Program; Joan and Sandy Weill Foundation; Susan and Bill Oberndorf; Ron Conway; Graham and Christina Spencer and the William K. Bowes Jr. Foundation; UC Noyce Initiative; Rose Hills Innovator program; Google Research Scholar Award; National Science Foundation; BAIR.