
AI enables brain implant to convert thoughts into speech
Human augmentation is opening the door to a future where becoming superhuman is within reach. Since ancient times, people have striven to restore lost human abilities. Common examples include glasses to restore vision, prostheses to enable walking or the use of arms, and hearing aids to enhance hearing. These early forms of human augmentation laid the groundwork for the advanced technologies we use today.
The latest achievement with an implant linking the brain to a computer has raised hopes that such devices could allow people who have lost the ability to communicate to regain their voice. A brain implant using artificial intelligence (AI) turned a paralyzed woman’s thoughts into speech almost simultaneously, United States researchers said on March 31, 2025. However, the technology is still at the experimental stage.
The experiment was performed on Ann, a 47-year-old former high school math teacher who has not been able to speak since suffering a stroke 18 years ago. The California-based team of researchers had previously used a brain-computer interface (BCI) to decode Ann’s thoughts and translate them into speech. But there was an eight-second delay between her thoughts and the speech being read aloud by a computer, so a flowing conversation was still out of reach for Ann.
But the team’s new model, revealed in the journal Nature Neuroscience, turned Ann’s thoughts into a version of her old speaking voice in 80-millisecond increments. In an interview with Agence France-Presse (AFP), senior study author Gopala Anumanchipalli of the University of California, Berkeley, said: “Our new streaming approach converts her brain signals to her customized voice in real time, within a second of her intent to speak.” He added: “Ann’s eventual goal is to become a university counselor.”
For the research, Ann was shown sentences on a screen — such as “You love me then” — which she would say to herself in her mind. Then her thoughts would be converted into her voice, which the researchers built up from recordings of her speaking before she was injured. Ann was “very excited to hear her voice, and reported a sense of embodiment,” Anumanchipalli said.

The BCI intercepts brain signals “after we’ve decided what to say, after we’ve decided what words to use and how to move our vocal tract muscles,” study co-author Cheol Jun Cho explained in a statement. The model uses an AI method called deep learning and was trained on data from Ann’s earlier attempts to silently speak thousands of sentences.
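The streaming idea behind the study can be sketched in a few lines of code. This is a hedged illustration only: the chunk size of 80 ms comes from the article, but the sampling rate, signal shapes, and the placeholder `decode_chunk` function are hypothetical stand-ins for the researchers' trained deep-learning decoder.

```python
import numpy as np

CHUNK_MS = 80          # increment size reported in the study
SAMPLE_RATE = 1000     # hypothetical neural sampling rate (Hz)
CHUNK_SAMPLES = SAMPLE_RATE * CHUNK_MS // 1000  # samples per 80 ms window

def decode_chunk(signal_chunk):
    """Hypothetical stand-in for the trained deep-learning decoder:
    maps one 80 ms window of brain signals to synthesized audio samples."""
    # A real decoder would be a neural network trained on attempted speech;
    # here we simply return a silent placeholder of the same duration.
    return np.zeros(CHUNK_SAMPLES)

def stream_decode(neural_signal):
    """Decode the signal in 80 ms increments, yielding audio as it arrives
    instead of waiting for a whole sentence (the source of the old
    eight-second delay)."""
    for start in range(0, len(neural_signal), CHUNK_SAMPLES):
        chunk = neural_signal[start:start + CHUNK_SAMPLES]
        if len(chunk) < CHUNK_SAMPLES:
            break  # skip an incomplete trailing window
        yield decode_chunk(chunk)

# Example: one second of simulated neural data yields 12 complete chunks.
audio_chunks = list(stream_decode(np.random.randn(SAMPLE_RATE)))
print(len(audio_chunks))
```

The key design point is that output is produced per chunk rather than per sentence, which is what reduces the latency from seconds to well under one second.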
Reference: Brain implant turns thoughts into speech in near real-time