Friday, November 8, 2024

Scientists reconstructed a Pink Floyd song from patients’ brain waves. The tech could eventually allow for communication without words

If you thought your inner world only existed inside your head, bookmark that thought.

It may not always be that way, and that’s a good thing for patients unable to speak due to neurological problems—and eventually, for anyone who wants to work more efficiently, researchers at the University of California, Berkeley say.

Dr. Robert Knight, a professor of psychology and neuroscience, and Ludovic Bellier, a postdoctoral researcher in human cognitive neuroscience, analyzed the electrical activity of 29 epileptic patients undergoing brain surgery at Albany Medical Center in New York.

The patients, who had volunteered for research, had electrodes placed onto the surface of their brains. As they underwent surgery they hoped would cure their intractable seizures, Pink Floyd’s 1979 single “Another Brick in the Wall, Part 1” played in the operating room.

Using artificial intelligence, Bellier was able to reconstruct the song from that electrical activity in each patient’s brain, according to an article published Tuesday in the journal PLoS Biology.

The end product is equal parts eerie and intriguing.
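The paper’s full decoding pipeline isn’t described here, but the core idea, fitting a regression model that maps band-limited neural activity to frames of the song’s spectrogram, can be sketched in a few lines of Python. Everything in the snippet (the feature shapes, the synthetic data, the use of ridge regression) is an illustrative assumption rather than the authors’ exact method.

```python
# Rough sketch of spectrogram decoding from neural activity (illustrative only).
# Assumption: X holds per-frame neural features (e.g., high-gamma power on each
# electrode) and Y holds the matching audio spectrogram frames; the study's real
# preprocessing, time lags, and model details are not reproduced here.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_frames, n_electrodes, n_freq_bins = 5000, 64, 32    # made-up sizes
X = rng.standard_normal((n_frames, n_electrodes))     # neural features per frame
Y = rng.standard_normal((n_frames, n_freq_bins))      # target spectrogram frames

X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0
)

# One multi-output ridge regression maps electrode activity to a spectrogram frame.
model = Ridge(alpha=1.0).fit(X_train, Y_train)
Y_pred = model.predict(X_test)

# Correlation between predicted and actual spectrograms is one way to score quality.
corr = np.corrcoef(Y_pred.ravel(), Y_test.ravel())[0, 1]
print(f"held-out spectrogram correlation: {corr:.2f}")
```

On real recordings, the predicted spectrogram would then be converted back into a waveform, which is what makes the reconstruction audible.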

Better brain-machine interfaces

Knight said he and Bellier chose to study the brain’s perception of melody, not merely voice, because “music is universal.”

“It preceded language development, I think, and is cross-cultural,” he said. “If I go to other countries, I don’t know what they’re saying to me in their language, but I can appreciate their music.”

Even more important: “Music allows us to add semantics, extraction, prosody, emotion, and rhythm to language.”

Bellier’s work will be used to develop even better brain-machine interfaces, which can be used by paralyzed patients like the late Stephen Hawking to express themselves, Knight said—only not so robotically, and eventually, perhaps, merely by thinking.

The duo also hopes the research can help illuminate why some patients with speech disorders can sing but not speak. It also has potential implications for stroke and ALS patients, as well as those with non-verbal apraxia, a condition in which patients can’t make the movements necessary for speech.

A ‘keyboard for the mind’

The research will first be applied to those with medical needs, Bellier said. As brain recording technology improves, it may eventually be possible to transmit thoughts through scalp electrodes. Such electrodes can currently be used to signal one’s choice of a single letter from a string of letters—but it takes at least 20 seconds to identify each letter, making communication far too cumbersome.
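That throughput claim is easy to sanity-check. The back-of-envelope calculation below uses the article’s 20-seconds-per-letter figure; the average word length is an assumed value.

```python
# Why ~20 seconds per letter makes spelled-out communication so cumbersome.
# The per-letter time comes from the article; average word length is an assumption.
SECONDS_PER_LETTER = 20
AVG_WORD_LENGTH = 5          # letters, plus one space selected like a letter

seconds_per_word = (AVG_WORD_LENGTH + 1) * SECONDS_PER_LETTER
words_per_minute = 60 / seconds_per_word
print(f"about {seconds_per_word} s per word, or {words_per_minute:.1f} words per minute")
# Conversational speech runs on the order of 120-150 words per minute.
```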

If the technology is streamlined, it may eventually help those without disabling conditions, thought workers for instance, sync more easily with a computer to type text from their minds.

“It’s really about reducing friction and allowing people to just think their action,” Bellier said. One example: “You could think, ‘Order my Uber,’ and you don’t have to finish what you’re doing—your Uber arrives.”

For those alarmed by potential future applications of the research, Knight and Bellier emphasize that such feats aren’t currently possible without surgery. And the A.I. developed to translate signals into sounds “merely provides the keyboard for the mind,” they assert.

As for potential privacy concerns, Bellier said he’d be more worried about what Big Tech already knows about us, thanks to the monitoring and tracking of online activity.

Besides, privacy issues can be dealt with, he said: when a wireless EEG is performed on a patient, the signal is encrypted.
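The article doesn’t say which scheme is used, but encrypting a block of EEG samples before it leaves a wireless headset might look something like the sketch below; the choice of AES-GCM, the block size, and the key handling are all assumptions made for illustration.

```python
# Illustrative only: encrypting a buffer of EEG samples before wireless transmission.
# The article only states that the wireless signal is encrypted, not how; AES-GCM
# and the framing used here are assumptions.
import os
import numpy as np
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

samples = np.random.randn(256).astype(np.float32)   # one block of EEG samples
plaintext = samples.tobytes()

key = AESGCM.generate_key(bit_length=256)            # shared by headset and receiver
nonce = os.urandom(12)                               # unique per transmitted block
ciphertext = AESGCM(key).encrypt(nonce, plaintext, b"eeg-block-0001")

# The receiver decrypts with the same key, nonce, and associated data.
recovered = AESGCM(key).decrypt(nonce, ciphertext, b"eeg-block-0001")
assert recovered == plaintext
```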

“We’re on the threshold of lots of things—the fusion of neuroscience and computer engineering, and really, in many ways, the sky’s the limit,” he said.

Added Knight: “I think we’re just on the edge of tickling this whole story.”

