Neuroscientists use brain waves to reconstruct Pink Floyd song

Scientists have previously succeeded in predicting the words of a person engaged in a normal conversation by simply decoding electrical activity in the brain’s temporal lobe.

Eleven years later, a team of scientists from the same laboratory at the University of California, Berkeley, reconstructed a Pink Floyd song from the brain waves of 29 people using nonlinear models (models whose output does not change in simple proportion to changes in the input).
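
The article does not detail the decoding pipeline itself; as a rough, minimal sketch of what a nonlinear decoding model can look like, the Python snippet below uses scikit-learn's MLPRegressor to map stand-in iEEG features to spectrogram bins. All array shapes, variable names, and hyperparameters are illustrative assumptions, not the authors' actual setup.

```python
# Minimal sketch of a nonlinear decoding model (NOT the study's actual code).
# Assumption: iEEG-derived features at each time point are regressed onto
# auditory-spectrogram bins of the song.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

n_timepoints = 1900      # e.g., ~190 s of song sampled at 10 Hz (illustrative)
n_features = 200         # e.g., electrodes x time lags (illustrative)
n_spectro_bins = 32      # spectrogram frequency bands (illustrative)

X = rng.standard_normal((n_timepoints, n_features))       # stand-in neural features
Y = rng.standard_normal((n_timepoints, n_spectro_bins))   # stand-in spectrogram targets

X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, shuffle=False)

# A multilayer perceptron is "nonlinear": equal changes in its input can
# produce unequal changes in its output.
decoder = MLPRegressor(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
decoder.fit(X_train, Y_train)

Y_pred = decoder.predict(X_test)
print("Predicted spectrogram frames:", Y_pred.shape)
```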

"Noninvasive techniques are just not accurate enough today. Let's hope, for patients, that in the future we could, from just electrodes placed outside on the skull, read activity from deeper regions of the brain with a good signal quality. But we are far from there," said Ludovic Bellier, postdoctoral fellow and co-author of the study, in a press release.

2,668 electrodes implanted into the brain

The scientists played the iconic band’s song, ‘Another Brick in the Wall, Part 1,’ to patients in a hospital suite at Albany Medical Center in New York as neurosurgeons prepared to operate on them.

You can listen to the reconstructed song here.

The study’s cohort comprised 29 patients with epilepsy, all of whom volunteered and gave written informed consent before participating. Across the cohort, a total of 2,668 electrodes were surgically implanted in the patients’ brains via electrode strips.

They passively listened to the 1979 hit as they were being prepared for epilepsy surgery, and were instructed to listen attentively to the 190.72-second-long song without focusing on any particular detail.

AI's role in the study

“In addition to stimulus reconstruction, we also adopted an encoding approach to test whether recent speech findings generalize to music perception. Encoding models predict neural activity at one electrode from a representation of the stimulus,” said the study.
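
To illustrate the encoding idea described in that quote, here is a minimal, hypothetical sketch: a ridge regression that predicts one electrode’s activity from a time-lagged representation of the song’s spectrogram. The data, dimensions, and lag count are invented stand-ins and do not reproduce the study’s own feature set or model.

```python
# Minimal sketch of an encoding model in the sense quoted above (not the
# study's actual code): predict one electrode's activity from a time-lagged
# representation of the stimulus spectrogram. All names/sizes are illustrative.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

n_timepoints = 1900
n_bands = 32             # spectrogram frequency bands (assumed)
n_lags = 10              # past stimulus frames used to predict the present response

spectrogram = rng.standard_normal((n_timepoints, n_bands))   # stand-in stimulus
electrode = rng.standard_normal(n_timepoints)                # stand-in neural activity

# Build lagged stimulus features: each row holds the last `n_lags` spectrogram frames.
lagged = np.stack([np.roll(spectrogram, lag, axis=0) for lag in range(n_lags)], axis=1)
lagged = lagged[n_lags:].reshape(n_timepoints - n_lags, n_lags * n_bands)
target = electrode[n_lags:]

encoder = Ridge(alpha=1.0).fit(lagged, target)
r = np.corrcoef(encoder.predict(lagged), target)[0, 1]
print(f"In-sample correlation for this electrode: {r:.2f}")
```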

The team then used artificial intelligence software to decode the neural activity and was able to reconstruct the song from the brain recordings. This is the first time a song has been reconstructed from intracranial electroencephalography (iEEG) recordings.
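
The article does not spell out how a decoded spectrogram is turned back into something audible; one common approach (assumed here purely for illustration, not necessarily the study’s method) is Griffin-Lim phase estimation, sketched below with librosa and soundfile on random stand-in data.

```python
# Minimal sketch of one common way to turn a predicted magnitude spectrogram
# back into an audible waveform (Griffin-Lim phase estimation). This is an
# assumed post-processing step for illustration, not the study's exact method.
import numpy as np
import librosa
import soundfile as sf

rng = np.random.default_rng(0)

n_fft = 1024
n_freq_bins = n_fft // 2 + 1     # 513 linear-frequency bins
n_frames = 800                   # illustrative number of spectrogram frames

# Stand-in for a decoder's output: a non-negative magnitude spectrogram.
predicted_magnitude = np.abs(rng.standard_normal((n_freq_bins, n_frames)))

# Griffin-Lim iteratively estimates the missing phase so the magnitude
# spectrogram can be inverted to a time-domain signal.
waveform = librosa.griffinlim(predicted_magnitude, n_iter=32, n_fft=n_fft)

sf.write("reconstructed_sketch.wav", waveform, samplerate=22050)
```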

The scientists believe this could be groundbreaking for people who have trouble communicating. Recordings from electrodes on the brain surface have been previously used to decipher speech, but the scientists’ current explorations could help reproduce the musicality of speech, which would be an upgrade from today’s robot-like reconstructions.

"It's a wonderful result," said Robert Knight, a professor at UC Berkeley and co-author of the study. "One of the things for me about music is it has prosody and emotional content. As this whole field of brain machine interfaces progresses, this gives you a way to add musicality to future brain implants for people who need it, someone who's got ALS or some other disabling neurological or developmental disorder compromising speech output.”

The scientists also hope that someday it will be possible to record neural activity with sensitive electrodes attached to the scalp, without the invasive surgery needed to place electrodes inside the skull.

The study was published in the journal PLOS Biology.

Study abstract:

Music is core to human experience, yet the precise neural dynamics underlying music perception remain unknown. We analyzed a unique intracranial electroencephalography (iEEG) dataset of 29 patients who listened to a Pink Floyd song and applied a stimulus reconstruction approach previously used in the speech domain. We successfully reconstructed a recognizable song from direct neural recordings and quantified the impact of different factors on decoding accuracy. Combining encoding and decoding analyses, we found a right-hemisphere dominance for music perception with a primary role of the superior temporal gyrus (STG), evidenced a new STG subregion tuned to musical rhythm, and defined an anterior–posterior STG organization exhibiting sustained and onset responses to musical elements. Our findings show the feasibility of applying predictive modeling on short datasets acquired in single patients, paving the way for adding musical elements to brain–computer interface (BCI) applications.
