Neuroscientists can reconstruct the words a patient has heard by analyzing the patient's brain activity.
As if scientists being able to decode your memories were not enough, now they can eavesdrop on them as well.
In a recent study, neuroscientists attached a grid of electrodes to the auditory centers of 15 patients' brains and recorded the brain activity evoked as the patients heard words such as "jazz" and "Waldo".
The researchers found that each word produced its own distinctive pattern of activity in the brain.
Building on this, the scientists developed two different computer programs that could reconstruct the word a patient had heard purely by analyzing his or her brain activity.
The better of the two programs performed so well that the researchers could accurately decode the mystery word 80% to 90% of the time.
Because earlier evidence indicates that words we hear and words we recall or imagine engage similar brain processes, the new findings, published online January 31 in PLoS Biology, suggest that scientists may one day be able to listen in on the words you are thinking. That would be a potential boon for patients left unable to speak by Lou Gehrig's disease or other conditions. (生物谷 Bioon.com)
doi:10.1371/journal.pbio.1001251
Reconstructing Speech from Human Auditory Cortex
Brian N. Pasley, Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, Edward F. Chang
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
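As a rough illustration of the decoding approach the abstract describes, the sketch below fits a linear (ridge-regression) decoder that maps multichannel neural activity back to an auditory spectrogram, then identifies a "word" by correlating a reconstructed segment against candidate templates. This is a minimal sketch on synthetic data, not the authors' code or pipeline: the dimensions, the ridge penalty, and the helper identify() are illustrative assumptions, and the paper's nonlinear temporal-modulation model is not reproduced here.

# Minimal sketch (synthetic data, not the authors' code): linear reconstruction of an
# auditory spectrogram from multichannel neural recordings, plus template-based word ID.
import numpy as np

rng = np.random.default_rng(0)

# Assumed synthetic dimensions: T time bins, E electrodes, F spectrogram frequency bands.
T, E, F = 2000, 32, 16

# Fake "true" spectrogram and a fake linear encoding from spectrogram to neural activity.
spectrogram = rng.standard_normal((T, F))
encoding_weights = rng.standard_normal((F, E)) * 0.5
neural = spectrogram @ encoding_weights + 0.5 * rng.standard_normal((T, E))

# Split into training and test segments.
split = int(0.8 * T)
X_train, X_test = neural[:split], neural[split:]
Y_train, Y_test = spectrogram[:split], spectrogram[split:]

# Ridge-regression decoder: neural activity -> spectrogram.
lam = 1.0
W = np.linalg.solve(X_train.T @ X_train + lam * np.eye(E), X_train.T @ Y_train)
Y_hat = X_test @ W

# Per-band correlation between reconstructed and actual spectrogram (a reconstruction-accuracy proxy).
corr = [np.corrcoef(Y_hat[:, f], Y_test[:, f])[0, 1] for f in range(F)]
print(f"mean reconstruction correlation: {np.mean(corr):.2f}")

# Single-trial word identification by template matching: correlate a reconstructed
# segment against candidate word spectrograms and pick the best match.
def identify(reconstructed_segment, candidate_templates):
    scores = [np.corrcoef(reconstructed_segment.ravel(), t.ravel())[0, 1]
              for t in candidate_templates]
    return int(np.argmax(scores))

templates = [Y_test[i * 50:(i + 1) * 50] for i in range(4)]  # pretend these are known words
segment = Y_hat[100:150]                                     # reconstruction aligned with "word" 2
print("identified candidate:", identify(segment, templates))

In this toy setup the decoder should recover candidate index 2; the real study works with time-lagged electrocorticographic features and far noisier signals, so the template-matching step here only conveys the general idea of reading out individual words from reconstructed spectrograms.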