Neuroscientists can reconstruct the words a patient has heard by analyzing the patient's brain activity.
Think it isn't enough that scientists can decode your memories? Now they can eavesdrop on them too.
In a new study, neuroscientists attached a grid of electrodes to the auditory centers of 15 patients' brains and recorded the brain activity produced as the patients listened to words such as "jazz" and "Waldo".
The researchers found that each word elicited its own distinctive pattern of activity in the brain.
Building on this, the scientists developed two different computer programs that could reconstruct the words a patient had heard solely by analyzing his or her brain activity.
The better of the two programs performed so well that the researchers could accurately decode the mystery word 80% to 90% of the time.
因?yàn)榇饲耙延凶C據(jù)表明,,我們聽(tīng)到的詞語(yǔ)和我們回憶或者想象的詞語(yǔ)會(huì)引發(fā)相似的腦過(guò)程,所以,這項(xiàng)1月31日在線發(fā)表于《公共科學(xué)圖書(shū)館―生物學(xué)》雜志上的最新發(fā)現(xiàn),,意味著科學(xué)家們或許有一天能夠收聽(tīng)到你正在想著的詞語(yǔ),。這對(duì)那些因路格里克氏病或其他原因?qū)е聼o(wú)法開(kāi)口說(shuō)話的病人來(lái)說(shuō),是一個(gè)潛在的福音,。(生物谷 Bioon.com)
doi:10.1371/journal.pbio.1001251
Reconstructing Speech from Human Auditory Cortex
Brian N. Pasley, Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, Edward F. Chang
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
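The linear decoding approach the abstract describes, reconstructing a stimulus representation from population neural activity, can be illustrated with a minimal sketch. The sketch below is an assumption-laden toy, not the paper's actual pipeline: it uses simulated electrode data, a small ridge regression fit in closed form, and a single "spectrogram channel" as the target, with all sizes and parameter names invented for illustration.

```python
import numpy as np

# Toy sketch of linear stimulus reconstruction ("decoding"):
# predict one spectrogram channel from time-lagged neural activity
# with ridge regression. All data here are simulated; shapes and
# hyperparameters are illustrative assumptions, not from the paper.

rng = np.random.default_rng(0)
n_samples, n_electrodes, n_lags = 500, 16, 5

# Simulated electrode recordings (time x electrodes).
neural = rng.standard_normal((n_samples, n_electrodes))

# Design matrix: stack each electrode's activity at the current
# time bin and the previous n_lags - 1 bins.
X = np.hstack([np.roll(neural, lag, axis=0) for lag in range(n_lags)])
X = X[n_lags:]  # drop rows contaminated by np.roll wrap-around

# Simulated target: one spectrogram channel that depends linearly
# on the lagged neural activity, plus a little noise.
w_true = rng.standard_normal(X.shape[1])
y = X @ w_true + 0.1 * rng.standard_normal(X.shape[0])

# Ridge regression in closed form: w = (X'X + alpha*I)^-1 X'y
alpha = 1.0
w_hat = np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ y)
y_hat = X @ w_hat

# Reconstruction quality is commonly scored by correlating the
# predicted and actual stimulus representations.
r = np.corrcoef(y, y_hat)[0, 1]
```

In this toy setting the fitted weights recover the simulated linear dependence, so the correlation `r` between the actual and reconstructed channel is high; real intracranial data would require regularization tuning, cross-validation, and, per the abstract, a nonlinear modulation-based representation to capture fast temporal fluctuations.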