On January 31, a study published in PLoS Biology reported that US scientists have demonstrated that the human brain responds with distinct patterns of electrical activity to the words a person hears in conversation. Because the brain is thought to process imagined speech in much the same way as heard sound, the researchers suggest that a "mind-reading" device could one day be developed: implanted in patients with brain damage, it could decode the words a patient actually hears in conversation and thereby help gauge the patient's comprehension.
The study was carried out by a team at the University of California at Berkeley. The researchers placed arrays of electrodes directly on the brains of 15 epilepsy patients, through openings in the skull made during surgery, and then played each patient a 5- to 10-minute recording of conversation. While the patients listened, the researchers monitored activity in the temporal lobe, the brain region responsible for language processing. They then built two computational models to match the distinctive signals produced by each patient's brain to the corresponding segments of the recording. Finally, to test the accuracy of this matching, the researchers broke the recording apart and played individual words to the patients, inferring from the evoked brain activity which specific word had been heard. One of the two models identified words with an accuracy of up to 90%.
Robert Knight, a professor and member of the research team, said the result is excellent news for the many thousands of patients who are unable to speak because of brain damage from stroke or Lou Gehrig's disease. (生物谷 Bioon.com)
doi:10.1371/journal.pbio.1001251
Reconstructing Speech from Human Auditory Cortex
Brian N. Pasley, Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, Edward F. Chang.
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
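The linear reconstruction approach described in the abstract can be illustrated with a small sketch. This is not the authors' code: it is a generic stimulus-reconstruction (decoding) demo on synthetic data, in which a ridge-regularized linear map is fit from time-lagged multichannel "neural" responses back to spectrogram features, and held-out reconstruction quality is scored by correlation. All variable names, the lag range, channel counts, and the regularization value are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def lag_matrix(X, lags):
    """Stack time-lagged copies of a (time, channels) response matrix."""
    T, C = X.shape
    out = np.zeros((T, C * len(lags)))
    for i, L in enumerate(lags):
        shifted = np.roll(X, L, axis=0)
        if L > 0:
            shifted[:L] = 0      # zero out wrapped-around samples
        elif L < 0:
            shifted[L:] = 0
        out[:, i * C:(i + 1) * C] = shifted
    return out

def fit_ridge_decoder(R, S, lam=1.0):
    """Solve W = (R'R + lam*I)^-1 R'S, mapping responses R to spectrogram S."""
    n = R.shape[1]
    return np.linalg.solve(R.T @ R + lam * np.eye(n), R.T @ S)

# Synthetic data: a random "auditory spectrogram" S_true drives a linear
# forward model A to produce noisy multichannel "neural" responses R.
T, C, F = 2000, 64, 32                               # time bins, channels, freq bins
S_true = rng.standard_normal((T, F))
A = rng.standard_normal((F, C)) / np.sqrt(F)         # assumed forward mixing
R = S_true @ A + 0.1 * rng.standard_normal((T, C))

lags = list(range(-2, 3))                            # small temporal context window
RL = lag_matrix(R, lags)

# Fit the decoder on the first 1500 bins, reconstruct the held-out 500.
W = fit_ridge_decoder(RL[:1500], S_true[:1500])
S_hat = RL[1500:] @ W

r = np.corrcoef(S_hat.ravel(), S_true[1500:].ravel())[0, 1]
print(f"held-out reconstruction correlation: {r:.2f}")
```

In the paper, word identification then reduces to comparing a reconstructed spectrogram against candidate word spectrograms and choosing the best match; in this sketch the overall correlation plays that role. The nonlinear modulation-energy representation the authors used for fast temporal fluctuations is not modeled here.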