On January 31, a study published in PLoS Biology reported that American scientists have shown that the human brain responds with distinct patterns of electrical activity to the words a person hears in conversation. Because the brain processes thought in a way similar to how it processes sound, the researchers suggest that a "mind-reading" device could eventually be developed: implanted in a brain-injured patient, it would reveal the words the patient actually heard in a conversation and thereby help infer how much of it the patient understood.
The study was carried out by a research team at the University of California at Berkeley. The researchers placed arrays of electrodes through openings in the skull onto the brains of 15 epilepsy patients, then played them a 5- to 10-minute recording of conversation. While the patients listened, the team monitored activity in the temporal lobe, the brain region responsible for language processing. The scientists then built two computer models to match the distinctive signals produced by each patient's brain to the individual segments of the recording. Finally, to test the accuracy of the matching, the researchers broke the recording apart and played single words to the patients, inferring from the resulting brain waves which specific word each patient had heard. One of the two models identified the words with up to 90% accuracy.
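To make the matching step concrete, here is a hypothetical sketch of how such a word-identification test could work: correlate a spectrogram decoded from brain activity against the reference spectrogram of each candidate word and pick the best match. The function name, array shapes, and the use of Pearson correlation are illustrative assumptions, not the study's actual code.

```python
import numpy as np

def identify_word(decoded_spec: np.ndarray,
                  candidate_specs: dict) -> str:
    """Return the candidate word whose spectrogram best matches the decoding.

    decoded_spec: (freq_bins, time_frames) array reconstructed from neural data.
    candidate_specs: word -> (freq_bins, time_frames) reference spectrogram,
    all assumed to share the same shape (an illustrative simplification).
    """
    best_word, best_r = None, -np.inf
    for word, spec in candidate_specs.items():
        # Pearson correlation over the flattened spectrograms.
        r = np.corrcoef(decoded_spec.ravel(), spec.ravel())[0, 1]
        if r > best_r:
            best_word, best_r = word, r
    return best_word
```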
Robert Knight, a professor on the research team, said the result is welcome news for the thousands of patients who have lost the ability to speak because of brain damage from stroke or Lou Gehrig's disease. (Bioon.com)
doi:10.1371/journal.pbio.1001251
Reconstructing Speech from Human Auditory Cortex
Brian N. Pasley, Stephen V. David, Nima Mesgarani, Adeen Flinker, Shihab A. Shamma, Nathan E. Crone, Robert T. Knight, Edward F. Chang.
How the human auditory system extracts perceptually relevant acoustic features of speech is unknown. To address this question, we used intracranial recordings from nonprimary auditory cortex in the human superior temporal gyrus to determine what acoustic information in speech sounds can be reconstructed from population neural activity. We found that slow and intermediate temporal fluctuations, such as those corresponding to syllable rate, were accurately reconstructed using a linear model based on the auditory spectrogram. However, reconstruction of fast temporal fluctuations, such as syllable onsets and offsets, required a nonlinear sound representation based on temporal modulation energy. Reconstruction accuracy was highest within the range of spectro-temporal fluctuations that have been found to be critical for speech intelligibility. The decoded speech representations allowed readout and identification of individual words directly from brain activity during single trial sound presentations. These findings reveal neural encoding mechanisms of speech acoustic parameters in higher order human auditory cortex.
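As a rough illustration of the linear, spectrogram-based decoding the abstract describes, the sketch below regresses each spectrogram channel on a window of time-lagged neural activity. The ridge regularization, lag window, data shapes, and random placeholder data are assumptions made for illustration; the published model's details differ.

```python
import numpy as np
from sklearn.linear_model import Ridge

def build_lagged(neural: np.ndarray, n_lags: int) -> np.ndarray:
    """Stack time-lagged copies of the neural data.

    neural: (time, electrodes); returns (time, electrodes * n_lags)."""
    T, E = neural.shape
    X = np.zeros((T, E * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * E:(lag + 1) * E] = neural[:T - lag]
    return X

# Placeholder data standing in for recorded activity and the target
# auditory spectrogram (shapes are illustrative assumptions).
rng = np.random.default_rng(0)
neural = rng.standard_normal((1000, 64))   # (time, electrodes)
spec = rng.standard_normal((1000, 32))     # (time, freq_bins)

X = build_lagged(neural, n_lags=10)
decoder = Ridge(alpha=1.0).fit(X[:800], spec[:800])   # train on early data
decoded_spec = decoder.predict(X[800:])               # held-out reconstruction
```

A decoded spectrogram produced this way could then be fed to a matching step like the identification sketch above to read out individual words.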