According to a report in the November 7 issue of Science, people can recognize the voice of someone they love anywhere, but how do they do it? How does the human brain work out whose voice it is hearing and, at the same time, make out what that voice is saying? Dutch researchers have come a step closer to answering these questions by identifying brain signals that help a listener decode both speech sounds and speaker identity from a single stream of sound.
Elia Formisano and colleagues used functional magnetic resonance imaging (fMRI) to monitor activity in the auditory cortex of volunteers as they listened to recordings of three vowels spoken by three different speakers. The researchers then analyzed the data with a data-mining algorithm to identify the patterns that emerged. They report that the pattern associated with a given vowel was the same no matter who spoke it, and the pattern associated with a given speaker stayed the same no matter which vowel was spoken. The results also indicate that the auditory cortex is partly responsible for recognizing speech sounds and speaker identity, a finding that challenges the widely held assumption that this processing occurs only in specialized, higher-level brain regions. (Bioon.com)
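The paper's actual decoding algorithm is not described in this news brief; as a rough illustration of the general approach (multivoxel pattern analysis), the sketch below classifies which vowel was heard from voxel activity patterns and tests whether the learned pattern generalizes to a speaker left out of training. The simulated data, the linear support vector machine, and the scikit-learn pipeline are illustrative assumptions, not the authors' implementation.

```python
# Hypothetical sketch of multivoxel pattern analysis (MVPA): decode which
# vowel was heard from voxel patterns, testing generalization to an unheard
# speaker. Data are simulated; voxel counts, labels, and the linear SVM are
# illustrative assumptions, not the paper's exact pipeline.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)
n_speakers, n_vowels, n_trials, n_voxels = 3, 3, 20, 200

# Simulate auditory-cortex activity: each vowel adds a consistent spatial
# "fingerprint" on top of noise, regardless of which speaker produced it.
vowel_fingerprints = rng.normal(0, 1, size=(n_vowels, n_voxels))
X, vowel_labels, speaker_groups = [], [], []
for speaker in range(n_speakers):
    for vowel in range(n_vowels):
        trials = vowel_fingerprints[vowel] + rng.normal(0, 2, size=(n_trials, n_voxels))
        X.append(trials)
        vowel_labels += [vowel] * n_trials
        speaker_groups += [speaker] * n_trials
X = np.vstack(X)

# Train on two speakers, test on the held-out one: above-chance accuracy
# means the vowel-specific pattern is insensitive to who is speaking.
scores = cross_val_score(
    SVC(kernel="linear"),
    X,
    vowel_labels,
    groups=speaker_groups,
    cv=LeaveOneGroupOut(),
)
print(f"Vowel decoding accuracy across held-out speakers: {scores.mean():.2f}")
```

The same cross-validation scheme, with vowel and speaker labels swapped, would test whether speaker-specific patterns survive previously unheard utterances.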
Original source recommended by Bioon:
Science, Vol. 322, No. 5903, pp. 970-973, Elia Formisano, Rainer Goebel
"Who" Is Saying "What"? Brain-Based Decoding of Human Voice and Speech
Elia Formisano,* Federico De Martino, Milene Bonte, Rainer Goebel
Can we decipher speech content ("what" is being said) and speaker identity ("who" is saying it) from observations of brain activity of a listener? Here, we combine functional magnetic resonance imaging with a data-mining algorithm and retrieve what and whom a person is listening to from the neural fingerprints that speech and voice signals elicit in the listener's auditory cortex. These cortical fingerprints are spatially distributed and insensitive to acoustic variations of the input so as to permit the brain-based recognition of learned speech from unknown speakers and of learned voices from previously unheard utterances. Our findings unravel the detailed cortical layout and computational properties of the neural populations at the basis of human speech recognition and speaker identification.
Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, University of Maastricht, 6200 MD Maastricht, Netherlands.