(Image source: Nature)
Many people keep secrets safely hidden deep in their minds, beyond the reach of even today's most advanced brain-imaging technology. They should beware, however: scientists are working in exactly that direction. Researchers in the United States recently combined a computational model with functional magnetic resonance imaging (fMRI) to "decode" neural activity and successfully identify which picture a person had just seen. The paper was published online in Nature on March 5.
Similar studies have been done before, but they were relatively simple: the images were either very basic or already sorted into fixed categories. In the new study, neuroscientist Jack Gallant of the University of California, Berkeley and his colleagues attempted a harder task: using the activity of the brain's visual cortex to determine which one of a set of pictures a subject was viewing, even when the subject had never seen that picture before.
In the first phase of the experiment, two subjects (Kendrick Kay and Thomas Naselaris) each viewed 1,750 pictures of various objects and scenes while an fMRI scanner monitored the responses of their visual cortex. Based on these data, the researchers divided the visual cortex into many small cubic volumes (voxels) and built a mathematical model describing how each voxel responds to different visual features. By combining hundreds of these voxel models, the researchers hoped to predict how the visual cortex would respond to any given image.
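To make the modeling step concrete, here is a minimal sketch (not the authors' code) of how per-voxel response models of this kind can be fitted: each voxel's response is expressed as a weighted sum of image features, and the weights are estimated from the training images. The feature extraction, the ridge penalty, and all variable names here are illustrative assumptions; the paper's actual estimation procedure differs.

```python
# Minimal sketch of per-voxel linear encoding models fitted from training images.
import numpy as np

def fit_voxel_models(features, responses, ridge=1.0):
    """Fit one linear model per voxel.

    features  : (n_images, n_features) image-feature matrix
    responses : (n_images, n_voxels) fMRI responses to the same images
    ridge     : regularization strength (assumed for this sketch)

    Returns a (n_features, n_voxels) weight matrix W such that
    features @ W approximates the measured responses.
    """
    n_features = features.shape[1]
    gram = features.T @ features + ridge * np.eye(n_features)
    return np.linalg.solve(gram, features.T @ responses)

# Example with random stand-in data (1,750 training images, as in the study).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((1750, 200))                    # hypothetical features
true_W = rng.standard_normal((200, 500))                      # hypothetical ground truth
Y_train = X_train @ true_W + 0.1 * rng.standard_normal((1750, 500))
W = fit_voxel_models(X_train, Y_train)
```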
In the second phase, Kay and Naselaris viewed 120 pictures they had never seen before while the fMRI scanner recorded their visual cortex activity. The researchers then compared the recorded activity with the activity predicted by the model. The model correctly identified 110 of the 120 pictures Naselaris viewed, and 86 of Kay's 120. When Naselaris went on to view 1,000 new pictures, the model still identified 82% of them correctly.
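The comparison step can be sketched in the same spirit: for each candidate picture, the fitted voxel models predict a cortical activity pattern, and the candidate whose prediction best matches the measured pattern is selected. Using correlation as the match score is an assumption made for illustration, not necessarily the paper's exact criterion.

```python
# Minimal sketch of the identification step: match measured activity
# against model-predicted activity for every candidate image.
import numpy as np

def identify_image(measured, candidate_features, W):
    """Return the index of the candidate image most likely to have been seen.

    measured           : (n_voxels,) measured fMRI activity pattern
    candidate_features : (n_candidates, n_features) features of candidate images
    W                  : (n_features, n_voxels) fitted voxel weights
    """
    predicted = candidate_features @ W                      # (n_candidates, n_voxels)
    # Correlate the measured pattern with each predicted pattern.
    m = measured - measured.mean()
    p = predicted - predicted.mean(axis=1, keepdims=True)
    scores = (p @ m) / (np.linalg.norm(p, axis=1) * np.linalg.norm(m) + 1e-12)
    return int(np.argmax(scores))
```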
Brian Wandell, a neuroscientist at Stanford University, says the model is a major advance because it incorporates hard-won earlier findings about the visual system. "It applies what we know about the brain in a way that is, in some sense, far deeper than in some other experiments," he says.
Gallant cautions, however, that this does not mean a mind-reading brain scanner is around the corner. The current model can only identify pictures that are already known; so far, no computer model can reconstruct what a person actually sees from fMRI data. It may eventually become possible to reconstruct the visual content of dreams and memories, but that remains a very distant prospect. In other words, Gallant jokes, people harboring dark secrets still have plenty of time to turn over a new leaf. (ScienceNet, compiled by Mei Jin)
Original source (recommended by Bioon 生物谷):
Nature, doi:10.1038/nature06713
Identifying natural images from human brain activity
Kendrick N. Kay, Thomas Naselaris, Ryan J. Prenger & Jack L. Gallant
A challenging goal in neuroscience is to be able to read out, or decode, mental content from brain activity. Recent functional magnetic resonance imaging (fMRI) studies have decoded orientation, position and object category from activity in visual cortex. However, these studies typically used relatively simple stimuli (for example, gratings) or images drawn from fixed categories (for example, faces, houses), and decoding was based on previous measurements of brain activity evoked by those same stimuli or categories. To overcome these limitations, here we develop a decoding method based on quantitative receptive-field models that characterize the relationship between visual stimuli and fMRI activity in early visual areas. These models describe the tuning of individual voxels for space, orientation and spatial frequency, and are estimated directly from responses evoked by natural images. We show that these receptive-field models make it possible to identify, from a large set of completely novel natural images, which specific image was seen by an observer. Identification is not a mere consequence of the retinotopic organization of visual areas; simpler receptive-field models that describe only spatial tuning yield much poorer identification performance. Our results suggest that it may soon be possible to reconstruct a picture of a person's visual experience from measurements of brain activity alone.
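As a rough illustration of the kind of receptive-field feature the abstract refers to (tuning for position, orientation and spatial frequency), the sketch below computes a Gabor quadrature-pair energy at a single image location. The parameterization and values are assumptions for illustration, not the paper's actual model.

```python
# Minimal sketch of a Gabor-energy feature tuned for position,
# orientation and spatial frequency.
import numpy as np

def gabor_energy(image, x0, y0, orientation, freq, sigma):
    """Local contrast energy of `image` at (x0, y0) for the given
    orientation (radians), spatial frequency (cycles/pixel) and
    Gaussian envelope width `sigma` (pixels)."""
    h, w = image.shape
    y, x = np.mgrid[0:h, 0:w]
    xr = (x - x0) * np.cos(orientation) + (y - y0) * np.sin(orientation)
    envelope = np.exp(-((x - x0) ** 2 + (y - y0) ** 2) / (2 * sigma ** 2))
    even = envelope * np.cos(2 * np.pi * freq * xr)
    odd = envelope * np.sin(2 * np.pi * freq * xr)
    # Quadrature-pair energy: insensitive to the local phase of the pattern.
    return np.hypot((image * even).sum(), (image * odd).sum())

# Example on a random 64x64 patch (stand-in data).
rng = np.random.default_rng(1)
patch = rng.standard_normal((64, 64))
e = gabor_energy(patch, x0=32, y0=32, orientation=np.pi / 4, freq=0.1, sigma=6.0)
```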