By having a computer learn the relationship between the images a person sees and that person's brain activity, scientists have succeeded in reconstructing information carried in human brain activity as images. Before long, it may even become possible to use this technique to recover the scenes seen in dreams.
Yukiyasu Kamitani, a department head at the ATR Computational Neuroscience Laboratories of Japan's Advanced Telecommunications Research Institute International, published this latest work on reconstructing brain activity in the December 11 issue of the scientific journal Neuron.
According to Kamitani, images that arise in the brain but do not actually exist, such as those in dreams and imagination, engage the visual cortex just as real seeing does, and "reading out the images in dreams and imagination is no longer far off."
Kamitani and colleagues prepared 400 images, each composed of a 10 × 10 grid of 100 cells, with symbols such as letters, squares, and crosses drawn in the cells in black and white. In the experiment, participants viewed one image every 12 seconds while functional magnetic resonance imaging (fMRI) measured blood-flow changes in the visual cortex; software linking the images to the brain activity then let the computer learn this mapping.
Afterwards, participants were asked to view new images; once the fMRI measurements were fed into the computer, it could combine them with the previously learned mapping to largely reproduce what had been seen. (Bioon.com)
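To make the procedure described above concrete, here is a minimal sketch in Python using synthetic data in place of real fMRI measurements: 10 × 10 binary images are paired with simulated voxel activity, one regularized linear decoder per pixel is fit on 400 training images, and an unseen image is then reconstructed from its activity pattern. The linear model, the ridge penalty, and all variable names are illustrative assumptions, not the authors' actual software.

```python
import numpy as np

# Hypothetical stand-in for the experiment: 400 random binary 10x10 images
# (the training stimuli) and simulated visual-cortex activity, where each
# "voxel" responds linearly to a few pixels plus noise. The real study used
# measured fMRI signals, not this generative model.
rng = np.random.default_rng(0)
n_train, n_pixels, n_voxels = 400, 100, 300

train_images = rng.integers(0, 2, size=(n_train, n_pixels)).astype(float)
mixing = rng.normal(scale=0.5, size=(n_pixels, n_voxels))      # pixel -> voxel weights
train_activity = train_images @ mixing + rng.normal(scale=0.3, size=(n_train, n_voxels))

# "Learn the rule linking images and brain activity": one ridge-style linear
# decoder per pixel, fit by regularized least squares on the training set.
lam = 1.0
A = train_activity
W = np.linalg.solve(A.T @ A + lam * np.eye(n_voxels), A.T @ train_images)  # (voxels x pixels)

# A new, unseen image: measure (here, simulate) its activity, then decode and
# threshold each pixel back to binary contrast.
test_image = rng.integers(0, 2, size=n_pixels).astype(float)
test_activity = test_image @ mixing + rng.normal(scale=0.3, size=n_voxels)
reconstruction = (test_activity @ W > 0.5).astype(int)

print(f"pixels correctly reconstructed: {(reconstruction == test_image).mean():.0%}")
```

In the actual study the decoding was done on measured blood-oxygen signals and, as the abstract below describes, combined decoders over local image patches of several scales rather than predicting each pixel independently.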
Original source recommended by Bioon:
Neuron, Volume 60, Issue 5, 915-929; Yoichi Miyawaki ... Yukiyasu Kamitani
Visual Image Reconstruction from Human Brain Activity using a Combination of Multiscale Local Image Decoders
Yoichi Miyawaki 1,2,6, Hajime Uchida 2,3,6, Okito Yamashita 2, Masa-aki Sato 2, Yusuke Morito 4,5, Hiroki C. Tanabe 4,5, Norihiro Sadato 4,5, and Yukiyasu Kamitani 2,3
1 National Institute of Information and Communications Technology, Kyoto, Japan
2 ATR Computational Neuroscience Laboratories, Kyoto, Japan
3 Nara Institute of Science and Technology, Nara, Japan
4 The Graduate University for Advanced Studies, Kanagawa, Japan
5 National Institute for Physiological Sciences, Aichi, Japan
6 These authors contributed equally to this work
Perceptual experience consists of an enormous number of possible states. Previous fMRI studies have predicted a perceptual state by classifying brain activity into prespecified categories. Constraint-free visual image reconstruction is more challenging, as it is impractical to specify brain activity for all possible images. In this study, we reconstructed visual images by combining local image bases of multiple scales, whose contrasts were independently decoded from fMRI activity by automatically selecting relevant voxels and exploiting their correlated patterns. Binary-contrast, 10 × 10-patch images (2^100 possible states) were accurately reconstructed without any image prior on a single trial or volume basis by measuring brain activity only for several hundred random images. Reconstruction was also used to identify the presented image among millions of candidates. The results suggest that our approach provides an effective means to read out complex perceptual states from brain activity while discovering information representation in multivoxel patterns.
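As a rough illustration of the multiscale combination mentioned in the abstract (a sketch under stated assumptions, not the paper's exact procedure), the snippet below simulates per-patch decoder outputs for overlapping 1 × 1, 1 × 2, 2 × 1, and 2 × 2 local bases and averages them into a pixel-wise reconstruction. The patch shapes, the noise model, and the area-based weighting are all assumptions made for illustration.

```python
import numpy as np

GRID = 10
SCALES = [(1, 1), (1, 2), (2, 1), (2, 2)]   # assumed local basis shapes

def patches(grid, h, w):
    """All positions of an h x w patch inside a grid x grid image."""
    for r in range(grid - h + 1):
        for c in range(grid - w + 1):
            yield r, c, h, w

rng = np.random.default_rng(1)
true_image = rng.integers(0, 2, size=(GRID, GRID)).astype(float)

# Simulated decoder output per patch: the true mean contrast of the patch
# plus noise. In the study these values would come from fMRI decoders.
predictions = {}
for h, w in SCALES:
    for r, c, ph, pw in patches(GRID, h, w):
        target = true_image[r:r + ph, c:c + pw].mean()
        predictions[(r, c, ph, pw)] = target + rng.normal(scale=0.2)

# Combine: each pixel is a weighted average of every patch prediction that
# covers it; here larger patches are down-weighted by their area.
numer = np.zeros((GRID, GRID))
denom = np.zeros((GRID, GRID))
for (r, c, ph, pw), value in predictions.items():
    weight = 1.0 / (ph * pw)
    numer[r:r + ph, c:c + pw] += weight * value
    denom[r:r + ph, c:c + pw] += weight
reconstruction = (numer / denom > 0.5).astype(int)

print("pixel accuracy:", (reconstruction == true_image).mean())
```

The benefit of overlapping multiscale patches is that each pixel receives several independent estimates, so noise in any single decoder is averaged out while coarser patches capture correlated structure across neighboring pixels.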