Shared by 牧风人 http://blog.sciencenet.cn/u/siccashq — a Hunanese dream is the wind's free dance; speak of what you hope for, but don't rush it into being.

Blog post

Tsukuba SEMINAR: Scientists 'See' the Images in the Brain

Viewed 4552 times · 2008-3-12 10:47 | Category: Tsukuba SEMINAR

Scientific American has just reported on a new result published in Nature. To be honest, without these science editors I would probably never get around to reading such abstruse neuroscience papers.

Scientists at the University of California, Berkeley have used functional magnetic resonance imaging to 'read out' the images a person is seeing. As the report explains it: subjects first view a set of images while an fMRI scanner records the responses in the visual cortex; the cortex is divided into many small units, and the recorded responses of those units together form a code for each image. The researchers can then run this in reverse, 'decoding' the viewed image from the pattern of cortical activity. So far the method picks the correct image out of 120 candidates up to 92% of the time, and out of 1,000 candidates about 80% of the time; even for a pool 100 times the number of images indexed by Google, the estimated accuracy would still be above 10%, far better than random guessing.

For now, though, decoding dynamic changes such as the imagery of an ongoing thought would require far more complex mathematical models, so it is not yet possible; likewise, complex mental content such as memories, emotions and intentions cannot be decoded at all for the time being.

I suspect it will become possible someday, heh. Which is why the little girl in the picture should keep her metal helmet on, lest anyone find out who the secret Prince Charming in her heart is. :)

The paper in Nature: article

Original article:

http://www.sciam.com/article.cfm?id=translating-images-from-brain-waves&sc=rss

Do You See What I See? Translating Images out of Brain Waves

Visual decoder allows researchers to translate brain wave activity into images

By Nikhil Swaminathan

File this under futuristic (and perhaps a little scary): In a step toward one day perhaps deciphering visions and dreams, new research unveils an algorithm that can translate the activity in the minds of humans.

Scientists from the University of California, Berkeley, report in Nature today that they have developed a method capable of decoding the patterns in visual areas of the brain to determine what someone has seen. Needless to say, the potential implications for society are sweeping.

"This general visual decoder would have great scientific and practical use," the researchers say. "We could use the decoder to investigate differences in perception across people, to study covert mental processes such as attention, and perhaps even to access the visual content of purely mental phenomena such as dreams and imagery."

The scientists say that previous attempts to extract "mental content from brain activity" only allowed them to decode a finite number of patterns. Researchers would feed images to an individual (or ask them to think about an object) one at a time and then look for a corresponding brain activity pattern. "You would need to know [beforehand], for each thought you want to read out, what kind of pattern of activity goes with it," says John-Dylan Haynes, a professor at the Bernstein Center for Computational Neuroscience Berlin and the Max Planck Institute for Human Cognitive and Brain Sciences who was not affiliated with the new work.

"The advance brought forward here," he continues, "is that they have set up a mathematical model that captures the properties of the visual part of the brain," which can then be applied to previously unseen objects.

Researchers used functional magnetic resonance imaging (fMRI) to record activity in the visual cortices of a pair of volunteers (two of the study's co-authors) while they viewed a series of images. They examined the brain by dividing the regions into voxels (volumetric units, or 3-D pixels) and noting the part of the picture to which each section responded. For instance, one voxel, or slice, might respond in a certain pattern to, say, colors in the upper left-hand corner of the photo, whereas another voxel would be set off by something in a different portion of the picture.

Haynes says the team could "go back and infer what the image was that a person was seeing" by monitoring the activity in each brain section and deciphering what sort of information would most likely be found in the corresponding part of the visual field, or photograph.
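The inference Haynes describes — predicting each voxel's response to every candidate image, then picking the candidate whose predicted pattern best matches the observed activity — can be sketched with made-up numbers. Everything here is a toy stand-in: random "features" play the role of the paper's receptive-field image descriptions, and random weights play the role of each voxel's fitted encoding model. It illustrates the matching step only, not the actual model from the Nature study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 50 voxels, 120 candidate images, each image
# described by 200 visual features (e.g. local contrast in image patches).
n_voxels, n_images, n_features = 50, 120, 200
features = rng.normal(size=(n_images, n_features))  # stand-in image features
weights = rng.normal(size=(n_voxels, n_features))   # stand-in per-voxel encoding weights

# Encoding model: predicted fMRI response of every voxel to every image.
predicted = features @ weights.T                    # shape (n_images, n_voxels)

# Simulate a measurement: the subject views one image; the observed
# activity is the predicted voxel pattern plus scanner noise.
true_image = 42
observed = predicted[true_image] + 0.5 * rng.normal(size=n_voxels)

# Identification: choose the candidate whose predicted voxel pattern
# correlates best with the observed activity.
corr = [np.corrcoef(observed, predicted[i])[0, 1] for i in range(n_images)]
best = int(np.argmax(corr))
print(best)
```

With noise this small relative to the signal, the correct image wins the correlation comparison; as the candidate pool grows, near-duplicate predicted patterns become more likely, which is why accuracy falls with pool size.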

When the volunteers viewed a new set of 120 images—depicting everything from people to houses to animals to fruit and other objects—the computer program correctly identified what they were looking at up to 92 percent of the time; when the image pool was upped to 1,000, the algorithm was successful 80 percent of the time. Naturally, its accuracy decreased as the number of possible pictures grew, but even at a quantity 100 times greater than the number of images indexed on the Internet by Google, according to the scientists, the model would be successful more than 10 percent of the time. (This far exceeds the success rate of random guessing.)
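The comparison with random guessing is simple arithmetic: a blind guess picks the right image with probability 1/N, so the reported accuracies can be set against their baselines directly. The figures below are the ones quoted in the article.

```python
# Reported identification accuracy versus the chance level of 1/N
# for each candidate-pool size mentioned in the article.
reported = {120: 0.92, 1000: 0.80}
for n, acc in reported.items():
    chance = 1.0 / n
    print(f"{n:>5} images: model {acc:.0%}, chance {chance:.3%}, "
          f"about {acc / chance:.0f}x better than guessing")
```

On the same logic, 10 percent accuracy against a pool of billions of images would beat a chance level of roughly one in a billion by many orders of magnitude, which is the point the researchers are making.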

"This indicates," the researchers wrote, "that fMRI signals contain a considerable amount of stimulus information and that this information can be successfully decoded in practice."

Haynes says the method is limited to deciphering information that can be mapped out in space, such as sensory inputs (where a sound is coming from) or motor function (what action one's arm has performed). The challenge, he says, is that it cannot "be easily applied to cases where you don't have a clear mathematical model," such as memories, intentions and emotions. "High-level thoughts would be a bit tricky to get a hold of without such a mathematical model," he adds.

So, you can keep that tinfoil helmet in your closet for now. These algorithms still can't read our innermost thoughts—at least not yet.



https://blog.sciencenet.cn/blog-2317-17897.html

