Criticized for 20 Years Without Progress: Does Artificial Intelligence Need a Reboot?
In early May, when Li Na, a reporter for Science & Technology Review, asked for my views on the expert discussion at MIT's "Brains, Minds, and Machines" symposium, I had intended to ask Academician Ruqian Lu to comment, but he happened to be away. I then suggested Academician Deyi Li, president of the Chinese Association for Artificial Intelligence, but Li Na was unable to reach him. Given the publication deadline, I had to step in myself. These are personal views, one-sided and not carefully thought through. The Science & Technology Review article and the original English text from MIT Technology Review are attached below; your comments are welcome.
"Why is there no robot that can be sent in to fix Japan's nuclear reactors? The reason is that AI research made great progress in the 1960s and 1970s, but then went down the wrong path." Founders and veteran figures in artificial intelligence (AI) and cognitive science recently held a panel discussion at MIT, arguing that AI has stagnated since the 1980s and that AI research needs a reboot.
Back to basics? Another round of an old debate
Marvin Minsky, who pioneered neural networks in the 1950s and went on to drive important advances in AI and robotics, said at the symposium that although today's students delight in robots that play basketball or soccer, dance, or make funny faces, these robots are not actually becoming any smarter. Patrick Winston, director of MIT's Artificial Intelligence Laboratory from 1972 to 1997, agreed: "Many people would protest the view that there has been no progress, but I don't think anyone would deny that there could have been more progress in the past 20 years. What went wrong went wrong in the '80s."
What happened in the 1980s? In Winston's view, funding for AI research began to dry up after the end of the Cold War, and researchers turned to commercializing AI. The biggest resulting problem was that AI research grew ever narrower and more specialized, around mechanisms such as neural networks and genetic algorithms, while fundamental problems were left untouched and made no progress. The panel therefore called for a return to the field's early style of research, from narrow application-driven work back to curiosity-driven inquiry.
This is not a new topic. According to Fei-Yue Wang, a researcher at the Institute of Automation, Chinese Academy of Sciences, the debate over returning AI to basic research first arose in the 1990s, when the field genuinely ran into trouble: "Too much was promised and much of it could not be delivered; even graduates trained in AI had trouble finding jobs. The main cause was that the technology of the 1980s could not match the ambitions of the time. For instance, the field promised to solve grand problems, as with the General Problem Solver (GPS), but the technology simply was not up to it. You could say there was a huge bubble in the field back then."
Although veterans like Marvin Minsky and Patrick Winston hold that AI has made no progress in the past 20 years, Wang does not share that view. "Isn't the rise of companies such as Baidu, Google, and Facebook itself proof of AI's progress? They are industrial fruits of AI in fields like machine learning and data mining."
A return, or a rotation?
Emilio Bizzi, one of the founding members of MIT's McGovern Institute for Brain Research, argued that researchers should focus on the important elements of human intellect, such as the ability to generalize from learning experiences, or to fluidly plan movements that avoid obstacles in pursuit of a specific goal, such as grasping a pair of glasses. He added: "In the next few years we will make a lot of progress, because many laboratories scattered around the world are pursuing humanoid robotics."
Wang fully agrees that AI research should return from narrow application-driven work to curiosity-driven inquiry. But if returning to basic research means, as Bizzi suggests, returning to the study of human intelligence itself, Wang disagrees. First, human intelligence is a biological system, and artificial intelligence should not merely build a mechanical version of a biological system; it should focus more on machine intelligence grounded in the characteristics and capabilities of machines. Second, computational intelligence, which took shape in the early 1990s, has largely reshaped the landscape of intelligence research; future AI research cannot escape its influence, and neither can nor should the field go back to the past. Finally, the international AI community already held the great debate between the "neats" and the "scruffies" in the 1980s, which differed little from today's discussion; its outcome was that traditional AI research was nearly pushed out of the field altogether. Fundamentally, AI research rotates among logical reasoning, data-driven reasoning, behavioral reasoning, and the like, with each dominant for a few decades at a time; this is the natural course of the field's development, not a bad thing. Today's data-driven research should indeed trace back somewhat toward logical reasoning, but the two complement each other rather than replace each other.
Wang told Science & Technology Review that, in terms of technical approach, AI research from the 1960s and 1970s into the 1980s was based mainly on logical reasoning, whether pursued in the "neat" or the "scruffy" style. In the 1990s, data-based computational intelligence fields such as data mining and machine learning emerged. Now it is time to move toward a more unified research path: beyond combining data with logical reasoning, the field should also bring in social computing, behavioral computing, and the like. In other words, AI is moving closer to human intelligence, but not simply by starting from human biological intelligence; it also takes in human social intelligence. "Machine learning is now the largest body of work in AI. It initially accounted for only 2-3% of the research, but now exceeds 50%. Most AI research in China today is in machine learning and data mining, but this situation will not and should not continue forever."
The Internet will give AI another spring
According to The History of Artificial Intelligence, AI was established as a discipline after the 1956 Dartmouth Conference and has since gone through several rises and falls.
The years 1956 to 1974 were AI's first golden age. To many, the programs developed in this period seemed miraculous: computers could solve algebra word problems, prove geometric theorems, and learn and use English. Most people at the time could scarcely believe that machines could be so "intelligent." Researchers were optimistic enough to predict that fully intelligent machines would appear within twenty years.
In the 1970s, AI began to draw criticism, and with it came funding difficulties. AI researchers had failed to judge the difficulty of their problems correctly: earlier over-optimism had raised expectations too high, and when the promises could not be kept, funding for AI was cut or withdrawn.
AI enjoyed a second boom in the 1980s, when AI programs called "expert systems" began to be adopted by companies around the world and "knowledge processing" became the focus of mainstream AI research.
AI then hit a second trough in 1987-1993. The business community's embrace and subsequent abandonment of AI in the mid-1980s fit the classic pattern of an economic bubble, and the bubble's collapse played out in how government agencies and investors perceived AI. The sudden crash of the AI hardware market in 1987 opened the AI winter.
Today's AI has finally achieved some of its original goals. It is used successfully throughout the technology industry, though sometimes behind the scenes. The original dream of achieving human-level intelligence, which captivated the world's imagination in the 1960s, failed for reasons that are still debated. A combination of forces has split AI into subfields that largely work in isolation; "AI is more cautious than ever before, but also more successful."
"Now we can finally say it: the age of AI is truly coming. On one hand there is the pull of social and commercial demand; on the other, the development of Internet technology has created an excellent environment for AI. That is why companies like Baidu, Google, and Facebook have emerged." Unlike Marvin Minsky, Wang does not call for a reboot of AI: "The ocean of data on the Web has not just reached your doorstep; it has reached your bedside. Fortunately, no one drowns in digital water, but to get back to a comfortable environment, only AI methods seem able to clear away this information overload. So the urgent task is AI methods built around data, not some reboot." He is now even more optimistic about AI's prospects in areas such as social computing and behavioral computing.
Original English text from MIT Technology Review:
Unthinking Machines
Artificial intelligence needs a reboot, say experts
Wednesday, May 4, 2011, by Stephen Cass
Some of the founders and leading lights in the fields of artificial intelligence and cognitive science gave a harsh assessment last night of the lack of progress in AI over the last few decades.
During a panel discussion—moderated by linguist and cognitive scientist Steven Pinker—that kicked off MIT's Brains, Minds, and Machines symposium, panelists called for a return to the style of research that marked the early years of the field, one driven more by curiosity rather than narrow applications.
"You might wonder why aren't there any robots that you can send in to fix the Japanese reactors," said Marvin Minsky, who pioneered neural networks in the 1950s and went on to make significant early advances in AI and robotics. "The answer is that there was a lot of progress in the 1960s and 1970s. Then something went wrong. [Today] you'll find students excited over robots that play basketball or soccer or dance or make funny faces at you. [But] they're not making them smarter."
Patrick Winston, director of MIT's Artificial Intelligence Laboratory from 1972 to 1997, echoed Minsky. "Many people would protest the view that there's been no progress, but I don't think anyone would protest that there could have been more progress in the past 20 years. What went wrong went wrong in the '80s."
Winston blamed the stagnation in part on the decline in funding after the end of the Cold War and on early attempts to commercialize AI. But the biggest culprit, he said, was the "mechanistic balkanization" of the field, with research focusing on ever-narrower specialties such as neural networks or genetic algorithms. "When you dedicate your conferences to mechanisms, there's a tendency to not work on fundamental problems, but rather [just] those problems that the mechanisms can deal with," said Winston.
Winston said he believes researchers should instead focus on those things that make humans distinct from other primates, or even what made them distinct from Neanderthals. Once researchers think they have identified the things that make humans unique, he said, they should develop computational models of these properties, implementing them in real systems so they can discover the gaps in their models, and refine them as needed. Winston speculated that the magic ingredient that makes humans unique is our ability to create and understand stories using the faculties that support language: "Once you have stories, you have the kind of creativity that makes the species different to any other."
Emilio Bizzi, one of the founding members of MIT's McGovern Institute for Brain Research, agreed that researchers should focus on important elements of human intellect, such as the ability to generalize learning experiences, or fluidly plan movements to avoid obstacles to achieve a specific goal such as grasping a pair of glasses. "I am optimistic that in the next few years, we will make a lot of progress, and the reason is that there are many laboratories scattered in various parts of the world that are pursuing humanoid robotics."
The two linguists on the panel, Noam Chomsky and Barbara Partee, both made seminal contributions to our understanding of language by considering it as a computational, rather than purely cultural, phenomenon. Both also felt that understanding human language was the key to creating genuinely thinking machines. "Really knowing semantics is a prerequisite for anything to be called intelligence," said Partee.
Chomsky derided researchers in machine learning who use purely statistical methods to produce behavior that mimics something in the world, but who don't try to understand the meaning of that behavior. Chomsky compared such researchers to scientists who might study the dance made by a bee returning to the hive, and who could produce a statistically based simulation of such a dance without attempting to understand why the bee behaved that way. "That's a notion of [scientific] success that's very novel. I don't know of anything like it in the history of science," said Chomsky.
Sydney Brenner, who deciphered the three-letter DNA code with Francis Crick and teased out the complete neural structure of the C. elegans worm on a cellular level, agreed that researchers in both artificial intelligence and neuroscience might be getting overwhelmed with surface details rather than seeking the bigger questions underneath. Looking at attempts to replicate his mapping of the C. elegans neural "wiring diagram" with more complex organisms, Brenner worried that neuro- and cognitive scientists were being "overzealous" in these attempts. He said they should refocus on higher level problems instead. He used the analogy of someone taking a picture with a smart phone: no one today would bother to give a transistor-level description of such an action: it's much more useful to discuss the process in terms of higher level subsystems and software.