Jingde Cheng (程京德)'s Blog http://blog.sciencenet.cn/u/JingdeCheng Relevant logic, software engineering, knowledge engineering, information security engineering; unremitting self-improvement, great virtue carrying all things.

Blog Post

The "AI Threatens Humanity" Thesis (1): Who, Exactly, Said What?

5553 views | 2015-2-1 09:23 | Personal category: Artificial Intelligence | System category: Popular Science

[Note to readers] I reserve all copyrights in this article. Any reader who makes use of the content described here is asked to quote it faithfully and to cite this article clearly as the source. Should I find anyone using any part of this article without clearly indicating its source, I will not hesitate to publish the infringer's name widely online. Thank you for your attention!



Jingde Cheng


Since last year, two celebrities, Stephen Hawking and Elon Musk, have been promoting the "AI threatens humanity" thesis to the media, claiming that "the full development of artificial intelligence could lead to the extinction of humanity," that "AI is a greater threat to humanity than nuclear weapons," and so on. On January 12 of this year, an open letter on AI research ("Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter"), signed by both of them among others, was released to the world by the Future of Life Institute, which has been collecting signatures online, and it drew wide coverage in media worldwide. For a while, Chinese-language media were likewise filled with reports on the "AI threatens humanity" thesis, as if humanity were sooner or later doomed to be destroyed by the fruits of its own AI research. Sina Tech, the first to report, wrote: "This open letter, drafted by the Future of Life Institute, warns that if intelligent machines go unregulated, humanity will face a dark future. The letter states that scientists need to take measures to avoid risks in AI research that could lead to humanity's destruction."

If one is to comment seriously on this matter, then it is surely necessary first to get clear on "who exactly they are, and what exactly they said." Below, I reproduce the open letter itself, together with the news reports from the BBC (UK), The Washington Post (US), and Sina Tech (original Chinese reporting), as appendices to serve as primary references for the discussion to follow.

An interesting phenomenon I have observed is this: among the signatories listed ahead of Musk and Hawking (already listed below the open letter) are many world-renowned AI scientists, including the current president and two former presidents of the AAAI (Association for the Advancement of Artificial Intelligence), heads of AI research institutions in Europe and the United States, the authors of the standard AI textbook, and so on; however, no well-known computer scientist, in particular no theoretical computer scientist, is to be found among them (that is, among the signatories listed ahead of Hawking).

When the renowned scientists of a research field themselves join the ranks of those calling on the world to be wary that progress in that field's research may pose a threat to us humans, that is indeed something that deserves to be taken seriously. I will attempt to comment on this matter in subsequent posts.

Appendix [Reposted]

http://futureoflife.org/misc/open_letter

Research Priorities for Robust and Beneficial Artificial Intelligence: an Open Letter

Artificial intelligence (AI) research has explored a variety of problems and approaches since its inception, but for the last 20 years or so has been focused on the problems surrounding the construction of intelligent agents - systems that perceive and act in some environment. In this context, "intelligence" is related to statistical and economic notions of rationality - colloquially, the ability to make good decisions, plans, or inferences. The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems.

As capabilities in these areas and others cross the threshold from laboratory research to economically valuable technologies, a virtuous cycle takes hold whereby even small improvements in performance are worth large sums of money, prompting greater investments in research. There is now a broad consensus that AI research is progressing steadily, and that its impact on society is likely to increase. The potential benefits are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable. Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls.

The progress in AI research makes it timely to focus research not only on making AI more capable, but also on maximizing the societal benefit of AI. Such considerations motivated the AAAI 2008-09 Presidential Panel on Long-Term AI Futures and other projects on AI impacts, and constitute a significant expansion of the field of AI itself, which up to now has focused largely on techniques that are neutral with respect to purpose. We recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do. The attached research priorities document gives many examples of such research directions that can help maximize the societal benefit of AI. This research is by necessity interdisciplinary, because it involves both society and AI. It ranges from economics, law and philosophy to computer security, formal methods and, of course, various branches of AI itself.

In summary, we believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today.

List of signatories

Open letter signatories include:

Stuart Russell, Berkeley, Professor of Computer Science, director of the Center for Intelligent Systems, and co-author of the standard textbook Artificial Intelligence: a Modern Approach.

Tom Dietterich, Oregon State, President of AAAI, Professor and Director of Intelligent Systems

Eric Horvitz, Microsoft research director, ex AAAI president, co-chair of the AAAI presidential panel on long-term AI futures

Bart Selman, Cornell, Professor of Computer Science, co-chair of the AAAI presidential panel on long-term AI futures

Francesca Rossi, Padova & Harvard, Professor of Computer Science, IJCAI President and Co-chair of AAAI committee on impact of AI and Ethical Issues

Demis Hassabis, co-founder of DeepMind

Shane Legg, co-founder of DeepMind

Mustafa Suleyman, co-founder of DeepMind

Dileep George, co-founder of Vicarious

Scott Phoenix, co-founder of Vicarious

Yann LeCun, head of Facebook's Artificial Intelligence Laboratory

Peter Norvig, Director of research at Google and co-author of the standard textbook Artificial Intelligence: a Modern Approach

Michael Wooldridge, Oxford, Head of Dept. of Computer Science, Chair of European Coordinating Committee for Artificial Intelligence

Leslie Pack Kaelbling, MIT, Professor of Computer Science and Engineering, founder of the Journal of Machine Learning Research

Tom Mitchell, CMU, former President of AAAI, chair of Machine Learning Department

Geoffrey Hinton, University of Toronto and Google Inc.

Toby Walsh, Univ. of New South Wales & NICTA, Professor of AI and President of the AI Access Foundation

Murray Shanahan, Imperial College, Professor of Cognitive Robotics

Michael Osborne, Oxford, Associate Professor of Machine Learning

David Parkes, Harvard, Professor of Computer Science

Laurent Orseau, Google DeepMind

Ilya Sutskever, Google, AI researcher

Blaise Aguera y Arcas, Google, AI researcher

Joscha Bach, MIT, AI researcher

Bill Hibbard, Madison, AI researcher

Steve Omohundro, AI researcher

Ben Goertzel, OpenCog Foundation

Richard Mallah, Cambridge Semantics, Director of Advanced Analytics, AI researcher

Alexander Wissner-Gross, Harvard, Fellow at the Institute for Applied Computational Science

Adrian Weller, Cambridge, AI researcher

Jacob Steinhardt, Stanford, AI Ph.D. student

Nick Hay, Berkeley, AI Ph.D. student

Jaan Tallinn, co-founder of Skype, CSER and FLI

Elon Musk, SpaceX, Tesla Motors

Luke Nosek, Founders Fund

Aaron VanDevender, Founders Fund

Erik Brynjolfsson, MIT, Professor at and director of the MIT Initiative on the Digital Economy

Margaret Boden, U. Sussex, Professor of Cognitive Science

Martin Rees, Cambridge, Professor Emeritus of Cosmology and Astrophysics, Gruber & Crafoord laureate

Huw Price, Cambridge, Bertrand Russell Professor of Philosophy

Nick Bostrom, Oxford, Professor of Philosophy, Director of Future of Humanity Institute (Oxford Martin School)

Stephen Hawking, Director of research at the Department of Applied Mathematics and Theoretical Physics at Cambridge, 2012 Fundamental Physics Prize laureate for his work on quantum gravity

http://www.bbc.com/news/technology-30777834

12 January 2015, last updated at 12:07

Experts pledge to rein in AI research

Stephen Hawking: "Humans, who are limited by slow biological evolution, couldn't compete and would be superseded"


Scientists including Stephen Hawking and Elon Musk have signed a letter pledging to ensure artificial intelligence research benefits mankind.

The promise of AI to solve human problems had to be matched with safeguards on how it was used, it said.

The letter was drafted by the Future of Life Institute, which seeks to head off risks that could wipe out humanity.

The letter comes soon after Prof Hawking warned that AI could "supersede" humans.

Rampant AI

AI experts, robot makers, programmers, physicists, ethicists and many others have signed the open letter penned by the non-profit institute.

In it, the institute said there was now a "broad consensus" that AI research was making steady progress and because of this would have a growing impact on society.

Research into AI, using a variety of approaches, had brought about great progress on speech recognition, image analysis, driverless cars, translation and robot motion, it said.

Future AI systems had the potential to go further and perhaps realise such lofty ambitions as eradicating disease and poverty, it said.

However, it warned, research to reap the rewards of AI had to be matched with an equal care to avoid the harm it could do.

In the short term, this could mean research into the economic effects of AI to stop smart systems putting millions of people out of work.

In the long term, it would mean researchers ensuring that as AI is given control of our infrastructure, restraints are in place to limit the damage that would result if the system broke down.

"Our AI systems must do what we want them to do," said the letter.

The dangers of a rampant AI answerable only to itself and not its human creators were spelled out in early December by Prof Hawking, when he said AI had the potential to "spell the end of the human race."

Letting an artificially intelligent system guide its own development could be catastrophic, he warned in a BBC interview.

"It would take off on its own, and re-design itself at an ever increasing rate," he said.

http://www.washingtonpost.com/news/morning-mix/wp/2015/01/12/elon-musk-stephen-hawking-google-execs-join-forces-to-avoid-unspecified-pitfalls-of-artificial-intelligence/

Elon Musk, Stephen Hawking, Google researchers join forces to avoid pitfalls of artificial intelligence

By Justin Moyer, January 12, 2015

It's a scenario that's been outlined in countless science fiction films such as "The Terminator," "The Matrix" and "I, Robot": Machines defy their programming, kill humans and take over the world.

Now, some of the nation's leading futurists, including Tesla chief executive Elon Musk and folks from Google, have put their digital John Hancocks to virtual paper, identifying ways to avoid the end of the world.

Or, at least, that's what it seems like the signatories are trying to do. The letter, put forth by the nonprofit Future of Life Institute, a volunteer-run research and outreach organization working to mitigate existential risks facing humanity, doesn't commit anyone to anything, and is quite a task to read. First, take a gander at the letter's definition of AI.

"'Intelligence' is related to statistical and economic notions of rationality - colloquially, the ability to make good decisions, plans, or inferences," the letter, called "Research Priorities for Robust and Beneficial Artificial Intelligence," read. "The adoption of probabilistic and decision-theoretic representations and statistical learning methods has led to a large degree of integration and cross-fertilization among AI, machine learning, statistics, control theory, neuroscience, and other fields. The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion, and question-answering systems."

As Neo might put it: Whoa. The letter is unspecific about the risks some theorists see in a world where machines are ascendant, including killer drones, mass unemployment, mass starvation and gray goo. Indeed, the letter, designed to bring attention to the dangers of artificial intelligence, barely manages to articulate them.

Instead, it focuses on the upside.

"The potential benefits [of AI] are huge, since everything that civilization has to offer is a product of human intelligence; we cannot predict what we might achieve when this intelligence is magnified by the tools AI may provide, but the eradication of disease and poverty are not unfathomable," its clearest paragraph reads. "Because of the great potential of AI, it is important to research how to reap its benefits while avoiding potential pitfalls."

Only in an attached document called "Research priorities for robust and beneficial artificial intelligence" does the Future of Life Institute hint at what the pitfalls of AI could be.

· If self-driving cars cut the roughly 40,000 annual US traffic fatalities in half, the car makers might get not 20,000 thank-you notes, but 20,000 lawsuits. In what legal framework can the safety benefits of autonomous vehicles such as drone aircraft and self-driving cars best be realized?

· Can lethal autonomous weapons be made to comply with humanitarian law?

· How should the ability of AI systems to interpret the data obtained from surveillance cameras, phone lines, emails, etc., interact with the right to privacy?

· How should an autonomous vehicle trade off, say, a small probability of injury to a human against the near-certainty of a large material cost?

Signatories include not just Musk and researcher Stephen Hawking, but representatives of IBM, Harvard and Massachusetts Institute of Technology professors, and the co-founders of DeepMind, the AI company Google bought last year.

If, as Musk has said, artificial intelligence is a demon that is potentially more dangerous than nuclear weapons, the Future of Life letter is like a SALT treaty with no strategic arms limitations. But some said that it's at least a start.

"The long-term plan is to stop treating fictional dystopias as pure fantasy and to begin readily addressing the possibility that intelligence greater than our own could one day begin acting against its programming," CNET wrote.

http://tech.sina.com.cn/d/i/2015-01-14/doc-icesifvy3684641.shtml

Hawking and Others Sign an Open Letter: Beware the Potential Risks of Artificial Intelligence

January 14, 2015, 09:38 | Sina Tech


[Photo caption] Musk, head of Tesla Motors and SpaceX. Musk once described the development of autonomous thinking machines as "summoning the demon."


Sina Tech, Beijing time, January 14: According to foreign media reports, in the eyes of some, artificial intelligence poses an even greater threat to humanity than nuclear weapons. Recently, a group of scientists and entrepreneurs, including the physics giant Stephen Hawking and PayPal founder Elon Musk, signed an open letter pledging to ensure that AI research benefits humanity. The letter, drafted by the Future of Life Institute, warns that if intelligent machines go unregulated, humanity will face a dark future.

The Future of Life Institute's open letter states that scientists need to take measures to avoid risks in AI research that could lead to humanity's destruction. Its authors say it is widely believed that AI research is making rapid progress and will have a growing impact on society as a whole. The letter notes that speech recognition, image analysis, driverless cars, translation, and robot motion have all benefited from AI research.

The authors write: "The potential benefits are huge. Every product of civilization is the fruit of human intelligence. We cannot predict what heights our civilization will reach when AI technology greatly raises the intelligence of machines, but eradicating disease and poverty is a process that is hard to predict." At the same time, they warn that AI research must be accompanied by corresponding precautions to prevent AI from causing potential harm to human society. In the short term, AI technology will put millions of people out of work. In the long term, AI could potentially push society toward a dystopia in which the intelligence of machines far exceeds that of humans and machines act against their programming.

The open letter says: "The AI systems we develop must do what we want them to do. Many economists and computer scientists consider it very necessary to research how to maximize the economic benefits that AI can bring while reducing negative effects, such as increased inequality and unemployment." Besides Hawking and Musk, the signatories include Luke Muehlhauser, executive director of the Machine Intelligence Research Institute, and Frank Wilczek, professor of physics at MIT and Nobel laureate.

A few weeks before the open letter was published, Professor Hawking had warned that AI would one day supersede humanity. In a BBC interview in London, he said: "The development of AI technology will sound the alarm for human extinction. This technology can act according to its own will and redesign itself at an ever-increasing rate. Humans, limited by the slow pace of biological evolution, cannot compete with or resist it, and will ultimately be superseded by AI."

In early 2014, Hawking had observed that the successful development of AI would be the biggest mistake ever made in human history, and, unfortunately, possibly also the last one. In November 2014, Musk, head of Tesla Motors and SpaceX, warned that with machines adopting AI, "something seriously dangerous" could happen within as few as five years. Before that, he had described the development of autonomous thinking machines as "summoning the demon."

In October, speaking at MIT's AeroAstro Centennial Symposium, Musk called AI "the biggest threat we face." He said: "I think we must be very careful about artificial intelligence. In my view, AI is probably the biggest threat we face. Therefore, we must proceed very cautiously with AI research. I am increasingly convinced that we need appropriate regulatory oversight of AI research, perhaps at the national and international levels, to make sure we do not do something very foolish. With the development of artificial intelligence, we are summoning the demon. You may think we can control the demon with something like a pentagram and holy water, but that is simply not the case." (Xiaowen)




https://blog.sciencenet.cn/blog-2371919-864574.html

Previous post: Computers and Computing Paradigms (1): What Is a Computer?
Next post: The "AI Threatens Humanity" Thesis (2): Reasonably Defining "Artificial Intelligence"