Now that ChatGPT is widely said to have passed the "Turing test", let us listen to the exploratory lecture Turing gave on BBC radio on 15 May 1951: "Can Digital Computers Think?"
1. Translation
Digital computers have often been described as mechanical brains. Most scientists probably regard this description as a mere newspaper stunt, but some do not. One mathematician put the opposite view to me rather forcefully in the words, "It is commonly said that these machines are not brains, but you and I know that they are." In this talk I shall try to explain the ideas behind the various possible points of view, though not altogether impartially. I shall give most attention to the view which I hold myself: that it is not altogether unreasonable to describe digital computers as brains. A different point of view has already been put by Professor Hartree.
First we may consider the naive point of view of the man in the street. He hears amazing accounts of what these machines can do, most of which apparently involve intellectual feats of which he would be quite incapable. He can only explain this by supposing that the machine is a sort of brain, though he may prefer simply to disbelieve what he has heard.
The majority of scientists are contemptuous of this almost superstitious attitude; they know something of the principles on which the machines are constructed and of the way in which they are used. Their outlook was well summed up over a hundred years ago by Lady Lovelace, speaking of Babbage's Analytical Engine. As Hartree has already quoted, she said: "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform." This describes very well the way in which digital computers are actually used at present, and in which they will probably mainly be used for many years to come. For any one calculation, the whole procedure that the machine is to go through is planned out in advance by a mathematician, and the less doubt there is about what is going to happen, the better the mathematician is pleased. It is like planning a military operation. Under these circumstances it is fair to say that the machine does not originate anything.
There is, however, a third point of view, which I hold myself. I agree with Lady Lovelace's dictum as far as it goes, but I believe that its validity depends on considering how digital computers are actually used rather than how they could be used. In fact I believe they could be used in such a manner that they could appropriately be described as brains. I should also say that "if any machine can appropriately be described as a brain, then any digital computer can be so described."
This last statement needs some explanation. It may appear rather startling, but with some reservations it seems to be an inescapable fact. It follows from a characteristic property of digital computers which I will call their "universality". A digital computer is a universal machine in the sense that it can be made to replace any machine of a certain very wide class. It will not replace a bulldozer, a steam engine or a telescope, but it will replace any rival design of calculating machine, that is to say, any machine into which one can feed data and which will later print out results. To arrange for our computer to imitate a given machine, it is only necessary to programme the computer to calculate what that machine would do under given circumstances, and in particular what answers it would print out; the computer can then be made to print out the same answers.
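To make this "universality" concrete, here is a small present-day sketch (mine, not part of the 1951 talk): one general-purpose Python program imitates whichever calculating machine we describe to it, simply by computing what that machine would print. The function run_machine and the example parity_machine are invented for the illustration.

```python
# A general-purpose interpreter: given a description of a particular machine,
# it computes what that machine would print for some input data.

def run_machine(description, data):
    """Interpret a table mapping (state, symbol) -> (next_state, printed_output)
    and return what the described machine would print for the given data."""
    state = description["start"]
    printed = []
    for symbol in data:
        state, out = description["rules"][(state, symbol)]
        printed.append(out)
    return printed

# Description of one "rival" machine: after each input bit it prints whether
# it has so far seen an even or an odd number of 1s.
parity_machine = {
    "start": "even",
    "rules": {
        ("even", 0): ("even", "even"),
        ("even", 1): ("odd", "odd"),
        ("odd", 0): ("odd", "odd"),
        ("odd", 1): ("even", "even"),
    },
}

print(run_machine(parity_machine, [1, 0, 1, 1]))  # ['odd', 'odd', 'even', 'odd']
```

The special-purpose machine here exists only as a programme fed to the general-purpose one, which is the sense in which a digital computer can stand in for any machine of this class.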
If now some particular machine can be described as a brain, we have only to programme our digital computer to imitate it and it will also be a brain. If it is accepted that real brains, as found in animals and in particular in men, are a sort of machine, it will follow that our digital computer, suitably programmed, will behave like a brain.
This argument involves several assumptions which can quite reasonably be challenged. I have already explained that the machine to be imitated must be more like a calculator than a bulldozer; this merely reflects the fact that we are speaking of mechanical analogues of brains rather than of feet or jaws. It is also necessary that the behaviour of this machine should in principle be predictable by calculation. We certainly do not know how any such calculation should be done, and Sir Arthur Eddington even argued that, on account of the indeterminacy principle in quantum mechanics, no such prediction is even theoretically possible.
Another assumption is that the storage capacity of the computer used should be sufficient to carry out the prediction of the behaviour of the machine to be imitated, and that it should also have sufficient speed. Our present computers probably lack the necessary storage capacity, though they may well have the speed. This means in effect that if we wish to imitate anything so complicated as the human brain we need a very much larger machine than any computer at present available; we probably need something at least a hundred times as large as the Manchester Computer. Alternatively, of course, a machine of equal size or smaller would do if sufficient progress were made in the technique of storing information.
It should be noticed that there is no need for the complexity of the computers used to increase. If we try to imitate ever more complicated machines or brains, we must use larger and larger computers to do it, but we need not use successively more complicated ones. This may appear paradoxical, but the explanation is not difficult. Imitating a machine with a computer requires not only that we have built the computer, but also that we have programmed it appropriately: the more complicated the machine to be imitated, the more complicated the programme must be.
An analogy may perhaps make this clearer. Suppose two men both wanted to write their autobiographies, one having led an eventful life while very little had happened to the other. Two difficulties would trouble the man with the more eventful life more seriously than the other: he would have to spend more on paper, and he would have to take more trouble over thinking what to say. The supply of paper is unlikely to be a serious difficulty, unless, for instance, he were on a desert island, and in any case it could only be a technical or financial problem. The other difficulty is more fundamental, and would become more serious still if he were writing not about his own life but about something he knew nothing of, say family life on Mars.
Our problem of programming a computer to behave like a brain is something like trying to write this treatise on a desert island. We cannot get the storage capacity we need: in other words, we cannot get enough paper to write the treatise on, and in any case we would not know what to write down if we had it. This is a poor state of affairs, but, to continue the analogy, it is something to know how to write, and to appreciate that most knowledge can be embodied in books.
In view of this, it seems that the wisest ground on which to criticise the description of digital computers as "mechanical brains" or "electronic brains" is that, although they might be programmed to behave like brains, we do not at present know how this should be done. With this outlook I am in full agreement. It leaves open the question of whether we will eventually succeed in finding such a programme; I personally am inclined to believe that one will be found. I think it probable, for instance, that by the end of the century it will be possible to programme a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine. I am imagining something like a viva-voce examination, but with the questions and answers all typewritten, so that we need not consider such irrelevant matters as how faithfully the human voice can be imitated. This only represents my opinion; there is plenty of room for others.
Some difficulties remain. To behave like a brain seems to involve free will, but the behaviour of a digital computer, once it has been programmed, is completely determined. These two facts must somehow be reconciled, but doing so seems to draw us into the age-old controversy of "free will and determinism". There are two ways out: it may be that the feeling of free will which we all have is an illusion, or it may be that we really do have free will but that there is no way of telling from our behaviour that this is so. In the latter case, however well a machine imitates a man's behaviour, it is to be regarded as a mere sham. I do not know how we could ever decide between these alternatives, but whichever is correct, it is certain that a machine which is to imitate a brain must appear to behave as if it had free will, and one may well ask how this is to be achieved. One possibility is to make its behaviour depend on something like a roulette wheel or a supply of radium. The behaviour of these may perhaps be predictable, but if so, we do not know how to do the prediction.
It is, however, not really necessary even to do this. It is not difficult to design machines whose behaviour appears quite random to anyone who does not know the details of their construction. Naturally, whichever technique is used, including such a random element does not solve our main problem, how to programme a machine to imitate a brain, or, as we might say more briefly, to think. But it gives us some indication of what the process will be like. We must not always expect to know what the computer is going to do; we should be pleased when the machine surprises us, rather as one is pleased when a pupil does something he has not been explicitly taught to do.
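As a modern illustration of this point (again mine, not Turing's), the sketch below is a completely determined rule, a linear congruential generator with textbook constants, whose printed digits look random to anyone who does not know the rule and its starting value. The name lcg_digits and the chosen seed are arbitrary for the example.

```python
# A deterministic machine whose output appears random without knowledge
# of its construction: a linear congruential generator (MINSTD constants).

def lcg_digits(seed, n, m=2**31 - 1, a=48271, c=0):
    """Iterate x <- (a*x + c) mod m and yield the last decimal digit of each x."""
    x = seed
    for _ in range(n):
        x = (a * x + c) % m
        yield x % 10

# Without the rule and the seed, these digits look like chance;
# with them, every digit is completely determined in advance.
print(list(lcg_digits(seed=20210515, n=10)))
```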
Let us now reconsider Lady Lovelace's dictum: "It can do whatever we know how to order it to perform." The sense of the rest of the passage tempts one to say that the machine can only do what we know how to order it to do, but I think this would not be true. Certainly the machine can only do what we do order it to perform; anything else would be a mechanical fault. But there is no need to suppose that, when we give it its orders, we know what we are doing, or what the consequences of those orders will be. One need not understand how the orders lead to the machine's subsequent behaviour, any more than one needs to understand the mechanism of germination when one puts a seed in the ground: the plant comes up whether one understands or not. If we give the machine a programme which results in its doing something interesting which we had not anticipated, I should be inclined to say that the machine had originated something, rather than to claim that its behaviour was implicit in the programme and that the originality therefore lies entirely with us.
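A small illustration of my own, not from the lecture: even a rule of a few lines, every order of which we specify ourselves, can behave in ways we did not anticipate. The function trajectory and the starting values below are arbitrary choices for the example.

```python
# We order every step of this rule, yet its long-run behaviour is hard to foresee.

def trajectory(n):
    """Apply 'halve if even, otherwise triple and add one' until reaching 1;
    return the number of steps taken."""
    steps = 0
    while n != 1:
        n = n // 2 if n % 2 == 0 else 3 * n + 1
        steps += 1
    return steps

# Few would predict from the orders alone that 27 takes 111 steps
# while its neighbour 28 takes only 18.
print(trajectory(27), trajectory(28))
```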
I will not attempt to say much about how this process of "programming a machine to think" is to be done. The fact is that we know very little about it, and very little research has yet been done. There are plenty of ideas, but we do not yet know which of them are important. As in a detective story, at the beginning of the investigation any trifle may matter to the investigator; once the problem has been solved, only the essential facts need be put before the jury. At present we have nothing worth putting before a jury. I will only say that I believe the process should bear a close relation to that of teaching.
I have tried to explain the main rational arguments for and against the theory that machines could be made to think, but something should also be said about the irrational arguments. Many people are extremely opposed to the idea of a machine that thinks, but I do not believe this is for any of the reasons I have given, or for any other rational reason, but simply because they do not like the idea. One can see many features that make it unpleasant. If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat.
This is a theoretical possibility that is hardly controversial, but we have lived with pigs and rats for so long without their intelligence much increasing that we no longer trouble ourselves about it; we feel that if it is to happen at all, it will not be for several million years. The new danger, however, is much closer: if it comes at all, it will almost certainly be within the next millennium. That is remote, but not astronomically remote, and it is certainly something that can give us anxiety.
It is customary, in a talk or article on this subject, to offer a grain of comfort in the form of a statement that some particularly human characteristic could never be imitated by a machine; it might be said, for instance, that no machine could write good English, or that it could not be influenced by sex appeal or smoke a pipe. I cannot offer any such comfort, for I believe no such bounds can be set. But I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human but non-intellectual characteristics, such as the shape of the human body; such attempts seem to me quite futile, and their results would have something of the unpleasant quality of artificial flowers. Attempts to produce a thinking machine seem to me to be in a different category. The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.
2. Original Text
https://www.cse.chalmers.se/~aikmitr/papers/Turing.pdf
Can Digital Computers Think?
Digital computers have often been described as mechanical brains. Most scientists probably regard this description as a mere newspaper stunt, but some do not. One mathematician has expressed the opposite point of view to me rather forcefully in the words ‘It is commonly said that these machines are not brains, but you and I know that they are.’ In this talk I shall try to explain the ideas behind the various possible points of view, though not altogether impartially. I shall give most attention to the view which I hold myself, that it is not altogether unreasonable to describe digital computers as brains. A different point of view has already been put by Professor Hartree.
First we may consider the naive point of view of the man in the street. He hears amazing accounts of what these machines can do: most of them apparently involve intellectual feats of which he would be quite incapable. He can only explain it by supposing that the machine is a sort of brain, though he may prefer simply to disbelieve what he has heard.
The majority of scientists are contemptuous of this almost superstitious attitude. They know something of the principles on which the machines are constructed and of the way in which they are used. Their outlook was well summed up by Lady Lovelace over a hundred years ago, speaking of Babbage’s Analytical Engine. She said, as Hartree has already quoted, ‘The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.’ This very well describes the way in which digital computers are actually used at the present time, and in which they will probably mainly be used for many years to come. For any one calculation the whole procedure that the machine is to go through is planned out in advance by a mathematician. The less doubt there is about what is going to happen the better the mathematician is pleased. It is like planning a military operation. Under these circumstances it is fair to say that the machine doesn’t originate anything.
There is however a third point of view, which I hold myself. I agree with Lady Lovelace’s dictum as far as it goes, but I believe that its validity depends on considering how digital computers are used rather than how they could be used. In fact I believe that they could be used in such a manner that they could appropriately be described as brains. I should also say that ‘If any machine can appropriately be described as a brain, then any digital computer can be so described.’
This last statement needs some explanation. It may appear rather startling, but with some reservations it appears to be an inescapable fact. It can be shown to follow from a characteristic property of digital computers, which I will call their universality. A digital computer is a universal machine in the sense that it can be made to replace any machine of a certain very wide class. It will not replace a bulldozer or a steam-engine or a telescope, but it will replace any rival design of calculating machine, that is to say any machine into which one can feed data and which will later print out results. In order to arrange for our computer to imitate a given machine it is only necessary to programme the computer to calculate what the machine in question would do under given circumstances, and in particular what answers it would print out. The computer can then be made to print out the same answers.
If now some particular machine can be described as a brain we have only to programme our digital computer to imitate it and it will also be a brain. If it is accepted that real brains, as found in animals, and in particular in men, are a sort of machine it will follow that our digital computer, suitably programmed, will behave like a brain.
This argument involves several assumptions which can quite reasonably be challenged. I have already explained that the machine to be imitated must be more like a calculator than a bulldozer. This is merely a reflection of the fact that we are speaking of mechanical analogues of brains, rather than of feet or jaws. It was also necessary that this machine should be of the sort whose behaviour is in principle predictable by calculation. We certainly do not know how any such calculation should be done, and it was even argued by Sir Arthur Eddington that on account of the indeterminacy principle in quantum mechanics no such prediction is even theoretically possible.
Another assumption was that the storage capacity of the computer used should be sufficient to carry out the prediction of the behaviour of the machine to be imitated. It should also have sufficient speed. Our present computers probably have not got the necessary storage capacity, though they may well have the speed. This means in effect that if we wish to imitate anything so complicated as the human brain we need a very much larger machine than any of the computers at present available. We probably need something at least a hundred times as large as the Manchester Computer. Alternatively of course a machine of equal size or smaller would do if sufficient progress were made in the technique of storing information.
It should be noticed that there is no need for there to be any increase in the complexity of the computers used. If we try to imitate ever more complicated machines or brains we must use larger and larger computers to do it. We do not need to use successively more complicated ones. This may appear paradoxical, but the explanation is not difficult. The imitation of a machine by a computer requires not only that we should have made the computer, but that we should have programmed it appropriately. The more complicated the machine to be imitated the more complicated must the programme be.
This may perhaps be made clearer by an analogy. Suppose two men both wanted to write their autobiographies, and that one had had an eventful life, but very little had happened to the other. There would be two difficulties troubling the man with the more eventful life more seriously than the other. He would have to spend more on paper and he would have to take more trouble over thinking what to say. The supply of paper would not be likely to be a serious difficulty, unless for instance he were on a desert island, and in any case it could only be a technical or a financial problem. The other difficulty would be more fundamental and would become more serious still if he were not writing his life but a work on something he knew nothing about, let us say about family life on Mars.
Our problem of programming a computer to behave like a brain is something like trying to write this treatise on a desert island. We cannot get the storage capacity we need: in other words we cannot get enough paper to write the treatise on, and in any case we don’t know what we should write down if we had it. This is a poor state of affairs, but, to continue the analogy, it is something to know how to write, and to appreciate the fact that most knowledge can be embodied in books.
In view of this it seems that the wisest ground on which to criticise the description of digital computers as ‘mechanical brains’ or ‘electronic brains’ is that, although they might be programmed to behave like brains, we do not at present know how this should be done. With this outlook I am in full agreement. It leaves open the question as to whether we will or will not eventually succeed in finding such a programme. I, personally, am inclined to believe that such a programme will be found. I think it is probable for instance that at the end of the century it will be possible to programme a machine to answer questions in such a way that it will be extremely difficult to guess whether the answers are being given by a man or by the machine. I am imagining something like a viva-voce examination, but with the questions and answers all typewritten in order that we need not consider such irrelevant matters as the faithfulness with which the human voice can be imitated. This only represents my opinion; there is plenty of room for others.
There are still some difficulties. To behave like a brain seems to involve free will, but the behaviour of a digital computer, when it has been programmed, is completely determined. These two facts must somehow be reconciled, but to do so seems to involve us in an age-old controversy, that of ‘free will and determinism’. There are two ways out. It may be that the feeling of free will which we all have is an illusion. Or it may be that we really have got free will, but yet there is no way of telling from our behaviour that this is so. In the latter case, however well a machine imitates a man’s behaviour it is to be regarded as a mere sham. I do not know how we can ever decide between these alternatives but whichever is the correct one it is certain that a machine which is to imitate a brain must appear to behave as if it had free will, and it may well be asked how this is to be achieved. One possibility is to make its behaviour depend on something like a roulette wheel or a supply of radium. The behaviour of these may perhaps be predictable, but if so, we do not know how to do the prediction.
It is, however, not really even necessary to do this. It is not difficult to design machines whose behaviour appears quite random to anyone who does not know the details of their construction. Naturally enough the inclusion of this random element, whichever technique is used, does not solve our main problem, how to programme a machine to imitate a brain, or as we might say more briefly, if less accurately, to think. But it gives us some indication of what the process will be like. We must not always expect to know what the computer is going to do. We should be pleased when the machine surprises us, in rather the same way as one is pleased when a pupil does something which he had not been explicitly taught to do.
Let us now reconsider Lady Lovelace’s dictum. ‘The machine can do whatever we know how to order it to perform.’ The sense of the rest of the passage is such that one is tempted to say that the machine can only do what we know how to order it to perform. But I think this would not be true. Certainly the machine can only do what we do order it to perform, anything else would be a mechanical fault. But there is no need to suppose that, when we give it its orders we know what we are doing, what the consequences of these orders are going to be. One does not need to be able to understand how these orders lead to the machine’s subsequent behaviour, any more than one needs to understand the mechanism of germination when one puts a seed in the ground. The plant comes up whether one understands or not. If we give the machine a programme which results in its doing something interesting which we had not anticipated I should be inclined to say that the machine had originated something, rather than to claim that its behaviour was implicit in the programme, and therefore that the originality lies entirely with us.
I will not attempt to say much about how this process of ‘programming a machine to think’ is to be done. The fact is that we know very little about it, and very little research has yet been done. There are plentiful ideas, but we do not yet know which of them are of importance. As in the detective stories, at the beginning of the investigation any trifle may be of importance to the investigator. When the problem has been solved, only the essential facts need to be told to the jury. But at present we have nothing worth putting before a jury. I will only say this, that I believe the process should bear a close relation to that of teaching.
I have tried to explain what are the main rational arguments for and against the theory that machines could be made to think, but something should also be said about the irrational arguments. Many people are extremely opposed to the idea of a machine that thinks, but I do not believe that it is for any of the reasons that I have given, or any other rational reason, but simply because they do not like the idea. One can see many features which make it unpleasant. If a machine can think, it might think more intelligently than we do, and then where should we be? Even if we could keep the machines in a subservient position, for instance by turning off the power at strategic moments, we should, as a species, feel greatly humbled. A similar danger and humiliation threatens us from the possibility that we might be superseded by the pig or the rat.
This is a theoretical possibility which is hardly controversial, but we have lived with pigs and rats for so long without their intelligence much increasing, that we no longer trouble ourselves about this possibility. We feel that if it is to happen at all it will not be for several million years to come. But this new danger is much closer. If it comes at all it will almost certainly be within the next millennium. It is remote but not astronomically remote, and is certainly something which can give us anxiety.
It is customary, in a talk or article on this subject, to offer a grain of comfort, in the form of a statement that some particularly human characteristic could never be imitated by a machine. It might for instance be said that no machine could write good English, or that it could not be influenced by sex-appeal or smoke a pipe. I cannot offer any such comfort, for I believe that no such bounds can be set. But I certainly hope and believe that no great efforts will be put into making machines with the most distinctively human, but non-intellectual characteristics such as the shape of the human body; it appears to me to be quite futile to make such attempts and their results would have something like the unpleasant quality of artificial flowers. Attempts to produce a thinking machine seem to me to be in a different category. The whole thinking process is still rather mysterious to us, but I believe that the attempt to make a thinking machine will help us greatly in finding out how we think ourselves.