Original:
https://www.charlespetzold.com/blog/2012/06/Alan-Turings-Contribution.html
Alan Turing’s Contribution
Turing Day: June 23, 2012, Roscoe, N.Y.
As I was researching my book The Annotated Turing: A Guided Tour through Alan Turing's Historic Paper on Computability and the Turing Machine (Wiley, 2008), my appreciation for Turing's profound contribution to computing was brought into focus by this sentence from an article (not by Turing) about computers published in 1956:
[I]f it should turn out that the basic logics of a machine designed for the numerical solution of differential equations coincide with the logics of a machine intended to make bills for a department store, I would regard this as the most amazing coincidence I have ever encountered.
The author of that 1956 article was no dummy. It was Howard Aiken, who had been involved with digital computers since 1937 and who was the primary mind behind IBM's seminal Harvard Mark I. We might forgive Aiken if he had made a distinction between computer hardware and software merely somewhat tailored for different types of applications. But no. He's clearly talking about the "basic logics" of digital computing, and by 1956 he really should have known better.
Now let's hear Alan Turing in his article "Computing Machinery and Intelligence" (this is the famous "Turing Test" article) for a 1950 issue of Mind:
This special property of digital computers, that they can mimic any discrete state machine, is described by saying that they are universal machines. The existence of machines with this property has the important consequence that, considerations of speed apart, it is unnecessary to design various new machines to do various computing processes. They can all be done with one digital computer, suitably programmed for each case. It will be seen that as a consequence of this all digital computers are in a sense equivalent.
Today is the 100th anniversary of the birth of Alan Turing, who was the first person who really understood these concepts in a modern way. Turing's understanding dated from 1936, when at the age of 24 he published a paper with the forbidding title "On Computable Numbers, with an Application to the Entscheidungsproblem." The ostensible purpose of this paper was to answer a question posed by German mathematician David Hilbert in the field of first-order predicate logic, but it went far beyond that immediate goal.
Alan Turing had a mind that worked unlike that of anyone else, and he wasn't much interested in the way that other people solved mathematical problems. Today, he would probably be diagnosed with Asperger's syndrome (at least to some degree), but what impresses me most is how he was able to understand the nature of mental processes that are universal among all sentient humans.
To solve Hilbert's question in mathematical logic, Turing went deep into himself and analyzed the steps involved in carrying out any numerical process (for example, long division). He then broke down these operations into simple abstract steps. These steps are so trivial, it's hard to imagine how much simpler they could be, and yet they formalize the universal process of working with numbers: during each step of a mathematical recipe, you might write down a symbol (or letter or number), and you might later erase it or replace it with another symbol. You examine what symbol exists in a particular place, and base your next step on that.
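To make those primitive steps concrete, here is a minimal sketch in Python (my own illustration, nothing taken from Turing's paper): a tiny interpreter whose only moves are the ones just described, examining the symbol in the current square, writing or erasing a symbol there, shifting one square left or right, and choosing the next step. The name `run` and the shape of the table it consumes are assumptions of this sketch.

```python
def run(table, tape, state="start", blank=" ", max_steps=10_000):
    """Run a transition table until it reaches the state 'halt'.

    `table` maps (state, symbol) -> (symbol_to_write, move, next_state),
    where move is -1 (one square left) or +1 (one square right).
    """
    squares = dict(enumerate(tape))        # the tape: position -> symbol
    pos = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = squares.get(pos, blank)   # examine the current square
        write, move, state = table[(state, symbol)]
        squares[pos] = write               # write (or erase/replace) a symbol
        pos += move                        # shift one square left or right
    return "".join(squares[i] for i in sorted(squares)).strip()
```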
Turing shows how these individual simple steps can be consolidated in a table that encapsulates the numerical process. Such a table came to be known as the Turing Machine, and it is an abstract formulation of what we now call an "algorithm." Turing shows that this abstract machine is equivalent to any digital computer (even though at the time, digital computers did not yet exist) as well as to human minds working on these same problems. More than anyone else, Turing understood that digital computing has much less to do with hardware than with software.
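Here, for example, is a hypothetical table for the `run` sketch above that adds one to a binary number. The states "start" and "carry" are my own illustrative choices, but the form, a list of rows saying what to write, where to move, and which state comes next for each (state, scanned symbol) pair, is essentially the form of Turing's tables.

```python
# Increment a binary number by one, using the `run` sketch above.
increment = {
    ("start", "0"): ("0", +1, "start"),   # scan right over the digits
    ("start", "1"): ("1", +1, "start"),
    ("start", " "): (" ", -1, "carry"),   # past the last digit: turn around
    ("carry", "1"): ("0", -1, "carry"),   # 1 plus carry: write 0, keep carrying
    ("carry", "0"): ("1", -1, "halt"),    # 0 plus carry: write 1, done
    ("carry", " "): ("1", -1, "halt"),    # carried off the left end: new leading 1
}

print(run(increment, "1011"))   # prints "1100"
```

Feed the same interpreter a different table and it computes something entirely different, which is the sense in which the table (the software) matters far more than the machinery running it.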
Yet, this fundamental equivalence among digital computers is a double-edged sword: All digital computers are computationally equally capable, but also equally deficient. It is a crucial outcome of Turing's paper that there are certain problems beyond the reach of any digital computer, including the computer inside our own heads. The big deficiency involves applying the tool of software to itself: We can never be sure that an arbitrary computer program will run correctly, and we can't write a program that can successfully debug any other program. Before anyone had actually built a digital computer, Turing had already demonstrated their intrinsic limitations!
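The self-reference at the heart of that limitation can be sketched in a few lines of Python. Suppose, purely hypothetically, that someone claimed to have written a perfect analyser, `halts`, that always says whether a given program finishes on a given input. The short program below (the names `halts` and `paradox` are mine, for illustration only) does the opposite of whatever that analyser predicts about it when handed its own source code, so no such analyser, and no all-purpose debugger built on top of one, can exist.

```python
def halts(program_source: str, input_data: str) -> bool:
    """Hypothetical perfect analyser: True iff the program halts on the input.

    Turing's argument shows that no such general procedure can be written;
    this stub exists only so the sketch below is well defined.
    """
    raise NotImplementedError("no such general procedure exists")


def paradox(program_source: str) -> None:
    """Do the opposite of whatever `halts` predicts about this very program."""
    if halts(program_source, program_source):
        while True:     # predicted to halt, so loop forever
            pass
    return              # predicted to loop forever, so halt immediately

# Handing paradox its own source code leaves `halts` with no consistent
# answer: whichever verdict it returns, paradox does the opposite.
```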
For those of us who are programmers, Alan Turing's 1936 paper on computable numbers is our foundational document. But its universality has revealed implications far beyond programming, and it continues to contribute to our understanding of ourselves, of our minds, and of the informational nature of the universe.
On this 100th anniversary of Alan Turing's birth, we know that as we stare into the Turing Machine, the Turing Machine stares back at us.