Lunalin的个人博客分享 http://blog.sciencenet.cn/u/Lunalin


译文: 寻求统一:量子场论史笔记 斯蒂芬·温伯格


简介

量子场论是关于物质及其相互作用的理论,该理论源于二十世纪20年代后期量子力学与狭义相对论的融合。之后数年量子场论的名声在物理学家中经历了频繁起伏,有一段时间名声极差,几乎被完全摒弃。但是现在,部分由于上一个时代(译注1)取得的一系列惊人成就,量子场论已成为攻克物理学基本问题时最被广泛接受的概念和数学框架。如果过些年真能发现一套所谓的终极自然定律(这并非人们预期会发生的事),这套终极定律很可能就是用量子场论的语言表述的。

据我所知现在还没有对过去50年量子场论完整历史的介绍。现今的现代物理史非常完整地覆盖了狭义和广义相对论,对量子力学早期历史也有很好介绍,但这些物理史对量子力学一般只介绍到1927年左右统计解释成功为止。这很令人遗憾,不只是因为量子场论所具有的基本特征,而且认识量子场论发展史有助于深入洞悉科学发展规律。

普遍认为科学发展是通过大规模或小型革命产生的。按照这种观点,上次革命的成功会给科学家的思维套上一种语言,思想和教条枷锁,科学家必须挣脱枷锁才能让科学更进一步。至于这些革命多大程度是由于个别天才科学家能够超越时代局限导致的,还是由于当前理论与实验间积累起来的差异造成的,仍然是个争议话题。但无论如何人们普遍认可科学发展的主要因素在于做出告别过去的决定。

我不反对这种观点,科学史上许多重大进展确实是这样发生的。本世纪物理学的伟大革命--狭义相对论和量子力学的创立--看来也是如此。然而1930年以来量子场论的发展却是一个奇特的反例:其进展的关键因素是人们一次又一次地意识到革命并不是必需的。如果说量子力学和相对论是1789年法国大革命或1917年俄国大革命那种意义上的革命,那么量子场论更像1688年的光荣革命:事情只改变到恰好能够维持原样的程度。

我下面尽量不使用数学来讲述这个故事。在某种程度上,我会让历史本身来承担解释的工作:有些历史进展我会讲得比另一些更详细,因为它们有助于引入后面需要用到的概念。即便如此,这也不是一件容易的事:没有哪个自然科学分支像量子场论这样抽象,这样远离人们对自然如何运作的日常观念。这也不可能是一个只讲某一个国家的物理学的故事。我们将会看到,量子场论诞生于欧洲,主要在德国和英国,二次世界大战后由日本和美国的新一代理论物理学家复兴。上一个时代活动的中心大体在美国,但来自欧洲和亚洲许多国家的物理学家都做出了重要贡献。虽然物理学中确实存在可以辨认的国家风格,但它们在这段历史中只起了很小的作用。决定物理学研究方向的不是国家,社会或文化背景,而是学科自身的逻辑,是按自然本来面貌去认识自然的需求。

本文不是量子场论史,只是"历史笔记"。要揭示过去半个世纪理论物理学发生的故事,专业科学史家还需要做大量工作。如果本文能够激励有人去承担这一早该完成的任务,我将倍感欣慰。

史前时期:场论和量子理论

量子场论之前有场论和量子理论。一些才华卓越的科学史家已经多次讲述了这些早期理论的历史,我下面只是提醒读者一些要点。

第一个成功的经典场论基于牛顿的重力理论。牛顿本人没有提过场,对他来说重力是宇宙中每一对物质粒子之间的作用力,"正比于它们所含固体物质的数量,向四面八方传播到极远的距离,总是按距离平方的反比减弱"。是十八世纪的数学物理学家发现,用重力场来替代这种超距相互作用更为方便:在空间每一点定义一个数值(严格说是一个向量),该数值决定作用在该点处任何粒子上的重力,并且包含了所有其他地方物质粒子的贡献。不过这只是一种数学上的说法,对牛顿重力理论而言,无论说地球吸引月球,还是说地球对总的重力场有所贡献,而月球所在位置处的重力场作用于月球从而维持其轨道运行,两种说法并没有区别。

场的概念真正开始具有自身的存在,源于十九世纪电磁理论的发展。事实上"场"这个词正是由迈克尔·法拉第于1849年引入物理学的。虽然仍然可以像库仑和安培那样把电磁力看作电荷之间或电流之间的直接作用力,但是引入由宇宙中所有电荷和电流产生,并反过来作用于每个电荷和电流的电场和磁场,则自然得多。尤其是当詹姆斯·克拉克·麦克斯韦证明电磁波以有限的速度(即光速)传播之后,这种诠释几乎无法避免。此刻作用于我眼睛视网膜上电子的力并不是此刻太阳原子内的电流产生的,而是由这些电流在大约八分钟前发出的一列电磁波(光波)产生的,这列波现在才刚刚到达我的眼睛。(我们后面会看到,理查德·费曼和约翰·威勒在1945年曾试图在超距作用的框架内解释电磁力的这种滞后,但这个想法没有流行起来,他们两人都转向了更有前景的工作。)

麦克斯韦本人还没有接受场是我们宇宙中独立存在之物的现代观念,而是(至少起初)把电场和磁场想像为某种承载介质--以太--中的扰动,就像橡皮膜中的张力一样。这种图像有一个明显的实验推论:观测到的电磁波速度应该取决于观测者相对于以太的运动速度,就像橡皮膜中弹性波的观测速度取决于观测者相对于橡皮膜的速度一样。麦克斯韦认为他的场方程实际上只在相对于以太静止的这个特殊参考系中成立。以太作为电磁现象载体的观念一直延续到二十世纪,尽管实验家们一次又一次未能探测到地球绕太阳运行时穿过以太所应产生的任何效应。

虽然以太问题没有解决,但场作为独立实体的观念在物理学家心目中越来越强。事实上,认为物质本身最终也是电场和磁场的一种体现变得颇为流行,1900年到1905年间由约瑟夫·约翰·汤姆逊,威廉·维恩,亚伯拉罕,约瑟夫·拉莫尔,亨得里克·安顿·洛伦兹以及亨利·庞加莱发展的电子理论探索了这一主题。最终,阿尔伯特·爱因斯坦于1905年提出的狭义相对论提供了让电磁理论彻底摆脱以太的关键要素。新的规则给出了事件的观测时空坐标如何随观测者速度的改变而改变。这些规则经过特别设计,使得无论观测者速度如何,观测到的光速都恰好等于用麦克斯韦理论计算出的速度。爱因斯坦的理论打破了探测相对以太运动效应的一切希望,虽然以太在理论家脑海中还徘徊了一阵,但最终还是消失殆尽,只留下电磁场本身--只有膜上的张力,却没有膜。

恰巧正是对电磁现象的研究,导致了量子理论和狭义相对论的兴起。到19世纪末,人们已经清楚,经典电磁理论和统计力学无法描述加热的不透明物体在各个波长上辐射出的电磁辐射能量。问题在于经典理论对极高频率预测的能量太多,多到所有波长每秒辐射的总能量竟成为无穷大!1900年12月15日,马克斯·卡尔·恩斯特·路德维希·普朗克在向德国物理学会宣读的一篇论文中提出了一个解决办法。我们很值得花些时间仔细了解一下普朗克的提议,不只是因为它直接导致了现代量子力学,而且因为理解这个想法有助于认识量子场论究竟是什么。

普朗克设想加热物体内的电子可以以所有可能频率前后振动,就像一把拥有大量弦的小提琴,弦具有所有可能长度。当电子在某频率下振动向电磁场释放能量或从电磁场接收能量时,发生给定频率下的辐射发射和吸收。不透明体在任何频率下每秒辐射的能量因而取决于电子在此特定频率下振动的平均能量。

正是在计算这一平均能量时,普朗克做出了他革命性的设想。他提出任何振动模式的能量都是量子化的--也就是说,不可能像经典力学中那样让振动具有任意大小的能量,而是只允许某些分立的能量值。具体来说,普朗克假定对给定的振动模式,任何两个相邻容许能量之间的差值总是相同的,等于该模式的振动频率乘以一个新的自然常量,即后来所说的普朗克常量。由此可知,极高频率振动模式的各容许状态在能量上相距极远,因而要激发这样一个模式就需要极大的能量。但是统计力学的规则告诉我们,在任何一个振动模式中找到巨大能量的概率随能量增加而迅速降低。所以极高频振动的平均能量一定随频率增加而迅速下降,加热物体所辐射的能量也一定随辐射频率增加而迅速下降,这样就避免了总辐射能量无穷大的灾难。
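编注(非原文内容):普朗克的假设以及由它得出的单个振动模式的平均能量,可以粗略写成下面的式子:

$$E_n = n\,h\nu \quad (n=0,1,2,\dots), \qquad \langle E\rangle = \frac{h\nu}{e^{h\nu/kT}-1}$$

当 hν 远大于 kT 时,平均能量按指数方式趋于零,这正是正文所说高频振动的平均能量迅速下降,从而避免总辐射能量发散的原因。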

普朗克并没有准备把能量量子化的观点用于辐射本身。(乔治·伽莫夫这样描述普朗克的观点:"辐射就像黄油,只能以四分之一磅一包的包装从食品店购买或退回食品店,尽管黄油本身可以有任意数量。")是爱因斯坦于1905年提出辐射以一束束能量的形式出现,这些能量束后来被称为光子,每个光子的能量正比于其频率。

1913年尼耳斯·玻尔在他的原子光谱理论中把普朗克和爱因斯坦的观点结合了起来。就像普朗克工作中假想的振动模式一样,玻尔理论中的原子被假定只能处于一些分立的定态上,每一定态具有确定的能量,但这些能量一般不是等间隔的。当一个激发态原子跃迁到能量较低的定态时,它发射出一个具有确定能量的光子,其能量等于原子初态和末态的能量差。每一个确定的光子能量对应一个确定的频率,当我们观看荧光灯或恒星光谱中的亮线时,我们看到的正是这些频率的生动展示。

从普朗克到玻尔以及其后十年间发展起来的早期量子理论,基本上是富于灵感的猜测和特设的数学处理,其正当性来自它在解释原子和辐射行为上的辉煌成功。量子理论最终成为被称为量子力学的连贯科学体系,是由于1925年到1926年期间路易·德布罗意,沃纳·海森堡,马克斯·玻恩,帕斯库尔·约尔当,沃尔夫冈·泡利,保罗·阿德里安·毛里斯·狄拉克,以及埃尔温·薛定谔所做出的贡献。有了这套形式体系,理论家可以重新回到确定物质系统容许能级的问题,并重新得出玻尔首先发现的那些成功结果。但是尽管起源于热辐射理论,量子力学当时仍然只能系统地处理物质粒子--原子中的电子,而不能处理辐射本身。

量子场论的诞生 

最先把1925年到1926年间发展出的新量子力学应用到场而不是粒子上的,正是量子力学的奠基性论文之一。1926年,玻恩,海森堡和约尔当把注意力转向没有任何电荷和电流的真空中的电磁场。他们的工作最好通过与普朗克1900年的热辐射理论作类比来理解。

前面说过,普朗克对受热物体内电子的运动采用了一种理想化图像,把电子替换为无限多个简单振动模式,就像一把拥有大量琴弦的小提琴,琴弦具有所有可能的长度。他接着提出,任何一个振动模式的相邻容许能量之差等于该模式的振动频率乘以普朗克常量。1925年到1926年间发展出的新量子力学的成果之一,就是证实了普朗克的猜想:它证明像一根小提琴琴弦那样的简单振子,其能量确实是量子化的,而且与普朗克猜想的方式完全一致。推导这一结果所用到的简单振动的关键特征是:使振子产生任一给定位移所需的能量正比于位移的平方--就像把小提琴琴弦从平衡位置拉得越远,再拉动它就越困难。

电磁场在本质上也是一样--场的任何一个振动模式的能量正比于场强的平方,从某种意义上说,正比于它偏离无场真空这一常态的"位移"的平方。这样,把描述物质振子的同样数学方法应用到电磁场上,玻恩等人证明了电磁场每个振动模式的能量都是量子化的:容许的能量值之间相差一个基本能量单位,其大小等于该模式的振动频率乘以普朗克常量。这个结果的物理解释显而易见。能量最低的状态是没有辐射的真空,其能量可以取为零。次低的状态其能量等于频率乘以普朗克常量,可以诠释为具有该能量的单个光子的状态。再往上的状态能量是它的两倍,因而可以诠释为含有两个同样能量的光子,依此类推。这样,把量子力学应用到电磁场,终于把爱因斯坦的光子观念置于坚实的数学基础之上。

玻恩,海森堡和约尔当只处理了真空中的电磁场,所以虽然他们的工作很有启发性,却没有给出重要的定量预测。最先"实际"应用量子场论的,是保罗·阿德里安·毛里斯·狄拉克1927年发表的一篇论文。那时狄拉克在攻克一个老问题:如何计算处于激发态的原子释放电磁辐射并跃迁到较低能态的速率。难点不在于得出答案--玻恩和约尔当以及狄拉克本人早已用特设的方式推导出了正确的公式。问题在于把这个猜出来的公式理解为量子力学的数学推论。这个问题至关重要,因为自发辐射过程是一个确实产生了"粒子"的过程。辐射之前系统包含一个处于激发态的原子,而辐射之后系统包含一个处于较低能态的原子,外加一个光子。如果量子力学不能处理产生和湮灭过程,它就不可能是一个无所不包的物理理论。

这类过程的量子力学理论最好通过回到场与振子的类比来理解。在不存在与原子的相互作用时,电磁场就像一组彼此完全隔绝的小提琴琴弦;无论给某个振动模式多大的能量,或者说无论某个特定频率上有多少光子,它都将永远保持不变。同样,如果原子不与辐射相互作用,它将永远停留在最初所处的状态。但是原子确实会与辐射相互作用,因为电子带有电荷。所以真正恰当的类比是一组被小提琴共鸣板之类的东西弱耦合在一起的琴弦。音乐家都知道当其中一个振子被激发后会发生什么--它的能量会逐渐传给其他振动模式,直到它们全部被激发。在量子力学中这不能逐渐发生(因为能量是量子化的),取而代之的是,原先储存在原子中的能量出现在电磁场中的几率逐渐增加--换句话说,产生一个光子的几率逐渐增加。

狄拉克对自发辐射的成功处理证实了量子力学的普适性。然而人们仍然设想世界由两种非常不同的成分组成--粒子和场,两者虽然都用量子力学来描述,但是方式非常不同。像电子和质子这样的物质粒子被认为是永恒的;要描述一个系统的物理状态,人们需要给出在给定空间区域和速度范围内发现每个粒子的几率。另一方面,光子则被认为只是底层实体--量子化电磁场--的一种表现,可以自由地产生和湮灭。

很快人们就从这种令人不快的二元论中找到了出路,走向真正统一的自然观。关键的几步是由1928年约尔当和尤金·维格纳的一篇论文,以及1929年到1930年间海森堡和泡利的两篇长论文走出的(恩里科·费米也于1929年提出了一种略为不同的方法)。他们指出,物质粒子可以理解为各种场的量子,就如光子是电磁场的量子一样。每一种基本粒子都对应一个场。按照这种观念,宇宙的居民是一系列场--电子场,质子场,电磁场--而粒子则被降级为只是一种副现象。这种观点的精髓流传至今,构成量子场论的中心教义:基本的实在是一系列服从狭义相对论和量子力学原理的场;其他一切都是这些场的量子动力学的结果。

这种用场论处理物质的方法有一个直接推论:只要有足够的能量,就应该可以产生物质粒子,就像原子失去能量时产生光子一样。1932年费米利用量子场论的这一特点构想出原子核β衰变过程的理论。自从贝克勒尔于1896年发现含铀盐的晶体会使感光底板感光以来,人们就知道原子核会发生多种放射性衰变。在其中一种被称为β衰变的模式中,原子核释放出一个电子,并改变自身的化学属性。整个二十世纪20年代人们都认为原子核由质子和电子组成,所以设想偶尔有一个电子跑出来并没有什么大的矛盾。然而1931年保罗·埃伦费斯特和尤利乌斯·罗伯特·奥本海默提出了一个令人信服(虽然是间接)的论证,说明原子核事实上并不包含电子;1932年海森堡转而提出原子核由质子和新发现的中性粒子--中子--组成。于是谜团就在于原子核发生β衰变时电子是从哪里来的。费米给出的答案是,电子来自与激发态原子辐射衰变中的光子差不多同样的地方--它是在衰变的那一刻产生的,通过电子场与质子场,中子场以及一种假想粒子(中微子)的场之间的相互作用。

1930年之后,量子场论要取得现代形式还有一个问题有待解决。狄拉克1928年在构想单个电子的理论(还不是场论)时发现,他的方程存在对应于电子处于负能量状态的解,也就是说能量低于真空的零能量。为了解释为什么普通电子不会落入这些负能量态,他于1930年提出几乎所有这些态都已经被填满。那些没有被填满的态,或者说负能量电子海洋中的"洞",将表现得像具有正能量的粒子,与普通电子类似,但电性相反:带正电而不是负电。狄拉克最初认为这些"反粒子"就是质子,但1932年在宇宙射线中发现的正电子揭示了它们作为一种新粒子的真实本质。

狄拉克的反物质理论本身就容许某种粒子的产生和湮灭,即使不引入量子场论的观念。只要有足够的能量,一个负能量电子就可以被提升到正能量态,相应于产生一个正电子(负能量海洋中的洞)和一个普通电子。当然相反的湮灭过程也会发生。狄拉克本人一直抗拒用量子场论来描述除光子之外任何粒子的想法。然而1934年温德尔·福瑞和奥本海默以及泡利和维克多·韦斯考普夫发表的两篇论文表明,量子场论可以自然地结合反物质观念,而不需要引入观测不到的负能量粒子,并且能令人满意地描述粒子和反粒子的产生和湮灭。对大多数理论家来说问题就此解决,现今人们把粒子和反粒子视为各种量子场地位同等的量子。

重要的是,量子场论不只对粒子,而且对粒子之间的力也给出了新的认识。我们可以不再认为两个带电粒子是通过各自产生经典电磁场并作用于对方而发生远距离相互作用,而是认为它们通过交换不断从一个粒子传到另一个粒子的光子发生相互作用。同样,其他种类的力也可以通过交换其他种类的粒子来产生。这些被交换的粒子称为虚粒子,在交换过程中不能被直接观测到,因为若把它们作为真实粒子产生出来(比如一个自由电子转变为一个光子加一个电子)将违背能量守恒定律。然而量子力学的不确定原理指出,只存在很短时间的系统其能量必然具有相应的高度不确定性,所以这些虚粒子可以在物理过程的中间态中产生,但必须很快被重新吸收。

基于这种思路,人们可以推断出通过交换某种粒子产生的力的作用范围(超出此距离作用力将迅速减弱)与被交换粒子的质量成反比。光子质量为零,因而产生作用范围无穷远的力,即大家熟悉的平方反比库仑力。原子核内质子和中子之间的作用力已知其范围略小于一厘米的万亿分之一,因此汤川秀树在1936年得以预言存在一种全新的粒子--介子,其质量是电子质量的几百倍。在计算这些力时,人们假设某一点的能量密度不只是各个场的平方之和(像处理非耦合的简单机械振子那样),而且还包含该点处不同场(及其变化率)取值的乘积。这些多场相互作用正是需要通过理论和实验努力去寻找的未知之物。站在量子场论的角度,关于物质由哪些粒子构成,以及它们之间有哪些作用力的问题都只是达到目的的手段--真正的问题是确定基本的量子场是什么,以及它们之间的相互作用是什么。
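编注(非原文内容):上一段的推理可以用一个数量级估算来示意。由不确定关系和作用范围 R 与交换粒子质量 m 的关系,取核力作用范围 R 为 10⁻¹³ 厘米(1 fm)的量级,并利用 ħc ≈ 197 MeV·fm:

$$\Delta E\,\Delta t \gtrsim \hbar,\qquad R\approx\frac{\hbar}{mc},\qquad mc^{2}\approx\frac{\hbar c}{R}\approx 200\ \mathrm{MeV}$$

200 MeV 约为电子静止能量 0.511 MeV 的四百倍,与正文中"几百倍电子质量"的说法量级一致。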

无穷大问题

我前面描述的量子场论早期阶段看上去就像节节胜利。这其实某种程度上有失偏颇,因为理论几乎从初期开始就存在自洽问题。

问题首先出现在奥本海默1930年发表的一篇论文中,他试图计算电子与量子化电磁场的相互作用对原子中电子能量的影响。正如两个电子之间交换虚光子会在它们之间产生相互作用能一样,在量子场论中,同一个电子发射并重新吸收虚光子也会产生一种自能,其大小可能依赖于电子所占据的原子轨道,从而可能表现为原子能级的可观测移动。遗憾的是,奥本海默发现电磁场量子理论所预测的这种能级移动是无穷大。

这里之所以出现无穷大,是因为当原子中的一个电子短暂地变成一个光子和一个电子时,这两个粒子可以用无穷多种方式分享初始电子的动量。电子自能包含对动量所有分配方式的求和,而由于动量大小没有上限,这个和就成了无穷大。这并不是必然会发生的,毕竟有许多数学级数的例子,把无限多项加起来可以得到有限的结果(比如 1+1/2+1/4+1/8+…)。然而奥本海默发现电子自能的行为更像级数 1+1+1+1+…,很难把它解释为一个有限的量。

几年后情况有所改善:韦斯考普夫把下述过程的效应也包括进来,即一个虚的电子,正电子和光子从真空中产生,随后正电子和光子与初始电子一起湮灭,终态只剩下新产生的电子作为真实粒子。这一过程对自能的贡献消掉了奥本海默发现的原始无穷大中最严重的部分,但剩下的自能仍是对虚动量的求和,其行为类似于级数 1+1/2+1/3+1/4+…,仍然不能解释为一个有限的量。

在其他问题中也发现了类似的无穷大,比如外加电场引起的"真空极化",以及电子在原子电场中的散射。(在这幅令人沮丧的图景中,为数不多的亮点之一来自对伴随极低动量光子的无穷大的处理:1937年费利克斯·布洛赫和阿诺德·诺德斯克证明,这些红外无穷大在碰撞总速率中会全部相消。)当然,如果用一个理论去计算某个可观测量而得到的答案是无穷大,那么结论要么是计算出了错,要么是这个理论本身不行。整个二十世纪30年代,公认的看法是量子场论其实不行:它或许可以在这里那里作为权宜之计使用,但必须加入某种全新的东西它才能说得通。

事实上,无穷大问题为彻底修改量子场论提供了最大的推动力。下面列举二十世纪30和40年代人们尝试过的一些想法:

  1. 1938年海森堡提出存在一个基本能量单位,量子场论只在能量远小于这个基本单位时才适用。这与另外两个基本常量--普朗克常量和光速--类似:当能量与频率之比小到接近普朗克常量时,量子力学开始起作用;当速度大到接近光速时,就需要狭义相对论。同样,海森堡设想,当能量超过这个基本单位时可能需要某种全新的物理理论,该理论中的某种机制可以消去高能虚粒子的贡献,从而避免无穷大问题。(二十世纪30年代,人们观测到高能宇宙射线产生的带电粒子簇射与量子电动力学的预期不符,这曾支持了海森堡的想法。后来人们意识到这种偏差是由于产生了新粒子--介子,而不是量子场论的失败。)

  2. 约翰·阿奇博尔德·惠勒于1937年,海森堡于1943年,分别独立提出一种实证主义的方法,想用另一类理论取代量子场论。这类理论有时被称为"S矩阵理论",只涉及可以直接测量的量。他们的理由是,实验实际上并不能让我们追踪原子中的电子或碰撞过程中究竟发生了什么;真正能测量的只有原子这类束缚系统的能量和少数其他性质,以及各种碰撞过程的概率。这些量遵循某些非常普遍的原理,比如实在性,概率守恒,对能量的光滑依赖,各种守恒律等等,他们设想正是这些普遍原理将取代量子场论的假设。

  3. 狄拉克1942年建议把量子力学扩展到包括负概率态,这些态不能作为任何物理过程的初态或末态出现,但必须包括在这些过程的中间态之中。这样就可以在对中间态分配系统动量的各种方式求和时引入负号,从而得到有限的答案,就像 1-1/2+1/3-1/4+… 是一个有限的量(等于2的自然对数),而 1+1/2+1/3+1/4+… 则是无穷大。

  4. 前面说过,理查德·费曼和约翰·威勒在1945年考虑过完全放弃场论的可能性,用超距的直接作用取代由场传递的粒子间相互作用。

这些想法有的保留至今,成为理论物理常规工具的一部分。特别是纯S矩阵理论的想法在只涉及可观测量的所谓"色散关系"的发展中得以兴盛。负概率态如今也是一种方便的数学工具,对处理虚光子的极化尤其有用。然而这些想法都没有成为解决无穷大问题的关键。

结果证明,解决办法远没有大多数理论家预期的那样具有革命性。前面说过,奥本海默,沃勒以及韦斯考普夫发现,电子的能量从"虚"光子的发射和重新吸收中得到无穷大的贡献。这种无穷大自能不仅在电子于原子内沿轨道运动时出现,在它静止于真空中时也同样出现。而狭义相对论告诉我们,静止粒子的能量与其质量之间有著名的关系E = mc²。这样,物理数据表中查到的电子质量就不可能只是"裸"质量--即出现在电子场方程中的那个量,而必须理解为裸质量"加上"电子与其自身虚光子云相互作用产生的无穷大"自质量"。这就提示裸质量本身也可能是无穷大,而且其无穷大恰好与自质量中的无穷大相消,剩下一个有限的总质量,即实际观测到的质量。当然,设想像裸质量这种出现在基本场方程中的量是无穷大令人很不舒服;但是毕竟我们永远不可能关掉电子的虚光子云去测量裸质量,所以并不会产生矛盾。同样的考虑也适用于其他物理参数,比如电子的电荷。那么有没有可能,裸质量和裸电荷中的无穷大不仅在我们计算总质量和总电荷时,而且在所有其他计算中,都能消掉量子场论中出现的无穷大呢?这种通过把无穷大吸收进物理参数的重新定义来消除无穷大的方法,被称为重整化。
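编注(非原文内容):上述关系可以粗略地写成

$$m_{\text{观测}} = m_{\text{裸}} + \delta m,\qquad e_{\text{观测}} = e_{\text{裸}} + \delta e$$

其中 δm,δe 是形式上无穷大的自能和自电荷修正;把裸量取得恰好抵消这些无穷大,再把所有其他计算结果都用观测到的质量和电荷来表示,这就是重整化的基本思路。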

重整化方法是韦斯考普夫于1936年提出的,二十世纪40年代中期克莱默斯也提出同样设想。然而人们不能确定该方法是否有效。要通过重新定义物理参数消除无穷大,无穷大必须以特定形式出现,来校正这些观测参数。比如为了将奥本海默发现的原子能级无穷大差异吸收入电子质量的重新定义,自能量无穷大部分对所有能级应该一致。二十世纪30年代和40年代的数学方法不足以找出所有可能计算中所有的无穷大来检查它们是否可以通过重整化来消除。更重要的是没有急迫的理由这样做 -- 没有什么实验数据强迫理论物理学家解决这些问题。当然物理学家在1939年到1945年间也有其他事忙。

在谢尔特岛的复兴

1947年6月1日,一个关于量子力学基础的为期四天的会议在谢尔特岛开幕,这是纽约州长岛末端附近的一个小岛。与会者中既有新一代的美国年轻物理学家--他们的科学生涯始于战争期间在洛斯阿拉莫斯和麻省理工学院辐射实验室的工作,也有活跃于二十世纪30年代的老一辈物理学家。年轻一代中有威利斯·兰姆,他是一位实验物理学家,当时在伊西多·艾萨克·拉比于哥伦比亚大学建立的一个非常杰出的物理学家团队中工作。兰姆宣布了一个漂亮的实验结果:他和学生雷瑟福首次测量了氢原子中电子自能的一种效应。

当时已有的氢原子理论最早由尼耳斯·玻尔于1913年提出,随后在1925年到1926年间由量子力学置于可靠的数学基础之上,最后由海森堡和约尔当,达尔文以及狄拉克加以修正,把相对论效应和电子自旋包括了进来。在该理论的最终形式中,特别是按照狄拉克的表述,原子的某些成对激发态应该具有完全相等的能量。(这些成对的态对应于电子自旋与其绕核运动的角动量合成出确定总角动量的两种不同方式。)但是该理论忽略了电子与其自身电磁场相互作用的一切效应,也就是奥本海默曾试图计算却发现无穷大的那些效应。如果这种效应确实存在,它们应该使这些成对状态的能量发生移动,从而不再完全相等。

这正是兰姆和雷瑟福所发现的。利用战争期间雷达研究中发展出来的处理微波辐射的新技术,他们证明按照狄拉克1928年理论本应相等的氢原子头两个激发态(2S₁/₂和2P₁/₂)的能量,实际上相差约百万分之0.4。这就是现在所说的兰姆移位。

部分由于兰姆结果的激励,谢尔特岛会议的参与者对底层理论展开了激烈讨论。我当时不在谢尔特岛(刚刚上高中),无法追溯那段时间发展出来的量子场论各种重新表述的历史脉络。如果有科学史家能收集谢尔特岛及其后几次会议参与者的回忆,研读那段时间写下的论文,整理出一个连贯的叙述,那将是极有价值的。我这里只简述这一时期的几项成果。

汉斯·贝特率先计算了兰姆移位,我相信是在从谢尔特岛回来的火车上。使用质量重整化来消除无穷大,他得到与兰姆宣布的值非常接近的结果。然而正如贝特自己也承认,这只是近似计算,包含与狭义相对论不完全吻合的近似。

二十世纪40年代后期,至少产生了三种对量子场论的一般性重新表述,它们完全满足相对论的要求,而且足够简洁优美,可以系统地处理无穷大。其中一种方法其实在谢尔特岛会议之前就已经由朝永振一郎和他的日本同事做出,但我相信在1947年夏天他们的工作还不为美国所知。另外两种方法由谢尔特岛会议的参加者朱利安·施温格和理查德·费曼贡献。

费曼的工作给出了一套图像规则:对于描绘动量和能量如何流经碰撞过程各个中间态的每一幅图,人们都可以赋予一个确定的数值;过程发生的几率由这些数值之和的平方给出。费曼规则远不只是一种方便的计算方法,因为它们体现了量子场论的一个基本特征--粒子与反粒子的对称性。费曼图中的每条线既可以代表一个粒子在线的一端产生,在另一端湮灭,也可以代表一个反粒子沿相反方向的过程。正是这种对粒子和反粒子的同等处理,保证了用费曼图计算出的量在计算的每个阶段都与观测者的速度无关,满足狭义相对论的要求。韦斯考普夫很久以前就证明,包含反粒子的中间态对降低无穷大的严重程度起着关键作用,可以把类似 1+2+3+… 的灾难降低到类似 1+1/2+1/3+… 这种较易处理的程度。费曼规则自动保证了最严重的无穷大相消,剩下的较温和的无穷大则可以用重整化方法去掉。

1947年年底之前,施温格已经用自己的方法完成了据我所知对电子虚光子云另一效应--电子反常磁矩--的首次计算。狄拉克1928年理论的成就之一是预言了电子磁矩,这个数值刻画电子与磁场相互作用的强度,以及它自身磁场的强度。然而1947年哥伦比亚大学的实验表明,电子磁矩实际上比狄拉克值略大,大出约千分之1.15到1.21。施温格把虚光子的无穷大效应吸收进电子电荷的重整化之中,计算出一个有限的磁矩,恰好比狄拉克值大千分之1.16!
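编注(非原文内容):施温格得到的这一修正就是著名的 α/2π 项,其中 α 是精细结构常数:

$$a_e \equiv \frac{g-2}{2} = \frac{\alpha}{2\pi} \approx \frac{1}{2\pi\times 137.04} \approx 0.00116$$

即约千分之1.16,与正文中的数字一致。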

当然1947年以来实验和理论方面确定兰姆移位和电子反常磁矩都有了巨大进步。比如当今电子反常磁矩的实验值比狄拉克值大千分之1.15965241,而理论计算电子反常磁矩为千分之1.15965234, 误差分别为千分之0.00000020和千分之0.00000031。理论和实验相近程度只能说令人惊奇。

最后,弗里曼·戴森于1949年证明,施温格和朝永振一郎的形式体系可以导出与费曼所发现的相同的图形规则。戴森还对一般费曼图中的无穷大做了分析,并勾勒出一个证明,说明这些无穷大总是恰好属于可以通过重整化去除的那一类。作为二十世纪50年代中期的一名研究生,我正是通过阅读戴森那些明晰流畅的论文学会量子场论新方法的。

我要强调,施温格,朝永振一郎,费曼和戴森的理论并不是一个真正全新的物理理论。它就是海森堡,泡利,费米,奥本海默,福瑞和韦斯考普夫的旧量子场论,只不过被改写成便于计算得多的形式,并且对质量和电荷这类物理参数采用了更加符合实际的定义。在人们试图寻找替代品的十五年之后,旧量子场论依然保持着生命力,这确实令人印象深刻。

这就引出一个有趣的历史问题。1947到1949年那些辉煌日子里所计算的全部效应,其实在1934年之后的任何时候即使不能精确计算,至少也可以估算。确实,没有重整化的想法,答案在形式上会是无穷大,但至少可以猜出兰姆移位和电子磁矩修正这类量的数量级。(像 1+1/2+1/3+1/4+… 这样的级数增长得很慢,加到一百万项之后其和仍然小于14.4。)然而不但没有人这样做,多数理论家似乎还相信这些量就是零!实际上,后来被称为兰姆移位的现象的某些证据早在1938年就已经被发现,但据我所知,没有哪位理论家去核对所报告的能级分裂的数量级是否与量子场论的预期大致相符。
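编注(非原文内容):"一百万项之后仍小于14.4"可以用调和级数的渐近公式粗略验证:

$$\sum_{n=1}^{N}\frac{1}{n}\approx \ln N+\gamma,\qquad \ln 10^{6}+\gamma\approx 13.816+0.577\approx 14.39<14.4$$

其中 γ ≈ 0.577 是欧拉常数。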

为什么量子场论没有被更认真地对待?原因之一是狄拉克1928年理论的巨大威望,它在不包含自能效应的情况下就如此出色地解释了氢光谱的精细结构。更重要的是,无穷大的出现在许多物理学家心目中彻底败坏了量子场论的名声。但我认为最深层的原因是一种心理上的困难,科学史家对此可能还没有给予足够的重视。理论家在书桌上摆弄的方程,与原子光谱和碰撞过程的具体现实之间,看上去存在巨大的距离。要跨过这道鸿沟,意识到思维和数学的产物可能真的与现实世界有关,是需要一定勇气的。当然,当一个科学分支发展顺利时,理论与实验之间不断相互砥砺,人们就会习惯于理论确实在描述某种真实东西的想法。而在没有实验数据压力的情况下,要产生这种认识就困难得多。兰姆移位的发现所成就的伟大之处,与其说在于它迫使我们修改物理理论,不如说在于它迫使我们认真对待这些理论。

强弱相互作用

1949年之后的几年里,人们对量子场论的热情高涨。许多理论家期待它很快就能让人们理解所有的微观现象,而不只是光子,电子和正电子的行为。然而这种信心很快再度崩溃:在物理学的"证券交易所"里,量子场论的股票暴跌,由此开始了第二次大萧条,持续了几乎二十年。

问题部分在于重整化方法的适用范围有限。要想把所有无穷大都吸收进质量和电荷这类物理参数的重新定义之中,这些无穷大就必须只以少数几种方式出现,即作为对质量,电荷等的修正,而不能以别的方式出现。戴森的工作表明只有一小类量子场论满足这一要求,它们被称为可重整化理论。最简单的光子,电子和正电子的理论(即量子电动力学)是可重整化的,但多数理论不是。

不幸的是,有一类重要的物理现象显然不能用可重整化理论来描述,这就是弱相互作用,它导致前面"量子场论的诞生"一节中介绍过的原子核β衰变。费米在1932年建立的弱相互作用理论经过一些修正后,足以在最低阶近似下描述所有已知的弱相互作用现象,也就是在计算跃迁率时只包括一个最简单的费曼图。然而一旦把该理论推进到下一阶近似,就会出现无法通过重新定义物理参数来消除的无穷大。

另一个主要问题在于,1947年到1949年间使用的近似方法只有有限的适用范围。物理过程被表示为对无穷多个费曼图的求和,每一个费曼图代表一个特定的中间态序列,包含一定数目的各类粒子。对每一个费曼图赋予一个数值,过程的速率由这些数值之和的平方给出。在量子电动力学中,对应于复杂费曼图的数值非常小:每多一条光子线就多一个被称为精细结构常数的因子,这是一个很小的数,约为1/137。费米弱相互作用理论中相应的因子更小--在典型的基本粒子物理能量下约为10⁻⁵到10⁻⁷。正是由于复杂费曼图的贡献迅速递减,量子电动力学的计算才能达到如此惊人的精度。
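编注(非原文内容):这里的精细结构常数可以写成

$$\alpha=\frac{e^{2}}{4\pi\varepsilon_{0}\hbar c}\approx\frac{1}{137.036}$$

费曼图每增加一条光子线,贡献大致就多乘上一个这种量级的小因子,这就是量子电动力学的微扰计算收敛得很快的原因。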

然而除了电磁和弱相互作用之外,基本粒子物理中还有另一类相互作用,即强相互作用。正是强相互作用把原子核结合在一起,抵消原子核内质子之间的静电斥力。对于强相互作用,相当于精细结构常数的那个因子大约是1的量级,而不是1/137,因而复杂费曼图与简单费曼图同样重要。(当然,这也正是这种相互作用"强"的原因。)这样,虽然人们不断尝试用量子场论来计算核力,却从来没有得到令人信服的定量结果。1947年以来,物理学家先是在宇宙射线中,随后在加速器实验室里发现了新的强相互作用粒子--介子和超子;人们起初满怀热情地用量子场论来研究它们之间的强相互作用,但同样没有取得定量上的成功。困难并不在于设想一个或许能够解释强相互作用的可重整化量子场论--而在于即便设想出了这样的理论,也没有办法用它得出可靠的定量预测,从而检验它是否正确。

弱相互作用场论的不可重整化,加上强相互作用场论的无法计算,导致二十世纪50年代人们对量子场论普遍感到失望。一些理论物理学家转向研究对称性原理和守恒律,这些原理和定律不需要详细的动力学计算就可以直接应用于物理现象。另一些物理学家则捡起惠勒和海森堡的旧S矩阵理论,试图发展一种只涉及可观测量的强相互作用物理。这两种方法都不时以量子场论最基本的原理作为指导,但都不把它当作定量计算的基础。

如今,由于过去十年众多物理学家的努力,量子场论重现了二十世纪40年代末的辉煌,再次成为理解基本粒子过程的主要手段。新的弱相互作用和强相互作用量子场论被称为规范理论,它们不存在不可重整化和无法计算的老问题,而且在一定程度上也更加准确。我们现在仍处于这场复兴之中,我在这里不去追述它的历史,只概括一下新理论是如何克服量子场论的老问题的。

新理论的核心,是用当年成功描述电磁相互作用的旧量子场论几乎同样的方式来描述弱相互作用和强相互作用。正如带电粒子之间的电磁相互作用是通过交换光子产生的一样,弱相互作用通过交换被称为中间玻色子的粒子产生,而强相互作用通过交换被称为胶子的粒子产生。所有这些粒子--光子,中间玻色子以及胶子--都具有相同的自旋,其相互作用都遵循被称为规范对称性的强有力的对称原理。(规范对称性是指,当各个场发生某些随位置和时间变化的改变时,基本方程的形式保持不变。)由于这些理论与量子电动力学如此相似,它们也具有量子电动力学的基本特性:可重整化。事实上,弱相互作用与电磁相互作用不只是类似--新理论把这两种相互作用统一了起来,把光子场和中间玻色子场处理为同一个场家族的成员。

中间玻色子不像光子那样没有质量,其质量可能是质子或中子质量的70到80倍。这个巨大的质量并不是因为光子场和中间玻色子场有什么本质差别,而是因为场方程的解破坏了底层理论的对称性。中间玻色子家族(光子是其中一员)包含一个较重的带电粒子及其反粒子,称为W⁺和W⁻,以及一个更重的中性粒子,称为Z⁰。交换W粒子产生人们熟悉的弱相互作用,比如原子核的β衰变;而交换Z⁰则会产生一种新型的弱相互作用,其中参与作用的粒子不发生电荷改变。这种"中性流"相互作用于1973年被发现,其性质与理论预测完全一致。中间玻色子太重,现有的加速器还无法产生,但很可能不久以后就可以通过质子和反质子的碰撞把它们制造出来。

相反,传递强相互作用的胶子质量可能为零。这种零质量胶子的理论有一个特别的性质--渐近自由,即在极高能量或极短距离下胶子相互作用的强度逐渐减弱。这样一来,现在就有可能应用量子场论来计算高能下的强相互作用过程,特别是解释诸如高能电子-核子散射中观测到的一些现象。

正如胶子相互作用在高能量和短距离下变弱一样,它们在低能量和长距离下会变强。因此人们普遍相信(虽然尚未证明),携带"色荷"的粒子--胶子与色荷相互作用,正如光子与电荷相互作用--不能作为单独的自由粒子被产生出来。有色粒子包括胶子本身,这可能正是胶子从未作为真实粒子被观测到的原因。人们相信有色粒子也包括夸克,悉尼·德威尔在代达罗斯(译注2)同一期上发表的文章对此进行了讨论。观测到的强相互作用粒子,比如中子,质子和介子,被认为是由夸克,反夸克和胶子组成的复合态,不带净色荷。这幅图景代表了场论对物质粒子论的近乎完全的胜利:基本实体是夸克场和胶子场,它们并不对应任何可观测的粒子;而观测到的强相互作用粒子根本不是基本的,它们只是底层量子场的表现。

人们还希望构建把弱相互作用,电磁相互作用和强相互作用统一起来的规范理论,届时光子,中间玻色子和胶子的场将组成一个单一的场家族。不过要做到这一点,这个家族中还必须包含另外一些场,对应于质量极高的粒子。根据一种估算,这些新粒子的质量可能高达质子质量的10¹⁷倍。质量如此之大,以致处理这些粒子时不再能像通常的粒子物理那样把重力忽略不计。

不幸的是虽然到目前为止历经巨大努力,仍然没能找到一个令人满意的(例如可重整化的)重力量子场论。遗憾的是重力,这个最早的经典场论,仍然不能整合到量子场论的框架之内。

在这个历史介绍中,我着重强调了可重整化条件--即要求量子场论中的所有无穷大都可以通过重新定义少数几个物理参数而消除。许多物理学家可能不认同这种强调,确实也有可能最终发现所有的量子场论--可重整化的或不可重整化的--都同样令人满意。然而在我看来,可重整化这一要求恰恰是我们对基本物理理论所需要的那种约束。可重整化的量子场论非常少。比如,可以构造出电子具有我们想要的任何磁矩的电磁作用量子场论,但其中只有一个是可重整化的,即磁矩等于狄拉克值的1.0011596523倍的那个理论。另外我们已经看到,弱相互作用的可重整化理论经过很长时间才被找到。我们非常需要像可重整化这样的指导原则,帮助我们从无穷多种可能的量子场论中挑选出描述真实世界的那一个。因此,如果可重整化条件最终被别的什么条件取代,我希望取代它的是同样严格甚至更严格的条件。毕竟我们并不只是想描述我们碰巧发现的这个世界,我们还希望尽最大可能解释它为什么是这个样子。

译注1:本文写于1977年,上一个时代指二十世纪60年代

译注2:代达罗斯期刊是麻省理工学院出版社发行的同行评议学术期刊,创始于1955年。 

STEVEN WEINBERG

The Search for Unity: Notes for a History of Quantum Field Theory 

Introduction

Quantum field theory is the theory of matter and its interactions, which grew out of the fusion of quantum mechanics and special relativity in the late 1920s. Its reputation among physicists suffered frequent fluctuations in the following years, at times dropping so low that quantum field theory came close to being abandoned altogether. But now, partly as a result of a series of striking successes over the last decade, quantum field theory has become the most widely accepted conceptual and mathematical framework for attacks on the fundamental problems of physics. If something like a set of ultimate laws of nature were to be discovered in the next few years (an eventuality by no means expected), these laws would probably have to be expressed in the language of quantum field theory.

To the best of my knowledge, there does not exist anything like a full history of the past fifty years of quantum field theory. Existing histories of modern physics cover special and general relativity pretty thoroughly, and they take us through the early years of quantum mechanics, but their treatment of quantum mechanics generally ends with the triumph of the statistical interpretation around 1927. This is a pity, not only because of the fundamental nature of quantum field theory, but also because its history offers an interesting insight to the nature of scientific advance. 

It is widely supposed that progress in science occurs in large or small revolutions. In this view, the successes of previous revolutions tend to fasten upon the scientist's mind a language, a mind-set, a body of doctrine, from which he must break free in order to advance further. There is great debate about the degree to which these revolutions are brought about by the individual scientific genius, able to transcend the fixed ideas of his times, or by the accumulation of discrepancies between existing theory and experiment. However, there seems to be general agreement that the essential element of scientific progress is a decision to break with the past. 

I would not quarrel with this view, as applied to many of the major advances in the history of science. It certainly seems to apply to the great revolutions in physics in this century: the development of special relativity and of quantum mechanics. However, the development of quantum field theory since 1930 provides a curious counterexample, in which the essential element of progress has been the realization, again and again, that a revolution is unnecessary. If quantum mechanics and relativity were revolutions in the sense of the French Revolution of 1789 or the Russian Revolution of 1917, then quantum field theory is more of the order of the Glorious Revolution of 1688: things changed only just enough so that they could stay the same. 

I will try to tell this story here without using mathematics. To some extent, I will let the history do the job of explication: I will go into some of the historical developments more fully than others, because they help to introduce ideas which are needed later. Even so, this is not an easy task: there is no branch of natural science which is so abstract, so far removed from everyday notions of how nature behaves, as quantum field theory.

This cannot be a story of physics in one country. As we shall see, quantum field theory had its birth in Europe, especially in Germany and Britain, and was revived after World War II by a new generation of theoretical physicists in Japan and the United States. The United States has been somewhat the center of the intense activity of the last decade, but physicists from many countries in Europe and Asia have made essential contributions. And although there are discernible national styles in physics, they have played only a minor role in this history. It is not the national or the social or the cultural setting that has determined the direction of research in physics, but rather the logic of the subject itself, the need to understand nature as it really is. 

This article is not a history of quantum field theory, but only "notes for a history." A great deal of work needs to be done by professional historians of science in uncovering the story of the last half-century of theoretical physics. I would be delighted if this article were to spur someone to take on this overdue task. 

Prehistory: Field Theory and Quantum Theory 

Before quantum field theory came field theory and quantum theory. The history of these earlier disciplines has been told again and again by able historians of science, so I will do no more here than remind the reader of the essential points. 

The first successful classical field theory was based on Newton's theory of gravitation. Newton himself did not speak of fields; for him, gravitation was a force which acts between every pair of material particles in the universe, "according to the quantity of solid matter which they contain and propagates on all sides to immense distances, decreasing always as the inverse square of the distances." It was the mathematical physicists of the eighteenth century who found it convenient to replace this mutual action at a distance with a gravitational field, a numerical quantity (strictly speaking, a vector) which is defined at every point in space, which determines the gravitational force acting on any particle at that point, and which receives contributions from all the material particles at every other point. However, this was a mere mathematical façon de parler: it really made no difference in Newton's theory of gravitation whether one said that the earth attracts the moon, or that the earth contributes to the general gravitational field, and that it is this field at the location of the moon which acts on the moon and holds it in orbit.

Fields really began to take on an existence of their own with the development in the nineteenth century of the theory of electromagnetism. Indeed, the word "field" was introduced into physics by Michael Faraday in 1849. It was still possible for Coulomb and Ampère to consider electromagnetic forces as acting directly between pairs of electric charges or electric currents, but it became very much more natural to introduce an electric field and a magnetic field as conditions of space, produced by all the charges and currents in the universe, and acting in turn on every charge and current. This interpretation became almost unavoidable after James Clerk Maxwell demonstrated that electromagnetic waves travel at a finite speed, the speed of light. The force which acts on electrons in the retina of my eye at this moment is not produced by the electric currents in atoms on the sun at the same moment, but by an electromagnetic wave, a light wave, which was produced by these currents about eight minutes ago, and which has only now reached my eye. (As we shall see, an attempt was made in 1945 by Richard Feynman and John Wheeler to account for this retardation of electromagnetic forces in an action-at-a-distance framework, but the idea did not catch on, and they both went on to more promising work.)

Maxwell himself did not yet adopt the modern idea of a field as an independent inhabitant of our universe, with as much reality as the particles on which it acts. Instead (at least at first) he pictured electric and magnetic fields as disturbances in an underlying medium, the aether, like tension in a rubber membrane. This would have had one obvious experimental implication: the observed speed of electromagnetic waves would depend on the speed of the observer with respect to the aether, just as the observed speed of elastic waves in a rubber membrane depends on the speed of the observer relative to the membrane. Maxwell himself thought that his own field equations were in fact valid in only one special frame, at rest with respect to the aether. The notion of an aether that underlies the phenomena of electromagnetism persisted well into the twentieth century, despite the repeated failure of experimentalists to discover any effects of the motion through the aether of the earth as it revolves about the sun.

But even with the problem of the aether unresolved, the idea of a field as an entity in its own right grew stronger in physicists' minds. Indeed, it became popular to suppose that matter itself is ultimately a manifestation of electric and magnetic fields, a theme explored in the theories of the electron developed in 1900-1905 by Joseph John Thomson, Wilhelm Wien, M. Abraham, Joseph Larmor, Hendrik Antoon Lorentz, and Jules Henri Poincaré. Finally, in 1905 the essential element needed to free electromagnetic theory from the need for an aether was supplied by the special theory of relativity of Albert Einstein. New rules were given for the way that the observed space and time coordinates of an event change with changes in the velocity of the observer. These new rules were specifically designed so that the observed speed of a light wave would be just that speed calculated in Maxwell's theory, whatever the velocity of the observer. Einstein's theory removed any hope of detecting the effects of motion through the aether, and although the aether lingered on in theorists' minds for a while, it eventually died away, leaving electromagnetic fields as things in themselves: the tension in the membrane, but without the membrane.

As it happens, it was the study of electromagnetic phenomena that gave rise to the quantum theory as well as to special relativity. By the end of the nineteenth century, it was clear that the classical theories of electromagnetism and statistical mechanics were incapable of describing the energy of electromagnetic radiation at various wavelengths emitted by a heated opaque body. The trouble was that classical ideas predicted too much energy at very high frequencies, so much energy in fact that the total energy per second emitted at all wavelengths would turn out to be infinite! In a paper read to the German Physical Society on December 15, 1900, a resolution of the problem was proposed by Max Karl Ernst Ludwig Planck. It will be worthwhile for us to concentrate on Planck's proposal for a moment, not only because it led to modern quantum mechanics, but also because an understanding of this idea is needed in order to understand what quantum field theory is about.

Planck supposed that the electrons in a heated body are capable of oscillating back and forth at all possible frequencies, like a violin with a huge number of strings of all possible lengths. Emission or absorption of radiation at a given frequency occurs when the electron oscillations at that frequency give up energy to or receive energy from the electromagnetic field. The amount of energy being radiated per second by an opaque body at any frequency therefore depends on the average amount of energy in electron oscillations at that particular frequency.

It was in calculating this average energy that Planck made his revolutionary suggestion. He proposed that the energy of any mode of oscillation is quantized: that is, that it is not possible to set an oscillation going with any desired energy, as in classical mechanics, but only with certain distinct allowed values of the energy. More specifically, Planck assumed that the difference between any two successive allowed values of the energy is always the same for a given mode of oscillation, and is equal to the frequency of the mode times a new constant of nature which has come to be called Planck's constant. It follows that the allowed states of the modes of oscillation of very high frequency are widely separated in energy, so that it takes a great deal of energy to excite such a mode at all. But the rules of statistical mechanics tell us that the probability of finding a great deal of energy in any one mode of oscillation falls off rapidly with increasing energy; hence the average energy in oscillations of very high frequency must fall off rapidly with the frequency, and the energy radiated by a heated body must also fall off rapidly with the frequency of the radiation, thus avoiding the catastrophe of an infinite total rate of radiation.

Planck was not ready to apply the idea of energy quantization to radiation itself. (George Gamow10 has described Planck's view as follows: "Radiation is like butter, which can be bought or returned to the grocery store only in quarter-pound packages, although the butter as such can exist in any desired amount.") It was Einstein11 who in 1905 proposed that radiation comes in bundles of energy, later called photons, each with an energy proportional to the frequency.

In 1913 the ideas of Planck and Einstein were brought together by Niels Bohr in his theory of atomic spectra. Like the hypothetical modes of oscillation in Planck's work, atoms in Bohr's theory are supposed to exist in distinct states with certain definite energies, but not generally equally spaced. When an excited atom drops to a state of lower energy, it emits a photon with a definite energy, equal to the difference of the energies of the initial and final atomic states. Each definite photon energy corresponds to a definite frequency, and it is these frequencies that we see vividly displayed when we look at the bright lines crossing the spectrum of a fluorescent lamp or a star.

This early quantum theory, from Planck to Bohr and for a decade after, was inspired guesswork, ad hoc mathematical manipulation, justified by its brilliant success in explaining the behavior of atoms and radiation. Quantum theory became the coherent scientific discipline known as quantum mechanics, through the work of Louis de Broglie, Werner Heisenberg, Max Born, Pascual Jordan, Wolfgang Pauli, Paul Adrian Maurice Dirac, and Erwin Schroedinger in 1925-1926. Armed with this formalism, theorists were able to go back to the problem of determining the allowed energy levels of material systems, and to reproduce the successful results first found by Bohr. But despite its origins in the theory of thermal radiation, quantum mechanics still dealt in a coherent way only with material particles (electrons in atoms) and not with radiation itself.

The Birth of Quantum Field Theory

The first application of the new quantum mechanics of 1925-1926 to fields rather than particles came in one of the founding papers of quantum mechanics itself. In 1926, Born, Heisenberg, and Jordan14 turned their attention to the electromagnetic field in empty space, in the absence of any electric charges or currents. Their work can best be understood by an analogy with Planck's 1900 theory of thermal radiation.

Planck, it will be recalled, had treated the motion of the electrons in a heated body in terms of an idealized picture, in which the electrons were replaced with an unlimited number of modes of simple oscillation, like a violin with a huge number of strings of all possible lengths. He had further proposed that the allowed energies of any one mode of oscillation were separated by a definite quantity, equal to the frequency of the mode times Planck's constant. One of the products of the new quantum mechanics of 1925-1926 was a confirmation of Planck's proposal: it was proved that the energies of a simple oscillator, like a violin string, were indeed quantized, in just the way that Planck had guessed. The essential feature of the dynamics of simple oscillations, used in obtaining this result, is that the energy required to produce any given displacement in the oscillator is proportional to the square of the displacement, as we pull a violin string farther and farther from its equilibrium position, it becomes harder and harder to produce any further displacement.

But essentially the same is true of an electromagnetic field: the energy in any one mode of oscillation of the field is proportional to the square of the field strength, in a sense, to the square of its "displacement" from the normal state of field-free empty space. Thus, by applying to the electromagnetic field the same mathematical methods that they had used for material oscillators, Born et al. were able to show that the energy of each mode of oscillation of an electromagnetic field is quantized: the allowed values are separated by a basic unit of energy, given by the frequency of the mode times Planck's constant. The physical interpretation of this result was immediate. The state of lowest energy is radiation-free empty space, and can be assigned an energy equal to zero. The next lowest state must then have an energy equal to the frequency times Planck's constant, and can be interpreted as the state of a single photon with that energy. The next state would have an energy twice as great, and therefore would be interpreted as containing two photons of the same energy. And so on. Thus, the application of quantum mechanics to the electromagnetic field had at last put Einstein's idea of the photon on a firm mathematical foundation.

Born, Heisenberg, and Jordan had dealt only with the electromagnetic field in empty space, so although their work was illuminating, it did not lead to any important quantitative predictions. The first "practical" use of quantum field theory was made in a 1927 paper of Paul Adrian Maurice Dirac. Dirac was grappling with an old problem: how to calculate the rate at which atoms in excited states would emit electromagnetic radiation and drop into states of lower energy. The difficulty was not so much in deriving an answer; the correct formula had already been derived in an ad hoc sort of way by Born and Jordan and by Dirac himself. The problem was to understand this guessed-at formula as a mathematical consequence of quantum mechanics. This problem was of crucial importance, because the process of spontaneous emission of radiation is one in which "particles" are actually created. Before the event, the system consists of an excited atom, whereas after the event, it consists of an atom in a state of lower energy, plus one photon. If quantum mechanics could not deal with processes of creation and destruction, it could not be an all-embracing physical theory.

The quantum-mechanical theory of such processes can best be understood by returning to the analogy between fields and oscillators. In the absence of any interaction with atoms, the electromagnetic field is like an ensemble of completely isolated violin strings; whatever energy is given to any mode of oscillation, or equivalently, whatever the number of photons of a particular frequency, it will stay the same forever. Similarly, if an atom did not interact with radiation, it would remain indefinitely in whatever state it was placed. But atoms do interact with radiation, because electrons carry an electric charge. So the true analogy is with a set of violin strings that are weakly coupled together, as by the violin soundboard. Every musician knows what happens when one oscillator is set going, it will gradually feed energy into the other modes of oscillation until they are all excited. In quantum mechanics this cannot happen gradually because the energies are quantized, so instead the probability gradually increases that energy which was originally stored in the atom will be found in the electromagnetic field, in other words, that a photon will have been created.

Dirac's successful treatment of the spontaneous emission of radiation confirmed the universal character of quantum mechanics. However, the world was still conceived to be composed of two very different ingredients, particles and fields, which were both to be described in terms of quantum mechanics, but in very different ways. Material particles like electrons and protons were conceived to be eternal; to describe the physical state of a system, one had to describe the probabilities for finding each particle in any given region of space or range of velocities. On the other hand, photons were supposed to be merely a manifestation of an underlying entity, the quantized electromagnetic field, and could be freely created and destroyed.

It was not long before a way was found out of this distasteful dualism, toward a truly unified view of nature. The essential steps were taken in a 1928 paper of Jordan and Eugene Wigner, and then in a pair of long papers in 1929-1930 by Heisenberg and Pauli. (A somewhat different approach was also developed in 1929 by Enrico Fermi.) They showed that material particles could be understood as the quanta of various fields, in just the same way that the photon is the quantum of the electromagnetic field. There was supposed to be one field for each type of elementary particle. Thus, the inhabitants of the universe were conceived to be a set of fields (an electron field, a proton field, an electromagnetic field), and particles were reduced in status to mere epiphenomena. In its essentials, this point of view has survived to the present day, and forms the central dogma of quantum field theory: the essential reality is a set of fields, subject to the rules of special relativity and quantum mechanics; all else is derived as a consequence of the quantum dynamics of these fields.

This field-theoretic approach to matter had an immediate implication: given enough energy, it ought to be possible to create material particles, just as photons are created when an atom loses energy. In 1932 Fermi used this aspect of quantum field theory to formulate a theory of the process of nuclear beta decay. Ever since Becquerel discovered in 1896 that a crystal containing uranium salts would fog a photographic plate, it was known that nuclei were subject to various kinds of radioactive decay. In one of these modes of decay, known as beta decay, the nucleus emits an electron, and changes its own chemical properties. Throughout the 1920s it was believed that nuclei are composed of protons and electrons, so there was no great paradox in supposing that every once in a while one of the electrons gets out. However, in 1931 Paul Ehrenfest and Julius Robert Oppenheimer presented a compelling though indirect argument that nuclei do not in fact contain electrons, and in 1932 Heisenberg proposed instead that nuclei consist of protons and the newly discovered neutral particles, the neutrons. The mystery was, where did the electron come from when a nucleus suffered a beta decay? Fermi's answer was that the electron comes from much the same place as the photon in the radiative decay of an excited atom: it is created in the act of decay, through an interaction of the field of the electron with the fields of the proton, the neutron, and a hypothesized particle, the neutrino.

One problem remained to be solved after 1930, in order for quantum field theory to take its modern form. In formulating the pre-field-theoretic theory of individual electrons, Dirac in 1928 had discovered that his equations had solutions corresponding to electron states of negative energy, that is, with energy less than the zero energy of empty space. In order to explain why ordinary electrons do not fall down into these negative-energy states, he was led in 1930 to propose that almost all these states are already filled. The unfilled states, or "holes" in the sea of negative energy electrons would behave like particles of positive energy, just like ordinary electrons but with opposite electrical charge: plus instead of minus. Dirac thought at first that these "antiparticles" were the protons, but their true nature as a new kind of particle was revealed with the discovery of the positron in cosmic rays in 1932.

Dirac's theory of antimatter allowed for a kind of creation and annihilation of particles even without introducing the ideas of quantum field theory. Given enough energy, a negative-energy electron can be lifted up into a positive energy state, corresponding to the creation of a positron (the hole in the negative energy sea) and an ordinary electron. And of course the reverse annihilation process can also occur. Dirac himself had always resisted the idea that quantum field theory is needed to describe any sort of particle but photons. However, in 1934 a pair of papers by Wendell Furry and Oppenheimer and by Pauli and Victor Weisskopf28 showed how quantum field theory naturally incorporates the idea of antimatter, without introducing unobserved particles of negative energy, and satisfactorily describes the creation and annihilation of particles and antiparticles. For most theorists, this settled the matter, and particles and antiparticles are now seen as coequal quanta of the various quantum fields.

It is important to understand that quantum field theory gave rise to a new view not only of particles but also of the forces among them. We can think of two charged particles interacting at a distance not by creating classical electromagnetic fields which act on one another, but by exchanging photons, which continually pass from one particle to the other. Similarly, other kinds of force can be produced by exchanging other kinds of particle. These exchanged particles are called virtual particles, and are not directly observable while they are being exchanged, because their creation as real particles (e.g., a free electron turning into a photon and an electron) would violate the law of conservation of energy. However, the quantum-mechanical uncertainty principle dictates that the energy of a system that survives for only a short time must be correspondingly highly uncertain, so these virtual particles can be created in intermediate states of physical processes, but must be reabsorbed again very quickly.

From this line of reasoning, one can infer that the force produced by the exchange of a given type of particle has a range (the distance beyond which it falls off very rapidly) inversely proportional to the mass of the exchanged particle. Thus, the photon, which has zero mass, gives rise to a force of infinite range, the familiar inverse-square force of Coulomb. The force between protons and neutrons in an atomic nucleus was known to have a range of a little less than a million millionth of a centimeter, so Hideki Yukawa was able in 1936 to predict the existence of an entirely new kind of particle, the meson, with a mass a few hundred times that of the electron. In calculating these forces, one assumes that the energy density at a point is not just a sum of squares of fields, as for uncoupled simple mechanical oscillators, but also contains products of the values of the different fields (and their rates of change) at that point. These multifield interactions are the unknowns that have to be sought by our theoretical and experimental efforts. From the viewpoint of quantum field theory, all questions about the particles of which matter is composed and the forces that act among them are only means to an end; the real problem is to determine what are the fundamental quantum fields, and what are the interactions among them.

The Problem of Infinities

I have described the early days of quantum field theory as if it were a grand progress from triumph to triumph. This has been a somewhat distorted picture, for almost from the beginning the theory was thought to be subject to a grave internal inconsistency.

The problem first appeared in a 1930 paper of Oppenheimer, who was trying to calculate the effect on the energy of an atomic electron produced by its interaction with the quantum electromagnetic field. Just as the exchange of virtual photons between two electrons produces an energy of interaction between them, so also in quantum field theory the emission of virtual photons and their reabsorption by the same electron produces a self-energy, which might depend on the atomic orbit occupied by the electron, and which might show up as an observable shift in atomic energy levels. Unfortunately, Oppenheimer discovered that the energy shift predicted by the quantum theory of the electromagnetic field was infinite!

The infinity here arises because when an electron in an atom turns briefly into a photon and an electron, these two particles can share the momentum of the original electron in an infinite variety of ways. The self-energy of the electron involves a sum over all the ways that the momentum can be shared out, and because there is no limit to how large the momenta can be, this sum turns out to be infinite. It was not obvious that this would have to happen; after all, there are many examples of mathematical series in which one adds up an infinite number of terms and gets a finite result. (For example, 1 + 1/2 + 1/4 + 1/8 + . . . .) However, Oppenheimer found that the self-energy of the electron behaved more like the series 1 + 1 + 1 + 1 + ..., and could hardly be interpreted as a finite quantity.

The problem was ameliorated a bit a few years later, when Weisskopf included the effects of processes in which a virtual electron, positron, and photon are created out of empty space, with the positron and photon then being annihilated along with the original electron, leaving the new electron over as a real particle in the final state. This contribution to the self-energy cancelled the worst part of the original infinity found by Oppenheimer, but the self-energy was left in the form of a sum over virtual momenta which behaved like the series 1 + 1/2 + 1/3 + 1/4 + . . ., and which still could not be interpreted as a finite quantity.

Similar infinities were found in other problems, such as the "polarization of the vacuum" by applied electric fields, and the scattering of electrons by the electric fields of atoms. (One of the few bright spots in this otherwise discouraging picture came in the treatment of infinities associated with photons of very low momenta; it was shown in 1937 by Felix Bloch and Arnold Nordsieck that these infrared infinities all cancel in the total rates for collisions.) Of course, if one uses a theory to calculate an observable quantity, and finds that the answer is infinite, one concludes either that a mathematical mistake has been made, or that the original theory was no good. Throughout the 1930s, the accepted wisdom was that quantum field theory was in fact no good, that it might be useful here and there as a stopgap, but that something radically new would have to be added in order for it to make sense.

The problem of infinities in fact has provided the single greatest impetus toward a radical revision of quantum field theory. Some of the ideas which were tried out in the 1930s and 1940s are listed below:

1. In 1938 Heisenberg proposed that there is a fundamental unit of energy, and that quantum field theory works only at scales of energy that are small compared with this energy unit. The analogy was with the other fundamental constants, Planck's constant and the speed of light. Quantum mechanics comes into play when ratios of energies and frequencies approach values as small as Planck's constant, whereas special relativity is needed when velocities approach values as large as the speed of light. In the same way, Heisenberg supposed, some entirely new physical theory might be needed when energies exceed the fundamental unit, and some mechanism in this theory might wipe out the contributions of virtual particles with such high energies, thus avoiding the problem of infinities. (Heisenberg's idea drew support in the 1930s from the observation that the showers of charged particles produced by high energy cosmic rays did not behave as expected in quantum electrodynamics. This discrepancy was later realized to be due to the production of new particles, the mesons, and not to a failure of quantum field theory.)

2. John Archibald Wheeler in 1937 and Heisenberg in 1943 independently proposed a positivistic approach to physics, which would have replaced quantum field theory with a different sort of theory, sometimes called an "S matrix theory," which would involve only directly measurable quantities. They reasoned that experiments do not actually allow us to follow what happens to electrons in atoms or in collision processes. Instead, it is only really possible to measure the energies and a few other properties of bound systems like atoms, and the probabilities for various collision processes. These quantities obey certain very general principles, such as reality, conservation of probabilities, smooth energy dependence, conservation laws, etc., and it was these general principles that were supposed to replace the assumptions of quantum field theory.

3. Dirac in 1942 suggested that quantum mechanics ought to be expanded to include states of negative probability, which could not appear as the initial or final state of any physical process, but which would have to be included among the intermediate states in these processes. In this manner, minus signs might be introduced into the sums over the ways that the intermediate states share out the momentum of the system, so that a finite answer would be obtained, just as 1 -1/2 +1/3-1/4+ . . . is a finite quantity (the natural logarithm of 2) whereas 1 + 1/2+ 1/3 + 1/4 + . . . is infinite.

 4. As already mentioned, Richard Feynman and John Wheeler in 1945 considered the possibility of abandoning field theory altogether, replacing the field-mediated interaction among particles with a direct action at a distance.

Some of these ideas have survived and are now part of the regular equipment of theoretical physics. In particular, the idea of a pure S-matrix theory has flourished in the development of so-called "dispersion relations" which involve only observable quantities. Also, states of negative probability are now a handy mathematical device, useful especially for dealing with the polarization of virtual photons. However, none of these ideas proved to be the key to the problem of infinities.

The solution turned out to be far less revolutionary than most theorists had expected. Recall that the energy of an electron had been found by Oppenheimer, Waller, and Weisskopf to receive an infinite contribution from the emission and reabsorption of "virtual" photons. An infinite self-energy of this sort appears not only when the electron is moving in orbit in an atom, but also when it is at rest in empty space. But special relativity tells us that the energy of a particle at rest is related to its mass by the famous formula E = mc2. Thus the electron mass found in tables of physical data could not be just the "bare" mass, the quantity appearing in our equations for the electron field, but would have to be identified with the bare mass plus the infinite "self mass," produced by the interaction of the electron with its own virtual photon cloud. This suggests that the bare mass might itself be infinite, with an infinity which just cancels the infinity in the self mass, leaving a finite total mass to be identified with the mass that is actually observed. Of course, it goes against the grain to suppose that a quantity like the bare mass, which appears in our fundamental field equations could be infinite; but after all, we can never turn off the electron's virtual photon cloud to measure the bare mass, so no paradox arises. Similar remarks apply to other physical parameters, like the charge of the electron. Is it then possible that the infinities in the bare masses and bare charges cancel the infinities found in quantum field theory, not only when we calculate the total masses and charges, but in all other calculations as well? This method, of eliminating infinities by absorbing them into a redefinition of physical parameters, has come to be called renormalization.

The renormalization idea was suggested by Weisskopf in 1936, and again by Kramers in the mid 1940s. However, it was far from obvious that the idea would work. In order to eliminate infinities by absorbing them into redefinitions of physical parameters, the infinities must appear in only a special way, as corrections to the observed values of these parameters. For instance, in order to absorb the infinite shift in atomic energy levels found by Oppenheimer into a redefinition of the electron mass, the infinite part of the self-energy would have to be the same for all atomic energy levels. The mathematical methods available in the 1930s and early 1940s were simply inadequate for the task of sorting out all the infinities that might appear in all possible calculations to see if they could all be eliminated by renormalization. Perhaps even more important, there was no compelling reason to do so, there were no experimental data which forced theorists to come to grips with these problems. And of course, physicists had other things on their minds from 1939 to 1945.

 Revival at Shelter Island

On June 1, 1947, a four-day conference on the foundations of quantum mechanics opened at Shelter Island, a small island near the end of Long Island in New York State. The conference brought together young American physicists from the new generation who had started their scientific work during the war at Los Alamos and the MIT Radiation Laboratory, as well as older physicists who had been active in the 1930s. Among the younger participants was Willis Lamb, an experimentalist then working in the remarkable group of physicists founded at Columbia University by I. I. Rabi. Lamb announced the results of a beautiful experiment, in which he and a student, R. C. Retherford, had for the first time measured an effect of the self-energy of the electron in the hydrogen atom.

The existing theory of the hydrogen atom had been first advanced by Niels Bohr in 1913, then put on a sound mathematical foundation by the quantum mechanics of 1925-1926, and finally corrected to include effects of relativity and the spin of the electron by Heisenberg and Jordan, C. G. Darwin, and Dirac. In the final version of this theory, in particular as formulated by Dirac, certain pairs of excited states of the atom were expected to have exactly equal energy. (These pairs of states correspond to the two different ways that the spin of the electron and the angular momentum associated with its revolution around the nucleus could combine to give a definite total angular momentum.) But this theory ignored all effects of the interaction of the electron with its own electromagnetic field, the effects that Oppenheimer30 had tried to calculate when he discovered the infinities. If such effects were real they would presumably shift the energies of these pairs of states, so that they would no longer be exactly equal.

This is what Lamb and Retherford found. By using the new techniques of handling microwave radiation that had come out of wartime work in radar, they were able to show that the energies of the first two excited states of hydrogen , which were supposed to be equal according to the 1928 Dirac theory, actually differed by about 0.4 parts per million. This is now known as the Lamb shift.

Stimulated in part by Lamb's results, the participants at Shelter Island entered into an intense discussion of the underlying theory. I was not at Shelter Island (having just entered high school) and I cannot trace the historical development of the different reformulations of quantum field theory that developed around that time. It would be most valuable for a historian of science to gather the recollections of the participants at Shelter Island and succeeding conferences, read the papers that were written at that time, and put together a coherent account. I will sketch here only a few of the products of this period.

The Lamb shift itself was first calculated by Hans Bethe, I believe on the train ride back from Shelter Island. Using mass renormalization to eliminate infinities, he obtained a result in reasonable agreement with the value announced by Lamb. However, as acknowledged by Bethe himself, this was a rough calculation, involving approximations that were not fully consistent with the Special Theory of Relativity.

 There were at least three general reformulations of quantum field theory worked out in the late 1940s that were thoroughly relativistic and that were sufficiently simple and elegant to allow a systematic treatment of the infinities. One of these approaches had actually been developed well before the Shelter Island Conference, by Sin-Itiro Tomonaga48 and his colleagues in Japan, but I believe that their work had not yet become known in the United States in the summer of 1947. The two other approaches were contributed by participants at Shelter Island, Julian Schwinger and Richard Feynman.

Feynman's work led to a set of pictorial rules which allowed one to associate a definite numerical quantity to each picture of how the momentum and energy could flow through the intermediate states of any collision process: the probability for the process is given by the square of the sum of these individual quantities. The Feynman rules were very much more than a handy calculational algorithm, because they incorporated an essential feature of quantum field theory, the symmetry between particles and antiparticles. Each line in a Feynman diagram can represent either a particle created at one end of the line and destroyed at the other, or an antiparticle going the other way. It is this equal treatment of particles and antiparticles that ensures that the quantities calculated by Feynman diagrams are independent of the velocity of the observer, as required by the Special Theory of Relativity, at every stage of the calculation. As had been shown long before by Weisskopf, intermediate states involving antiparticles play a crucial role in cutting down the degree of infinity, from disasters like 1 + 2 + 3 + . . . to something more manageable like 1 + 1/2 + 1/3 + . . . . The Feynman rules automatically ensured the cancellations of the worst infinities, leaving over the more manageable infinities, which could be eliminated by renormalization.

Before the end of 1947 Schwinger had used his approach to carry out what I believe was the first calculation of another effect of the electron's cloud of virtual photons, the anomalous magnetic moment of the electron. One of the triumphs of Dirac's 1928 theory was its prediction of the magnetic moment of the electron, a number which characterizes the strength of the electron's interaction with magnetic fields, and the strength of its own magnetic field. However, experiments at Columbia in 1947 had revealed that the magnetic moment of the electron is actually a little larger than the Dirac value, by 1.15 to 1.21 parts per thousand. By absorbing the infinite effects of virtual photons into a renormalization of the charge of the electron, Schwinger was able to calculate a finite magnetic moment that was larger than the Dirac value by just 1.16 parts per thousand!
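
Although the article does not quote the formula, Schwinger's leading-order result is the famous alpha/2*pi, where alpha is the fine-structure constant of roughly 1/137; a two-line check, assuming only the modern value of alpha, reproduces the 1.16 parts per thousand quoted above.

import math
alpha = 1 / 137.036              # fine-structure constant (modern value, an input here)
a_e = alpha / (2 * math.pi)      # Schwinger's 1948 leading-order anomalous moment
print(a_e * 1000)                # about 1.16 parts per thousand above the Dirac value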

Of course, both the experimental and the theoretical determinations of effects like the Lamb shift and the anomalous magnetic moment of the electron have been enormously improved since 1947 [53]. For instance, right now the experimental value of the magnetic moment of the electron is larger than the Dirac value by 1.15965241 parts per thousand, whereas the theory gives this anomalous magnetic moment as 1.15965234 parts per thousand, with uncertainties of about 0.00000020 and 0.00000031 parts per thousand, respectively. The precision of the agreement between theory and experiment here can only be called spectacular.
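
As a quick arithmetic check, using only the numbers quoted in the paragraph above and assuming the two uncertainties are independent, the difference between the central values is a small fraction of the combined uncertainty:

import math
exp, theory = 1.15965241, 1.15965234             # parts per thousand, as quoted in the text
sigma_exp, sigma_theory = 0.00000020, 0.00000031
difference = exp - theory                        # 0.00000007 parts per thousand
combined = math.hypot(sigma_exp, sigma_theory)   # ~0.00000037 if the errors are independent
print(difference / combined)                     # roughly 0.2 standard deviations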

Finally, Freeman Dyson in 1949 showed that the formalisms of Schwinger and Tomonaga would yield the same graphical rules that had been found by Feynman. Dyson also carried out an analysis of the infinities in general Feynman diagrams, and sketched out a general proof that these infinities are always of precisely the sort which could be removed by renormalization [55]. As a graduate student in the mid-1950s, I learned the new approach to quantum field theory by reading Dyson's marvelously lucid papers.

I should emphasize that the theory of Schwinger, Tomonaga, Feynman, and Dyson was not really a new physical theory. It was simply the old quantum field theory of Heisenberg, Pauli, Fermi, Oppenheimer, Furry, and Weisskopf, but cast in a form far more convenient for calculation, and equipped with a more realistic definition of physical parameters like masses and charges. The continued vitality of the old quantum field theory after fifteen years of attempts to find a substitute is truly impressive.

This raises an interesting historical question. All the effects that were calculated in the great days of 1947-1949 could have been estimated if not actually calculated at any time after 1934. True, without the renormalization idea, the answers would have been formally infinite, but at least it would have been possible to guess the order of magnitude of quantities like the Lamb shift and the correction to the magnetic moment of the electron. (An infinite series like 1 + 1/2 + 1/3 + 1/4 + ... grows very slowly; after a million terms, it is still less than 14.4.) Not only was this not done; most theorists seem to have believed that these quantities were zero! Indeed, some evidence for what was later called the Lamb shift had actually been discovered [56] in 1938, but to the best of my knowledge, no theorist checked to see whether the order of magnitude of this reported energy splitting was more or less what would be expected in a quantum field theory.
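
The parenthetical claim about the harmonic series is easy to verify: the partial sum of 1 + 1/2 + 1/3 + ... grows only like the logarithm of the number of terms, as the short check below shows.

import math
n = 1_000_000
partial_sum = sum(1 / k for k in range(1, n + 1))   # the first million terms, added directly
estimate = math.log(n) + 0.5772156649               # ln(n) plus the Euler-Mascheroni constant
print(partial_sum, estimate)                        # both about 14.39, indeed below 14.4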

Why was quantum field theory not taken more seriously? One reason is the tremendous prestige of the Dirac theory of 1928, which had worked so well in accounting for the fine structure of the hydrogen spectrum without including self-energy effects. Even more important, the appearance of infinities discredited quantum field theory altogether in many physicists' minds. But I think that the deepest reason is a psychological difficulty that may not have been sufficiently appreciated by historians of science. There is a huge apparent distance between the equations that theorists play with at their desks, and the practical reality of atomic spectra and collision processes. It takes a certain courage to bridge this gap, and to realize that the products of thought and mathematics may actually have something to do with the real world. Of course, when a branch of science is well under way, there is continual give and take between theory and experiment, and one gets used to the idea that the theory is about something real. Without the pressure of experimental data, the realization comes harder. The great thing accomplished by the discovery of the Lamb shift was not so much that it forced us to change our physical theories, as that it forced us to take them seriously.

Weak and Strong Interactions

 For a few years after 1949, enthusiasm for quantum field theory was at a high level. Many theorists expected that it would soon lead to an understanding of all microscopic phenomena, not only the dynamics of photons, electrons, and positrons. However, it was not long before there was another collapse in confidence, shares in quantum field theory tumbled on the physics bourse, and there began a second depression, which was to last for almost twenty years.

 Part of the problem arose from the limited applicability of the renormalization idea. In order for all infinities to be eliminated by a renormalization of physical parameters like masses and charges, it is necessary for these infinities to arise in only a limited number of ways, as corrections to masses, charges, etc., but not otherwise. Dyson's work showed that this would be the case for only a small class of quantum field theories, which are called renormalizable theories. The simplest theory of photons, electrons, and positrons (known as quantum electrodynamics) is renormalizable in this sense, but most theories are not.

Unfortunately, there was one important class of physical phenomena which apparently could not be described by a renormalizable field theory. These were the weak interactions, which cause the radioactive beta decay of nuclei mentioned above in the section on the Birth of Quantum Field Theory. Fermi had invented a theory of weak interactions in 1933 which, with a few modifications, adequately described all weak interaction phenomena in the lowest order of approximation, that is, including only a single simple Feynman diagram in calculations of transition rates. However, as soon as this theory was pushed to the next order of approximation, it exhibited infinities which could not be removed by a redefinition of physical quantities.
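
A standard way to see the trouble, though it is not spelled out in the text, is dimensional: the Fermi coupling constant G_F, about 1.17 x 10^-5 in units of GeV^-2, carries the dimensions of an inverse energy squared, so the effective dimensionless strength G_F * E^2 grows with energy and becomes of order one at a few hundred GeV, which is where the naive expansion breaks down. A rough estimate:

import math
G_F = 1.166e-5                    # Fermi coupling constant, in GeV^-2 (modern value)
E_breakdown = math.sqrt(1 / G_F)  # energy at which G_F * E**2 is of order one
print(E_breakdown)                # roughly 300 GeV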

The other major problem had to do with the limited validity of the approximation techniques used in 1947-1949. Any physical process is represented by an infinite sum of Feynman diagrams, each one representing a particular sequence of intermediate states consisting of definite numbers of particles of various types. To each diagram we associate a numerical quantity; the rate of the process is the square of the sum of these quantities. Now, in quantum electrodynamics the quantities associated with complicated diagrams are very small; for each additional photon line there is one additional factor of a small number known as the fine-structure constant, roughly 1/137. In the Fermi theory of weak interactions, the corresponding factor is even smaller: at the typical energies of elementary particle physics, it is 10^-5 to 10^-7. It is the rapid decrease of the contributions associated with complicated Feynman diagrams (together with the renormalizability of the theory) that makes it possible to carry calculations in quantum electrodynamics to such a fantastic degree of accuracy.
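
To see the suppression of complicated diagrams numerically, here is a crude sketch (illustrative only; real diagrams carry additional numerical factors) of how each extra photon line costs roughly one power of the fine-structure constant:

alpha = 1 / 137.0
# A correction with n extra photon lines is crudely of size alpha**n.
for n in range(1, 5):
    print(n, alpha ** n)   # about 7e-3, 5e-5, 4e-7, 3e-9: higher orders matter less and less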

However, in addition to electromagnetic and weak interactions, there is one other class of interaction in elementary particle physics, known as the strong interactions. It is the strong interactions that hold atomic nuclei together against the electrostatic repulsion of the protons they contain. For strong interactions, the factor corresponding to the fine structure constant is roughly of the order of one instead of 1/137, so complicated Feynman diagrams are just as important as simple ones. (This of course is why these interactions are strong.) Thus, although attempts have been made time and time again to use quantum field theory in calculations of the nuclear force, it has never really worked in a convincingly quantitative way. New kinds of strongly interacting particles called mesons and hyperons were being discovered from 1947 on, first in cosmic rays and then in accelerator laboratories, and quantum field theory was at first enthusiastically used to study their strong interactions, but again with little quantitative success. It was not that there was any difficulty in thinking of renormalizable quantum field theories that might account for the strong interactions, it was just that having thought of such a theory, there was no way to use it to derive reliable quantitative predictions, and to test if it were true.

The nonrenormalizability of the field theory of weak interactions and the uselessness of the field theory of strong interactions led in the early 1950s to a widespread disenchantment with quantum field theory. Some theorists turned to the study of symmetry principles and conservation laws, which can be applied to physical phenomena without detailed dynamical calculations. Others picked up the old S-matrix theory of Wheeler and Heisenberg, and worked to develop principles of strong interaction physics that would involve only observable quantities. In both lines of work, quantum field theory was used heuristically, as a guide to general principles, but not as a basis for quantitative calculation.

Now, as a result of work by many physicists over the last decade, quantum field theory has again become what it was in the late 1940s, the chief tool for a detailed understanding of elementary particle processes. There are quantum field theories of the weak and strong interactions, called gauge theories, which are not subject to the old problems of nonrenormalizability and incalculability, and which are to some extent even true! We are still in the midst of this revival, and I will not try to outline its history, but will only summarize how the old problems of quantum field theory are surmounted in the new theories.

The essence of the new theories is that the weak and the strong interactions are described in a way that is almost identical to the successful older quantum field theory of electromagnetic interactions. Just as electromagnetic interactions among charged particles are produced by the exchange of photons, so the weak interactions are produced by the exchange of particles called intermediate vector bosons and the strong interactions by the exchange of other particles called gluons. All these particles (photons, intermediate vector bosons, and gluons) have the same spin, and have interactions governed by certain powerful symmetry principles known as gauge symmetries. (A gauge symmetry principle states that the fundamental equations do not change their form when the fields are subjected to certain transformations, whose effect varies with position and time.) Because these theories are so similar to quantum electrodynamics, they share its fundamental property of being renormalizable. Indeed, the relation between weak and electromagnetic interactions is not merely one of analogy: the theory unifies the two, and treats the fields of the photon and the intermediate vector bosons as members of a single family of fields.
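
The simplest illustration of such a gauge symmetry is ordinary electromagnetism (a standard textbook example, not taken from the article, and sign conventions vary): the electron field is multiplied by a phase that depends on position and time while the photon field shifts by a gradient, yet the field equations keep their form because the field strength is unchanged,

\psi(x) \to e^{\,ie\chi(x)}\,\psi(x), \qquad A_\mu(x) \to A_\mu(x) - \partial_\mu \chi(x), \qquad F_{\mu\nu} \equiv \partial_\mu A_\nu - \partial_\nu A_\mu \to F_{\mu\nu},

where \chi(x) is an arbitrary function of position and time; the electron's coupling to the photon enters through the covariant derivative D_\mu = \partial_\mu + ieA_\mu, which transforms by the same phase as \psi itself.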

The intermediate vector bosons are not massless, like the photon, but instead are believed to have perhaps 70 to 80 times the mass of a proton or neutron. This huge mass is not due to any essential dissimilarity between the photon and the intermediate vector boson fields, but instead arises from the way that the symmetry of the underlying field theory breaks down when the field equations are solved. The family of intermediate vector bosons, of which the photon is a member, is believed to contain one heavy charged particle and its antiparticle, called the W+ and W-, and one even heavier neutral particle, called the Z. Exchange of the W produces the familiar weak interactions, like nuclear beta decay, whereas exchange of the Z would produce a new kind of weak interaction, in which the participating particles do not change their charge. Such neutral current processes were discovered in 1973, and are found to have just about the properties expected in these theories. All of the intermediate vector bosons are much too heavy to have been produced with existing accelerator facilities, but there are great hopes of producing them with colliding beams of protons and antiprotons before too long.

In contrast, the gluons which mediate the strong interactions may well have zero mass. Such theories with massless gluons have a remarkable property known as asymptotic freedom: at very high energy or very short distances, the strength of the gluon interactions gradually decreases. In consequence, it is now possible to use quantum field theory to carry out detailed calculations of strong interaction processes at sufficiently high energy. In particular, it has been possible to account for some of the features observed in a process such as high energy electron-nucleon scattering.
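
The standard leading-order formula for this running coupling, not quoted in the article, makes the statement quantitative: with n_f quark flavors and a scale parameter Lambda, the strong coupling falls off like the inverse logarithm of the momentum transfer Q. A rough numerical illustration, using the commonly quoted ballpark values Lambda of about 0.2 GeV and n_f = 5:

import math

def alpha_s(Q, n_f=5, Lambda=0.2):
    """One-loop running QCD coupling; Q and Lambda in GeV (ballpark inputs)."""
    return 12 * math.pi / ((33 - 2 * n_f) * math.log(Q**2 / Lambda**2))

for Q in (2.0, 10.0, 100.0):
    print(Q, alpha_s(Q))   # the coupling shrinks as the energy grows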

Just as the gluon interactions become weak at high energy and short distances, they also become strong at low energy and long distances. For this reason, it is widely believed (though not yet proved) that particles which carry the quantity called color with which gluons interact (in the same sense that photons interact with electric charge) cannot be produced as separate free particles. The colored particles include the gluons themselves, which is presumably why gluons have never been observed as real particles. The colored particles are also believed to include the quarks discussed in this double issue of Daedalus in the article by Sidney Drell. The observed strongly interacting particles such as neutrons, protons, and mesons are believed to be compound states, consisting of quarks, antiquarks, and gluons, but with no net color. This picture represents a nearly complete triumph of the field over the particle view of matter: the fundamental entities are the quark and gluon fields, which do not correspond to any particles that can be observed even in principle, whereas the observed strongly interacting particles are not elementary at all, but are mere consequences of an underlying quantum field theory.

There are hopes of a unified gauge theory of weak, electromagnetic, and strong interactions. The photon, intermediate vector bosons, and gluons would then form part of a single family of fields. However, in order for this to be possible, there would have to be other fields in this family, corresponding to particles of extraordinarily high mass. According to one estimate, the expected mass of these new particles would be 10^17 (a hundred thousand million million) proton masses. These masses are so high that it is no longer possible to ignore the gravitational fields of these particles, as done almost everywhere else in particle physics.
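
One quick way to see why gravity can no longer be neglected at such masses (a standard estimate, not part of the article): the dimensionless strength of gravity between two particles of mass m is roughly (m / M_Planck)^2, and 10^17 proton masses lies only about two orders of magnitude below the Planck mass.

m_proton = 0.938                   # proton mass in GeV
M_Planck = 1.22e19                 # Planck mass in GeV
m_new = 1e17 * m_proton            # the mass scale quoted in the text, ~9.4e16 GeV
print(m_new / M_Planck)            # ~8e-3: not far below the Planck scale
print((m_new / M_Planck) ** 2)     # gravitational coupling ~6e-5, no longer negligible
print((m_proton / M_Planck) ** 2)  # ~6e-39 for an ordinary proton, by contrast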

Unfortunately, despite strenuous efforts which continue to the present, there still has not been found a satisfactory (e.g., renormalizable) quantum field theory of gravitation. It is ironic that gravitation, which provided the first classical field theory, has so far resisted incorporation into the general framework of quantum field theory.

Throughout this history I have put great emphasis on the condition of renormalizability, the requirement that it should be possible to eliminate all infinities in a quantum field theory by a redefinition of a small number of physical parameters. Many physicists would disagree with this emphasis, and indeed, it may eventually be found that all quantum field theories, renormalizable or not, are equally satisfactory. However, it has always seemed to me that the requirement of renormalizability has just the kind of restrictiveness that we need in a fundamental physical theory. There are very few renormalizable quantum field theories. For instance, it is possible to construct quantum field theories of electromagnetism in which the electron has any magnetic moment we like, but only one of these theories, corresponding to a magnetic moment of 1.0011596523 . . . times the Dirac value, is renormalizable. Also, as we have seen, it took a long time before it was found that there are any renormalizable theories at all of the weak interactions. We very much need a guiding principle like renormalizability to help us to pick the quantum field theory of the real world out of the infinite variety of conceivable quantum field theories. Thus, if renormalizability is ultimately to be replaced with some other condition, I would hope that it will be one that is equally or even more restrictive. After all, we do not want merely to describe the world as we find it, but to explain to the greatest possible extent why it has to be the way it is. 



