Personal knowledge sharing from a folk scientist http://blog.sciencenet.cn/u/hongfei Folk scientist@China scientist@world

Blog post

Conduct and Misconduct in Science (repost)

Viewed 7440 times 2009-7-14 13:24 | Personal category: Reading and Learning | System category: Life - Other


2009.07.13

A few days ago I went back and read the 1996 book The Flight from Science and Reason, which appeared as Volume 775 of the Annals of the New York Academy of Sciences.

Information about The Flight from Science and Reason at Barnes & Noble: http://search.barnesandnoble.com/The-Flight-from-Science-and-Reason/Paul-R-R-Gross/e/9780801856761

Among its contributions, I found David Goodstein's essay Conduct and Misconduct in Science particularly enlightening.

Professor Goodstein's incisive discussion, drawn from his own experience, resonated with me in many places. I had intended to quote only a few sentences in this post, but in the end I could not resist the temptation to repost the full text below.

Full text of David Goodstein's Conduct and Misconduct in Science: http://www.its.caltech.edu/~dg/conduct_art.html

Wikipedia entry on David Goodstein: http://en.wikipedia.org/wiki/David_Goodstein

***************************************************
Conduct and Misconduct in Science

by David Goodstein

My career in Scientific Fraud began some years ago when, as Caltech's Vice Provost, I became aware that Federal regulations would soon make it necessary for us, for the first time, ever, to have in place formal rules about what to do if the unthinkable were to happen on our own campus, the very inner sanctum of pure science. Since then it has virtually become an academic sub-specialty for me. I have given lectures, written articles [1] and taught courses about it. I have written the regulations, seen them adopted by Caltech (and copied by other universities) and, much to my dismay, seen them put into action in a high-profile case at Caltech. During that case I had the remarkable experience of seeing a skilled lawyer, with a copy of my regulations highlighted and underlined in four colors, guide us in following every word I had written, whether I meant it or not. Through all of that, I have learned a few things about conduct and misconduct in science that I would like to share with you.

Let me begin by stating right up front a few of the things I have come to believe. Outright fraud in science is a special kind of transgression, different from civil fraud. When it does occur, it is almost always found in the biomedical sciences, never in fields like physics or astronomy or geology, although other kinds of misconduct do occur in these other fields. Science is self-correcting, in the sense that a falsehood injected into the body of scientific knowledge will eventually be discovered and rejected, but that does not protect us against fraud, because injecting falsehoods into the body of science is never the purpose of those who perpetrate fraud. That's why science needs active measures to protect itself. Unfortunately, the government has so far made a mess of trying to do that job. Part of the reason government agencies have performed so poorly in this arena is that they have mistakenly tried to obscure the important distinction between real fraud and lesser forms of scientific misconduct. I also believe that fraud and other forms of serious misconduct have been and still are quite rare in science, but there are reasons to fear that they may become less rare in the future. Finally, I believe we scientists are responsible for complicity in presenting to the public a false image of how science works, one that can sometimes make normal behavior by scientists appear guilty. Let me try to explain what I mean by all of this.

Fraud means serious misconduct with intent to deceive. Intent to deceive is the very antithesis of ethical behavior in science. When you read a scientific paper, you are free to agree or disagree with its conclusions, but you must always be confident that you can trust its account of the procedures that were used and the results produced by those procedures. There are, to be sure, minor deceptions in virtually all scientific papers, as there are in all other aspects of human life. For example, scientific papers typically describe investigations as they logically should have been done rather than as they actually were done. False steps, blind alleys and outright mistakes are usually omitted once the results are in and the whole experiment can be seen in proper perspective. Also, the list of authors may not reveal who deserves most of the credit (or blame) for the work. Behaviors of these kinds may or may not be correct or laudable, but they do not amount to fraud. Real fraud occurs only if the procedures needed to replicate the results of the work or the results themselves are in some way knowingly misrepresented.

This view of scientific fraud differs from civil fraud as described in tort law in that in civil fraud there must be a plaintiff, who brings the case to court, and who must be able to prove that the misrepresentation was believed and led to actual damages. By contrast, if a scientist makes a serious misrepresentation in a scientific presentation, knowing that it's false, or recklessly disregarding whether it is false, then virtually all scientists would agree, scientific fraud has been committed. It is unnecessary to prove that anyone in particular believed the statement or was damaged by believing it. In fact, it makes no difference at all whether the conclusions reached by the scientist are correct or not. In this stern view of scientific fraud, all that matters is whether procedures and results are reported honestly or not.

This kind of misbehavior seems to be restricted largely to the biomedical and closely related sciences. A study by Sociologist Pat Woolf [2] of some 26 cases that surfaced one way or another between 1980 and 1986 revealed that 21 came from biomedical science, two from chemistry and biochemistry, one from physiology and the other two from psychology. I don't know of any more recent studies of this kind, but one cannot help noticing that The Office of Research Integrity of the Public Health Service, which investigates misconduct in research supported by the National Institutes of Health, in other words, biomedical research, seems constantly to be embroiled in controversy, while the Inspector General of the National Science Foundation, which supports all of the sciences including biology, conducts its business in relative anonymity, unmolested by serious attention from the press [3].

There are undoubtedly many reasons for this curious state of affairs. For example, many of these cases involve MDs, rather than PhDs, who are trained to do research. To an MD, the welfare of the patient may be more important than scientific truth. In a recent case, a physician in Montreal was found to have falsified the records of participants in a large-scale breast cancer study. Asked why he did it, he said it was in order to get better medical care for his patients.[4] However, the larger number of cases arise from more self-interested motives.

In the cases of scientific fraud that I have looked at, three motives, or risk factors have always been present. In all cases, the perpetrators,

1. were under career pressure;
2. knew, or thought they knew, what the answer would turn out to be if they went to all the trouble of doing the work properly, and
3. were working in a field where individual experiments are not expected to be precisely reproducible.

It is by no means true that fraud always occurs when these three factors are present; quite the opposite, they are often present and fraud is quite rare. But they do seem to be present whenever fraud occurs. Let us consider them one at a time.

Career Pressure: This is included because it is clearly a motivating factor, but it does not provide any distinctions. All scientists, at all levels from fame to obscurity, are pretty much always under career pressure. On the other hand, simple monetary gain is seldom if ever a factor in scientific fraud.

Knowing the answer: If we defined scientific fraud to mean knowingly inserting an untruth into the body of scientific knowledge, it would be essentially nonexistent, and of little concern in any case because science would be self-correcting. Scientific fraud is always a transgression against the methods of science, never purposely against the body of knowledge. Perpetrators always think they know how the experiment would come out if it were done properly, and decide it is not necessary to go to all the trouble of doing it properly. The most obvious seeming counter-example to this assertion is Piltdown Man, a human skull and ape jaw planted in a gravel pit in England around 1908. If ever a fraudulent physical artifact was planted in the scientific record, this was it. Yet it is quite possible that the perpetrator was only trying to help along what was known or thought to be the truth. Prehistoric remains had been discovered in France and Germany, and there were even rumors of findings in Africa. Surely human life could not have started in those uncivilized places. And, as it turned out, the artifact was rejected by the body of scientific knowledge. Long before modern dating methods showed it to be a hoax in 1954, growing evidence that our ancestors had ape skulls and human jaws made Piltdown Man an embarrassment at the fringes of anthropology.

Reproducibility: In reality, experiments are seldom repeated by others in science. When a wrong result is found out, it is almost always because new work based on the wrong result doesn't proceed as expected. Nevertheless, the belief that someone else can repeat an experiment and get the same result can be a powerful deterrent to cheating. This appears to be the chief difference between biology and the other sciences. Biological variability -- the fact that the same procedure, performed on two organisms as nearly identical as possible is not expected to give exactly the same result -- may provide some apparent cover for a biologist who is tempted to cheat. This last point, I think, explains why scientific fraud is found mainly in the biomedical area.

Federal agencies, particularly the Public Health Service (PHS) (parent of the National Institutes of Health or NIH) and the National Science Foundation (NSF), take quite a different view from that outlined above of what constitutes serious misconduct in science. The word fraud is never used at all, because fraud means intent to deceive, and the agencies don't want investigations hung up on issues of motive.

Instead they (both PHS and NSF) define serious misconduct to be, "...fabrication, falsification, plagiarism, or other practices that seriously deviate from those that are commonly accepted within the scientific community for proposing, conducting and reporting research."[5]

Controversy has swirled around this statement ever since it was first proposed in 1988, and issued as a "final rule" in 1990. No one takes issue with "fabrication, falsification (and) plagiarism," (ffp, mnemonic, "frequent flier plan" for the cognoscenti). The controversial part is the catch-all phrase "practices that seriously deviate from those commonly accepted..." To the agencies, that phrase is a needed mandate to enable them to carry out their mission as stewards of public funds. To many scientists and other observers it raises the horrifying specter of the government forcing scientists into some preconceived mold of orthodox thought. To me it seems poor public policy to create a government bureaucracy mandated to root out transgressions that are not specified in advance. The NSF Inspector General's office has argued, for example, that the catch-all phrase was needed for it to be able to deal with a Principal Investigator (government phrase for one of their grantees) who committed sexual harassment against female students during an NSF sponsored archeological dig.[6] Surely that was serious misconduct, it deviated seriously from practices commonly accepted within the scientific community, and the NSF had a duty to do something about it. Nevertheless, there are other laws governing that kind of misbehavior. It was not scientific misconduct.

Even the uncontroversial "ffp" part of the final rule is different from the view of scientific fraud outlined above, and also different from the definition of research fraud given in the Caltech rules. The Federal rule includes plagiarism, and the Caltech rules, not content with plagiarism alone, specify "...faking data, plagiarism and misappropriation of ideas." The idea here is that "faking data" includes both "fabrication and falsification," and that one can misappropriate ideas in science without committing plagiarism, a rather technical term that means copying a certain number of consecutive words or consecutive musical notes or the like, without proper attribution.

Early on, when I had just begun to think and write about this subject, one of my very distinguished Caltech colleagues came to see me, and said, in effect, "Look, son, let me explain to you how all this works. Scientific truth is the coin of the realm. Faking data means counterfeiting the coin. That's a serious crime. All this other stuff you talk about, plagiarism, authorship problems and so on, that's just a matter of who's been handling the coins. That's much less important." In light of this "coin of the realm" view, one can summarize easily everything that's been said above. Science is a special currency that needs to be protected. I have outlined a strict view, in which only counterfeiting the coin is real scientific fraud. The federal agencies see themselves instead as central banks, charged with overseeing all transactions, and catching all wrongdoers, not merely counterfeiters. Thus they wish to protect not only the integrity of the "monetary" system, but all other forms of behavior by scientists as well.

Plagiarism and misappropriation of ideas are intermediate cases, because although they do not threaten the integrity of the coin itself, they do deal with the orderly assigning of credit for scientific discovery, an important motivating factor in scientific progress. One would think that actual plagiarism would be rare in science, where the precise words used are less important than the substantive content, but it turns out that outright plagiarism does show up, especially in the frantic cut-and-paste procedures that are used in preparing grant proposals for federal funds (for currency of the real kind, not the metaphorical kind).

The Federal regulations (both PHS and NSF) call for each university to investigate any case that turns up before turning it over to the sponsoring agencies (that's why universities are required to have regulations of their own). The Caltech regulations call for a scientific investigation rather than a judicial proceeding. There is no confrontation of the accuser, cross-examination of witnesses and so on. In fact, private attorneys are strongly discouraged. Nevertheless, sufficient safeguards are built in to protect both accused and accuser that the courts have many times ruled that this sort of proceeding does have the requisite degree of fairness, or, to use the inappropriate constitutional term, of "due process." In limited experience up to now, these proceedings have been more successful than those of the Federal agencies, where the Office of Research Integrity of PHS has seen virtually all of its major decisions overturned on appeal, and where the Inspector General of NSF has had little to do in this area.

Although I have said I believe fraud in science to be quite rare, the popular press, certain congressmen and some others inside and outside science sometimes seem to believe otherwise. These suspicions were given a boost recently by an article published in the Sigma Xi journal American Scientist [7] and widely noted in the press. Based on questionnaires sent to 2000 graduate students and 2000 faculty in chemistry, civil engineering, microbiology and sociology, it concluded that "...such problems are more pervasive than many insiders believe." A more thoughtful analysis, however, might lead to a different conclusion. To indicate what I mean, let us leave aside for the moment the perceived behavior of students (who, after all, may not yet have completed their ethical as well as their scientific educations), and lesser misdeeds ("keeping poor research records," "violation of government regulations") and get to the core of the matter: how often were faculty members perceived by fellow faculty members or by students to have committed "coin of the realm" type misbehavior (defined in the article as "falsifying or 'cooking' research data")? The answer is that 6% of the faculty and 8% of the students responding to the survey reported having seen direct evidence of such misbehavior once or twice (none more than once or twice, and 94% and 91% respectively none at all). What are we to make of such figures?

Needless to say, the response to the questionnaire was less than universal, and those who responded may be imperfectly representative of the whole group. Furthermore, of the 6-8% who think they saw instances of real scientific fraud, only a fraction will be sufficiently confident or sufficiently disturbed to bring forth charges to an appropriate authority. Of the charges that do come forth to an appropriate authority (this I can attest to on the basis of first-hand experience at Caltech) only a fraction (one-third or one-fourth) survive the initial discreet inquiry phase and proceed to full investigation (the inquiry usually finds that the perceived misconduct was a misunderstanding that can be resolved to everyone's satisfaction, including the whistleblower's). Finally, of those that go to investigation, not all will be found guilty. Thus, the correct conclusion to be drawn from the "data" (if that is what they are) is that real fraud in science is exceedingly rare. However, I think a more sensible conclusion is that this kind of study is not very valuable, and that we don't know much about the incidence of fraud in science.
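
To make the filtering argument in the preceding paragraph concrete, here is a minimal back-of-the-envelope sketch in Python. Only two of the numbers come from the passage above (the roughly 6-8% perception figure and the one-third-to-one-fourth inquiry survival rate); the reporting fraction and the fraction of investigations ending in a finding of misconduct are purely hypothetical placeholders, chosen only to show how quickly the apparent incidence shrinks at each stage.

# Back-of-the-envelope illustration of the filtering argument above.
# Only 'perceived' and 'survives_inquiry' are taken from the text;
# 'reports_filed' and 'found_guilty' are hypothetical placeholders.
perceived = 0.07         # ~6-8% of respondents report seeing direct evidence once or twice
reports_filed = 0.25     # assumed: fraction of those perceptions actually brought to an authority
survives_inquiry = 0.30  # one-third to one-fourth proceed to full investigation (from the text)
found_guilty = 0.50      # assumed: fraction of full investigations ending in a misconduct finding

implied_rate = perceived * reports_filed * survives_inquiry * found_guilty
print(f"Implied rate of confirmed fraud per respondent: {implied_rate:.2%}")  # about 0.26%

Under assumptions of this kind the survey's 6-8% headline figure corresponds to a confirmed-fraud rate well under one percent, which is the point being made here, although, as the next sentence notes, the more sensible conclusion may simply be that such surveys tell us little.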

Whatever the situation is now and has been in the past, it seems likely to change for the worse in the future. Throughout most of its recent history, science was constrained only by the limits of imagination and creativity of its participants. In the past couple of decades that state of affairs has changed dramatically. Science is now constrained primarily by the number of research posts, and the amount of research funds available. What had always previously been a purely intellectual competition has now become an intense competition for scarce resources. This change, which is permanent and irreversible, is likely to have an undesirable effect in the long run on ethical behavior among scientists. Instances of scientific fraud are almost sure to become more common, but so are other forms of scientific misbehavior.

For example, the institution of peer review is now in critical danger.

Peer review is used by scientific journals to decide what to publish and by granting agencies to decide what research to support. Obviously, sound decisions on what to publish and what research to support are crucially important to the proper functioning of science. Journal editors usually send manuscripts submitted to them to referees who will remain anonymous to the authors of the manuscript. Funding agencies sometimes do the same, especially for small projects, and sometimes instead assemble panels of referees to judge proposals for large projects.

Peer review is quite a good way to identify valid science. It was wonderfully well suited to an earlier era when progress in science was limited only by the number of good ideas available. Peer review is not at all well suited, however, to adjudicate an intense competition for scarce resources such as research funds or pages in prestigious journals. The reason is obvious enough. The referee, who is always among the few genuine experts in the field, has an obvious conflict of interest. It would take impossibly high ethical standards for referees to fail to use their privileged anonymity to their own advantage, but, as time goes on, more and more referees have their ethical standards eroded by receiving unfair reviews when they are authors. Thus the whole system of peer review is in peril.

Editors of scientific journals and program officers at the funding agencies have the most to gain from peer review, and they steadfastly refuse to believe that anything might be wrong with the system. Their jobs are made easier because they have never had to take responsibility for decisions. They are also never called to account for their choice of referees, who in any case always have the proper credentials. Since the referees perform a professional service, almost always without pay, the primary responsibility of the editor or program officer is to protect the referee. Thus referees are never called to account for what they write in their reviews. As a result, referees are able, with relative impunity, to delay or deny funding or publication to their rivals. When misconduct of this kind occurs, it is the referee who is guilty, but it is the editors and program officers who are responsible for propagating a corrupt system that makes misconduct almost inevitable.

This is the kind of misconduct that is, I fear, rampant in all fields of science, not only biomedical science. Recently, as part of a talk to a large audience of mostly young researchers at an extremely prestigious university, I outlined this analysis of the crisis of peer review. The moderator, a famous senior scientist, was incredulous. He asked the audience how many disagreed with my heresy. No one responded. Then he asked how many agreed. Every hand in the house went up. Many of us, in my generation, wish to believe that nothing important has changed in the way we conduct the business of doing science. We are wrong. Business as usual is no longer a real option for how we conduct the enterprise of science.

Finally, I think we scientists are guilty of promoting, or at least tolerating, a false popular image of ourselves that may be flattering but that, in the long run, leads to real difficulties when the public finds out that our behavior doesn't match that image. I like to call it The Myth of the Noble Scientist. It arises, I think, out of the long-discredited Baconian view of the scientist as disinterested seeker of the truth, gathering facts with mind cleansed of prejudices and preconceptions. Thus the ideal scientist would be more honest than ordinary mortals, certainly immune to such common human failings as pride or personal ambition. When it turns out, as invariably it does, that scientists are not at all like that, the public that we have misled may react with understandable anger or disappointment. The fact is that scientists are usually rigorously honest about the things that really matter to them, such as the accurate reporting of procedures and data. In other arenas, such as disputes over priority or credit, they tend to behave like the ordinary mortals they are. Furthermore, scientists are not disinterested truth-seekers; they are more like players in an intense, winner-take-all competition for scientific prestige, or perhaps merchants in a no-holds-barred marketplace of ideas. The sooner we learn to admit to those facts, and to distinguish carefully between serious scientific misconduct and common human conduct by scientists, the better off we'll all be.

--------------------------------------------------------------------------------
1. David L. Goodstein, Engineering and Science, Winter 1991, p. 11; American Scholar, vol. 60, p. 505, Autumn 1991.
2. Patricia K. Woolf, Project on Scientific Fraud and Misconduct, Report on Workshop Number One, American Association for the Advancement of Science, 1988, p. 37.
3. The Office of the Inspector General of the National Science Foundation recounts its activities twice each year in its Semiannual Report to the Congress.
4. Reported, e.g., in the New York Times, 4 and 12 April 1994, and Time, 28 March 1994.
5. See, for example, the Office of Inspector General of the National Science Foundation, Semiannual Report to the Congress, no. 9 (April 1, 1993 - September 30, 1993), p. 22, for a discussion of this definition.
6. D. E. Buzzelli, "The Definition of Misconduct in Science: A View from the NSF," Science, 29 January 1993, p. 584.
7. J. P. Swazey, M. S. Anderson and K. S. Lewis, American Scientist, vol. 81, p. 542, 1993.



https://blog.sciencenet.cn/blog-176-243464.html
