yueliusd07017的个人博客分享 http://blog.sciencenet.cn/u/yueliusd07017

博文

[转载]论文同行评审包括编辑初审 (科技英文听力资料,英汉对照)

已有 1642 次阅读 2024-2-1 21:33 |个人分类:科技英语|系统分类:科普集锦|文章来源:转载

经典句子:

“the paper claims to be a randomized controlled trial but it isn’t” and “when you look at the graphs, it’s pretty clear there’s no effect” and “the authors draw conclusions that are totally unsupported by the data.” Reviewers mostly didn’t notice.

“该论文声称是一项随机对照试验,但实际上并不是”

“看一下图表就很清楚,根本不存在效应”

“作者得出的结论完全不被文中给出的数据支持”。

但审稿人大多没有注意到这些问题。

Only later does some good Samaritan—often someone in the author’s own lab!—notice something weird and decide to investigate. That’s what happened with this paper about dishonesty that clearly has fake data (ironic), these guys who have published dozens or even hundreds of fraudulent papers, and this debacle:

论文发表之后,才有热心人(通常是作者自己实验室里的人)注意到一些奇怪之处并决定追查。这篇关于“不诚实”的论文就是如此——它本身就含有明显的虚假数据(颇具讽刺意味);还有一些作者被发现发表过几十甚至几百篇欺诈性论文;此外还有一场更大的闹剧:

I don’t believe in peer review because I think it’s very distorted and as I’ve said, it’s simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system.

我不相信同行评议,因为我认为它已经被严重扭曲。正如我所说,它只是向均值的回归。我认为同行评议正在阻碍科学。事实上,我认为它已经变成一个彻底腐败的系统。

This extremely bad system is worse than nothing because it fools people into thinking they’re safe when they’re not.

有这个极其糟糕的评审系统比没有评审更糟糕,因为它让人们误以为文章是可信的,而实际上它们根本不可信。

I think we had the wrong model of how science works. We treated science like it’s a weak-link problem where progress depends on the quality of our worst work. If you believe in weak-link science, you think it’s very important to stamp out untrue ideas—ideally, prevent them from being published in the first place. You don’t mind if you whack a few good ideas in the process, because it’s so important to bury the bad stuff.

我们对科学如何运作的模型是错误的。我们把科学当成一个“短板问题”,仿佛进步取决于我们最差工作的质量。如果你相信短板科学,你就会认为铲除错误观点非常重要——最好从一开始就阻止它们发表。你不介意在这个过程中误伤几个好想法,因为埋掉坏东西实在太重要了。

If you’ve got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don’t actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say “INSPECTED BY A FANCY JOURNAL,” and those stickers are very hard to get off. That’s way scarier.

如果你担心的是短板问题,我完全理解。如果我们让人们想说什么就说什么,他们有时会说谎,这听起来很可怕。但其实我们现在也并没有阻止人们说谎,我们只是假装做到了。事实上,我们现在还时常给谎言贴上“经高档期刊审查”的大标签,而这种标签很难撕下来。那才更可怕。

Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat. Remember that it used to be obviously true that the Earth is the center of the universe, and if scientific journals had existed in Copernicus’ time, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science—do you think a bunch of racists would give the green light to a paper showing that Black people are just as smart as white people? Or any paper at all by a Black author? (And if you think that’s ancient history: this dynamic is still playing out today.) We still don’t understand basic truths about the universe, and many ideas we believe today will one day be debunked. Peer review, like every form of censorship, merely slows down truth.

短板思维让科学审查显得合理,但审查的唯一作用是让旧思想更难被推翻。要知道,“地球是宇宙的中心”曾经是显而易见的“事实”;如果哥白尼的时代就有科学期刊,地心说的审稿人会拒掉他的论文,还会为自己阻止了“错误信息”的传播而沾沾自喜。优生学也曾是科学界的热门——你认为一群种族主义者会给一篇表明黑人和白人一样聪明的论文开绿灯吗?或者给任何一篇黑人作者的论文开绿灯?(如果你以为这是老黄历:这种戏码今天仍在上演。)我们对宇宙的基本真相仍知之甚少,我们今天相信的许多观点终有一天会被推翻。同行评议和一切形式的审查一样,只会拖慢真理的脚步。

音频文件:

 https://blog.sciencenet.cn/home.php?mod=attachment&id=1190376

The rise and fall of peer review.mp3

出处:

https://www.experimental-history.com/p/the-rise-and-fall-of-peer-review

The rise and fall of peer review

同行评审的前因后果

https://blog.sciencenet.cn/blog-3589443-1420141.html

英汉对照 (机器翻译)

Hey, this is Adam. Thank you for listening. Two weeks ago I published a post that I was very afraid to publish, because it was about the autobiography of the guy who invented eugenics, and even though I didn't say anything like "I think eugenics is good" (because I don't), I was still afraid of people being mad or getting canceled or whatever, because this is the internet. And I was so surprised that the main feedback I got from people was like, dude, just tell us what this guy thought and we will come to our own conclusions about what's good and what's bad. You don't have to be like, "just to be clear, everybody, I don't think we should do forced sterilizations." And that's good. And yeah, I just felt like I really underestimated the internet, and I underestimated the people who read this blog. So I'm sorry I did that. And I'm so happy to have you all here; that just doesn't seem like the way the internet usually works. It was very heartwarming, and it gave me the inspiration to publish this post, which I think people also might find outrageous in various ways. But again, I shouldn't say, oh, you might find it outrageous. I should just tell you about it and you can come to your own conclusions.

So here we go. It's called "The rise and fall of peer review: why the greatest scientific experiment in history failed, and why that's a great thing." As always, one take.

For the last 60 years or so, science has been running an experiment on itself. The experimental design wasn't great. There was no randomization and no control group. Nobody was in charge, exactly, and nobody was really taking consistent measurements. And yet it was the most massive experiment ever run, and it included every scientist on Earth. Most of those folks didn't even realize they were in an experiment. Many of them, including me, weren't born when the experiment started. If we had noticed what was going on, maybe we would have demanded a basic level of scientific rigor. Maybe nobody objected because the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don't pass muster. They called it peer review.

This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1700s, but they operated more like magazines or newsletters, and their processes for picking articles ranged from "we print whatever we get" to "the editor asks his friend what he thinks" to "the whole society votes." Sometimes journals couldn't get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries. Parenthetical here: only one of Einstein's papers was ever peer reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead. Just a little bit of historical trivia.

That all changed after World War II. Governments poured funding into research, and they convened, quote, "peer reviewers" to ensure they weren't wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was quite rare until the 1960s, became much more common. Then it became universal. Now pretty much every journal uses outside experts to vet papers, and papers that don't please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we've been running for six decades.

The results are in. It failed.

This section is called "A whole lot of money for nothing." Peer review was a huge, expensive intervention. By one estimate, scientists collectively spend 15,000 years reviewing papers every year. It can take months or years for a paper to wind its way through the review system, which is a big chunk of time when people are trying to do things like cure cancer and stop climate change. And universities fork over millions for access to peer-reviewed journals, even though much of the research is taxpayer funded and none of that money goes to the authors or the reviewers. That is a whole different conversation. That is ridiculous.

Huge interventions should have huge effects. If you drop $100 million on a school system, for instance, hopefully it will be clear in the end that you made students better off. If you show up a few years later and you're like, "hey, so how did my $100 million help this school system?" and everybody's like, "well, we're not sure it actually did anything, and also we're all really mad at you now," you'd be really upset and embarrassed. By the way, that is the story of Mark Zuckerberg dropping a hundred million dollars on the Newark school system. Similarly, if peer review improved science, that should be pretty obvious. And we should be pretty upset and embarrassed if it didn't.

Just a nice little drink of water there. It didn't. In all sorts of different fields, research productivity has been flat or declining for decades, and peer review doesn't seem to have changed that trend. New ideas are failing to displace older ones. Many peer-reviewed findings don't replicate, and most of them may be straight-up false. When you ask scientists to rate 20th-century discoveries that won Nobel prizes, they say the ones that came out before peer review are just as good or even better than the ones that came out afterward. In fact, you can't even ask them to rate the Nobel-prize-winning discoveries from the 1990s and the 2000s, because pretty much nothing from that period has won a Nobel prize.

Of course, a lot of other stuff has changed since World War II. We did a terrible job running this experiment, so it's all confounded. All we can say from these big trends is that we have no idea whether peer review helped, it might have hurt, it cost a ton, and the current state of the scientific literature is pretty abysmal. In this biz, we call this a total flop.

This section is called "Postmortem." What went wrong?

Here's a simple question: does peer review actually do the thing it's supposed to do? Does it catch bad research and prevent it from being published? It doesn't. Scientists have run studies where they deliberately add errors to papers, send them out to reviewers, and simply count how many errors the reviewers catch. Reviewers are pretty awful at this. In this study, reviewers caught 30% of the major flaws; in this study they caught 25%; and in this study they caught 29%. These were critical issues, like "the paper claims to be a randomized controlled trial, but it isn't," and "when you look at the graphs, it's pretty clear there's no effect," and "the authors draw conclusions that are totally unsupported by the data." Reviewers mostly didn't notice.

In fact, we've got knock-down, real-world data that peer review doesn't work: fraudulent papers get published all the time. If reviewers were doing their job, we'd hear lots of stories like "Professor Cornelius von Fraud was fired today after trying to submit a fake paper to a scientific journal." But we never hear stories like that. Instead, pretty much every story about fraud begins with the paper passing peer review and being published. Only later does some good Samaritan, often someone in the author's own lab, notice something weird and decide to investigate. That's what happened with this particular paper about dishonesty that clearly has fake data. Ironic. You can click through and read this story; it's so bad. It seems like the real data from that paper is in one font in Excel and the fake data is in another font. It's bad. Then there's these guys who have published dozens or even hundreds of fraudulent papers; that's the Retraction Watch leaderboard of people who have published so many fraudulent papers that have gotten retracted. And then there's this debacle, and I have a tweet here that just says, "wait a second, these are not real error bars. The author literally just put the letter T above the bar graphs." And indeed, it is very clearly the letter T that they've just put on top of the bar graphs instead of error bars.

Why don't reviewers catch basic errors and blatant fraud? One reason is that they almost never look at the data behind the papers they review, which is exactly where the errors and fraud are most likely to be. In fact, most journals don't require you to make your data public at all. You're supposed to provide it, quote, "on request," but most people don't. And we know this because teams of people have tried to request the data and code, and they only get it about 25% of the time. That's how we've ended up in sitcom-esque situations like 20% of genetics papers having totally useless data because Excel autocorrected the names of genes into months and years. So if you open up the data, it would just be a list of months and years instead of the genes that you were supposed to see. Another parenthetical here: when one editor started asking authors to add their raw data after they submitted a paper to his journal, half of them declined and retracted their submissions. This suggests, in the editor's words, a possibility that the raw data did not exist from the beginning.

The invention of peer review may have even encouraged bad research. If you try to publish a paper showing that, say, watching puppy videos makes people donate more to charity, and Reviewer 2 says, "I will only be impressed if this works for cat videos as well," you are under extreme pressure to make a cat-video study work. Maybe you fudge the numbers a bit, or toss out a few outliers, or test a bunch of cat videos until you find one that works, and then you never mention the ones that didn't. Do a little fraud, get a paper published, get down tonight.

This section is called "Peer review: we hardly took you seriously." Here's another way that we can test whether peer review worked: did it actually earn scientists' trust? Scientists often say they take peer review very seriously. But people say lots of things they don't mean, like "it's great to e-meet you" and "I'll never leave you." If you look at what scientists actually do, it's clear they don't think peer review really matters.

First, if scientists cared a lot about peer review, when their papers got reviewed and rejected, they would listen to the feedback, do more experiments, rewrite the paper, etc. Instead, they usually just submit the same paper to another journal. This was one of the first things I learned as a young psychologist, when my undergrad advisor explained there is a, quote, "big stochastic element" in publishing. Translation: it's random, dude. If the first journal didn't work out, we'd try the next one. Publishing is like winning the lottery, she told me, and the way to win is to keep stuffing the box with tickets. When very serious and successful scientists proclaim that your supposed system of scientific fact-checking is no better than chance, that's pretty dismal.

Second, once a paper gets published, we shred the reviews. A few journals publish reviews; most don't. Nobody cares to find out what the reviewers said or how the authors edited their paper in response, which suggests that nobody thinks the reviews actually mattered in the first place.

And third, scientists take unreviewed work seriously without thinking twice. We read preprints and working papers and blog posts, none of which have been published in peer-reviewed journals. We use data from Pew and Gallup and the government, also unreviewed. We go to conferences where people give talks about unvetted projects, and we do not turn to each other and say, "so interesting, I can't wait for it to be peer-reviewed so I can find out if it's true." Instead, scientists tacitly agree that peer review adds nothing, and they make up their minds about scientific work by looking at the methods and results. Sometimes people say the quiet part loud, like Nobel laureate Sydney Brenner. Quote (sorry, he's British): "I don't believe in peer review because I think it's very distorted, and as I've said, it's simply a regression to the mean. I think peer review is hindering science. In fact, I think it has become a completely corrupt system." End quote.

This section is called "Can we fix it? No, we can't." I used to think about all the ways we could improve peer review. Reviewers should look at the data. Journals should make sure that papers aren't fraudulent. It's easy to imagine how things could be better; my friend Ethan and I wrote a whole paper on it. But that doesn't mean it's easy to make things better. My complaints about peer review were a bit like looking at the 35,000 Americans who die in car crashes every year and saying, "people shouldn't crash their cars so much." Okay, but how? Lack of effort isn't the problem. Remember that our current system requires 15,000 years of labor every year, and it still does a really crappy job. Paying peer reviewers doesn't seem to make them any better. Neither does training them. Those are all links to studies you can read if you want. Maybe we can fix some things on the margins, but remember, right now we're publishing papers that use capital Ts instead of error bars, so we've got a long, long way to go.

What if we made peer review way stricter? That might sound great, but it would make a lot of other problems with peer review way worse. For example, you used to be able to write a scientific paper with style. Now, in order to please reviewers, you have to write it like a legal contract. Papers used to begin like, "Help! A mysterious number is persecuting me." And now they begin like, "Humans have been said, at various times and places, to exist, and to even have several qualities, or dimensions, or things that are true about them, but of course this needs further study (Smurgdorf & Blugensnaut, 1978; StickyWigit, 2002; VonFrodd et al., 2018b)."

This blows. And as a result, nobody actually reads these papers. Some of them are like 100 pages long, with another 200 pages of supplemental information, and all of it is written like it hates you and wants you to stop reading immediately. Recently, a friend asked me when I last read a paper from beginning to end. I couldn't remember, and neither could he. "When someone tells me they loved my paper," he said, "I say thank you, even though I know they didn't read it." Stricter peer review would mean even more boring papers, which means even fewer people would read them.

Making peer review harsher would also exacerbate the worst problem of all: just knowing that your ideas won't count for anything unless peer reviewers like them makes you worse at thinking. It's like being a teenager again. Before you do anything, you ask yourself, "but will people think I'm cool?" When getting and keeping a job depends on producing popular ideas, you can get very good at thought-policing yourself into never entertaining anything weird or unpopular at all. That means we end up with fewer revolutionary ideas, and unless you think everything's pretty much perfect right now, we need revolutionary ideas real bad.

But on the off chance you do figure out a way to improve peer review without also making it worse, you can try convincing the nearly 30,000 scientific journals in existence to apply your magical method to the 4.7 million articles they publish every year. Good luck.

This section is called "Peer review is worse than nothing, or: why it ain't enough to sniff the beef." Peer review doesn't work, and there's probably no way to fix it. But a little bit of vetting is better than none at all, right? I say: no way. Imagine you discover that the Food and Drug Administration's method of inspecting beef is just sending some guy, let's call him Gary, around to sniff the beef and say whether it smells okay or not, and the beef that passes the sniff test gets a sticker that says "INSPECTED BY THE FDA." You'd be pretty angry. Yes, Gary may find a few batches of bad beef, but obviously he's going to miss most of the dangerous meat. This extremely bad system is worse than nothing, because it fools people into thinking they're safe when they're not. That's what our current system of peer review does, and it's dangerous. That debunked theory about vaccines causing autism comes from a peer-reviewed paper in one of the most prestigious journals in the world, and it stayed there for 12 years before it was retracted. How many kids haven't gotten their shots because one rotten paper made it through peer review and got stamped with a scientific seal of approval?

If you want to sell a bottle of vitamin C pills in America, you have to include a disclaimer that says none of the claims on the bottle have been evaluated by the Food and Drug Administration. Maybe journals should stamp a similar statement on every paper: "Nobody has really checked whether this paper is true or not. It might be made up, for all we know." That would at least give people the appropriate level of confidence.

This section is called "Science must be free."

Why did peer review seem so reasonable in the first place? I think we had the wrong model of how science works. We treated science like it's a weak-link problem, where progress depends on the quality of our worst work. If you believe in weak-link science, you think it's very important to stamp out untrue ideas, ideally prevent them from being published in the first place. You don't mind if you whack a few good ideas in the process, because it's so important to bury the bad stuff. But science is a strong-link problem: progress depends on the quality of our best work. Better ideas don't always triumph immediately, but they do triumph eventually, because they're more useful. You can't land on the moon using Aristotle's physics, you can't turn mud into frogs using spontaneous generation, and you can't build bombs out of phlogiston. Phlogiston? Phlogiston? Phlogiston. I was told by someone who knows someone who knows someone who knows that it's phlogiston, even though it really seems like it should be phlogiston. Anyway, Newton's laws of physics stuck around; his recipe for the philosopher's stone didn't. We didn't need a scientific establishment to smother the wrong ideas. We needed it to let new ideas challenge old ones, and time did the rest.

If you've got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don't actually prevent people from saying untrue things right now. We just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say "INSPECTED BY A FANCY JOURNAL," and those stickers are very hard to get off. That's way scarier. Weak-link thinking makes scientific censorship seem reasonable, but all censorship does is make old ideas harder to defeat. Remember that it used to be obviously true that the Earth is the center of the universe, and if scientific journals had existed in Copernicus's time, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science.

Do you think a bunch of racists would give the green light to a paper showing that Black people are just as smart as white people? Or any paper at all by a Black author? And if you think that's ancient history: this dynamic is still playing out today. That's a link to a preprint about an ongoing controversy in my field. We still don't understand basic truths about the universe, and many ideas we believe today will one day be debunked. Peer review, like every form of censorship, merely slows down truth.

This section is called "Hooray, we failed." Nobody was in charge of our peer-review experiment, which means nobody has the responsibility of saying when it's over. Seeing no one else, I guess I'll do it. We're done, everybody. Champagne all around. Great work, and congratulations. We tried peer review, and it didn't work. Honestly, I am so relieved. That system sucked. Waiting months just to hear that an editor didn't think your paper deserved to be reviewed; reading long walls of text from reviewers who for some reason thought your paper was the source of all evil in the universe; spending a whole day emailing a journal begging them to let you use the word "years" instead of always abbreviating it to just the letter "y," for no reason. That literally happened to me. We never have to do any of that ever again.

I know we might all be a little disappointed we wasted so much time, but there's no shame in a failed experiment. Yes, we should have taken peer review for a test run before we made it universal. But that's okay. It seemed like a good idea at the time, and now we know it wasn't. That's science. It will always be important for scientists to comment on each other's ideas, of course. It's just this particular way of doing it that didn't work.

What should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. Every scientific paper is just a PDF on the internet. I wrote it in normal language so anyone could understand it. I held nothing back. I even admitted that I forgot why I ran one of the studies. I put jokes in it, because nobody could tell me not to. I uploaded all the materials, data, and code where everybody could see them. I figured I'd look like a total dummy and nobody would pay any attention, but at least I was having fun and doing what I thought was right.

Then, before I even told anyone about the paper, thousands of people found it, commented on it, and retweeted it. And I have some tweets there. People said nice things. I'm not going to read them self-aggrandizingly to you, but they're there. Total strangers emailed me thoughtful reviews. Tenured professors sent me ideas. NPR asked for an interview. The paper now has more views than the last peer-reviewed paper I published, which was in the prestigious Proceedings of the National Academy of Sciences. And now I have a hunch that far more people read this new paper all the way to the end, because the final few paragraphs got a lot of comments in particular. So, I don't know, I guess that seems like a good way of doing it.

I don't know what the future of science looks like. Maybe we'll make interactive papers in the metaverse, or we'll download data sets into our heads, or whisper our findings to each other on the dance floor of techno raves. Whatever it is, it'll be a lot better than what we've been doing for the past 60 years. And to get there, all we have to do is what we do best: experiment.

Thank you for listening. If you like this, if you like anything on the blog, truly the way you could help me the most is just sharing it with someone else. So thank you, as always. You can always find me in the comments, send me an email, and if you shout loud enough, I will hear. I'll see you in two weeks.

=============

嘿,我是亚当,感谢收听。两周前我发表了一篇当时很怕发出来的文章,因为它讲的是优生学创始人的自传。尽管我并没有说“我觉得优生学是好的”(我不这么认为),我仍然担心有人会生气、担心被“封杀”之类的,毕竟这里是互联网。让我非常意外的是,读者给我的主要反馈是:伙计,你只管告诉我们这个人是怎么想的,什么是好、什么是坏,我们会自己下结论。你不必特意声明“大家注意,我不认为我们应该搞强制绝育”。这很好。是的,我觉得我真的低估了互联网,也低估了读这个博客的人。对此我很抱歉。我也很高兴你们都在这里——这似乎不是互联网通常的运作方式。这很暖心,也给了我发表这篇文章的灵感。我想人们也可能会觉得这篇文章在各种意义上“离谱”。但同样,我不该说“哦,你可能会觉得离谱”,我应该只管讲出来,让你们自己下结论。

那么开始吧。文章题为《同行评审的兴衰:为什么史上最伟大的科学实验失败了,而这又为什么是件大好事》。一如既往,一条过。

过去六十年左右,科学一直在拿自己做一场实验。这场实验的设计并不好:没有随机化,没有对照组,谈不上有谁在负责,也没有人持续进行测量。然而它是有史以来规模最大的实验,囊括了地球上的每一位科学家。其中大多数人甚至没有意识到自己正身处实验之中;许多人(包括我)在实验开始时还没出生。如果我们当时注意到发生了什么,也许我们会要求起码的科学严谨性。也许没有人反对,是因为这个假设看起来显然成立:如果有人把关每一篇论文、拒掉不合格的,科学会变得更好。他们称之为同行评审。

这是一个巨大的变化。从古代到近代,科学家写信、传阅专著,阻碍他们交流发现的主要障碍是纸张、邮资或印刷机的成本,偶尔还有天主教会“上门拜访”的代价。科学期刊出现于18世纪,但它们的运作方式更像杂志或通讯,选稿流程五花八门:从“来什么印什么”,到“编辑问问朋友的意见”,再到“全学会投票”。有时期刊收不到足够的稿件,编辑只好四处求朋友投稿,或者自己写文章填版面。几个世纪以来,科学出版一直是个大杂烩。插一句:顺便说一下,爱因斯坦的论文只有一篇经历过同行评审,而他对此又惊讶又恼火,转而把论文发到了另一家期刊。只是一点历史趣闻。

二战之后,这一切都变了。各国政府向研究领域投入大量资金,并召集所谓的“同行评审人”,以确保钱不会浪费在愚蠢的申请上。这些资金变成了论文的洪流,以前愁着填不满版面的期刊,现在愁的是选哪些文章来印。出版前审稿在20世纪60年代之前还相当罕见,此后变得越来越普遍,最终成为通行做法。如今几乎每家期刊都请外部专家审查论文,不能让审稿人满意的论文就会被拒。你仍然可以写信向朋友讲述你的发现,但招聘委员会和资助机构的所作所为,就好像世上唯一存在的科学就是发表在同行评审期刊上的那些东西。这就是我们进行了六十年的大实验。

结果出来了:实验失败了。

这一节叫“花了大钱,一场空”。同行评审是一项巨大而昂贵的干预。据一项估计,科学家们加起来每年要花15,000年的时间审稿。一篇论文在评审系统里兜兜转转,可能要耗上几个月甚至几年,而人们本可以用这些时间去做治愈癌症、遏制气候变化这样的事。大学还要支付数百万美元才能访问同行评审期刊,尽管其中许多研究是纳税人资助的,而且这些钱一分也到不了作者或审稿人手里。那是另一个话题了。这太荒谬了。

巨大的干预理应产生巨大的效果。比如,如果你给一个学区投入一亿美元,最终你应该能清楚地看到学生们过得更好了。如果几年后你回来问:“嘿,我那一亿美元给这个学区带来了什么帮助?”而大家说:“呃,我们不确定它起了什么作用,而且我们现在都很生你的气。”你一定会又沮丧又难堪。顺便说一句,这正是马克·扎克伯格向纽瓦克学区砸下一亿美元的故事。同样,如果同行评审改善了科学,效果应该相当明显;如果没有,我们就该相当沮丧和难堪。

(喝口水。)它没有。在各个不同的领域,科研生产率几十年来一直持平或下降,同行评审似乎并没有改变这一趋势。新思想未能取代旧思想。许多经同行评审的发现无法重复,其中大多数可能干脆就是错的。当你请科学家给20世纪获得诺贝尔奖的发现打分时,他们说同行评审出现之前的发现与之后的一样好,甚至更好。事实上,你甚至没法请他们评价20世纪90年代和21世纪头十年的诺奖级发现,因为那个时期几乎没有什么成果获得过诺贝尔奖。

当然,二战以来还有很多其他事情发生了变化。我们把这个实验做得一团糟,所以一切都混杂在一起。从这些大趋势中我们只能说:我们不知道同行评审是否有帮助,它可能有害,它耗资巨大,而当前科学文献的状况相当糟糕。在这一行里,我们管这叫彻底砸锅。

这一节叫“验尸报告”。到底哪里出了问题?

先问一个简单的问题:同行评审真的做到了它该做的事吗?它能发现糟糕的研究并阻止其发表吗?不能。科学家做过这样的研究:故意在论文中加入错误,把论文发给审稿人,然后数一数审稿人发现了多少错误。审稿人干得相当差劲。在这项研究中,审稿人发现了30%的重大缺陷;在这项研究中是25%;在这项研究中是29%。这些都是关键问题,比如“该论文声称是随机对照试验,但其实不是”“看一下图表就很清楚,根本没有效应”“作者得出的结论完全得不到数据支持”。审稿人大多没有注意到。

事实上,我们有铁证如山的现实数据表明同行评审不起作用:欺诈性论文一直在发表。如果审稿人尽职尽责,我们应该常常听到这样的故事:“科尼利厄斯·冯·弗劳德教授因试图向科学期刊提交假论文,今日被解雇。”但我们从没听过这样的故事。相反,几乎每一个学术欺诈的故事都始于论文通过同行评审并发表。论文发表之后,才有好心人(通常是作者自己实验室里的人)注意到一些奇怪之处并决定追查。这篇关于“不诚实”的论文就是如此,它本身显然含有虚假数据。讽刺吧。你可以点进去读读这个故事,实在太糟糕了:那篇论文里的真实数据在Excel里是一种字体,假数据是另一种字体。真糟糕。还有这些人,发表过几十甚至几百篇欺诈性论文,那是Retraction Watch(撤稿观察)的“排行榜”,上榜者都发表过大量已被撤回的欺诈论文。然后还有这场闹剧,我这里有一条推文,写的是:“等一下,这些不是真正的误差线,作者简直就是在条形图上方放了个字母T。”而事实也的确如此:条形图顶上放的显然就是字母T,而不是误差线。

为什么审稿人发现不了基本错误和公然的欺诈?原因之一是,他们几乎从不查看所审论文背后的数据,而那正是错误和欺诈最可能藏身的地方。事实上,大多数期刊根本不要求你公开数据。你应该“应请求”提供数据,但大多数人不会。我们之所以知道这一点,是因为有团队尝试索取数据和代码,而成功率只有25%左右。这就是为什么我们会陷入情景喜剧式的境地:20%的遗传学论文的数据完全无用,因为Excel把基因名自动更正成了月份和年份。所以如果你打开数据,看到的只会是一串月份和年份,而不是你本该看到的基因。再插一句:当一位编辑开始要求作者在向他的期刊投稿之后补交原始数据时,一半的作者拒绝并撤回了投稿。用这位编辑的话说,这暗示“原始数据可能从一开始就不存在”。

同行评审的发明甚至可能助长了糟糕的研究。假设你想发表一篇论文,表明看小狗视频会让人们向慈善机构捐更多的钱,而二号审稿人说:“只有在猫视频上也成立,我才会觉得有说服力。”这时你就面临巨大的压力,必须让猫视频的研究“做出来”。也许你稍微捏造一下数字,或者扔掉几个离群值,或者测试一堆猫视频,直到找到一个“有效”的,然后对那些无效的绝口不提。搞点小假,发篇论文,今晚嗨起来。

这一节叫“同行评审:我们几乎从没把你当回事”。还有另一种方法可以检验同行评审是否起了作用:它真的赢得了科学家的信任吗?科学家常说他们非常重视同行评审。但人们常说言不由衷的话,比如“很高兴(在线上)认识你”和“我永远不会离开你”。如果你看看科学家的实际行为,很明显他们并不认为同行评审真的重要。

第一,如果科学家真的在乎同行评审,那么论文被评审并拒稿后,他们应该听取反馈、补做实验、重写论文等等。可实际上,他们通常只是把同一篇论文投给另一家期刊。这是我作为年轻心理学研究者最早学到的事情之一,当时我的本科导师解释说,发表中存在一个很大的“随机因素”。翻译过来就是:哥们儿,这是碰运气。第一家期刊不行,我们就试下一家。她告诉我,发表就像中彩票,而中奖的办法就是不停地往箱子里塞彩票。当非常严肃且功成名就的科学家宣称,你们所谓的科学事实核查系统并不比碰运气更好,这实在令人沮丧。

第二,论文一旦发表,我们就把评审意见销毁了。少数期刊会公开评审意见,大多数不会。没有人想知道审稿人说了什么,或者作者据此如何修改了论文,这说明根本没有人认为那些评审意见真的重要。

第三,科学家会毫不犹豫地认真对待未经评审的工作。我们阅读预印本、工作论文和博客文章,它们都没有发表在同行评审期刊上。我们使用皮尤、盖洛普和政府的数据,同样未经评审。我们参加会议,听人们讲述未经审查的项目,我们并不会面面相觑地说:“真有意思,我等不及它通过同行评审,好知道它是不是真的。”相反,科学家们心照不宣地认为同行评审没有任何增益,他们通过查看方法和结果来对科学工作形成自己的判断。有时有人会把潜台词大声说出来,比如诺贝尔奖得主悉尼·布伦纳(抱歉,他是英国人),引用原话:“我不相信同行评议,因为我认为它已经被严重扭曲。正如我所说,它只是向均值的回归。我认为同行评议正在阻碍科学。事实上,我认为它已经变成一个彻底腐败的系统。”引用完毕。

这一节叫“我们能修好它吗?不,不能”。我曾经琢磨过各种改进同行评审的办法:审稿人应该查看数据;期刊应该确保论文没有造假。想象事情如何能变得更好很容易,我和朋友伊桑还为此写了一整篇论文,但这并不意味着把事情变好很容易。我对同行评审的抱怨,有点像看着每年死于车祸的35,000名美国人说“人们不该这么频繁地撞车”。好吧,可怎么做呢?问题不在于不够努力。别忘了,我们当前的系统每年需要15,000年的劳动,却仍然干得一塌糊涂。给审稿人付钱似乎并不能让他们审得更好,培训他们也不行(这些都是链接,感兴趣可以去读那些研究)。也许我们能在边边角角修修补补,但别忘了,我们现在还在发表用大写字母T冒充误差线的论文,所以前面的路还长得很。

如果我们把同行评审变得严格得多呢?这听起来不错,但它会让同行评审的许多其他问题变得更糟。例如,从前你可以写一篇有文采的科学论文;现在为了取悦审稿人,你必须把它写得像法律合同。论文的开头曾经可以是:“救命,一个神秘的数字在迫害我。”而现在的开头是:“人类曾在不同的时间和地点被认为是存在的,甚至具有若干性质、维度或关于他们为真的事情,但这当然需要进一步研究(Smurgdorf & Blugensnaut, 1978; StickyWigit, 2002; VonFrodd et al., 2018b)。”

This blows. As a result, nobody actually reads these papers. Some of them run 100 pages, with another 200 pages of supplementary information, all of it written as if it hates you and wants you to stop reading immediately. A friend recently asked me when I had last read a paper from start to finish. I couldn't remember, and neither could he. When someone tells me they loved my paper, I say "thank you," even though I know they didn't read it. Stricter peer review means more boring papers, which means fewer people will read them.

Making peer review harsher would also exacerbate the worst problem of all: merely knowing that your ideas count for nothing unless peer reviewers like them makes you worse at thinking. It's like being a teenager again: before you do anything, you ask yourself, "will people think I'm cool?" When getting and keeping a job depends on producing popular ideas, you can become so good at policing your own thoughts that you never entertain anything weird or unpopular at all. That means we end up with very few revolutionary ideas, and unless you think everything is pretty much perfect right now, we need revolutionary ideas real bad.

But in case you do find a way to improve peer review without making anything else worse, you can then try convincing the nearly 30,000 scientific journals in existence to apply your magical method to the 4.7 million articles they publish every year. Good luck.

This section is called "peer review is worse than nothing, or: why a sniff test isn't enough for beef." Peer review doesn't work, and there is probably no way to fix it. But surely a little review is better than none at all, right? I say: no way. Imagine you discovered that the FDA's method of inspecting beef was to send one guy, let's call him Gary, to sniff the beef and say whether it smells okay, and that beef passing the sniff test got a sticker reading "INSPECTED BY THE FDA." You would be furious. Yes, Gary might catch a few batches of bad beef, but obviously he would miss most of the dangerous meat. This extremely bad system is worse than nothing because it fools people into thinking they're safe when they're not. That is exactly what our current peer review system does, and it is dangerous. The debunked theory that vaccines cause autism came from a peer-reviewed paper in one of the most prestigious journals in the world, and it sat there for twelve years before being retracted. How many kids never got their shots because one bad paper made it through peer review and got stamped with science's seal of approval?

If you want to sell a bottle of vitamin C pills in America, you have to include a disclaimer saying that none of the claims on the bottle have been evaluated by the Food and Drug Administration. Maybe journals should stamp a similar statement on every paper: nobody has actually checked whether this paper is true; for all we know, it could be entirely made up. That would at least give people the appropriate level of confidence.

This section is called "science must be free."

Why did peer review seem so reasonable in the first place? I think we had the wrong model of how science works. We treated science like a weak-link problem, where progress depends on the quality of our worst work. If you believe in weak-link science, you think it's very important to stamp out untrue ideas, ideally preventing them from ever being published. You don't mind whacking a few good ideas in the process, because burying the bad stuff matters so much.

But science is a strong-link problem: progress depends on the quality of our best work. Better ideas don't always win immediately, but they win eventually, because they are more useful. You can't land on the moon with Aristotle's physics, you can't turn mud into frogs with spontaneous generation, and you can't build bombs out of phlogiston. (Phlogiston? Phlogiston. I'm told by someone who knows someone who knows that it's pronounced one way, even though it really seems like it should be the other.) Anyway, Newton's laws of physics are still around; his recipe for the Philosopher's Stone is not. We don't need a scientific establishment to smother wrong ideas. We need it to let new ideas challenge old ones, and time does the rest.

If you've got weak-link worries, I totally get it. If we let people say whatever they want, they will sometimes say untrue things, and that sounds scary. But we don't actually prevent people from saying untrue things right now; we just pretend to. In fact, right now we occasionally bless untrue things with big stickers that say "INSPECTED BY A FANCY JOURNAL," and those stickers are very hard to get off. That's way scarier.

Weak-link thinking makes scientific censorship look reasonable, but all censorship does is make old ideas harder to dethrone. Remember, it used to be obvious that the Earth was the center of the universe, and if scientific journals had existed in Copernicus's day, geocentrist reviewers would have rejected his paper and patted themselves on the back for preventing the spread of misinformation. Eugenics used to be hot stuff in science. Do you think a bunch of racists would have green-lit a paper showing that Black people are just as smart as white people? Or any paper with Black authors at all? And if you think that's ancient history, the same dynamic is still playing out today. (That's a link to a preprint about an ongoing controversy in my field.) We still don't understand basic truths about the universe, and many of the ideas we believe today will one day be debunked.

Like every other form of censorship, peer review only slows down the truth. This section is called "hooray, we failed." Nobody was in charge of our peer review experiment, which means nobody has the responsibility of calling it off at the end. Seeing no one else stepping up, I guess I'll do it. We're done, everybody! Champagne all around! Great work, and congratulations. We tried peer review, and it didn't work. Honestly, I'm relieved. That system sucked. Waiting months just to hear that an editor didn't think your paper was worth reviewing; reading long walls of text from reviewers who, for some reason, believe your paper is the source of all evil in the universe; spending an entire day emailing a journal to beg them to let you use the word "years" instead of always abbreviating it to the letter "Y," for no reason. (That really happened to me.) We never have to do any of that again.

I know we may all feel a little disappointed at having wasted so much time, but there is no shame in a failed experiment. Yes, we should have tested peer review before making it universal. But that's okay. It seemed like a good idea at the time, and now we know it wasn't. That's science. It will always be important for scientists to comment on each other's ideas, of course. It's just that this particular way of doing it didn't work.

So what should we do now? Well, last month I published a paper, by which I mean I uploaded a PDF to the internet. (Every scientific paper is just a PDF on the internet.) I wrote it in plain language so anyone could understand it. I held nothing back; I even admitted that I forgot why I ran one of the studies. I put jokes in it, because nobody could tell me not to. I uploaded all of my materials, data, and code where everyone could see them. I figured I'd look like a fool and nobody would pay attention, but at least I'd have had fun and done what I thought was right.

Then, before I had even told anyone about the paper, thousands of people found it, commented on it, and retweeted it. I've got some tweets there. People said nice things; I won't self-aggrandizingly read them to you, but they're there. Total strangers emailed me thoughtful comments. Tenured professors sent me ideas. NPR asked for an interview. That paper now has more views than the last peer-reviewed paper I published, which appeared in the prestigious Proceedings of the National Academy of Sciences.

And I have a hunch that far more people will read this new paper from start to finish, because the final few paragraphs in particular drew a lot of comments. So, I don't know, this seems like a pretty good way to do things.

I don't know what the future of science looks like. Maybe we'll make interactive papers in the metaverse, or download datasets straight into our heads, or whisper our findings to each other on the dance floor of a techno rave. Whatever it is, it will be way better than what we've been doing for the past sixty years. And to get there, all we have to do is the thing we do best: experiment.

Thanks for listening. If you liked this, or anything on the blog, truly the best way to help me out is to share it with someone else. So, as always, thank you. You can always find me in the comments, or email me, and if you shout loud enough, I'll hear you. See you in two weeks.



https://blog.sciencenet.cn/blog-3589443-1420232.html
