Bobby's personal blog http://blog.sciencenet.cn/u/Bobby

Blog post

What is post-publication peer review?

32,752 reads · 2009-3-13 07:04 | Personal category: Reflections on Science | System category: Commentary | Tags: post-publication peer review

 

Yuan Xianxun's blog post, "Ethics Issues in Post-Peer Review"

http://www.sciencenet.cn/m/user_content.aspx?id=219925

 

Wang Zhiming's blog post, "The Arrival of the Era of Post-Peer Review"

http://www.sciencenet.cn/blog/user_content.aspx?id=219943

 

Note: strictly speaking, "post-peer review" should be called "post-publication peer review", i.e., peer review (or evaluation) after publication.

 

1. The necessity of post-publication peer review

Readers may refer to the following article:

Are journals doing enough to prevent fraudulent publication?

Source: http://www.cmaj.ca/cgi/content/full/174/4/431

PDF: Are journals doing enough to prevent fraudulent publication?

Recent warnings by editors of 3 major journals that data contained in published papers were or may have been incomplete,1 falsified2 or fabricated3 have dismayed scientists and scientific editors around the world and added to the public's growing scepticism about the authority of science. How is it that flawed or fraudulent research can slip through the net of peer review and editorial scrutiny?

Reputable scientific journals use a systematic approach to reviewing and editing research papers. At CMAJ, submissions that are not intercepted after an initial screening for suitability and relevance are sent for peer review. Reviewers are chosen on the basis of their interest and expertise, publication record, and quality of previous reviews. Peer reviewers devote perhaps a few hours to reading the paper, consulting the existing literature and writing their review. About 20% of the completed reviews we receive are rated as excellent; we generally succeed in obtaining 2 "good" or "excellent" reviews for each manuscript.

After peer review, submissions are carefully reassessed by the scientific editors, and about 6% are selected for publication. Almost all require substantive editing, guided by a scientific editor working closely with the authors. Once this process of revision is complete, the manuscript is copyedited for clarity, precision, coherence and house style. Problems with the presentation and interpretation of data can come to light at any point in this process, even at the late stage of copyediting.

For the most part, this intensive series of editorial check-points works well. But it is not perfect. In 2005, PubMed received 67 notices of article retractions (Sheldon Kotzin, National Library of Medicine; personal communication, 2006). This is undoubtedly an underestimate of the total number of flawed, grossly misleading or frankly fraudulent papers.

Editors (and peer reviewers) work from the submitted manuscript along with any other material supplied by the authors (e.g., survey instruments or additional tables, graphs and figures). In assessing randomized clinical trials, most editors examine the study protocol to try to ensure that the study report reflects the planned design and analysis. However, it is almost impossible to detect by these processes whether data have been fabricated, or if key elements are missing. Editors, particularly of general journals, rarely have the expertise in the particular topic of the research to enable them even to suspect fabrication when it occurs. Reviewers may have the expertise but not necessarily the time to examine findings in exhaustive detail; moreover, they can assess only those data that the authors actually disclose.

  Alarmed by their own experiences with particular manuscripts, some journals are taking further steps to ensure that authors are faithful to their data. For example, the Journal of the American Medical Association (JAMA) now requires independent statistical re-analysis of the entire raw data set of any industry-sponsored study in which the data analysis has been conducted by a statistician employed by the sponsoring company.4 The Journal of Cell Biology (www.jcb.org) has specific policies prohibiting the enhancement of images and scrutinizes submitted images for evidence of manipulation. It will be important to evaluate the effectiveness of these measures as time goes on, since their costs in time and resources are not trivial.

  At CMAJ we are contemplating the steps that would be required to allow us to make available, as an online-only appendix, the entire data set on which a research paper is based. Doing so would enable more intensive post-publication peer review. Interested persons with the necessary expertise could confirm the published analyses, conduct further analyses and increase the efficiency of research by making it more widely used. Fraud might also be detected sooner, and perhaps the knowledge that their data set will be open to public scrutiny will deter some authors from fabricating or falsifying data (if it does not make others more clever in their deceits). Current online publishing systems enable authors to readily supplement their articles with data sets in any file format (spreadsheets, databases, jpegs, etc.) and to index these files for proper attribution and with helpful information (e.g., the open source Open Journal Systems at http://pkp.sfu.ca/ojs; Dr. John Willinsky, University of British Columbia; personal communication, 2006). The costs of posting additional data as appendices to manuscripts are trivial, and the ethical and legal obstacles (rendering the data anonymous when they involve patients, and protecting the intellectual property rights of investigators and sponsors) can be overcome.5

  No editorial review system will ever be entirely impermeable to human error or deceit. But journals could do more to ensure the integrity of published scientific results; one place to start might be to publish all of the data on which research findings are based. ― CMAJ

  

REFERENCES

1. Bombardier C, Laine L, Reicin A, et al. Comparison of upper gastrointestinal toxicity of rofecoxib and naproxen in patients with rheumatoid arthritis. VIGOR Study Group. N Engl J Med 2000;343:1520-8.

2. Hwang WS, Roh SI, Lee BC, et al. Patient-specific embryonic stem cells derived from human SCNT blastocysts. Science 2005;308:1777-83.

3. Sudbø J, Lee JJ, Lippman SM, et al. Non-steroidal anti-inflammatory drugs and the risk of oral cancer: a nested case-control study. Lancet 2005;366:1359-66.

4. Fontanarosa PB, Flanagin A, DeAngelis CD. Reporting conflicts of interest, financial aspects of research and role of sponsors in funded studies. JAMA 2005;294:110-1.

5. Walter C, Richards EP. Public access to scientific research data. Available: http://biotech.law.lsu.edu/IEEE/ieee36.htm (accessed 2006 Jan 13).

 

 

2. PLoS's challenge to tradition: the post-publication peer review model

For background on PLoS, readers may see my earlier post, "Watching the Rising Star of Science Journals: the PLoS Family of Free Open-Access Online Journals": http://www.sciencenet.cn/m/user_content.aspx?id=51127

Unlike most journals, PLoS ONE publishes any methodologically sound paper regardless of the perceived importance of its results; reviewers check only whether the experimental methods and analyses contain obvious, serious errors. PLoS ONE holds that a paper's importance is shown by the attention and citations it receives after publication. Readers can comment on and rate each PLoS ONE paper online, and the editors use this feedback to identify and recommend important papers. As one PLoS ONE editor put it: "We strive to make the journal's papers the starting point of a discussion rather than its end point." (http://cbb.upc.edu.cn/showart.asp?art_id=84&cat_id=4)
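The triage mechanism described above (readers rate and comment, editors surface well-regarded papers) can be sketched in a few lines. This is purely an illustrative assumption about how such feedback could be aggregated, not PLoS ONE's actual algorithm; the data structure and scoring rule are mine.

```python
# Hypothetical sketch of post-publication triage: papers accumulate
# reader ratings (1-5) and comments, and candidate "important" papers
# are surfaced by sorting on that feedback.

def average(values):
    """Mean of a list, or 0.0 for papers with no ratings yet."""
    return sum(values) / len(values) if values else 0.0

def rank_papers(papers):
    """Sort papers by mean rating, breaking ties by comment count."""
    return sorted(
        papers,
        key=lambda p: (average(p["ratings"]), len(p["comments"])),
        reverse=True,
    )

papers = [
    {"title": "Paper A", "ratings": [5, 4, 5], "comments": ["solid methods"]},
    {"title": "Paper B", "ratings": [3, 3], "comments": []},
    {"title": "Paper C", "ratings": [], "comments": ["flawed stats?"]},
]

for p in rank_papers(papers):
    print(p["title"], round(average(p["ratings"]), 2))
```

In practice an editor would weigh the substance of comments, not just their count, but the sketch shows the basic idea of ranking emerging from reader feedback rather than pre-publication judgement.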

On this point, when Wang Danhong and He Jiao interviewed Zhang Shuguang, a senior researcher at the Massachusetts Institute of Technology, for "Challenging Tradition: Publish First, Evaluate Later", Zhang said: "This is a very good idea. Publication means the real evaluation is just beginning, not ending. If a paper is truly good, people learn of it faster, saving a great deal of time, effort, and money. Likewise, if a paper is flawed or fraudulent, it can be discovered quickly. In the long run, this benefits the development of science." (2007-2-1 8:51, http://www.sciencenet.cn/htmlnews/200721102729932823.html)

 

3. Is post-publication peer review feasible?

Readers may refer to the following article:

Can post publication peer review work? The PLoS ONE report card

Source: http://blog.openwetware.org/scienceintheopen/2008/08/27/can-post-publication-peer-review-work-the-plos-one-report-card/

 

This post is an opinion piece and not a rigorous objective analysis. It is fair to say that I am on record as an advocate of the principles behind PLoS ONE and am also in favour of post-publication peer review, and this should be read in that light. [ed: I've also modified this slightly from the original version because I got myself mixed up in an Excel spreadsheet]

To me, anonymous peer review is, and always has been, broken. The central principle of the scientific method is that claims, and the data to support those claims, are placed publicly in the view of expert peers. They are examined, and re-examined on the basis of new data, considered and modified as necessary, and ultimately discarded in favour of an improved, or more sophisticated, model. The strength of this process is that it is open, allowing for extended discussion of the validity of claims, theories, models, and data. It is a bearpit, but one in which actions are expected to take place in public (or at least community) view. To have as the first hurdle to placing new science in the view of the community a process which is confidential, anonymous, arbitrary, and closed is an anachronism.

It is, to be fair, an anachronism that was necessary to cope with rising volumes of scientific material in the years after the second world war, as the community increased radically in size. A limited number of referees was required to make the system manageable, and anonymity was seen as necessary to protect the integrity of this limited pool of referees. This was a good solution given the technology of the day. Today, it is neither a good system nor an efficient one, and we have, in principle, the ability to do peer review differently, more effectively, and more efficiently. However, thus far most of the evidence suggests that the scientific community doesn't want to change. There is, reasonably enough, a general attitude that if it isn't broken it doesn't need fixing. Nonetheless there is a constant stream of suggestions, complaints, and experimental projects looking at alternatives.

The last 12-24 months have seen some radical experiments in peer review. Nature Publishing Group trialled an open peer review process. PLoS ONE proposed a qualitatively different form of peer review, rejecting the idea of 'importance' as a criterion for publication. Frontiers have developed a tiered approach where a paper is submitted into the 'system' and will gradually rise to its level of importance based on multiple rounds of community review. Nature Precedings has expanded the role and discipline boundaries of pre-print archives, and a white paper has been presented to EMBO Council suggesting that the majority of EMBO journals be scrapped in favour of retaining one flagship journal, for which papers would be handpicked from a generic repository where authors would submit, along with referees' reports and the authors' response, on payment of a submission charge. Of all of these experiments, none could be said to be a runaway success so far, with the possible exception of PLoS ONE. PLoS ONE, as I have written before, succeeded precisely because it managed to reposition the definition of 'peer review'. The community have accepted this definition, primarily because it is indexed in PubMed. It will be interesting to see how this develops.

PLoS has also been aiming to develop ratings and comment systems for their papers as a way of moving towards some element of post publication peer review. I, along with some others (see full disclosure below) have been granted access to the full set of comments and some analytical data on these comments and ratings. This should be seen in the context of Euan Adie’s discussion of commenting frequency and practice in BioMedCentral journals which broadly speaking showed that around 2% of papers had comments and that these comments were mostly substantive and dealt with the science. How does PLoS ONE compare and what does this tell us about the merits or demerits of post publication peer review?

PLoS ONE has a range of commenting features, including a simple rating system (on a scale of 1-5), the ability to leave free-text notes, comments, and questions, and, in keeping with a general Web 2.0 feel, the ability to add trackbacks, a mechanism for linking up citations from blogs. Broadly speaking, a little more than 13% of all papers have ratings (380 of 2773) and around 23% have comments, notes, or replies to either (647 of 2773, not including any from PLoS ONE staff). Probably unsurprisingly, most papers that have ratings also have comments. There is a very weak positive correlation between the number of citations a paper has received (as determined from Google Scholar) and the number of comments (R^2 = 0.02, which is probably dominated by papers with both no citations and no comments, which are mostly recent; none of this is controlled for publication date).
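The engagement figures quoted here (380 rated and 647 commented papers out of 2773) and the weak citation-comment correlation can be checked with a few lines of Python. This is a minimal sketch: the helper and variable names are mine, and no real per-paper data is included.

```python
# Back-of-envelope check of the engagement shares quoted above, plus
# a plain Pearson-correlation helper of the kind one could run over
# real per-paper (citations, comments) counts to obtain an R^2.

total_papers = 2773
rated = 380       # papers with at least one rating
commented = 647   # papers with comments, notes, or replies

rating_share = rated / total_papers       # roughly 13.7%
comment_share = commented / total_papers  # roughly 23.3%

def pearson_r(xs, ys):
    """Pearson correlation coefficient; square it to obtain R^2."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / (var_x * var_y) ** 0.5

print(f"rated: {rating_share:.1%}, commented: {comment_share:.1%}")
```

Run over the actual per-paper citation and comment counts, `pearson_r(citations, comments) ** 2` would yield the R^2 figure the post reports.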

Overall this is consistent with what we'd expect. The majority of papers have neither comments nor ratings, but a significant minority do. What is slightly surprising is that where there is arguably a higher barrier to adding something (writing a text comment versus clicking a button to rate) there is actually more activity. This suggests to me that people are actively uncomfortable with rating papers as opposed to leaving substantive comments. These numbers compare very favourably to those reported by Euan on comments in BioMedCentral, but they are not yet moving into the realms of the majority. It should also be noted that there has been a consistent programme at PLoS ONE aimed at increasing the involvement of the community. Broadly speaking, I would say that the data we have suggest that programme has been a success in raising involvement.

So are these numbers 'good'? In reality I don't know. They seem to be an improvement on the BMC numbers, suggesting that as systems improve and evolve there is more involvement. However, one graph I received seems to indicate that there hasn't been an increase in the frequency of comments within PLoS ONE over the past year or so, which one would hope to see. Has this been a radical revision of how peer review works? Not yet, certainly: not until the vast majority of papers have ratings, and more importantly not until we have evidence that people are using those ratings. We are not in a position where we are about to see a stampede towards radically changed methods of peer review, and this is not surprising. Tradition changes slowly. We are still only just becoming used to the idea of the 'paper' being something that goes beyond a pdf; embedding that within a wider culture of online rating, and the use of those ratings, will take some years yet.

So I have spent a number of posts recently discussing the details of how to make web services better for scientists. Have I got anything useful to offer to PLoS ONE? Well I think some of the criteria I suggested last week might be usefully considered. The problem with rating is that it lies outside the existing workflow for most people. I would guess that many users don’t even see the rating panel on the way into the paper. Why would people log into the system to look at a paper? What about making the rating implicit when people bookmark a paper in external services? Why not actually use that as the rating mechanism?

I emphasised the need for a service to be useful to the user before any 'social effects' are present. What can be offered to make the process of rating a paper useful to a single user in isolation? I can't really see why anyone would find this useful unless they are dealing with huge numbers of papers and can't remember which one is which from day to day. It may be useful within groups or journal clubs, but all of these require a group to sign up. It seems to me that if we can't frame it as a useful activity for a single person, then it will be difficult to get the numbers required to make this work effectively on a community scale.

In that context, I think getting the numbers to around the 10-20% level for either comments or ratings has to be seen as an immense success. I think it shows how difficult it is to get scientists to change their workflows and adopt new services. I also think there will be a lot to learn about how to improve these tools and get more community involvement. I believe strongly that we need to develop better mechanisms for handling peer review, and that it will be a very difficult process getting there. But the results will be seen in more efficient dissemination of information and more effective communication of the details of the scientific process. For this, PLoS and the PLoS ONE team, along with other publishers such as BioMedCentral and Nature Publishing Group that are working on developing new means of communication and improving the ones we have, deserve applause. They may not hit on the right answer first off, but the current process of exploring the options is an important one, and not without its risks for any organisation.

Full disclosure: I was approached along with a number of other bloggers to look at the data provided by PLoS ONE and to coordinate the release of blog posts discussing that data. At the time of writing I am not aware of who the other bloggers are, nor have I read what they have written. The data that was provided included a list of all PLoS ONE papers up until 30 July 2008, the number of citations, citeulike bookmarks, trackbacks, comments, and ratings for each paper. I also received a table of all comments and a timeline with number of comments per month. I have been asked not to release the raw data and will honour that request as it is not my data to release. If you would like to see the underlying data please get in contact with Bora Zivkovic.

 

 



https://blog.sciencenet.cn/blog-39731-219983.html
