Wu Yishan's blog http://blog.sciencenet.cn/u/Wuyishan | Research Fellow, Chinese Academy of Science and Technology for Development; doctoral supervisor, Department of Information Management, Nanjing University


Reviewer comments on a manuscript we submitted to Learned Publishing (2009)

Posted 2024-6-28 07:56 | Personal category: Scientometrics research | System category: Commentary

Blogger's note: In 2009, a paper co-written with my student was submitted to the journal Learned Publishing, and we received detailed reviewer comments. The reviewer in effect pointed out flaws in the design of the study, problems that could not be solved by revising the manuscript; the only remedy would have been to redesign the study and do it all over again. But the student had already graduated and could not redo the work, so I gave the paper up. Among my English-language manuscripts, this is the only one I decided to abandon after receiving critical reviews; all the others were eventually published after revision.

 

Reviewer comments on a manuscript we submitted to Learned Publishing (2009)

Wu Yishan

 

1. First and foremost, I believe that it is outside the scope of the general audience of Learned Publishing. This type of article should be submitted to a more specialized publication, such as Scientometrics.

2. The experimental set-up is inadequate by way of the selection of so few journals (Table 1). A total of 17 journals, 8 international and 9 national, further split into three categories, is not a representative sample. Comparing JAMA and NEJM with e.g. Journal of Traditional Chinese Medicine seems to be a little inappropriate.

3. The description of the origin of the citation data is unclear. For instance, the time span is listed as 2000 to 2004. It is unclear whether this refers to the publication years, to the years of the citations, or to both.

4. There is no explanation as to where the figures quoted for the citing half-life (Table 2) have come from. It is possible that they originate from one or other of the three citation indexes mentioned, but no data is presented to substantiate this, nor to explain what period of time they cover. Thomson/ISI data on citing (or cited) half-lives does not extend beyond 10 years; if the value exceeds 10 years, it is reported as ">10". The fact that some values in these figures exceed 10 suggests that the data does not originate from Thomson/ISI. That the Philosophy of Medicine journals have citing half-lives intermediate between Medicine and Philosophy is of little consequence. [An illustrative sketch of how a citing half-life is computed appears after this list.]

5. There is no description at all of how the data used to form the citation visualizations were generated, and therefore no conclusions can be reliably drawn regarding these figures. Visualizations of the citation environment are becoming ever more common in the scientometric literature; see http://www.leydesdorff.net/jcr04/jcr2pajek.pdf and the references therein for an example of the state of the art. The paper under review is not a good example of this type of analysis. The citation mapping procedure is one that has generated enormous debate in the scientometric community; see for example http://users.fmg.uva.nl/lleydesdorff/aca07/index.htm and the references therein, particularly Ahlgren et al., 2004; Bensman, 2004; White, 2003, 2004; Leydesdorff, 2005. To give no information on how the data were generated is to trivialize the process. Figures 1-5 purport to show a strengthening of the relationship between the journal sub-sets. Removal of the 2001 and 2003 figures (Figures 2 and 4) would have increased the visual contrast; however, it is impossible to confirm this from visual inspection of the figures alone. As no minimum threshold of similarity has been applied (or at least stated to have been applied), and only a limited set of journals is selected, there will always be a tendency for the journals to be connected, and the apparent closeness over time may simply be an artifact of the data. The author will have created similarity measurements which are then fed into the visualization software (NB the visualization software is not named, but should be). These values may indeed indicate a strengthening, but from Figures 1-5 themselves it is impossible to say so. The lack of numerical information on the magnitude of the similarity measures prevents any conclusions from being drawn from this information. The fact that there are likely to be large differences in the magnitude of citations between the different journals could lead to a large degree of between-year 'noise' in the sample. Comparing the thickness of the line between JAMA and NEJM, Figure 4 shows a very large difference from the other figures. This is unexpected, and may indicate some problems with the data. [A sketch of a similarity threshold applied to a small citation matrix appears after this list.]

6. The data on the distribution of author affiliations does not provide any insight, as it is limited to two journals only, and only to titles from the Philosophy of Medicine sub-group. Further, it is unclear how these sets have been defined, e.g. under what criteria and by whom the assignment was made. No explanation has been provided as to why only the first author's affiliation was used, nor as to why so many affiliations were classified as 'other' for JME.

7. As per point 4, the comparison of citing half-lives of the sets of journals suffers from inadequate numbers of journals and from the lack of any explanation of the origin of these figures. Are comparisons between the domestic and international journals valid? Do the figures come from comparable citation indexes? The observation that the journals from the Philosophy of Medicine sub-group have the shortest citing half-life of the three groups is interesting in that it differs from the observation regarding the international journals.

8. As per point 5, no conclusions can be drawn from the visualization data of the citation networks of the domestic journals without knowing its origin.

9. In the analysis of the share of references (Table 5) between the Philosophy, Medicine, and Philosophy of Medicine sub-groups, it is unclear how these sets have been defined, e.g. under what criteria and by whom the assignment was made. The differences in the share of 'medicine' or 'philosophy' references may be easily attributable to differences in the annual output of the Medicine & Philosophy journal, but no mention is made of this potential confounding factor. NB the authors refer to a Table 8, where they probably mean Table 5.

10. The comparison of author affiliations for the two domestic Philosophy of Medicine journals is not the same comparison as that in Table 2, and no information is provided as to the criteria for this assignment of affiliation.

11. The comparison between domestic and international journals is weak, as we cannot say with certainty that the citation networks as formulated are comparable. The results discussed in Table 7 need further explanation as to their origin. Data for the domestic journals is identical to that presented in Table 5, which is for a single domestic journal, Medicine and Philosophy. The international data is from the Journal of Medicine & Philosophy. This data was presented in Table 3; however, in that table there was a fourth category, "Other", which is not presented in Table 7. Taken together, Table 7 does not compare domestic with international journals; it compares a single domestic journal with a single international journal, and the latter has had the category "Other" removed between Tables 3 and 7.

12. The conclusions are all based on the limited set of journal data available, and as such are simply not very strong.

13. The references are entirely inadequate.
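
For readers unfamiliar with the metric questioned in point 4, here is a minimal sketch of how a citing half-life is usually computed: it is the (interpolated) age by which half of a journal's cited references are accounted for, counting back from the citing year. The function and the reference counts below are invented for illustration; they are not the method or data of the manuscript, nor of Thomson/ISI.

```python
def citing_half_life(refs_by_age):
    """refs_by_age[k] = number of cited references that are k years old
    (k = 0 means published in the citing year). Returns the interpolated
    age by which half of all cited references are accounted for."""
    total = sum(refs_by_age)
    half = total / 2.0
    cumulative = 0.0
    for age, count in enumerate(refs_by_age):
        if count and cumulative + count >= half:
            # linear interpolation within this year's slice
            return age + (half - cumulative) / count
        cumulative += count
    # more than half of the references lie beyond the observed window;
    # Thomson/ISI would report this as ">10" for its 10-year window
    return float('inf')

# Invented example: counts of references aged 0..12 years.
counts = [30, 45, 50, 40, 35, 30, 25, 20, 15, 10, 8, 6, 4]
print(round(citing_half_life(counts), 1))
```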
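
Point 5's remark about a "minimum threshold of similarity" can also be made concrete. The sketch below builds a cosine-similarity matrix from a small, entirely hypothetical journal-to-journal citation matrix and keeps only links above a cut-off before they would be drawn as edges; the journal names, counts, and the 0.2 threshold are assumptions for illustration, not data from the paper or its figures.

```python
import numpy as np

# Hypothetical citation matrix: rows = citing journals, columns = cited journals.
journals = ["JAMA", "NEJM", "J Med Philos", "Philos Sci"]
C = np.array([
    [0,   950,  12,   1],
    [870,   0,   9,   0],
    [40,   55,   0,  30],
    [2,     3,  25,   0],
], dtype=float)

# Cosine similarity between citing profiles (the measure argued for by
# Ahlgren et al., 2004, in this kind of setting).
norms = np.linalg.norm(C, axis=1, keepdims=True)
cosine = (C @ C.T) / (norms @ norms.T)

# Apply a minimum similarity threshold before drawing edges, so that weak,
# incidental links do not make every pair of journals appear connected.
THRESHOLD = 0.2  # arbitrary cut-off chosen for this sketch
edges = [(journals[i], journals[j], round(float(cosine[i, j]), 2))
         for i in range(len(journals)) for j in range(i + 1, len(journals))
         if cosine[i, j] >= THRESHOLD]
print(edges)
```

With only a handful of journals and no such threshold, almost every pair shows some nonzero similarity and ends up connected, which is exactly the artifact the reviewer warns about in point 5.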

 

 



https://blog.sciencenet.cn/blog-1557-1440041.html
