As the scale of clinical medical research has grown, many individual studies have had insufficient sample sizes, leaving their conclusions imprecise or impossible to draw at all. This gave rise to a research method that pools multiple small studies for combined analysis in order to reach sounder conclusions. In 1976, Glass first named this concept meta-analysis (荟萃分析) and defined it as a method for collecting, combining, and statistically analyzing the results of different studies. The approach gradually developed into a core component and research tool of the emerging discipline of evidence-based medicine. The main purpose of meta-analysis is to reflect previous research results more objectively and comprehensively; it generally involves no original experimentation, but instead synthesizes results already obtained by other studies.
Clearly, the rationale for meta-analysis is to obtain more reasonable interpretations and conclusions: it should be more credible than the original studies. Credibility is therefore the essential foundation of this research method.
A recent opinion article in Nature Medicine warns of a declining trend in the credibility of meta-analysis literature. The authors call for stricter standards, to prevent this research method from harming the field, and even clinical medical research as a whole.
The authors offer three recommendations worth considering. First, set a minimum sample size for meta-analyses, for example no fewer than 1,000 patients; otherwise reliable conclusions are hard to reach. Indeed, meta-analysis exists precisely to overcome the small samples of individual studies; if the pooled sample is still inadequate, the analysis is better postponed. Second, when conducting a meta-analysis of a drug still in early clinical development, check registered clinical trial information, particularly large-scale trials: such studies may not yet be published, but given their size they could substantially contribute to, or correct, the final conclusion. Third, apply standards for the methodology, sample size, and other aspects of the cited studies; quality over quantity is the most important requirement for this type of research.
Related information on this type of literature
1. Publication counts by country
Chinese scholars account for a relatively modest share of this literature, a preliminary indication that the problem is not yet severe here.
| Country/Region | Records | Percentage (%) |
| --- | --- | --- |
| USA | 7860 | 37.126 |
| ENGLAND | 3341 | 15.781 |
| GERMANY | 2133 | 10.075 |
| CANADA | 1930 | 9.116 |
| PEOPLES R CHINA | 1435 | 6.778 |
| NETHERLANDS | 1353 | 6.391 |
| ITALY | 1298 | 6.131 |
| AUSTRALIA | 1215 | 5.739 |
| FRANCE | 1198 | 5.659 |
| SPAIN | 797 | 3.765 |
2. Growth trend of this type of literature over the past 20 years
The growth trend is striking, so the concern is understandable: the scale of clinical research has surely not expanded this fast. In other words, original studies have not increased at this rate, while the rapid rise of analysis papers easily invites suspicion of padding the literature.
Some background on meta-analysis. The prefix "meta" denotes something later and more comprehensive, and is often used to name a new, related field that comments on the original discipline. Meta-analysis includes not only the combination of data but also the epidemiological exploration and evaluation of results, taking the findings of primary studies, rather than individual subjects, as the unit of analysis. The main motivation for meta-analysis is that, among many independently conducted studies, the samples in individual groups are often too small to support any definite conclusion. Depending on the underlying data source, meta-analyses fall into three categories: meta-analysis of published results; meta-analysis of aggregated or combined data; and meta-analysis of individual patient data from independent studies.
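As a rough illustration of the data-combination step, the classic fixed-effect (inverse-variance) approach weights each study's estimate by the inverse of its variance, so more precise studies count for more. This is a minimal sketch with made-up numbers, not the method of any particular study cited here:

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling of study-level estimates.

    Each study's estimate is weighted by 1/SE^2, so larger (more precise)
    studies contribute more to the pooled result.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * est for w, est in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Three hypothetical small trials reporting, say, a log relative risk:
estimates = [0.30, 0.10, 0.25]
std_errors = [0.20, 0.15, 0.25]
pooled, se = fixed_effect_pool(estimates, std_errors)
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # 95% confidence interval
```

Note that the pooled standard error is smaller than any single study's, which is exactly why combining small studies can yield a conclusion none of them supports alone; random-effects models extend this by also accounting for between-study heterogeneity.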
(Meta)analyze this: Systematic reviews might lose credibility
Peter Humaidan & Nikolaos P Polyzos
Nature Medicine 18, 1321 (2012); doi:10.1038/nm0912-1321. Published online 07 September 2012
Doctors and regulatory agencies rely on meta-analyses when setting clinical guidelines and making decisions about drugs. However, as the number of these analyses increases, it's clear that many of them lack robust evidence from randomized trials, which may lead to the adoption of treatment modalities of ambiguous value. Without a more disciplined approach requiring a reasonable minimum amount of data, meta-analyses could lose credibility.
A well-performed meta-analysis can revive treatment options once considered ineffective or reveal the drawbacks of practices previously considered the gold standard. For example, initial reports suggested that the use of erythropoiesis-stimulating agents in patients with cancer could perhaps reduce the number of patients in need of blood transfusion due to anemia by as much as half1. But a recent meta-analysis including almost 14,000 patients from 53 trials demonstrated that this treatment in fact increases mortality by 17% up to 28 days from the end of the active study phase2. Thus, it is clear that the role of meta-analyses can be crucial in everyday clinical practice.
Unfortunately, not all meta-analyses examine such a vast amount of literature to offer insight. Some build on very scant information, and, as such, conducting a meta-analysis has become an easy way to get published. Worse, some people choose to write a meta-analysis during the early days of an interventional treatment, when the field has had little time to amass data. This is definitely not in line with a primary goal of meta-analysis: to provide solutions in contradictory domains.
A simple insight into the deviation from the initial concept of meta-analysis can be gained by scrutinizing PubMed for articles published in July 2012 in the Cochrane Database of Systematic Reviews, the largest registry of systematic reviews. Among the 61 systematic reviews published during this one month, 15% of the reviews included one or zero trials, the latter stating the lack of data necessary to do the analysis. In addition, half of the systematic reviews in the issue included fewer than 1,000 randomized patients. Furthermore, half of those published in the July issue were updated reviews, previously published between 2000 and 2012. Interestingly, 11 of these 31 updated reviews included the same number of trials and participants as the previous review they sought to bring up to date.
It is a widespread issue that goes beyond any one journal or discipline. Over the last decade, the number of meta-analyses in biomedical sciences has exploded. The number of reports in PubMed classified under the meta-analysis 'publication type' grew from 849 in 2000 to 4,720 in 2011—a fivefold increase. The explosion in the number of meta-analyses cannot be interpreted as substantial progress, as in many cases it is linked to analyses that include few studies, with a limited number of participants, and updates of systematic reviews, which add nothing new.
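The counts above come from PubMed's "meta-analysis" publication-type filter. Similar figures can be retrieved programmatically via NCBI's public E-utilities; the sketch below only builds the query URL (the endpoint and the `rettype=count` option are part of the documented esearch API, while the exact counts returned today will differ from the 2012 figures quoted above):

```python
from urllib.parse import urlencode

# NCBI E-utilities esearch endpoint (public API).
ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def meta_analysis_count_url(year):
    """Build an esearch URL counting PubMed records tagged with the
    'meta-analysis' publication type ([pt]) for a given publication
    year ([dp]); rettype=count asks for only the total."""
    params = {
        "db": "pubmed",
        "term": f"meta-analysis[pt] AND {year}[dp]",
        "rettype": "count",
    }
    return ESEARCH + "?" + urlencode(params)

url_2011 = meta_analysis_count_url(2011)
# Fetching this URL (e.g. with urllib.request.urlopen) returns XML whose
# <Count> element holds the total for that year.
```

Repeating the query over a range of years reproduces the growth curve the authors describe.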
Several key steps are necessary to ensure that the flood of meta-analyses does not water down the quality of these reports. First and foremost, authors should be dissuaded from conducting meta-analyses on a restricted number of trials with a limited number of participants. They must resist the urge to write up a meta-analysis merely to feed their own scientific impact number. It is exceptionally hard to set definitive benchmarks for how much data to include in a systematic review, as the parameters vary among disciplines and interventions, but as a general starting point one might hope that the report include an analysis of at least three or four trials with a minimum of 1,000 participants in total for common diseases for which large trials are feasible. There will, of course, be worthwhile reviews that do not meet these targets; the thresholds simply serve as a starting point for discussion.
Second, scientists setting out to conduct a meta-analysis early in the lifetime of a drug's development should check registries such as ClinicalTrials.gov to see whether any large studies will be delivering data in the near future. If there are results from big trials on the way, it is better to wait for that information before completing the meta-analysis. We also assert that a meta-analysis should definitely not be updated if there are no new data to add from more recent trials.
Even if there are a number of trials to analyze, authors need to think twice about going ahead with the meta-analysis if the trials themselves are small. As explained in the most-downloaded article in the history of Public Library of Science, “Why most published research findings are false”3 from Stanford University School of Medicine's John Ioannidis, “a research finding is less likely to be true when the studies conducted in a field are smaller and when the effect sizes are smaller.” Meta-analyses built on such small trials have a rocky foundation. Obviously there are exceptional circumstances, such as the study of a treatment for a rare disease, where patient numbers are small and scientists have no choice but to deal with small numbers. In these cases, utmost attention should be given to the statistical methods applied and their limitations.
For their part, medical editors and reviewers must also be called into action. Irrespective of the impact of the journal, common and stricter criteria should be adopted as to when and how meta-analyses should be accepted. Increased citations and higher impact factors do not necessarily signify scientific merit. On the contrary, plausible scientific evidence, more likely to be replicated and valid in the future, must become a priority and should guide decision-making when it comes to accepting a meta-analysis.
Finally, the most important step toward high-quality medical research in meta-analysis is coordinated action by the entire scientific community. Biomedical researchers should apply stricter criteria when deciding which meta-analyses they cite, paying closer attention to details such as the methodology and the number of trials and patients included. Only conjointly will these actions preserve the meta-analysis as an important tool for decision-making to the benefit of patients and clinicians.