Welcome to our first blog commentary. One of the purposes of the blog is to generate discussion about issues in health care. This “Clinical Commentary” section is an invitation to our housestaff and faculty to submit their own thoughts and viewpoints on current issues. The views expressed in this section are solely those of the authors and do not necessarily represent the views of Clinical Correlations.
Commentary by Gregory Mints MD and Nirav Shah MD, MPH
The meta-analysis of Rosiglitazone’s effect on cardiovascular events by Nissen (1) hit both the lay and medical media like an exploding bomb. Unfortunately, much of the ensuing discussion had relatively little to do with the quality of the paper itself (2); disproportionate attention went instead to the failure of drug safety oversight in general and to attempts to assign blame for it to the manufacturer of Rosiglitazone and/or the FDA (3, 4). The paper thus appears to have become a political lever in the fight over the future direction of drug oversight in this country. We contend that concerns about the medication approval process in the U.S., and the influence of drug manufacturers on that process, however important and acute, should not interfere with objective analysis of the published studies. We do not believe that the ends justify the means (i.e., that wrong arguments are acceptable for the right reasons), and we think that politicizing data interpretation harms the cause of reforming the relationships among the FDA, the pharmaceutical companies, and consumers. Most of our thoughts on this issue arose in discussions with two other members of our faculty, Drs. Natalie Levy and Tanping Wong. It is our opinion that the meta-analysis in question is of extremely poor methodological quality, which precludes any meaningful interpretation of its data. We therefore believe that no change in current practice is warranted, a conclusion supported by a recent editorial in the Lancet (5).
Meta-analysis is a technique for combining the results of individual, usually small, studies into one combined estimate. When variability among the results of individual studies can be attributed to chance alone (lack of heterogeneity), these results can be combined into a single estimate of the effect with greater certainty (narrower confidence intervals) than in any of the individual studies. It is, however, relatively rare for heterogeneity to be absent. When substantial differences exist among the study results (that is, when heterogeneity is present), individual results cannot be pooled. Counterintuitively for those not well versed in meta-analytic inquiry, this is precisely when the technique becomes most powerful and useful. A procedure called meta-regression explores which features of the individual studies are responsible for the observed heterogeneity of effect. For example, it may transpire that studies including older patients, or patients with pre-existing cardiovascular disease, consistently show higher mortality than others. The insight gained from such analyses is then used to design proper RCTs. The role of the meta-analytic approach is therefore mostly to generate new hypotheses, to be later tested by better, more robust methods (RCTs). Meta-analysis is simply one form of observational study, like case-control or cohort studies, with the unit of analysis being each included study.
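The age example above can be made concrete with a tiny weighted meta-regression. This is a minimal sketch with invented numbers; `weighted_metaregression` is our own illustrative helper, not a library function. Each study's effect size is regressed on one study-level covariate (here, mean patient age), with inverse-variance weights so that larger, more precise studies count for more.

```python
def weighted_metaregression(effects, std_errors, covariate):
    """Weighted least-squares regression of study effect sizes on a
    single study-level covariate, using inverse-variance weights.
    A clearly nonzero slope suggests the covariate explains part of
    the between-study heterogeneity."""
    w = [1.0 / se ** 2 for se in std_errors]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, covariate)) / sw
    my = sum(wi * y for wi, y in zip(w, effects)) / sw
    slope = (sum(wi * (x - mx) * (y - my)
                 for wi, x, y in zip(w, covariate, effects))
             / sum(wi * (x - mx) ** 2 for wi, x in zip(w, covariate)))
    return my - slope * mx, slope  # (intercept, slope)

# Four hypothetical trials: the log odds ratio of events rises with mean age.
mean_age = [55, 60, 65, 70]
log_or = [0.10, 0.20, 0.30, 0.40]
ses = [0.25, 0.25, 0.25, 0.25]
intercept, slope = weighted_metaregression(log_or, ses, mean_age)
# A clearly positive slope would flag age as a candidate explanation for
# heterogeneity, to be tested properly in a subsequent RCT.
```

The regression output is a hypothesis, not a conclusion, which is exactly the role the text above assigns to meta-analytic methods.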
Importantly, the summary estimates obtained by pooling results of individual studies (which can be done only when no heterogeneity is present) tend to be more optimistic than those obtained from RCTs, and the confidence intervals they report tend to be narrower. Consequently, meta-analysis is expected to generate some “false positive” results, i.e., it will sometimes find a significant difference between two groups when no such difference exists in reality. It follows that meta-analysis as a method of investigation should be presumed inferior to a well-designed, large randomized study. It is useful mostly when no large randomized study addressing the issue is available (and none is expected in the near future). This was not the case with Rosiglitazone, for which a large randomized trial (RECORD) specifically addressing its cardiovascular safety is underway.
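To see mechanically why pooled confidence intervals come out narrower, here is a minimal inverse-variance fixed-effect pooling sketch; the trial numbers are entirely hypothetical:

```python
import math

def fixed_effect_pool(effects, std_errors):
    """Inverse-variance fixed-effect pooling: each study is weighted
    by 1/SE^2, and the pooled standard error shrinks as the weights
    accumulate, which is what narrows the confidence interval."""
    w = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    return pooled, math.sqrt(1.0 / sum(w))

# Three hypothetical small trials reporting log odds ratios:
log_ors = [0.30, 0.45, 0.20]
ses = [0.25, 0.30, 0.28]
pooled_or, pooled_se = fixed_effect_pool(log_ors, ses)
# pooled_se is smaller than any single study's SE, so the pooled
# 95% CI (pooled_or +/- 1.96 * pooled_se) is the narrowest of all.
```

The shrinking standard error is legitimate when the studies truly estimate one common effect; when they do not, the same arithmetic manufactures false precision, which is the “false positive” risk described above.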
Because of the suboptimal “specificity” of meta-analyses (MAs), it is critical that rigorous procedures be adhered to in both conducting and reporting them. At least two massive efforts have produced guidelines on how an MA should be performed and reported: the Cochrane Handbook (6) and the Quality of Reporting of Meta-analyses (QUOROM) statement (7). Most major journals formally require their contributors to follow one of these guidelines: the Annals of Internal Medicine, for example, explicitly requires that its meta-analyses comply with QUOROM (8), while the Lancet requires compliance with Cochrane standards (9). It is interesting that the NEJM has no such formal requirement.
Meta-analysis can perform neither magic nor miracles. It should not be used to combine “apples and oranges,” no matter what statistical contortions are applied. This refers to biological heterogeneity, which is separate from, and unrelated to, statistical heterogeneity. The latter can be assessed by statistical methods; the former is assessed by common sense. In the Rosiglitazone meta-analysis, trials of diabetic patients were combined with trials of pre-diabetics, which should be met with immediate skepticism. Control groups in the included studies also differed widely (placebo, or any combination of other anti-diabetic medications), again making the results biologically heterogeneous. By the same token, crude risks (instead of event rates) from short-term (e.g., 24-week) and long-term (e.g., 4-year) studies should not be combined, as this makes no clinical sense.
The most important part of a meta-analysis is the systematic review. Search strategies, inclusion and exclusion criteria, and the reasons each study was counted in or out must be explicitly stated. This ensures reproducibility and, equally importantly, allows estimation of what is called ‘publication bias’: the tendency for only positive (or only negative) results to be reported. Publication bias is very common, obviously skews the results of a meta-analysis, and can be at least partially corrected by several statistical methods. The NEJM paper not only fails to list reasons for excluding individual studies, but also ignores the possibility of publication bias entirely.
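One of those statistical screens for publication bias can be sketched in a few lines. This is an Egger-style regression; the data and the helper name `egger_intercept` are invented for illustration. Each study's standardized effect (estimate divided by its standard error) is regressed on its precision (1/SE); an intercept far from zero signals funnel-plot asymmetry of the kind that small-study publication bias produces.

```python
def egger_intercept(effects, std_errors):
    """Egger-style small-study test (a sketch): ordinary least-squares
    regression of standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero suggests funnel-plot asymmetry, one
    signature of publication bias."""
    y = [e / se for e, se in zip(effects, std_errors)]
    x = [1.0 / se for se in std_errors]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return my - slope * mx

ses = [0.1, 0.2, 0.4, 0.8]        # small SE = large study
unbiased = [0.2, 0.2, 0.2, 0.2]   # same effect at every study size
biased = [0.15, 0.2, 0.3, 0.5]    # small studies report inflated effects
# The unbiased set gives an intercept near zero; the biased set,
# where only small positive trials "got published", does not.
```

A formal analysis would also attach a confidence interval to the intercept, but even this sketch shows the check costs almost nothing, which makes its complete omission from the NEJM paper harder to excuse.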
One statement from the Methods section of the paper deserves particular mention: “A P value of more than the nominal level of 0.10 for the Q statistic indicated a lack of heterogeneity across trials, allowing for the use of a fixed-effects model.” There are at least four problems with this one statement alone:
1. The Q statistic is a method of assessing statistical heterogeneity among the studies included in a meta-analysis. It is not a very powerful test (that is, it is not very sensitive), meaning it will tend NOT to detect heterogeneity among the studies even when it is present. Good practice in meta-analysis is to use more than one method of assessing heterogeneity; if all the methods agree, heterogeneity is likely absent. Reliance on the Q statistic alone likely underestimated the existing heterogeneity.
2. Because the Q test is so weak, the standard cut-off is usually 0.2, not the 0.1 or 0.05 we are used to for most p-values. This means that if there is a 20% or greater chance (by this statistic) that the results are heterogeneous, one should conclude that heterogeneity is in fact present. Not only did the study by Nissen use a cut-off of only 10%, it also did not report the actual value of Q obtained.
3. Once heterogeneity has been demonstrated among the results of the pooled studies, they represent ‘apples’ and ‘oranges’ and MAY NOT be pooled (by fixed-effect models, random-effects models, or divine intervention).
4. There are two basic ways to pool the results of individual studies: fixed-effect models and random-effects models. Without going into too much detail, the fixed-effect model is the more optimistic of the two; that is, it will tend to produce narrower confidence intervals. Consequently, results so pooled may show a statistically significant difference between two groups when no such difference exists. Usually, pooling is done by both methods, and if the results agree they are considered likely to be real.
Perhaps the most improper methodology is contained in Table 4 of the NEJM paper. The first entry in this table is a meta-analysis (pooled point estimate) conducted separately for only the small trials. The result of this meta-analysis was then pooled with the results of two individual large studies (DREAM and ADOPT). This is inappropriate for several reasons. First, and most importantly, pooling meta-analysis results together with the results of individual studies makes no sense, as only like things can be combined. Second, the statistics used are inappropriate. In a meta-analysis, the results of individual studies are weighted differently: the narrower the confidence interval and the larger the sample size, the greater the study's weight in the pooled estimate. The trick used in the paper assigns inappropriately large weight to the small studies.
Last, but not least, is the disclosure of financial interests by the paper's author. The statement paints a picture of an objective, unsolicited scientist without financial ties to any special interest group: “Dr. Nissen consults for many pharmaceutical companies but requires them to donate all honoraria or consulting fees directly to charity so that he receives neither income nor a tax deduction.” In reality, Dr. Nissen receives substantial grants from Takeda (maker of the competitor drug Pioglitazone), which he is NOT giving to charity. We do not suggest that he should, but the author's affiliations must be transparent. It is also interesting that Pioglitazone, for which a randomized controlled study has shown a reduction in cardiovascular event rates (10), is essentially exempt from any “drug class” generalizations of the results of this paper. This political opportunism comes at a cost: tens of thousands of patients are panicking, and thousands of physicians have to deal with the panic. Most regrettably, patients are reportedly dropping out of the ongoing RECORD study (12), making it likely that we will never know the real cardiac risk associated with Rosiglitazone use.
In the end, Rosiglitazone may well turn out to be a dangerous, a beneficial, or an irrelevant drug. The published meta-analysis, however, does nothing to further our understanding of the issue and, in our opinion, should be completely ignored. Most importantly, academic integrity should not be compromised, even for a worthy cause.
1. Nissen SE, Wolski K. Effect of Rosiglitazone on the Risk of Myocardial Infarction and Death from Cardiovascular Causes. N Engl J Med 2007. DOI: 10.1056/NEJMoa072761. http://content.nejm.org/cgi/content/abstract/NEJMoa072761v1
2. Psaty BM, Furberg CD. Rosiglitazone and Cardiovascular Risk. N Engl J Med 2007. DOI: 10.1056/NEJMe078099. http://content.nejm.org/cgi/content/extract/NEJMe078099v1
3. Psaty BM, Furberg CD. The Record on Rosiglitazone and the Risk of Myocardial Infarction. N Engl J Med 2007. DOI: 10.1056/NEJMe078116. http://content.nejm.org/cgi/content/extract/NEJMe078116v1
4. Nathan DM. Rosiglitazone and Cardiotoxicity — Weighing the Evidence. N Engl J Med 2007. DOI: 10.1056/NEJMe078117. http://content.nejm.org/cgi/content/extract/NEJMe078117v1
5. Anonymous. Rosiglitazone: seeking a balanced perspective. Lancet 2007;369(9576):1834.
6. Cochrane Handbook for Systematic Reviews of Interventions. (Accessed June 5, 2007, at http://www.cochrane.org/resources/handbook/hbook.htm.)
7. Moher D, Cook D, Eastwood S, Olkin I, Rennie D, Stroup D. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 1999;354(9193):1896-900.
8. Annals of Internal Medicine. Information for Authors. (Accessed June 5, 2007, at http://www.annals.org/shared/author_info.html.)
9. The Lancet. Information for Authors. (Accessed June 5, 2007, at http://www.thelancet.com/authors/lancet/authorinfo.)
11. Avorn J. Paying for Drug Approvals — Who’s Using Whom? N Engl J Med 2007;356(17):1697-700. DOI: 10.1056/NEJMp078041. http://content.nejm.org/cgi/content/extract/356/17/1697
12. Saul S. Test of Drug for Diabetes in Jeopardy. The New York Times. May 26, 2007.
Image courtesy of Wikimedia Commons