Meta-analyses and megatrials: neither is the infallible, universal standard
Toshi Furukawa
Professor of Psychiatry, Evidence-Based Psychiatry Center (www.ebpcenter.com), Nagoya City University, Japan; furukawa@med.nagoya-cu.ac.jp


Nowadays, most would agree that we need evidence from randomised controlled trials (RCTs) to evaluate the effectiveness of a health intervention. It used to be that we did not have enough RCTs in mental health; the irony today is that at times it seems we have too many of them, especially when they draw conflicting conclusions.

A natural solution is to seek “stronger” evidence. Meta-analysis might provide that evidence but, alas, meta-analyses sometimes do not agree among themselves either.1 Another possible solution is a bigger and better trial, the megatrial (also known as the large simple trial). Unfortunately, megatrials and meta-analyses do not always agree either: one group has claimed that, taking megatrials as the gold standard, meta-analyses drew wrong conclusions 35% of the time2; another group estimated the degree of disagreement to be between 10% and 23%.3 Megatrials sometimes disagree with each other as well, and the discrepancies among megatrials are just as large as those between meta-analyses and megatrials.4

These discrepancies reinforce the conclusion that the days of dogmatic advocacy of the methodological hierarchy of evidence are over.5

Here, I will take three examples to illustrate that we will always need “good common sense”, coupled with content expertise and an understanding of methodology, to weigh the available evidence relevant to a mental health problem.

Well conducted systematic reviews that include megatrials usually offer the best guide to the overall treatment effect. For example, in the case of risperidone versus typical antipsychotics for schizophrenia, a very large multinational, multicentre RCT (n = 1362) found no statistically significant difference between the two drugs (RR of no response = 0.94; 95% CI 0.79 to 1.11).6 A subsequent Cochrane review that included an additional 1006 subjects did show, in contrast, a statistically significant and clinically important random effects RR of 0.84 (95% CI 0.76 to 0.92) in favour of risperidone, with no indication of heterogeneity across trials (p = 0.63).7 It appears that one of the largest trials to date in mental health6 was still underpowered to detect a small yet important difference.
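How could a trial of this size still be underpowered? A rough power calculation makes the point. The sketch below is purely illustrative: the assumed control non-response rate of 35% is not taken from the trial or the review, and the normal approximation used is the crudest available; it simply shows that, for a relative risk of 0.84, roughly 680 patients per arm may yield power well short of the conventional 80% target.

from scipy.stats import norm

# Illustrative power calculation for a two-arm trial of about 1362 patients
# (roughly 681 per arm) trying to detect a relative risk of 0.84 for "no response".
# The control non-response rate (0.35) is an assumption for illustration only;
# it is not taken from the trial or the Cochrane review.
alpha = 0.05
n_per_arm = 681
p_control = 0.35                 # assumed non-response rate on typical antipsychotics
p_treat = 0.84 * p_control       # non-response rate implied by RR = 0.84

# Normal approximation to the two-sample comparison of proportions
se = (p_control * (1 - p_control) / n_per_arm
      + p_treat * (1 - p_treat) / n_per_arm) ** 0.5
z_alpha = norm.ppf(1 - alpha / 2)
power = norm.cdf(abs(p_control - p_treat) / se - z_alpha)

print(f"Approximate power: {power:.2f}")   # about 0.60 under these assumptions

Under these assumed rates the power is only about 60%, so a true effect of this size could easily be missed by a single trial yet detected once a further 1006 patients were added in the meta-analysis.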

When the studies available for meta-analysis are limited in number, sample size, or methodological quality, we are in a more difficult position. Another Cochrane review concluded that lithium is an efficacious maintenance treatment for bipolar disorder.8 Combining three studies (total n = 412), this review found a statistically significant and clinically meaningful reduction in relapse for patients with bipolar disorder on lithium compared with placebo (random effects RR = 0.60, 95% CI 0.41 to 0.87). Heterogeneity among the included RCTs was not statistically significant (p = 0.13) but was substantial (I² = 51.6%). Although two older studies found lithium to be superior to placebo, the most recent study failed to find a statistically significant difference between the two arms (RR = 0.71, 95% CI 0.39 to 1.31).9 One reasonable conclusion was that this latest study was underpowered and was in fact concordant with the earlier ones.

Considerable debate followed publication of this pivotal study, including renewed questions about the methodological adequacy of the older lithium trials, and the superficial interpretation of the more recent study as “negative” seemed to support claims against the accepted wisdom of modern psychiatry. This clinical and scientific chagrin abated somewhat when the same group of researchers published a similarly designed maintenance RCT and found a significant reduction in relapse on lithium compared with placebo.10 Closer reading of their report reveals, however, that lithium reduced relapse over 12 months only at the expense of more dropouts due to adverse events; survival on the medication without relapse or dropout did not differ between lithium and placebo. Only 22% of those starting on lithium and 16% of those starting on placebo remained on their assigned drug without relapse until study termination at up to 18 months. The value of lithium appears small at best.
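For readers less familiar with the statistics quoted above, the following sketch shows how a random effects (DerSimonian and Laird) pooled relative risk, Cochran's Q, and I² are computed from study-level results. The first two studies in the example are hypothetical placeholders; only the third uses the RR of 0.71 (95% CI 0.39 to 1.31) quoted above, so the output will not reproduce the review's published figures.

import math

def dersimonian_laird(log_rr, var):
    """Pool study-level log relative risks with DerSimonian-Laird random effects.

    log_rr : per-study log(RR)
    var    : per-study variance of log(RR)
    Returns (pooled RR, 95% CI lower, 95% CI upper, Cochran's Q, I² in %).
    """
    w = [1.0 / v for v in var]                                 # inverse-variance weights
    mean_fixed = sum(wi * y for wi, y in zip(w, log_rr)) / sum(w)
    Q = sum(wi * (y - mean_fixed) ** 2 for wi, y in zip(w, log_rr))
    df = len(log_rr) - 1
    i2 = max(0.0, (Q - df) / Q) * 100.0 if Q > 0 else 0.0      # I² = max(0, (Q - df)/Q), as a percentage
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (Q - df) / c)                              # between-study variance
    w_re = [1.0 / (v + tau2) for v in var]                     # random effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, log_rr)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled), math.exp(pooled - 1.96 * se),
            math.exp(pooled + 1.96 * se), Q, i2)

# The first two studies are hypothetical; the third uses the RR of 0.71
# (95% CI 0.39 to 1.31) from the text, with its variance back-calculated
# from the confidence interval.
var_recent = ((math.log(1.31) - math.log(0.39)) / (2 * 1.96)) ** 2
rr, low, high, Q, i2 = dersimonian_laird(
    log_rr=[math.log(0.50), math.log(0.55), math.log(0.71)],
    var=[0.04, 0.05, var_recent],
)
print(f"Random effects RR {rr:.2f} (95% CI {low:.2f} to {high:.2f}), Q = {Q:.2f}, I² = {i2:.0f}%")

With only three trials, both Q and I² are estimated very imprecisely, and the Q test has little power; this is one reason heterogeneity can be non-significant by the Q test yet substantial by I², as in the lithium review.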

When a systematic review is of inferior quality, we are in an even more difficult position. A meta-analysis of alprazolam for anxiety disorders involving 8878 randomised patients claimed to have confirmed its efficacy.11 Alprazolam may indeed be better than placebo in reducing panic and associated anxiety over 8–12 weeks, but we need no more than one well designed, well analysed study of 154 patients to convincingly disqualify alprazolam as the drug of choice for anxiety disorders. Alprazolam alone was not as good as exposure therapy alone for the acute phase of treatment, and adding alprazolam to exposure therapy resulted in even worse outcomes at follow up than exposure alone.12

Having observed these illustrative cases, and having appreciated that a thorough critical reading of a comprehensive meta-analysis is a formidable task, we in the Department of Psychiatry at Nagoya City University tend to use meta-analysis as a navigator towards sound evidence on a clinical topic. Looking at the whole map of available trials in the metaview of the Cochrane Library, we often choose to critically appraise and learn in detail from the best trial (the largest, the most recent, the best known, the closest to the overall mean, whatever). We find that this practice often brings more insight to the bedside the next day than critically appraising the meta-analysis itself.

The strengths and weaknesses of meta-analyses and megatrials are shown in table 1. We can never arrive at infallible truth because, firstly, that is simply not the nature of scientific knowledge13 and, secondly, in clinical medicine we are dealing with complex, ever-changing units of analysis, namely people with illnesses.

Table 1 Strengths and weaknesses of meta-analyses and megatrials
