Meta-analysis is a way of combining results from two or more independent studies. Combining the results of randomised controlled trials increases the statistical power to detect intervention effects and increases the precision of the estimated effect. Meta-analysis of several underpowered trials has been used to establish good evidence of a clinically important effect (1). The first step in a meta-analysis is to obtain data from all of the studies that have previously addressed the research question (usually by conducting a systematic review). The estimates from each study (e.g., odds ratios or risk ratios for binary outcomes, or mean differences for quantitative outcomes) are then averaged, taking account of the precision of each study's estimate.
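The precision-weighted averaging described above can be sketched in a few lines of Python. This is a minimal fixed-effect (inverse-variance) pooling, with made-up mean differences standing in for real trial results:

```python
import math

def pool_fixed_effect(estimates, standard_errors):
    """Fixed-effect (inverse-variance) meta-analysis.

    Each study is weighted by 1/SE^2, so more precise studies
    contribute more to the pooled estimate.
    """
    weights = [1.0 / se ** 2 for se in standard_errors]
    pooled = sum(w * y for w, y in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    return pooled, pooled_se

# Hypothetical mean differences (servings/day) from three trials.
estimates = [0.40, 0.10, 0.25]
ses = [0.10, 0.20, 0.15]
pooled, pooled_se = pool_fixed_effect(estimates, ses)
ci = (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)
```

Note that ratio measures such as odds ratios would be pooled on the log scale and exponentiated afterwards, and a random-effects model would additionally allow for between-study heterogeneity.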
There are several methodological issues to consider when undertaking a meta-analysis. Larger studies, and studies that find positive results, are more likely to be published and therefore to be included. Publication bias can be assessed using a funnel plot (shown in the figure below), in which the results of individual trials are plotted against the standard error of the effect estimate (2).
The asymmetry shows that smaller studies are more likely to report a positive effect (here a positive mean difference in servings of fruit and vegetables consumed each day by participants receiving an e-Learning intervention). The orange line shows a regression of effect size on the standard error of the effect, weighted by the inverse of the variance of the effect estimate.
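A regression line like the orange one can be fitted by weighted least squares; a slope far from zero suggests small-study effects, the idea underlying regression-based asymmetry tests (2). This sketch assumes the effects and standard errors have already been extracted, and the function name is illustrative:

```python
def funnel_regression(estimates, standard_errors):
    """Regress effect size on its standard error, weighting each
    study by the inverse of the variance of its effect estimate.

    Returns (intercept, slope); a non-zero slope indicates that
    less precise studies report systematically different effects.
    """
    w = [1.0 / se ** 2 for se in standard_errors]
    sw = sum(w)
    # Weighted means of the predictor (SE) and outcome (effect).
    mx = sum(wi * se for wi, se in zip(w, standard_errors)) / sw
    my = sum(wi * y for wi, y in zip(w, estimates)) / sw
    # Weighted sums of squares and cross-products.
    sxx = sum(wi * (se - mx) ** 2 for wi, se in zip(w, standard_errors))
    sxy = sum(wi * (se - mx) * (y - my)
              for wi, se, y in zip(w, standard_errors, estimates))
    slope = sxy / sxx
    intercept = my - slope * mx
    return intercept, slope
```

In practice one would use an established implementation (e.g. `metabias()` in the R `meta` package) rather than hand-rolled regression, since the published tests differ in their exact weighting and standard errors.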
Methods have been developed to correct estimates for publication bias (3). A comprehensive empirical evaluation has shown that the Copas selection model (4) can provide a useful summary in meta-analyses (5), and is preferable to the trim-and-fill method for adjusting for publication bias.
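As a rough illustration of what trim-and-fill does, the sketch below iteratively estimates the number of suppressed studies (here via the L0 rank-based estimator), trims the most extreme effects, re-pools, and then "fills" mirror-image studies before the final pooling. It is a simplified sketch, not a validated implementation; real analyses should use an established routine such as `trimfill()` in the R `metafor` or `meta` packages:

```python
def trim_and_fill(estimates, standard_errors, max_iter=20):
    """Simplified trim-and-fill sketch (after Duval & Tweedie).

    Returns (adjusted pooled estimate, estimated number of
    missing studies k0). Illustrative only.
    """
    def pooled(ys, ses):
        # Fixed-effect inverse-variance pooled estimate.
        w = [1.0 / s ** 2 for s in ses]
        return sum(wi * y for wi, y in zip(w, ys)) / sum(w)

    ys, ses = list(estimates), list(standard_errors)
    n = len(ys)
    k0 = 0
    for _ in range(max_iter):
        # Trim the k0 studies with the largest effects, then re-pool.
        order = sorted(range(n), key=lambda i: ys[i])
        keep = order[: n - k0]
        theta = pooled([ys[i] for i in keep], [ses[i] for i in keep])
        # L0 estimator of missing studies from signed ranks about theta.
        dev = [y - theta for y in ys]
        by_abs = sorted(range(n), key=lambda i: abs(dev[i]))
        tn = sum(rank + 1 for rank, i in enumerate(by_abs) if dev[i] > 0)
        new_k0 = max(0, round((4 * tn - n * (n + 1)) / (2 * n - 1)))
        if new_k0 == k0:
            break
        k0 = new_k0
    # Fill: mirror the k0 most extreme studies about the trimmed estimate.
    order = sorted(range(n), key=lambda i: ys[i])
    tail = order[n - k0:]
    filled_y = ys + [2 * theta - ys[i] for i in tail]
    filled_se = ses + [ses[i] for i in tail]
    return pooled(filled_y, filled_se), k0
```

With a symmetric funnel the estimated number of missing studies is zero and the pooled estimate is unchanged; with an asymmetric funnel the filled mirror-image studies pull the estimate back towards the null. The Copas selection model takes a different, explicitly model-based approach, expressing the probability that a study is published as a function of its size and result.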
1. Antman EM, Lau J, Kupelnick B, Mosteller F, Chalmers TC. A comparison of results of meta-analyses of randomized control trials and recommendations of clinical experts. Treatments for myocardial infarction. JAMA 1992;268:240–248.
2. Harbord RM, Egger M, Sterne JA. A modified test for small-study effects in meta-analyses of controlled trials with binary endpoints. Statistics in Medicine 2006;25:3443–3457.
3. Rothstein HR, Sutton AJ, Borenstein M. Publication Bias in Meta-Analysis: Prevention, Assessment and Adjustments. Wiley, Chichester, 2005.
4. Copas JB, Shi JQ. A sensitivity analysis for publication bias in systematic reviews. Statistical Methods in Medical Research 2001;10:251–265.
5. Carpenter JR, Schwarzer G, Rücker G, Künstler R. Empirical evaluation showed that the Copas selection model provided a useful summary in 80% of meta-analyses. Journal of Clinical Epidemiology 2009;62:624–631.