Statistical Meta-Analysis: Potential for New Research Opportunities

In the age of evidence-based decision making through systematic reviews of the literature, statistical meta-analysis has been used extensively to synthesise published summary data on a particular topic from a number of independent studies, allowing researchers to reach more credible and scientifically valid conclusions.

The method is increasingly used in scientific decision making because of its ability to combine data from various studies to estimate a common underlying effect size. In addition to its extensive use in clinical and public health studies, meta-analysis is widely used in business, psychology, education, epidemiology, biometry, environmental science, agriculture and many other disciplines. The Cochrane Collaboration publishes thousands of meta-analytic studies on health and medical topics, and the Campbell Collaboration publishes many more on key aspects of other fields of research in modern life.

Statistical meta-analysis offers a variety of sophisticated methods to combine, effectively and efficiently, the results of several independent studies that all share a common underlying effect; in essence, it applies statistical methods to the published summary statistics of independent studies. By combining aggregate information, meta-analysis achieves higher statistical power for the effect measure of interest because of the increased overall sample size. For a meta-analysis to be credible, researchers must make careful choices on many factors that can affect its results, including the quality of the studies to be included, exhaustive literature searches to ensure coverage of the relevant studies, selection of studies against a set of objective criteria, the handling of incomplete or inconsistent data, the methods used to analyse the data, and the treatment of publication bias.

The main objective of meta-analysis is to estimate the underlying common effect size for an intervention of interest, expressed as a pooled statistic, to test appropriate hypotheses, and to construct a confidence interval for the common population parameter. Meta-analytic methods handle effect measures for binary outcome variables, such as the risk ratio (relative risk) and the odds ratio, as well as measures for continuous outcome variables, such as the standardised mean difference or the weighted mean difference. For the construction of confidence intervals, the effect-size estimate is assumed to be normally distributed.
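To make the normality assumption concrete, the pooled estimate in most models is a weighted average of the study-level estimates, and the confidence interval follows from a normal approximation. A minimal sketch of the general form (the weights $w_i$ depend on the model, as discussed below) is

$$
\hat{\theta} = \frac{\sum_{i=1}^{k} w_i \hat{\theta}_i}{\sum_{i=1}^{k} w_i}, \qquad
\hat{\theta} \pm z_{1-\alpha/2}\,\widehat{\mathrm{SE}}(\hat{\theta}),
$$

where $\hat{\theta}_i$ is the effect estimate from study $i$ (ratio measures such as the odds ratio are usually analysed on the log scale) and $\widehat{\mathrm{SE}}(\hat{\theta})$ is the standard error implied by the chosen weighting scheme.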

Several issues, including the quality and type of the included studies, heterogeneity among the effect-size estimates, publication bias and the choice of methodology, need further consideration to improve the credibility of meta-analysis.

All meta-analytic methods are based on a weighted average of study effects, so the estimation methods and models differ in how these weights are chosen and implemented. The most "natural" system of weights is, of course, equal weighting, but that can lead to paradoxical results and may not be the most efficient choice in terms of the variance and mean squared error (MSE) of the resulting estimator.
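To see why equal weights need not be efficient, note that for independent study estimates $\hat{\theta}_i$ with within-study variances $v_i$, the variance of the weighted average above is

$$
\mathrm{Var}(\hat{\theta}) = \frac{\sum_i w_i^2 v_i}{\left(\sum_i w_i\right)^2},
$$

which is minimised by taking $w_i \propto 1/v_i$; equal weights attain this minimum only when all studies have the same variance.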

Empirical weighting has therefore become the norm, with two forms predominating in the literature. The first is the fixed effect (FE) model, in which the empirical weights are inverse variance weights that adjust for the contribution of variance due to random error. When there is heterogeneity of effects across studies this estimator exhibits over-dispersion, and to remedy this the random effects (RE) weights were proposed. The RE weights adjust each study's variance by adding a constant between-study (RE) variance component before computing a modified inverse variance weight. Researchers commonly call this the random effects model, and it is the approach recommended by several organisations, including the Cochrane Collaboration. The problem, however, is that although this estimator produces a wider confidence interval than the standard inverse variance weighted estimator, it still exhibits over-dispersion.
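As a rough illustration of how the FE and RE weights differ in practice, the following Python sketch computes both pooled estimates from study effects and their within-study variances. It is not taken from the workshop material: the function name is mine, and I assume the common DerSimonian–Laird estimator for the between-study variance component described above.

```python
import numpy as np
from scipy import stats

def fe_re_meta(y, v, alpha=0.05):
    """Illustrative sketch: fixed effect (inverse variance) and
    DerSimonian-Laird random effects pooled estimates for study
    effects y with within-study variances v."""
    y, v = np.asarray(y, dtype=float), np.asarray(v, dtype=float)

    # Fixed effect: inverse variance weights
    w = 1.0 / v
    theta_fe = np.sum(w * y) / np.sum(w)
    var_fe = 1.0 / np.sum(w)

    # DerSimonian-Laird estimate of the between-study variance tau^2
    q = np.sum(w * (y - theta_fe) ** 2)            # Cochran's Q statistic
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)

    # Random effects: add tau^2 to each study variance, then reweight
    w_re = 1.0 / (v + tau2)
    theta_re = np.sum(w_re * y) / np.sum(w_re)
    var_re = 1.0 / np.sum(w_re)

    z = stats.norm.ppf(1 - alpha / 2)
    ci = lambda t, s2: (t - z * np.sqrt(s2), t + z * np.sqrt(s2))
    return {"FE": (theta_fe, ci(theta_fe, var_fe)),
            "RE": (theta_re, ci(theta_re, var_re)),
            "tau2": tau2}
```

For example, `fe_re_meta([0.10, 0.30, 0.55], [0.04, 0.02, 0.05])` returns both pooled estimates with 95% confidence intervals and the estimated tau²; whenever tau² is greater than zero the RE interval is wider than the FE interval, as described above.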

Both the FE and RE estimation methods commonly used in the scientific literature therefore underestimate the statistical error when there is heterogeneity among the studies. To remedy this, some researchers have proposed a better alternative to the random effects estimator, namely the inverse variance heterogeneity (IVhet) estimator. The latter is a variant of the quality effects model of meta-analysis, and neither of these newly developed estimators suffers from the problem of over-dispersion.
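For completeness, my understanding of the IVhet estimator (which readers should verify against the original publication by Doi and colleagues) is that it keeps the FE point estimate but replaces its variance with one that incorporates the between-study heterogeneity:

$$
\hat{\theta}_{\mathrm{IVhet}} = \frac{\sum_i w_i \hat{\theta}_i}{\sum_i w_i}, \qquad
\widehat{\mathrm{Var}}\!\left(\hat{\theta}_{\mathrm{IVhet}}\right) = \sum_i \left(\frac{w_i}{\sum_j w_j}\right)^2 \left(v_i + \hat{\tau}^2\right),
$$

with $w_i = 1/v_i$ as in the FE model and $\hat{\tau}^2$ an estimate of the between-study variance. Inflating the variance rather than redistributing the weights is what allows the interval to avoid the over-dispersion described above.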

In the upcoming workshop on "Statistical Meta-Analysis: Methods and Applications", to be held on 16-17 June 2015 at the Ipswich (west of Brisbane) campus of the University of Southern Queensland, Professor Bimal K Sinha of the University of Maryland, USA, and Professor Suhail A R Doi of the Australian National University, Australia, will cover various aspects of meta-analysis, including different estimation methods for the common effect size and related issues.


Shahjahan Khan

School of Agricultural, Computational and Environmental Sciences

Centre for Health Science Research

Faculty of Health, Engineering and Sciences

University of Southern Queensland, Toowoomba


Suhail A R Doi

Research School of Population Health

Australian National University, Canberra

