"...[A] meta-analysis is the use of statistical methods to combine results of individual studies...to make best use of all information gathered in a systematic review by increasing the power of the analysis. By statistically combining the results of similar studies we can improve the precision of our estimates of treatment effect, and assess whether treatment effects are similar in similar situations. The decision about whether or not the results of individual studies are similar enough to be combined in a meta-analysis is essential to the validity of the result and heterogeneity..." — Cochrane Collaboration
A meta-analysis (MA) is an objective, analytical research method that pools the results of individual studies addressing a common research hypothesis. It provides more precise estimates of treatment effects, weights studies according to their importance, and considers internal and external validity based on the quality of the included studies. A well-performed meta-analysis strives to account for all relevant studies, to examine heterogeneity among them, and to use rigorous, robust approaches to synthesis. In 1904, Karl Pearson performed what is generally regarded as the first meta-analysis, pooling results from small studies to improve statistical power; he concluded that the pooled analysis gave a more accurate estimate than any single study. Although the MA is now widely used in medicine, it was not applied in medical research until 1955, and sophisticated techniques for meta-analysis were introduced in educational research in the 1970s.
Five steps in the meta-analysis
I. Define your hypotheses
Define your research statement and hypotheses before searching for studies. Make the relationship between the variables under investigation explicit; this will define your inclusion and exclusion criteria.
II. Search the literature
Begin by mapping out your search strategy and aim to retrieve every relevant study. Use a social bibliography tool such as Zotero or Mendeley. Think about searching for grey literature, hand-searching, snowballing, and accessing the invisible college (i.e., social networking with researchers regarding unpublished studies and conference proceedings). Consult a health librarian.
III. Input data
Gather findings from the major studies (e.g., p-values, effect sizes) and consider a statistical tool for managing the data. Some studies will not report enough statistical information to calculate effect sizes. Consult a biostatistician; a small sketch of this step follows below.
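As an illustration of this step, the sketch below computes a standardized mean difference (Cohen's d, with the Hedges' g small-sample correction) from the group means, standard deviations, and sample sizes a study might report. The numbers are hypothetical, and real extractions should be checked with a biostatistician.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) from reported summary statistics."""
    # Pooled standard deviation across the two groups
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def hedges_g(d, n1, n2):
    """Hedges' g: Cohen's d with a small-sample bias correction."""
    return d * (1 - 3 / (4 * (n1 + n2) - 9))

# Hypothetical summary data extracted from one trial (treatment vs. control)
d = cohens_d(mean1=24.5, sd1=6.1, n1=40, mean2=21.2, sd2=5.8, n2=42)
print(f"Cohen's d = {d:.3f}, Hedges' g = {hedges_g(d, 40, 42):.3f}")
```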
IV. Calculate effect sizes
The overall effect can be determined by converting each study's statistics to a common metric and weighting them, typically by sample size or precision. Consider bias, central tendency (e.g., the mean effect size and its confidence interval), and variability (e.g., a heterogeneity analysis). For accuracy, consult a biostatistician; a minimal pooling sketch appears below.
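As an illustration only, the sketch below pools hypothetical per-study effect sizes using fixed-effect inverse-variance weights, reports a 95% confidence interval, and computes Cochran's Q and I² as simple heterogeneity measures. A real analysis would usually also consider a random-effects model; the effect sizes and variances here are invented for the example.

```python
import math

# Hypothetical per-study effect sizes (e.g., Hedges' g) and their variances
effects   = [0.42, 0.31, 0.58, 0.10, 0.47]
variances = [0.040, 0.055, 0.030, 0.070, 0.045]

# Inverse-variance weights: larger, more precise studies get more weight
weights = [1 / v for v in variances]
pooled  = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
se      = math.sqrt(1 / sum(weights))
ci_low, ci_high = pooled - 1.96 * se, pooled + 1.96 * se

# Cochran's Q and I^2 quantify between-study heterogeneity
q  = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
df = len(effects) - 1
i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"pooled effect = {pooled:.3f}, 95% CI [{ci_low:.3f}, {ci_high:.3f}]")
print(f"Q = {q:.2f} on {df} df, I^2 = {i2:.1f}%")
```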
V. Analyze variables
Where heterogeneity exists, analyze moderating variables by coding each one and examining mean differences between subgroups (for categorical variables) or using weighted regression (for continuous variables) to determine what explains the variability in effect size. If heterogeneity is not present, moderator analysis is generally unnecessary. A small subgroup sketch follows below.
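As a sketch of a categorical moderator analysis, the snippet below pools hypothetical studies separately within each level of an assumed moderator (here, study population) so the subgroup estimates can be compared. The studies, effect sizes, and moderator labels are invented for illustration.

```python
import math
from collections import defaultdict

# Hypothetical studies: (effect size, variance, moderator level)
studies = [
    (0.45, 0.04, "adults"),
    (0.52, 0.05, "adults"),
    (0.12, 0.03, "children"),
    (0.20, 0.06, "children"),
]

def pool(group):
    """Fixed-effect inverse-variance pooled effect for one subgroup."""
    weights = [1 / v for _, v, _ in group]
    pooled = sum((1 / v) * e for e, v, _ in group) / sum(weights)
    se = math.sqrt(1 / sum(weights))
    return pooled, se

# Group studies by the moderator level before pooling
subgroups = defaultdict(list)
for study in studies:
    subgroups[study[2]].append(study)

for level, group in subgroups.items():
    pooled, se = pool(group)
    print(f"{level}: pooled effect = {pooled:.2f} (SE {se:.2f})")
```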
Strengths and weaknesses of the MA
The meta-analysis has several strengths and weaknesses. Some of the benefits of performing a meta-analysis are:
Provides a view into the medical literature that no other method can
Helps health professionals synthesize hundreds of studies, and cope with information overload
Combines several studies and will therefore be less influenced by local bias than single studies
Reveals whether studies' results are more varied than anticipated
Makes it more likely you can safely generalize results to other populations
May provide higher statistical power than individual studies
Some of the weaknesses of the meta-analysis are:
A poorly-conducted meta-analysis may be biased because important studies were excluded or missed
A weak or misleading analysis can be avoided by consulting a biostatistician
Conducting a systematic review and meta-analysis (SR/MA) is a long and expensive process requiring considerable resources and expertise
Sources of bias are hard to control in a meta-analysis, and an MA of poorly-designed studies produces unreliable numbers. Slavin argues that only methodologically sound studies should be included, an approach he calls "best evidence synthesis". Other analysts include the weaker studies but add a study-level predictor variable reflecting each study's methodological quality, which allows them to examine the effect of study quality on effect size.
Another weakness is the heavy reliance on published studies, which may inflate the estimated effect, because studies showing a significant effect are easier to publish. Publication bias, or the "file-drawer effect" (non-significant studies end up in desk drawers rather than in the public domain), must be considered when interpreting the outcome. When publication bias is likely, some meta-analyses report a "fail-safe N": the number of unpublished studies with null results that would be needed to render the pooled treatment effect non-significant. A hedged sketch of this calculation follows below.
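As one concrete illustration, the sketch below computes Rosenthal's fail-safe N from one-tailed z-scores for the included studies. The z-scores and the one-tailed significance threshold (z = 1.645) are assumptions for this example, not values from any particular review.

```python
import math

# Hypothetical one-tailed z-scores from the k studies in the meta-analysis
z_scores = [2.1, 1.8, 2.6, 1.4, 2.3]
k = len(z_scores)

# Rosenthal's fail-safe N: how many unpublished null (z = 0) studies would be
# needed to drag the combined result below one-tailed significance (z = 1.645)
z_alpha = 1.645
n_fs = (sum(z_scores) ** 2) / (z_alpha ** 2) - k
print(f"fail-safe N = {math.ceil(max(0.0, n_fs))}")
```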
Reporting standards
Scientists and librarians can use various checklists for critical appraisal, such as:
Altman DG, Deeks JJ. Meta-analysis, Simpson's paradox, and the number needed to treat. BMC Medical Research Methodology. 2002;2(1):3. See Altman's Google Scholar profile.
Note: Please use your critical reading skills while reading entries. No warranties, implied or actual, are granted for any health or medical search or AI information obtained while using these pages. Check with your librarian for more contextual, accurate information.