
Why reports of clinical trials should include updated meta-analyses
  Carl Heneghan, Jeffrey K Aronson
  Centre for Evidence-Based Medicine, Nuffield Department of Primary Care Health Sciences, University of Oxford, Oxford, UK
  Correspondence to Professor Carl Heneghan, Primary Care Health Sciences, University of Oxford, Oxford OX1 2JD, UK; carl.heneghan@phc.ox.ac.uk


In the BMJ-EBM Verdict series, Plüddemann and Onakpoya analysed the effects of endometrial scratching before in vitro fertilisation.1 Their verdict was that endometrial scratching does not result in higher live birth rates in women having a first embryo transfer, nor in those with previously failed transfers.

They reached their verdict by adding the results of a recent randomised controlled trial (RCT)2 to those of two previous systematic reviews.3 4 They meta-analysed the primary outcome and were therefore more confident in their conclusions than if they had analysed the results of the RCT alone. Onakpoya and Aronson did likewise when they updated their previous meta-analysis in the light of a new large trial and concluded that lorcaserin, marketed for the treatment of obesity, conferred minimal benefits.5

In neither case was a new systematic review performed, with all that entails. Instead the latest results on primary outcomes were added to existing meta-analyses, providing up-to-date information and paving the way for new systematic reviews, if necessary.

The value of systematic reviews

Systematic reviews are vital for informing research plans6 and for ensuring that treatment decisions are informed by evidence on treatment effects and costs, as in guideline development.7 In summarising the evidence from a large number of studies, systematic reviews and meta-analyses increase precision around effect estimates, producing more robust conclusions.
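To see why, recall the standard fixed-effect inverse-variance model (a textbook formula, not drawn from the reviews discussed here): each of k studies contributes an estimate weighted by the inverse of its variance, so every additional study can only reduce the pooled variance.

```latex
\hat{\theta} \;=\; \frac{\sum_{i=1}^{k} w_i \,\hat{\theta}_i}{\sum_{i=1}^{k} w_i},
\qquad
w_i \;=\; \frac{1}{\operatorname{SE}(\hat{\theta}_i)^2},
\qquad
\operatorname{Var}(\hat{\theta}) \;=\; \frac{1}{\sum_{i=1}^{k} w_i}
```

Because each weight w_i is positive, adding a trial increases the denominator of Var(θ̂) and narrows the confidence interval around the pooled effect.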

Even so, systematic reviews with meta-analyses often have insufficient power to detect, confirm, or refute the hoped-for effects of potentially useful interventions, and therefore require regular updating.8 However, systematic reviews are laborious to produce and are often not updated. One response to this problem has been to suggest ‘rapid’ reviews,9 better called restricted reviews.10 Restricting the methods used reduces the resources needed and expedites the review; for example, after one reviewer selects studies and extracts data, a second reviewer needs only to verify a random sample rather than all the data.11 Given the amount of new evidence produced, decision-makers may be willing to accept some biases in such reviews in return for up-to-date results incorporating the latest trial evidence,12 perhaps while waiting for a later, more robust analysis.

Although publication of a clinical trial offers an opportunity to update existing evidence by providing a new systematic review,13 this is rarely done. An assessment of RCTs in five medical journals showed that while nearly half of the trials referenced systematic reviews, none attempted to integrate the new results with pre-existing evidence.14 An analysis of new RCTs published over four decades showed that under 25% of the preceding trials were accounted for in the trial publication.15 Among RCTs associated with more than five potentially citable studies, nearly half cited none or only one.

Instead of a full new systematic review, we suggest asking trial authors to provide an updated meta-analysis of their primary outcome using the meta-analysis from the most recent relevant systematic review.
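To make the suggestion concrete, here is a minimal sketch, in Python, of what such an update involves under a fixed-effect inverse-variance model. The study estimates and standard errors are hypothetical; a real update would take them from the most recent relevant review and would, where appropriate, use that review's own model (fixed-effect or random-effects).

```python
import math

# A minimal sketch, not a prescribed method: update a fixed-effect,
# inverse-variance meta-analysis with one new trial's primary outcome.
# All figures are hypothetical (log risk ratio, standard error) pairs;
# a real update would take them from the most recent systematic review.
existing = [(0.10, 0.20), (-0.05, 0.15), (0.08, 0.25)]
new_trial = (0.02, 0.10)

def pool(studies):
    """Return the pooled log risk ratio and its standard error."""
    weights = [1 / se ** 2 for _, se in studies]
    total = sum(weights)
    estimate = sum(w * est for w, (est, _) in zip(weights, studies)) / total
    return estimate, math.sqrt(1 / total)

for label, studies in (("before update", existing),
                       ("after update", existing + [new_trial])):
    est, se = pool(studies)
    lo, hi = est - 1.96 * se, est + 1.96 * se
    print(f"{label}: RR {math.exp(est):.2f} "
          f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

The point of the sketch is its scale: a handful of lines applied to data the review has already extracted, not a new review.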

Problems

We recognise potential problems in updating previous meta-analyses. For example, as evidence accumulates, multiple systematic reviews addressing the same question are published. Such reviews may reach opposing conclusions, depending on the design of the review: its inclusion and exclusion criteria, how data are extracted, and the choice of statistical model. As an example, a systematic review of treatment for heart failure guided by serum concentrations of B-type natriuretic peptide reported a benefit of the intervention compared with clinical assessment alone.16 Before this review, 11 systematic reviews on the same question had given conflicting results. The most comprehensive meta-analysis included 3660 participants,17 but with the publication of the GUIDE-IT trial,18 the amount of evidence increased by over 25%, necessitating an update. This scenario poses a problem: if previous meta-analyses conflict, which should be updated? A solution would be to update the one whose selection criteria most closely match those of the new study.

Striving for perfection

In contrast to the resources needed to produce a new systematic review, there are no comparable barriers to publishing RCTs with updated meta-analyses of primary outcomes based on pre-existing review evidence. Yet it is rarely, if ever, done. The resources required to update a pre-existing meta-analysis alongside a trial report are modest compared with those required to conduct the trial itself, or a completely new systematic review.

The Consolidated Standards of Reporting Trials (CONSORT, checklist item 22) requires an ‘interpretation consistent with results, balancing benefits and harms, and considering other relevant evidence in the discussion.’19 In an accompanying document,20 the CONSORT authors explained that ‘this can best be achieved by including a formal systematic review in the results or discussion section of the report.’

Asking authors to do this is a counsel of perfection. However, for many years calls for updated systematic reviews have been largely ignored.14 We should therefore recall Voltaire’s dictum, ‘le mieux est l’ennemi du bien’ (the best is the enemy of the good). If best practice is not being followed, despite repeated exhortations, we ought perhaps to consider settling, at least initially, for good practice.

Conclusions

Updating a meta-analysis in this way can have one of four outcomes:

  • No change in the association, mandating continuation of the existing advice.

  • Strengthening of the association, mandating strengthened advice.

  • Weakening or even loss of the association, mandating weakening of the advice or even advice to abandon the intervention.

  • Unacceptably increased heterogeneity, increasing uncertainty.

If heterogeneity increases when new data are added, one should not rely on the analysis or offer advice based on the results. Early warning of this would be helpful. Furthermore, if the source of the heterogeneity was identifiable, it could provide valuable information about the relation between the intervention and the outcome.
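An early warning of this kind is straightforward to compute. As a sketch (hypothetical data again, using the same estimate-and-standard-error inputs as the earlier example), the following calculates Cochran’s Q and the I² statistic before and after adding a discordant trial:

```python
import math

def heterogeneity(studies):
    """Cochran's Q and I^2 (%) for (estimate, standard error) pairs."""
    weights = [1 / se ** 2 for _, se in studies]
    pooled = sum(w * est for w, (est, _) in zip(weights, studies)) / sum(weights)
    q = sum(w * (est - pooled) ** 2 for w, (est, _) in zip(weights, studies))
    df = len(studies) - 1
    i2 = 100 * max(0.0, (q - df) / q) if q > 0 else 0.0
    return q, i2

# Hypothetical log risk ratios: the new trial is discordant with the rest.
existing = [(0.10, 0.20), (-0.05, 0.15), (0.08, 0.25)]
_, i2_before = heterogeneity(existing)
_, i2_after = heterogeneity(existing + [(0.90, 0.12)])
print(f"I^2 before: {i2_before:.0f}%; after adding the trial: {i2_after:.0f}%")
```

A jump in I² of this kind would signal that the updated pooled estimate should not drive advice until the source of the heterogeneity has been explored.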

Encouraging authors to supplement meta-analyses with their own new data, providing prompt updates, would lead to better informed policy, allow clinicians to update their practice as soon as new evidence emerges and ultimately benefit patients, whose management would be based on the latest updated evidence. Then perhaps there would be more of a stimulus to pursue perfection in full systematic reviews.

Acknowledgments

We thank Iain Chalmers and Sally Hopewell for helpful comments on this editorial.

References

Footnotes

  • Contributors CH conceived the idea for the editorial and wrote the first draft; JKA added further discussion, and the two authors contributed equally to the final manuscript.

  • Funding The authors have not declared a specific grant for this research from any funding agency in the public, commercial or not-for-profit sectors.

  • Competing interests CH holds grant funding from the NIHR School of Primary Care Research Evidence Synthesis Working Group (project 390) and the NIHR Oxford BRC. He is editor in chief of BMJ Evidence-Based Medicine and an NIHR senior investigator. He is director of the Centre for Evidence-Based Medicine (CEBM), which jointly runs the EvidenceLive Conference with the BMJ and the Overdiagnosis Conference with some international partners, which are based on a non-profit model. JKA has written and edited articles and textbooks on adverse drug reactions, including Meyler’s Side Effects of Drugs (16th edition, 2016), its companion volumes the Side Effects of Drugs Annuals, and Stephens’ Detection and Evaluation of Adverse Drug Reactions (6th edition, 2011). He is an associate editor of BMJ Evidence-Based Medicine and a member of the CEBM (see above).

  • Patient consent for publication Not required.

  • Provenance and peer review Not commissioned; internally peer reviewed.
