
Reporting guidance considerations from a statistical perspective: overview of tools to enhance the rigour of reporting of randomised trials and systematic reviews
Brian Hutton1,2, Dianna Wolfe1, David Moher1,2, Larissa Shamseer1,2
1Ottawa Hospital Research Institute, Ottawa, Ontario, Canada
2School of Epidemiology, Public Health and Preventive Medicine, University of Ottawa, Ottawa, Ontario, Canada
Correspondence to Dr Brian Hutton, Ottawa Hospital Research Institute, Ottawa, Ontario K1H 8L6, Canada; bhutton{at}ohri.ca

Abstract

Objective Research waste has received considerable attention from the biomedical community. One noteworthy contributor is incomplete reporting in research publications. When detailing statistical methods and results, complete documentation of analytic approaches and findings improves transparency. For publications describing randomised trials and systematic reviews, guidelines have been developed to facilitate complete reporting. This overview summarises aspects of statistical reporting in trials and systematic reviews of health interventions.

Methods We took a narrative approach to summarising the features of reporting guidelines for trials and reviews that relate to statistical methods and findings. We aim to enhance familiarity, among statisticians and their collaborators, with the statistical details that should be reported in biomedical research.

Results We summarise statistical reporting considerations for trials and systematic reviews from guidance documents including the Consolidated Standards of Reporting Trials (CONSORT) Statement for reporting of trials, the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement for trial protocols, the Statistical Analyses and Methods in the Published Literature (SAMPL) Guidelines for statistical reporting principles, the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement for systematic reviews and PRISMA for Protocols (PRISMA-P). Considerations regarding sharing of study data and statistical code are also addressed.

Conclusions Reporting guidelines provide researchers with minimum criteria for reporting. If followed, they can enhance research transparency and contribute to improved quality of biomedical publications. Authors should employ these tools when planning and reporting their research.

  • STATISTICS & RESEARCH METHODS


Introduction

It has been established that there is a significant amount of preventable waste in biomedical research.1 This is important given the crucial role that biomedical research plays in informing patient care and policies and decisions around our population's health. Waste due to incomplete, unusable and inaccessible research is a major concern and one for which several solutions are available.2,3

Optimal principles on which to base the preparation of biomedical research publications are those of completeness and transparency, the underlying rationale being usability and reproducibility by other researchers. In a recent editorial, Goodman et al address the issue of inconsistent views on what the term ‘reproducibility’ means.4 In short, it refers to the transparency and reliability of research, and whether published methods can be repeated to yield the same results and conclusions as originally reported. While there has long existed an established ordering of research designs in clinical research (commonly dubbed the ‘evidence hierarchy’), there remains a vital need for better reporting to enable readers to accurately grasp the rigour of a given research study beyond its labelled design and to judge how much faith to place in the findings it generated.

Unfortunately, accumulating evidence shows that the research community often fails to meet standards for transparent reporting. Empirical explorations into the reporting of randomised controlled trials (RCTs), systematic reviews and meta-analyses, as well as other study designs, have demonstrated these deficiencies. For example, evaluations of trial reporting show that key details such as outcome definitions, sample size calculation, allocation concealment and sequence generation are incompletely reported in more than half of published trials.5 In a 2014 sample of systematic reviews, more than half did not identify a primary outcome.6 Mental health researchers have found that the challenges of inadequate reporting of randomised trials and systematic reviews also exist in this domain. Regarding RCTs, Thornley and Adams7 demonstrated inadequate reporting of blinding and allocation concealment in more than 2000 trials of interventions for schizophrenia. Patel et al8 performed a systematic review assessing the completeness of reporting of phase 2/3 studies of antipsychotic agents, and found several limitations related to the description of design, indication of hypotheses, documentation of sample size calculation, and description of randomisation and blinding. de Vries et al9 performed a meta-analysis of more than 100 trials of second-generation antidepressant drugs for management of major depressive and anxiety disorders, and demonstrated that a majority of included trials provided little to no information on the incidence of serious adverse events. Melander et al10 found considerable evidence of selective outcome reporting and selective publication on inspection of 42 placebo-controlled trials of selective serotonin reuptake inhibitors provided to the Swedish drug regulatory authority, and suggested that efforts to recommend a ‘best’ therapy in practice based only on public data are limited by biased data. Regarding the completeness of reporting of systematic reviews, Spineli et al11 conducted a systematic review of Cochrane reviews to evaluate the extent to which they indicated methods to address missing study data and acknowledged its impact on the review, and found that in both respects there remains a need for improved transparency. There is an urgent need to address the challenge of poor reporting in its many forms.

Of particular importance, and the focus of the current overview, is the reporting of statistical considerations in research. Writing descriptions of statistical methods and quantitative findings may be challenging for authors, particularly those with limited training in statistical methods, and is further constrained by the word limits of journal articles. In an era where considerable attention has been placed on the need for completeness and transparency in research, it is important for researchers to ensure they are doing their part to adhere to these principles when writing up their research. Thankfully, there are tools available to facilitate this, in the form of reporting guidelines.

Over the past 20 years, we have seen the development of reporting guidance for biomedical research; the EQUATOR (Enhancing the QUAlity and Transparency Of health Research; http://www.equator-network.org) Network library currently lists more than 350 reporting guidelines for biomedical research. Reporting guidelines are typically developed through a consensus-based and evidence-based process, and often include a checklist of minimum reporting recommendations for a given study design.12 Some reporting guidelines include template diagrams to help describe the flow of participants through the study process.12

Given the frequency with which RCTs and systematic reviews are carried out by biomedical researchers and their importance in informing healthcare decisions, reporting guidelines of particular importance are the CONSORT (Consolidated Standards of Reporting Trials) Statement for parallel group RCTs13,14 and the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement for systematic reviews.15 Evidence is accumulating that endorsement and use of reporting guidelines in the editorial process, in peer review and by authors are associated with more completely reported research.6,16–21 This article provides an overview of the most widely used reporting guidelines for protocols and completed reports of randomised trials and systematic reviews/meta-analyses, focusing primarily on the statistical considerations they address.

Methods

In the current overview, we begin with discussion of sources of core guidance for the planning and reporting of RCTs, and next address those of key relevance for systematic reviews and meta-analyses. To conclude, we discuss two topics of ongoing debate of relevance to statistical analysts and to discussions of the reproducibility of research, namely open data sharing and the provision of statistical code.

Results

Randomised trials

RCTs represent the gold standard source of primary data for the evaluation of healthcare interventions;22 they are vital to the continued evolution of medical interventions for patients. However, the design, analysis and reporting of RCTs are complex, and require careful planning and collaboration among clinicians, methodologists and statisticians alike. As additional focus has been placed on the importance of study protocol development, the Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) Statement23 has also been developed, and it aligns well with guidance from CONSORT. Core statistical considerations addressed by these tools are discussed next.

Reporting completed trials: the CONSORT Statement

The CONSORT Statement for parallel group trials was originally published in 199624 (with updates in 201013,14), its content established through a consensus process that included meetings and a modified Delphi exercise.

Table 1 presents a summary of the CONSORT Statement items that address statistical considerations (we also note those directly addressed by the SPIRIT Statement for study protocols); details of what authors should aim to report are described within the table text. While researchers new to reporting guidance may at first find that preparing study protocols and reports takes longer, using these tools will ultimately enhance the contributions of their research to the literature.

Table 1

Core CONSORT and SPIRIT elements addressing statistical considerations for trials

CONSORT addresses a variety of statistical considerations of importance for consumers of research, and complete reporting of these details will enhance the ability of readers to assess study rigour and establish their trust in its findings; through a consensus process, items were carefully identified with this consideration in mind. For example, considering items from table 1, clear description of the sample size estimation gives readers the opportunity to assess the assumptions made by the research team (including, eg, the hypothesised group event rates or means informing calculations) and to consider the degree of statistical power to test hypotheses specified a priori; it also affords researchers themselves the chance to assess study feasibility and increases the likelihood of performing an appropriately powered study. Regarding specification of the endpoints assessed and the underlying methods used (including tests used, modelling methods and the proper inclusion of patients), description of these details provides informed readers with a clear grasp of the analytic approach applied to the collected data. This represents both a vital step towards reproducibility of findings and a core consideration that will inform readers when establishing their level of trust in study findings. Clear description of a priori and post hoc secondary data explorations such as subgroup analyses can appropriately frame the planning and thought process of study authors for readers, alleviating concerns about the extent of data fishing which may have been undertaken during data analysis. Finally, while table 1 highlights additional elements of lesser statistical complexity, inclusion of basic details such as patient flow, specification of the trial design and randomisation mechanism, and compilation of harms may often fall to a study statistician to perform. As there remains room for improvement in author reporting on nearly all elements of CONSORT, even these fundamental details warrant mention.
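To make the sample size item concrete, the short sketch below shows the kind of calculation, and the inputs, that a complete CONSORT-style description would allow a reader to reproduce. It uses the standard normal-approximation formula for comparing two proportions; the event rates, alpha and power are illustrative assumptions only, not values drawn from any particular trial.

```python
# Illustrative only: assumed control/intervention event rates, alpha and power.
from math import ceil
from scipy.stats import norm

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate sample size per arm for a two-sided comparison of two
    proportions (normal approximation, 1:1 allocation)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = norm.ppf(power)            # quantile corresponding to desired power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Reporting the assumed rates (40% vs 25%), alpha and power lets readers
# reproduce the target of roughly 150 participants per group.
print(n_per_group(p1=0.40, p2=0.25))
```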

As an increasing number of researchers provide the details related to the statistical aspects of trials mentioned in table 1, the quality of reports of randomised trials in the biomedical literature as a whole will continue to benefit.

Reporting trial protocols: the SPIRIT Statement

Past research has shown that the completeness of protocols for trials is also sometimes insufficient, including deficiencies in statistical aspects such as sample size estimation and reporting of findings for all a priori end points.25,26 This is unfortunate, as protocols provide the basis for rigorous planning and performance of trials, and provide a public record of the study's methodological plan.

Also based on a consensus framework, SPIRIT provides readers with core components to be described for the development of clinical trial protocols. In essence, while CONSORT provides instruction to authors at the time of study reporting, SPIRIT provides guidance at the important stage of study planning, prior to initiation. Thus, in addition to improving completeness of protocols, it may also help researchers better consider all key details during study design. Intuitively, much of its content reinforces recommendations of CONSORT, as can be seen in table 1.

We refer readers to the full checklists and websites for CONSORT (http://www.consort-statement.org) and SPIRIT (http://www.spirit-statement.org), as well as the key guidance publications, for the broader list of items to be considered to maximise transparency of clinical trial reports and protocols; the related Explanation and Elaboration reports for both documents also provide examples of suitable reporting to inform readers, as well as educational content regarding terminology and core concepts.14,27

CONSORT Statement extensions

In addition to core guidance for parallel group trials, panels of experts have developed extensions for different trial designs with novel considerations in terms of design and analysis. These include extensions for non-inferiority/equivalence trials,28 cluster-randomised trials,29 pilot trials,30 N-of-1 studies,31 pragmatic trials32 and the reporting of harms.33 All of these guidance documents address aspects of statistical relevance for researchers describing the methods and findings of RCTs, and draw the attention of non-statisticians to additional analytic topics of which they should be aware during the study design, analysis and reporting phases. From a statistical perspective, reports of cluster-randomised studies must provide additional clarity regarding the level (cluster vs patient) at which study hypotheses will be assessed; considerations related to cluster size, number of clusters and intracluster correlation with regard to sample size estimation; how clustering was accounted for in data analyses; and other such nuances. Description of non-inferiority and equivalence trials requires additional attention to statistical details, including specification of the chosen non-inferiority margin with rationale.28 The extension for pilot and feasibility studies provides modest modifications of CONSORT items while placing focus on considerations for moving towards a definitive trial.30 The extension for N-of-1 studies addresses several additional statistical considerations, including accounting for carry-over/period effects in analyses, reporting the allocation and sequence of treatment periods, and a host of additional details related to the summary of findings from analysis.31 For these and further details, we refer interested readers to the CONSORT website to locate the corresponding publications.
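As a small illustration of the cluster-randomised considerations noted above, the sketch below inflates an individually randomised sample size by the usual design effect, 1 + (m - 1) × ICC. The cluster size, intracluster correlation and starting sample size are illustrative assumptions, not recommendations from the CONSORT extension itself.

```python
# Illustrative only: the ICC, average cluster size and individually
# randomised sample size below are invented for demonstration.
from math import ceil

def cluster_adjusted_n(n_individual: int, cluster_size: int, icc: float) -> int:
    """Inflate a sample size calculated under individual randomisation by the
    design effect 1 + (m - 1) * ICC for clusters of average size m."""
    design_effect = 1 + (cluster_size - 1) * icc
    total = n_individual * design_effect
    return ceil(round(total, 8))  # rounding guards against floating-point noise

# 300 participants under individual randomisation, clusters of 20 and an
# ICC of 0.02 give a design effect of 1.38, ie, 414 participants overall.
print(cluster_adjusted_n(n_individual=300, cluster_size=20, icc=0.02))
```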

Reporting of statistical findings in primary research: the SAMPL Guidelines

While many reporting guidelines exist, few provide specific instructions on how to report data; rather, they tend to provide general recommendations on which types of data to report. An important topic is how such data are ‘best’ presented to readers. From the perspective of transparency, and in consideration of the needs of researchers (including systematic reviewers) who may require specific data elements, sufficient information must be provided. For instance, incomplete reporting of the numbers of events, SDs, measures of precision and other information can limit the usability of published research.

The SAMPL (Statistical Analyses and Methods in the Published Literature) Guidelines, developed by Altman and Lang, provide recommendations for reporting statistical methods and findings in biomedical journals.34 They are intended to be broad reaching, being relevant to all primary study designs. Reporting recommendations include how to report: basic numbers and descriptive statistics; risks, rates and ratios; hypothesis testing; analyses of association and correlation; regression, analysis of variance and analysis of covariance; analysis of time-to-event end points; and Bayesian analyses. In discussing these different types of analyses, Altman and Lang address the specific data that authors should aim to provide. The specific guidance is detailed and beyond the scope of this summary; we encourage readers to review it and to reinforce its content with their research teams' statisticians during the planning, analysis and reporting phases of their research.
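In the spirit of SAMPL's recommendation to accompany risks, rates and ratios with their underlying counts and a measure of precision, the sketch below computes a risk ratio with a 95% confidence interval from a 2×2 table. The counts are invented for illustration and the code is not part of the SAMPL guidance itself.

```python
# Illustrative only: the event counts below are invented.
from math import exp, log, sqrt

def risk_ratio_ci(events_trt: int, n_trt: int, events_ctl: int, n_ctl: int, z: float = 1.96):
    """Risk ratio with an approximate 95% CI computed on the log scale."""
    rr = (events_trt / n_trt) / (events_ctl / n_ctl)
    se_log_rr = sqrt(1 / events_trt - 1 / n_trt + 1 / events_ctl - 1 / n_ctl)
    return rr, exp(log(rr) - z * se_log_rr), exp(log(rr) + z * se_log_rr)

rr, lower, upper = risk_ratio_ci(events_trt=30, n_trt=150, events_ctl=45, n_ctl=150)
# Reporting counts alongside the estimate keeps the result reusable by
# systematic reviewers: "30/150 vs 45/150; RR 0.67, 95% CI 0.45 to 1.00".
print(f"RR {rr:.2f}, 95% CI {lower:.2f} to {upper:.2f}")
```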

Systematic reviews

Systematic reviews have become increasingly prevalent in the biomedical literature, with at least 8000 published annually as of 2014.6 Intuitively, as many are focused on the synthesis of interventions evaluated in RCTs, systematic reviews are a highly valued, gold standard component of the practice of evidence-based medicine. They are an important conduit of knowledge used to inform clinical decision-making and the development of clinical guidelines. Thus, rigour in their design, conduct and reporting is critical. In addition to ‘traditional’ systematic reviews of aggregated patient data comparing two medical interventions, the past 20 years have seen the evolution of several more complex forms of synthesis, including syntheses of individual patient data (IPD meta-analysis)35 and an analytic framework for the comparison of multiple interventions (network meta-analysis; NMA36). The complexity of statistical considerations for meta-analyses continues to rise. Guidance for the reporting of systematic reviews of several forms, again focusing on statistical aspects, is discussed next.

Reporting completed systematic reviews: the PRISMA Statement

Preceded by the QUOROM (Quality Of Reporting Of Meta-analyses) Statement,37 the PRISMA Statement was developed via face-to-face meetings and a consensus process involving 29 methodologists, authors, physicians and journal editors. Past research has documented the positive impact of this reporting guidance on the completeness of reported systematic reviews.6,17 Its 27 checklist items address all aspects of reviews, helping to ensure that authors adequately cover the background and rationale, the methods for data gathering and data analysis, and subsequently the presentation of results and documentation of primary findings. Table 2 summarises the important items to be addressed from the perspective of statistical methods for systematic reviews and meta-analyses.

Table 2

Core PRISMA and PRISMA-P elements addressing statistical considerations for systematic reviews

As with CONSORT for the reporting of trials, the added manuscript drafting time which researchers encounter early during their adoption of PRISMA will ultimately improve the contributions of their research to the literature.

PRISMA content includes several statistical considerations of relevance to readers which directly impact their ability to accurately assess the rigour of systematic reviews and their corresponding degree of faith in findings. A priori specification of the primary considerations of the research team in the decision to synthesise data (in relation to both variation between clinical study features and benchmarks of statistical variability of effect sizes) provides clear documentation of the factors influencing this important judgement. Clear description of the model used for analysis, the weighting scheme for studies and the chosen measures of treatment effect and uncertainty clarifies the assumptions made about study differences and the form of the summary estimates that will be provided. As with the reporting of trials, clear specification of a priori and post hoc secondary analyses can again minimise concerns regarding potential data dredging. Finally, clear presentation of study-level data in addition to summary data from meta-analyses allows readers to assess differences between studies more closely in terms of variations in observed effect sizes, event rates within intervention groups and other such factors of importance. Optimal reporting of the details outlined in table 2 by authors will continue to benefit the quality of reviews in the biomedical literature as a whole.
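To illustrate what ‘the model used for analysis’ and ‘the weighting scheme for studies’ mean in practice, the sketch below pools invented study-level log risk ratios with inverse-variance weights under a DerSimonian-Laird random-effects model. It is a minimal, generic implementation for exposition, not the output of any particular meta-analysis package, and the effect sizes and variances are assumptions only.

```python
# Illustrative only: the log risk ratios and variances below are invented.
import numpy as np

def dersimonian_laird(y: np.ndarray, v: np.ndarray):
    """Pool effect sizes y with within-study variances v under a
    random-effects model using the DerSimonian-Laird tau-squared estimate."""
    w = 1 / v                                    # fixed-effect (inverse-variance) weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)           # Cochran's Q heterogeneity statistic
    tau2 = max(0.0, (q - (len(y) - 1)) / (np.sum(w) - np.sum(w ** 2) / np.sum(w)))
    w_star = 1 / (v + tau2)                      # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return pooled, se, tau2

log_rr = np.array([-0.35, -0.10, -0.52, 0.05])   # study-level log risk ratios
variances = np.array([0.04, 0.02, 0.09, 0.03])   # corresponding within-study variances
est, se, tau2 = dersimonian_laird(log_rr, variances)
print(f"pooled RR {np.exp(est):.2f} "
      f"(95% CI {np.exp(est - 1.96 * se):.2f} to {np.exp(est + 1.96 * se):.2f}); tau^2 = {tau2:.3f}")
```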

Reporting systematic review protocols: the PRISMA-P Statement

Only 16% of systematic reviews report having a publicly accessible protocol, and these are dominated by reviews carried out within Cochrane.6 The importance of documenting and providing access to systematic review protocols has grown during the past decade based on increased interest in several notions, including: (1) a need for increased focus on developing plans for reviews in order to avoid arbitrary decision-making, as well as anticipating potential obstacles during review conduct; (2) consideration of the reproducibility of research by others; (3) avoiding unintentional duplication of systematic reviews and (4) the ability to identify biases related to protocol deviations and selective reporting.38 As with primary research, prospective registration of systematic reviews can facilitate greater transparency of the review process. The PROSPERO prospective register for systematic review protocols, housed at the University of York, was developed in 2011 (http://www.crd.york.ac.uk/PROSPERO/). In addition to registration, documentation of detailed methods and analytical plans for systematic reviews is important. Similar in intent to SPIRIT for trial protocols, the PRISMA for Protocols (PRISMA-P) Statement39 was published in 2015. PRISMA-P is intended for reviews of the therapeutic efficacy of medical interventions, and includes a checklist of 17 items that align closely with the content of the PRISMA Statement (see table 2) to enable seamless integration.

We refer readers to the full checklists and websites for PRISMA and PRISMA-P (http://www.prisma-statement.org), as well as the key guidance publications, for the broader list of items to be considered to maximise transparency of systematic review reports and protocols; the related Explanation and Elaboration reports for both tools also provide examples of strong reporting, and additionally include educational content.38,40

PRISMA Statement extensions

In addition to core guidance for systematic reviews comparing the therapeutic efficacy of pairs of medical interventions using aggregate patient data, extensions of PRISMA have been developed for reviews of different structures and purposes. An extension for NMA41 was developed to help researchers address reporting issues specific to such analyses, including presentation of the expanded evidence base of interventions using network diagrams and statistical evaluation of the agreement between sources of direct and indirect evidence (commonly called consistency or coherence). Certain modifications of other core PRISMA items were also made in relation to how to summarise findings from analysis, consideration of Bayesian applications and strategic use of supplements to present all information of relevance to readers. For readers unfamiliar with NMA, the Explanation and Elaboration document also provides educational content on key methodological concepts and illustrative examples of good reporting.41 Recent research42 has put forward a structured set of suggestions for extending the standard protocol structure for traditional systematic reviews addressed by PRISMA-P guidance.38
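For readers curious about the consistency assessment mentioned above, the sketch below illustrates the simplest case: a Bucher-style comparison of the direct and indirect estimates of a treatment contrast within a single loop of evidence. All effect sizes and variances are invented, and full NMA consistency assessment typically involves more elaborate models than this single-loop check.

```python
# Illustrative only: invented effect estimates (log scale) and variances for
# three treatments A, B and C forming a single evidence loop.
from math import sqrt
from scipy.stats import norm

def bucher_inconsistency(d_ab, v_ab, d_ac, v_ac, d_bc_direct, v_bc_direct):
    """Contrast the direct B-vs-C estimate with the indirect estimate formed
    via the common comparator A; return the difference, its SE and a
    two-sided p-value for inconsistency."""
    d_bc_indirect = d_ac - d_ab           # indirect estimate through comparator A
    v_bc_indirect = v_ac + v_ab           # variances add for the indirect contrast
    diff = d_bc_direct - d_bc_indirect
    se = sqrt(v_bc_direct + v_bc_indirect)
    p = 2 * (1 - norm.cdf(abs(diff / se)))
    return diff, se, p

print(bucher_inconsistency(d_ab=-0.30, v_ab=0.02, d_ac=-0.55, v_ac=0.03,
                           d_bc_direct=-0.20, v_bc_direct=0.04))
```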

The PRISMA Extension Statement for Individual Patient Data (IPD) Meta-Analysis added three reporting items for consideration by authors, and also incorporated wording changes for 23 checklist items.43 Newly added items included specification of the methods followed to confirm the integrity of the IPD (including baseline balance and other factors), reporting of any vital findings from these assessments, and exploration of differences in benefits and harms in different types of patients (including evaluation of relevant interaction terms and other potential effect modifiers). Modifications to existing items included provision of additional details of the analytic approach required for IPD meta-analysis (including accounting for clustering of patients within studies, methods to synthesise aggregate and patient-level data together, and other details aimed at complete transparency of the methods for analysis).

Few additional nuances from a statistical perspective are noted by the PRISMA Extension Statement for Harms,44 though of note is the need for authors to clearly stipulate the handling of zero cells in meta-analyses, as these are not uncommon when reviewing data for end points associated with lower event rates; ensuring clarity regarding the criteria employed for supplemental meta-analyses related to harms grades, varying end point definitions and potentially alternative models is also important. Finally, the PRISMA Extension for equity-focused reviews45 mentions a small number of additional considerations of statistical relevance related to clear specification of subgroup (or other) analyses performed to assess health inequities, with consideration of both relative and absolute measures of effect. Additional PRISMA extensions for reviews involving children, reviews of diagnostic accuracy, scoping reviews and rapid reviews are forthcoming.
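As one example of the zero-cell handling that the harms extension asks authors to make explicit, the sketch below applies a common (though not universal) rule: add a 0.5 continuity correction to every cell of a 2×2 table containing a zero before computing the log odds ratio. The counts and the choice of correction are illustrative assumptions; the extension requires only that whatever rule was used be stated.

```python
# Illustrative only: invented counts; a 0.5 continuity correction is one of
# several possible ways to handle zero cells.
from math import exp, log, sqrt

def log_or_with_correction(a: int, b: int, c: int, d: int, correction: float = 0.5):
    """Log odds ratio and SE for a 2x2 table (a,b = events/non-events in arm 1;
    c,d = events/non-events in arm 2), adding a continuity correction to all
    cells whenever any cell is zero."""
    if 0 in (a, b, c, d):
        a, b, c, d = (x + correction for x in (a, b, c, d))
    log_or = log((a * d) / (b * c))
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return log_or, se

lo, se = log_or_with_correction(0, 50, 3, 47)    # zero events in one arm
print(f"OR {exp(lo):.2f}, 95% CI {exp(lo - 1.96 * se):.2f} to {exp(lo + 1.96 * se):.2f}")
```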

Additional topics of relevance

Sharing of study data and statistical code

Regarding reproducibility of biomedical research, an additional consideration for statisticians and investigators involved in the conduct of trials is the sharing of clinical trial data. In 2016, the International Committee of Medical Journal Editors (ICMJE)46,47 expressed its position on ethical requirements for trialists to openly share data derived from randomised trials, a perspective shared by an increasing number of funders and other organisations. As per Goodman, the reproducibility of research requires that other researchers can replicate original findings using the same materials evaluated by the original study authors, which naturally requires access to full study data. While a growing number of journals continue to support the sharing of clinical trial data, research suggests that journals mandating this principle are not yet consistently following this policy.48 From a transparency perspective, research units involved in the design, performance and reporting of clinical trials should give strong consideration to optimising the transparency of their research via public data sharing. The same consideration should also be undertaken by researchers performing systematic reviews and meta-analyses; while traditional meta-analyses comparing two interventions commonly provide access to raw study data as a component of the standard graphical output of many meta-analysis packages, this is not the case for meta-analyses of other forms. For RCTs and systematic reviews alike, open provision of study data is likely to elevate the trust and confidence of readers in study findings. While researchers' willingness to adopt open data sharing has been limited to some degree by concerns over the impact on their careers, there is reason to believe positive impacts, including increased public attention for their research as well as enhanced career prospects, may follow.49

For RCTs and systematic reviews, formal guidance does not currently exist suggesting that researchers should provide access to the statistical/computer code used to generate study findings; however, this issue has been discussed in the literature.50–52 Based on the premise of reproducibility, the potential for identification of errors in original analyses (during peer review or postpublication) and the ability to consider additional analyses of clinical relevance, there is appeal in providing code to readers and reviewers. However, as Ioannidis suggests, concern may remain as to whether the code provided is a complete representation of all analyses carried out or only a subset limited to the study's most interesting findings.46 As most biomedical journals are online, the use of online supplements enables authors to make additional content available with their research, including relevant statistical code. Other initiatives such as the Open Science Framework (https://osf.io/) facilitate open sharing of research documents as well as online collaboration between researchers. While ‘spin’ in biomedical research remains a challenge, continued improvement in statistical reporting may enhance our ability to identify it more easily in our readings of the evolving literature.

Discussion and conclusions

Efforts to maximise the transparency and completeness of reporting can go a long way towards reducing waste in research, within mental health publications as well as the broader realm of all biomedical publications. Statistical aspects, in terms of the description of methods used and results achieved, represent an important dimension in which clarity of reporting is critical. During the past 20 years, reporting guidelines have been developed which offer researchers an excellent resource that is easy to follow and to reflect in journal submissions via appropriate citation and completion of checklists. Continued endorsement of such guidance by biomedical journals, paired with increased awareness and implementation by researchers, is a key step towards further enhancing the quality of reporting of clinical trials and systematic reviews. Other instrumental stakeholders who can facilitate optimal reporting include institutions and funders, who can mandate adherence to guidelines and open access principles.2,3

While poor reporting of clinical trials and systematic reviews may not necessarily correlate directly with poor design and methodological quality, a relationship of this nature may often be presumed. Therefore, complete reporting can facilitate assessments of the methods, quality and risk of bias of published research. This overview only addresses core guidance documents related to randomised trials and systematic reviews; corresponding guidance for other study designs is also available. Interested readers will find comprehensive access to existing reporting guidance tools on the website of the EQUATOR Network, and we encourage readers to explore them for use in their future research.

References

Footnotes

  • Competing interests None declared. doi:10.1136/eb-2017-102666

  • Provenance and peer review Commissioned; externally peer reviewed.