

How can we optimise learning from trials in child and adolescent mental health?
  1. Nick Axford1,
  2. Vashti Berry2,
  3. Jenny Lloyd2,
  4. Katrina Wyatt2

  1. University of Plymouth, Plymouth, UK
  2. University of Exeter, Exeter, UK

  Correspondence to Dr Nick Axford, University of Plymouth, Plymouth, UK; nick.axford@plymouth.ac.uk

Abstract

Improving child and adolescent mental health requires the careful development and rigorous testing of interventions and delivery methods. This includes universal school-based mindfulness training, evaluated in the My Resilience in Adolescence (MYRIAD) trial reported in this special edition. While discovering effective interventions through randomised controlled trials is our ultimate aim, null or negative results can and should play an important role in progressing our understanding of what works. Unfortunately, alongside publication bias there can be a tendency to ignore, spin or unfairly undermine disappointing findings. This creates research waste that can increase risk and reduce benefits for future service users. We advocate several practices to help optimise learning from all trials, whatever the results: stronger intervention design reduces the likelihood of foreseeable null or negative results; an evidence-informed conceptual map of the subject area assists with understanding how results contribute to the knowledge base; mixed methods trial designs aid explanation of outcome results; various open science practices support the dispassionate analysis of data and transparent reporting of trial findings; and preparation for null or negative results helps to temper stakeholder expectations and increase understanding of why we conduct trials in the first place. To embed these practices, research funders must be willing to pay for pilot studies and ‘thicker’ trials, and publishers should judge trials according to their conduct and not their outcome. MYRIAD is an exemplar of how to design, conduct and report a trial to optimise learning, with important implications for practice.

  • Child & adolescent psychiatry


Widespread concern about child and adolescent mental health, especially following the COVID-19 pandemic,1 has fuelled calls to develop interventions to promote well-being and reduce the risk of mental illness. A plausible idea to support such endeavours is universal school-based mindfulness training (SBMT).2 3 This is designed to promote young people’s skills in attention and social-emotional-behavioural regulation, both of which are known to underpin mental well-being. But establishing whether SBMT works calls for rigorous testing, hence the My Resilience in Adolescence (MYRIAD) trial reported in this special edition.

Randomised controlled trials are widely regarded as the gold standard for testing intervention effectiveness, and, as the compelling story of the Oxford-AstraZeneca vaccine demonstrates, they can be game-changers when married with translational science.4 Moving from viruses to child and adolescent psychosocial outcomes, it is largely thanks to trials that we have a growing body of knowledge about ‘what works’ to prevent problems such as bullying, crime, maltreatment, substance misuse, and—pertinent to this special edition—anxiety and depression.5

While studies discovering effective interventions are obviously desirable, trials showing null or negative results can play an important role in supporting progress. Indeed, a significant and possibly growing proportion of trials in our field and beyond find no effects, and sometimes harmful ones.6–8 Several explanations for this trend have been offered: increasingly rigorous trial conduct and reporting to comply with industry guidelines and journal policies; the ‘rising tide’ phenomenon whereby services as usual—the normal control condition—are improving9; and the growing number of replication trials in new contexts, often without intervention developer involvement.10

Unhelpful responses

Despite this, widely used guidance and standards in the field for developing and evaluating interventions give relatively little consideration to preparing for and responding to null or negative trial results. For example, the UK Medical Research Council guidance suggests—not unhelpfully—that successful feasibility and pilot testing is followed by a definitive trial, with considerations for scale-up discussed from the outset.11 We think it is important, however, to acknowledge from the outset that trials might produce null or negative results. Otherwise, disappointing findings can lead to research waste, which increases risk and reduces benefits for service users.12

Aside from simply not publishing null or negative trial results (the ‘file drawer problem’), other well-known responses are embarking on fishing trips to find ad hoc subgroup effects or cherry-picking positive results and giving them undue prominence (notably in abstracts).13 Further practices include emphasising methodological flaws, so casting doubt on trial results, and focusing on poor implementation—the implication being that the intervention as designed was not tested.13 It is also not uncommon to see delayed or ‘sleeper’ effects forecast, even if this possibility might reasonably have been predicted a priori.13

The legitimacy of some responses to null or negative trial results depends on the context, and some might be seen as rational acts given a complex set of incentives and constraints: we do not think that investigators set out to be underhand.13 Nonetheless, such responses can limit learning. Most obviously, unpublished null or negative effect studies, or selective reporting in published studies, can lead to evidence of effectiveness being exaggerated in systematic reviews or meta-analyses.14 15 In turn, ineffective practice may be scaled, or at least continued, consuming scarce resources and taking the place of potentially more effective alternatives. Unhelpful responses also mean that we potentially fail to learn the more nuanced lessons about what works for whom and why.
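To illustrate how the file drawer problem can distort the evidence base, the sketch below pools standardised mean differences with a fixed-effect, inverse-variance model, first across a full set of trials and then across the ‘published’ positive trials only. The effect sizes and standard errors are invented purely for illustration and do not come from any real study or review.

```python
# Illustrative sketch (hypothetical numbers): how leaving null trials in the
# file drawer inflates a fixed-effect, inverse-variance pooled estimate.
import numpy as np

def pooled_effect(effects, ses):
    """Return the fixed-effect inverse-variance pooled estimate and its SE."""
    effects, ses = np.asarray(effects), np.asarray(ses)
    weights = 1.0 / ses**2
    pooled = np.sum(weights * effects) / np.sum(weights)
    return pooled, np.sqrt(1.0 / np.sum(weights))

# Hypothetical standardised mean differences (d) and standard errors
all_trials = {"d": [0.30, 0.25, 0.02, -0.05, 0.01],
              "se": [0.10, 0.12, 0.08, 0.09, 0.07]}
published = {"d": all_trials["d"][:2], "se": all_trials["se"][:2]}  # null trials unpublished

print("All trials:     d = %.2f (SE %.2f)" % pooled_effect(all_trials["d"], all_trials["se"]))
print("Published only: d = %.2f (SE %.2f)" % pooled_effect(published["d"], published["se"]))
```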

Optimising learning

Improving child and adolescent mental health demands that we test the effectiveness of interventions such as SBMT and report findings to optimise learning, whatever the results. How might this be achieved? We think several practices would support this endeavour.13 They are not exhaustive—other steps could also enhance the usefulness of trials for practice.16 Nor are they particularly novel. But they are easily overlooked.

First, an intervention should only proceed to definitive trial if it is underpinned by a sound theory of change and has been developed—where possible—with the involvement of people with lived experience of the issue being targeted (ie, ‘co-produced’). Possible unintended adverse effects should also be considered upfront and intervention design adjusted accordingly.17 Stronger intervention design reduces the likelihood of null or negative effects being traced back to issues that could easily have been foreseen.18

Second, it pays to have an evidence-informed conceptual model of the area of study that summarises the evidence and provides a framework for future research. This should cover knowledge about outcomes, mediators, moderators and implementation factors in relation to the type of intervention. As well as ensuring that the trial in question addresses areas of known uncertainty, this should inform measures and analysis, and ultimately make it easier to consider how results—whatever their hue—contribute to the knowledge base.

Third, trials should be designed to optimise learning. This may sound obvious, but it is not a given. It includes powering the study adequately, capturing implementation fidelity, recording the services received by control arm participants, and aligning, as much as possible, follow-up data collection points with when outcomes are expected to be observed. Mediator and moderator analyses help with exploring what works for whom and why, while complier average causal effect analysis unpacks the relationship between fidelity and outcomes.19 20 Qualitative research in trials can help with explaining variation in outcomes, the mechanisms through which interventions have (or fail to have) impact, and why results might be disappointing, surprising or confusing.21 22 Together, these approaches provide a richer picture of events, making trial results more informative.
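Two of these design and analysis steps can be made concrete with a short sketch. The Python fragment below shows a design-effect-adjusted sample size calculation for a cluster-randomised trial and a simple Wald estimator of the complier average causal effect, with randomisation as the instrument. The effect size, cluster size, intracluster correlation and function names are illustrative assumptions, not values or methods taken from MYRIAD or any particular trial.

```python
# A minimal sketch (illustrative values, not a real analysis plan).
import numpy as np
from statsmodels.stats.power import TTestIndPower

# 1. Powering a cluster-randomised trial: inflate the individually randomised
#    sample size by the design effect, 1 + (m - 1) * ICC.
effect_size, alpha, power = 0.20, 0.05, 0.80  # assumed standardised effect, error rates
cluster_size, icc = 25, 0.02                  # assumed pupils per school, intracluster correlation

n_individual = TTestIndPower().solve_power(effect_size=effect_size, alpha=alpha, power=power)
design_effect = 1 + (cluster_size - 1) * icc
print(f"Participants per arm: {int(np.ceil(n_individual * design_effect))} "
      f"(design effect {design_effect:.2f})")

# 2. Complier average causal effect (CACE) via the Wald estimator: the ITT
#    effect on the outcome divided by the ITT effect on actually receiving
#    the intervention, using random assignment as the instrument.
def wald_cace(outcome, received, assigned):
    outcome, received, assigned = map(np.asarray, (outcome, received, assigned))
    itt_outcome = outcome[assigned == 1].mean() - outcome[assigned == 0].mean()
    itt_uptake = received[assigned == 1].mean() - received[assigned == 0].mean()
    return itt_outcome / itt_uptake
```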

Fourth, we need open and honest reporting of trial results, especially if results are equivocal or disappointing. This is more likely if trials are registered and protocols published beforehand.23 24 Then, when it comes to revealing results within the research team, process evaluation results should be shared first, allowing time to discuss implementation fidelity and hypothesise why the intervention may or may not have worked and for whom. Only then should outcome results be shared, ideally—initially—without identifying the trial arms. We think that doing things in this order encourages less biased and more dispassionate reflection on the findings.
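As a concrete example of sharing outcome results without identifying the trial arms, the allocation labels can be recoded to neutral group names before analyses are circulated, with the unblinding key held back until the team has discussed the blinded findings. The sketch below is hypothetical: the function name and example data are invented for illustration.

```python
# Hypothetical sketch: mask trial arm labels before circulating outcome results.
import random
import pandas as pd

def blind_arms(df, arm_col="arm"):
    """Replace true arm labels with neutral codes; return blinded data and the key."""
    codes = ["Group A", "Group B"]
    random.shuffle(codes)
    key = dict(zip(sorted(df[arm_col].unique()), codes))
    return df.assign(**{arm_col: df[arm_col].map(key)}), key

# Invented example data
trial = pd.DataFrame({"arm": ["intervention"] * 3 + ["control"] * 3,
                      "outcome": [11, 9, 10, 12, 10, 11]})
blinded, unblinding_key = blind_arms(trial)      # keep the key with the trial statistician
print(blinded.groupby("arm")["outcome"].mean())  # arms appear only as Group A / Group B
```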

Last, but by no means least, it is worth preparing in advance for different possible trial results. Thus, key stakeholders—notably investigators, funders, developers and purveyors—need to agree beforehand why the trial is being conducted (with an emphasis on equipoise), and it is essential to manage expectations—specifically, the possibility of null or negative results, how they might be communicated and how this might affect policy or practice. The aim should be to counter the erroneous belief—often understandably held by those with the most at stake (such as the intervention developers)—that the trial will undoubtedly prove the intervention to be effective and thereby give it a ticket to scale. Working with developers and practitioners to agree aspects of trial design, notably outcome constructs and measures, guards against the temptation to criticise or regret them post hoc once results are known.

A team effort

We contend that, collectively, these steps will help to ensure that trial results are transparent and trustworthy, so minimising uncertainty or confusion about what they mean, and critically that they are disseminated to all stakeholders, warts and all. They mean that the intervention will be thoroughly developed before the trial begins, reducing the possibility that poor outcomes are attributed to poor intervention design. They also mean that any issues with its implementation—their nature, causes and possible solutions—are uncovered and adequately explored. Furthermore, our suggestions plausibly increase investigators’ capability to explain why an intervention did not produce the expected outcomes, or crucially what works for whom and in what context, and the likelihood of the findings making a substantial contribution to the extant evidence base. These are always important, and arguably more so when outcomes are not as one would have hoped.

Of course, the behaviour of investigators and key intervention stakeholders—the audience for most of our recommendations—is shaped by multiple incentives and constraints in their environment.13 This has implications for other actors. Evaluation funders, for example, need to be willing to pay for pilot studies and ‘thicker’ trials that incorporate robust process evaluation and analyses of mediators, moderators and fidelity-by-outcome interaction effects. Publishers—supported by journal editors and editorial boards—need to make it easier to publish null and negative trial results, for instance via results-free peer review or accepting results papers ‘in principle’ on acceptance of a protocol article.

Conclusion

We welcome the MYRIAD trial results being shared so frankly and openly with an academic audience in this special edition. It is good to see in-depth discussion of the results, notably how they add to what is already known about SBMT while also highlighting areas of continued uncertainty that warrant further investigation. In our view, it epitomises how trial results should be shared to optimise learning, and how trials should be designed and conducted to enable this to happen (including much of what we advocate earlier).

To avoid research waste from such a rigorous trial, it will be necessary to explore the implications of these findings with school staff and support them to make practice decisions that benefit students. Meanwhile, we look forward to a time when there will be more mixed methods trials of genuine innovations to support child and adolescent mental health and address inequalities, and fewer trials that yield uninformative null or negative effects.


Acknowledgments

We are grateful to Willem Kuyken for helpful comments on a draft of this article.


Footnotes

  • Twitter @nick_axford

  • Contributors NA, VB, JL and KW all made substantive intellectual contributions to the content of the manuscript and approved the final version.

  • Funding The time of NA and VB is supported by the National Institute for Health and Care Research Applied Research Collaboration South West Peninsula. The views expressed in this publication are those of the authors and not necessarily those of the National Institute for Health and Care Research or the Department of Health and Social Care.

  • Competing interests None declared.

  • Provenance and peer review Not commissioned; externally peer reviewed.