Spotlight on… Understanding and using evidence
Types of evidence
Different types of evidence may be used in developing and implementing responses. These include:
- evaluations of interventions, such as randomised controlled trials and other experimental designs or observational studies, which can help assess the quality of the evidence and the direction of each intervention’s effect (beneficial or not);
- implementation studies, which investigate the factors associated with effective service provision;
- syntheses of expert opinion from stakeholders, which can be used, for example, in guideline development to complement other forms of evidence; ideally, these should include input from both those involved in the intervention’s delivery and its potential recipients;
- basic science and research findings that can inform the design of new interventions.
The various types of evidence differ in their strengths and weaknesses and in the information they can provide. Drug-related problems are multifaceted and require not only medical, but also socioeconomic and educational interventions. As a result, it is often necessary to integrate evidence from a range of disciplines and types of study, using both quantitative and qualitative research methods.
In reviewing what evidence is available to inform decision-making, the first step is to define the research question, which in turn determines the most appropriate study design. For example, the effectiveness of treatment on individuals is usually best evaluated through randomised controlled trials. To determine the longer-term impact of an intervention that has already proved to be effective or the impact of broader policies or population-based interventions, observational studies are likely to be more appropriate. These include, for example, longitudinal or cohort studies, interrupted time series or controlled before-and-after studies.
It is also important to consider the quality and relevance of the available evidence. Are the findings taken from appropriately designed studies and based on well-conducted research that minimises biases? Are they reported correctly and related to the target groups of interest?
There are a number of ways of evaluating the quality of the available evidence. The best evidence comes from systematic reviews that combine the results of multiple studies and assess their quality as well as the extent to which they show consistent findings. However, in emerging fields it can take some time for sufficient primary studies to be completed and systematic reviews undertaken, meaning that services will often need to be developed in areas where the evidence base is weak or partial.
When using evidence, it is also important to recognise that the quality of the evidence is not the only consideration, as there can be interventions that have shown effective results but for which the evidence is currently weak because they have not yet been sufficiently researched. Similarly, there can be high-quality evidence of an intervention being effective, but with only a small beneficial effect. Importantly, evidence statements are not broadly applicable, but linked to specific outcomes and, usually, specific populations, settings, or both. Therefore, understanding how outcomes have been defined and measured is crucial when considering how the evidence available can be interpreted.
Evaluating the evidence used for this guide
The evidence statements in this guide are a compilation of what is known about responding to drug use. They reflect only domains where we have clear evidence to support an intervention. In many situations, the evidence to support an intervention is limited because of a lack of robust evaluation, or because the available evidence has not been synthesised in a way that facilitates an evaluation (i.e., no systematic reviews or meta-analyses of the evidence have been conducted). A lack of evidence, or low-quality evidence, does not necessarily mean that an intervention does not work. It means that the intervention has not yet been adequately evaluated, so there is currently a high degree of uncertainty in predicting what impact it will have.
In this guide, the evidence statements are based on evidence from systematic reviews and meta-analyses published from January 2010 to March 2021. Systematic reviews and meta-analyses were identified from PubMed searches for each topic using relevant Medical Subject Headings. From the studies identified, full-text papers were obtained for the relevant reviews, from which key data were extracted: publication details, the population studied, the intervention evaluated, a description of the included studies (i.e., the number of trials/participants and types of study design) and their quality (study design). Where available, evidence statements and their GRADE quality ratings were extracted and used (Cochrane GRADE). Where GRADE quality ratings were not available, the quality of the evidence was assessed using GRADE criteria. Evidence derived from single studies was rated as ‘very low quality or insufficient evidence’. Where more than one review was available on a particular topic, evidence statements were based on the most recent robust evidence available and took into account the consistency of evidence across reviews. Where the evidence was not consistent, a judgement was made as to the strongest evidence, based on the recency of the review and on the number and quality of the included studies. In some cases, GRADE quality ratings for reviews were re-assessed to maintain consistency across reviews. Evidence from narrative reviews was generally excluded.
Because of the methods used, evidence statements are necessarily constrained to domains where adequate evidence is available to confirm (or refute) the benefits of an intervention. In some cases, good evidence may have been available to demonstrate the benefits of an intervention, but it had not been synthesised in a way that allowed the quality of the evidence to be judged (i.e., no systematic reviews or meta-analyses existed). In these situations, evidence regarding that intervention was not included in the evidence statements. In other situations, evidence was available only from a single study, or it was of low quality (e.g., because of study design limitations). This meant that the evidence was not conclusive, and the quality rating assigned to the evidence statement in these situations was very low or insufficient. Because of space limitations, in many intervention areas we do not report on evidence that was inconclusive or of very low quality.
Summarising the evidence
The evidence rating system used in this guide has two dimensions. All evidence refers to a specific outcome measured in a specific population and/or setting and timeframe.
The first dimension reflects the direction of the intervention’s effect – that is, whether the intervention has been consistently found to produce a benefit, unclear benefit, or potential harm:
| Direction of effect | Description |
| --- | --- |
| Beneficial | Evidence of benefit in the intended direction. |
| Unclear | Unclear whether the intervention produces the intended benefit. |
| Potential harm | Evidence of potential harm, or evidence that the intervention has the opposite effect to that intended (e.g. increasing rather than decreasing drug use). |
The second dimension represents the quality of the evidence and is based on the Cochrane GRADE rating system, in which the ratings reflect confidence in the quality of the evidence:

| Quality of evidence | Description |
| --- | --- |
| High | We can have a high level of confidence in the evidence available. |
| Moderate | We are reasonably confident in the evidence available. |
| Low | We have limited confidence in the evidence available. |
| Very low | The evidence available is currently insufficient, so there is considerable uncertainty as to whether the intervention will produce the intended outcome. |
Low- or very low-quality evidence will be common for new responses or for interventions addressing emerging problems. It is therefore important to build evaluation into such responses and to be vigilant for possible adverse or unintended outcomes.