Drug policy evaluation
Page last updated: April 2021
What is drug policy evaluation and why is it important?
Evaluation is essential for effective policymaking, helping ensure that policies and programmes have the desired effect, provide value for money and do not have negative unintended consequences. The importance of evaluation has been recognised in all EU drug strategies and in the strategies of many Member States.
To support those considering or involved in commissioning, managing or undertaking policy evaluations, this page provides access to a range of materials, including a seven-step guide, examples of strategies and evaluations in Europe, and potentially useful data sources.
Timeline of EU drug strategies and plans and their evaluation
The call for a common approach to the drugs phenomenon in Europe was first made by the European Parliament in the mid-1980s. In response, the heads of state and government of the 12 Member States of the European Community agreed on the first European plan in the field of drugs (1990).
Two years later, the plan was revised and a new European plan in the field of drugs (1992) was adopted by the European Committee to Combat Drugs (CELAD), a newly created intergovernmental mechanism among Member States.
However, in 1995, on the basis of the new prerogatives conferred by the Treaty on European Union, the European Commission took the lead in drafting and adopting a more comprehensive European action plan to combat drugs (1995–99). This represented an important step towards the development of a European approach on drugs.
The action plan called upon the European Commission to undertake a mid-term evaluation in 2002 and a final evaluation in 2004. This was the first time that such an evaluation exercise had been undertaken in the drugs field at EU level. During 2004, talks, meetings and conferences were organised by successive EU presidencies to give continuity to the European approach on drugs. At its December 2004 meeting, the European Council adopted the EU drugs strategy 2005–12, covering an 8-year period.
Two consecutive 4-year action plans were subsequently adopted, the first of which was the EU drugs action plan 2005–08.
The European Commission was tasked to draw up annual progress reviews on the implementation of the action plans, for consideration by the Council, and to conduct the final evaluation of the EU drugs action plan (2005–08).
On the basis of this evaluation, the new action plan 2009–12 was drafted.
Action 72 of the EU drugs action plan 2009–12 requested the European Commission to undertake an external, independent assessment of the implementation of the EU drugs strategy 2005–12 and its action plans.
On the basis of the external evaluation, on 7 December 2012, the Justice and Home Affairs Council of the European Union endorsed a new EU drugs strategy (2013–20) and, on 6 June 2013, a new EU action plan (2013–16).
In 2016, the Commission conducted a mid-term assessment of the strategy, which looked at the outputs of the strategy and their impact.
In 2017, the Commission adopted a Communication on the evaluation of the EU drugs strategy and the action plan on drugs 2013–16. This informed the drafting of a new drugs action plan 2017–20.
On the basis of the external evaluation, on 24 July 2020, the European Commission adopted a Communication to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions on an EU Agenda and Action Plan on Drugs 2021–2025.
On these bases, on 18 December 2020, the Council of the European Union endorsed a new EU drugs strategy (2021–25).
Drug strategy evaluation at the national level
This section provides a summary of the evaluation of national drug strategies in the EMCDDA’s 29 reporting countries (EU-27, plus Turkey and Norway) up to the end of 2020. More detailed information can be obtained in the 2017 EMCDDA Paper National drug strategies in Europe.
Governments use national drug strategies and action plans to elaborate their approach to illicit drug policy. The strategies generally outline the overall principles and course of action being followed and implemented through programmes and projects. The trend towards the use of these documents has been developing since the mid-1990s. At that time, a third of the EMCDDA’s current reporting countries had one, and by the turn of the century two-thirds had adopted one. At the end of 2020, all of the 29 countries had an active strategy. While Denmark’s national drug policy is expressed in a range of strategic documents, legislation and concrete actions, all other countries have a national drug strategy document. Of these countries, 15 have a national drug strategy that focuses only on illicit drugs, while 14 have a broader document that also addresses other substances (e.g. alcohol, tobacco and medicines) and addictions (e.g. gambling).
| Country | Strategy focus |
| --- | --- |
| Bulgaria | Illicit drugs focus |
| Denmark | Illicit drugs focus |
| Estonia | Illicit drugs focus |
| Greece | Illicit drugs focus |
| Croatia | Illicit drugs focus |
| Italy | Illicit drugs focus |
| Latvia | Illicit drugs focus |
| Hungary | Illicit drugs focus |
| Malta | Illicit drugs focus |
| Netherlands | Illicit drugs focus |
| Romania | Illicit drugs focus |
| Slovenia | Illicit drugs focus |
| Slovakia | Illicit drugs focus |
| Finland | Illicit drugs focus |
| Turkey | Illicit drugs focus |
The policy cycle of undertaking an evaluation of an outgoing strategy and developing and adopting a new strategy often takes place over a long period. Consequently, it is standard practice that strategies remain in force until a new one has been finalised, even if this is after the expiry date of the outgoing document.
The evaluation of national drug strategy documents has been gaining momentum since the first evaluations were published in 2003, and by 2010 it had become standard practice among the EMCDDA reporting countries.
Evaluation helps governments in many ways, for example to track implementation progress, gauge a strategy’s continuing relevance, measure inputs and outputs and assess possible impacts. The outcomes from evaluations can be used to make adjustments in active strategies and to develop new ones.
There are many different types of evaluation, and what is most appropriate will depend on factors such as timing, the sort of information required (research questions) and the resources available (for more information see: Evaluating drug policy: a seven-step guide to support the commissioning and managing of evaluations).
The EMCDDA monitors evaluation practices through a typology focused primarily on assessments conducted within the framework of national governments’ drug strategy documents (see Table 3). This incorporates both whole-strategy and targeted evaluation, alongside ongoing monitoring and research aimed at supporting evaluation.
| Type of evaluation | Description |
| --- | --- |
| Multi-criteria evaluation | A multi-criteria evaluation of a strategy and/or action plan at its mid- or end point |
| Implementation progress review | A review of the actions taken and/or the strategy’s context at its mid- or end point |
| Targeted evaluation | An evaluation or audit of a specific policy or strategy aspect or area |
| Other approaches | Assessment by means of ongoing indicator monitoring, research projects, or regional or local strategy evaluation |
| Country | Type of evaluation |
| --- | --- |
| Ireland | Implementation progress review |
| Italy | Implementation progress review |
| Poland | Implementation progress review |
| Slovakia | Implementation progress review |
| Turkey | Implementation progress review |
| Norway | Implementation progress review |
There is often no neat divide between the types of evaluation, and countries may have conducted more than one sort over time. In some countries (e.g. France), evaluations of different projects and responses have long been undertaken and have functioned as assessments of measures outlined in strategies and action plans. Consequently, the map below shows a snapshot of the situation reported by EMCDDA countries at the end of 2020.
A final drug strategy evaluation typically takes the form of either a multi-criteria evaluation or an implementation progress review at the end of the strategy’s timeframe. There were 16 multi-criteria evaluations, 6 implementation progress reviews, and 3 targeted evaluations reported as having recently taken place at the end of 2020, while four EMCDDA reporting countries used other approaches, such as a mix of indicator assessment and research projects.
Examples of national evaluations
Examples of evaluations, where publicly available, provide details about the method used and findings. These reports and overviews can be found in the EMCDDA Document Library. These documents do not, however, represent all evaluations undertaken or the approach followed over time in any country. They are a selection linked to the map for explanatory purposes.
You can access publicly available national evaluation documents below.
The EMCDDA operates a robust takedown policy — if there is a document which you believe should be removed from this list, please feel free to contact us and let us know. Similarly, while we are actively updating this list, we welcome input and feedback from visitors should there be a publicly available document or resource which you believe should be included here. We may be contacted at policyevaluation.team[a]emcdda.europa.eu — remember to replace the [a] with '@' before sending your email.
Resources for drug policy evaluation
We will be adding relevant resources and other useful material to this section over time.
Key EMCDDA resources
- Evaluating drug policy: a seven-step guide
- New developments in national drug strategies in Europe
- Statistical Bulletin
Glossary of drug policy evaluation terms
Below is a list of terms used when discussing drug policy evaluation.
Activities — processes, tools, events, technology and actions that are part of the programme implementation. These interventions are used to bring about the intended programme changes or results, i.e. the actions taken or work performed to achieve the aims of the intervention.
Added value — the extent to which something happens as a result of an intervention or programme that would not have occurred in the absence of that intervention. Also known as ‘additionality’.
Aim — the purpose of, for example, an intervention or a policy.
Causality — an association between two characteristics that can be demonstrated to be due to cause and effect, i.e. a change in one causes the change in the other.
Coherence — the extent to which intervention logic is non-contradictory or the extent to which the intervention does not contradict other interventions with similar objectives.
Control group — a group of participants in a study not receiving a particular intervention, used as a comparator to evaluate the effects of the intervention.
Criterion — character, property or consequence of a public intervention on the basis of which a judgement will be formulated.
Data — information; facts that can be collected and analysed in order to gain knowledge or make decisions.
Drug action plan — scheme or programme for detailed specific actions. It may accompany or be integrated into a drug strategy but typically focuses on a relatively short period and identifies more detailed actions to implement the strategy, along with timings and responsible parties.
Drug policy — overall philosophy on the matter; position of the government, values and principles; attitude, direction. It encompasses the whole system of laws, regulatory measures, courses of action and funding priorities concerning (illicit) drugs put into effect by governments.
Drug strategy — unifying theme; framework for determination, coherence and direction. It is generally a document, usually time bound, containing objectives and priorities alongside broad actions, and may identify, at a top level, the parties responsible for implementing them.
Effectiveness — the fact that expected effects have been obtained and that objectives have been achieved.
Efficiency — the extent to which the desired effects are achieved at a reasonable cost.
Equity — the extent to which different effects (both positive and negative) are distributed fairly between different groups and/or geographical areas.
Evaluation — a periodic assessment of a programme or project’s relevance, performance, efficiency and impact in relation to overall aims and stated objectives. It is a systematic tool which provides a rigorous evidence base to inform decision-making.
Evaluation criteria — aspects of the intervention which will be subject to evaluation. Criteria should fit the evaluation question. Taken together, the criteria should provide a good and complete measurement. Examples are relevance, efficiency and effectiveness.
Evaluation question — question asked by the steering group in the terms of reference and which the evaluation team will have to answer.
Evaluation team — the people who perform the evaluation. An evaluation team selects and interprets secondary data, collects primary data, carries out analyses and produces the evaluation report. An evaluation team may be internal or external.
Evidence-based — conscientiously using current best evidence in making decisions.
Evidence-informed policy — an approach to policy decisions that aims to ensure that decision-making is well informed by the best available research evidence.
Ex ante evaluation — an evaluation that is performed before implementation of an intervention. This form of evaluation helps to ensure that an intervention is as relevant and coherent as possible. Its conclusions are meant to be integrated when decisions are made. It provides the relevant authorities with a prior assessment of whether or not issues have been diagnosed correctly, whether or not the strategy and objectives proposed are relevant, whether or not there is incoherence between them or in relation to other related policies and guidelines, and whether or not the expected impacts are realistic.
Ex nunc (or interim) evaluation — an evaluation that is performed during implementation.
Ex post (or final) evaluation — evaluation of an intervention after it has been completed. It strives to understand the factors of success or failure.
External evaluation — evaluation of a public intervention by people not belonging to the administration responsible for its implementation.
Feasibility — the extent to which valid, reliable and consistent data are available for collection.
Impact — fundamental intended or unintended change and direct or indirect consequences occurring in organisations, communities or systems as a result of programme activities within 7 to 10 years, i.e. long-term consequences of the intervention.
Impact (or outcome) evaluation — evaluates whether the observed changes in outcomes (or impacts) can be attributed to a particular policy or intervention, i.e. determining whether or not a causal relationship exists between an intervention or policy and changes in the outcomes.
Indicator — quantitative or qualitative factor or variable that provides a simple and reliable means to measure achievement, to help assess the performance of a policy/intervention (to reflect the changes connected to an intervention, an output accomplished, an effect obtained or a context variable — economic, social or environmental).
Input — financial, human, material, organisational and regulatory means mobilised for the implementation of an intervention.
Internal evaluation — evaluation of a public intervention by an evaluation team belonging to the administration responsible for the programme.
Joint evaluation — evaluation of a public intervention by an evaluation team composed of both internal (people belonging to the administration responsible for the programme) and external evaluators.
Maryland Scientific Methods Scale — a system that provides an overview of evaluation designs.
Method — complete plan of an evaluation team’s work. A method is an ad hoc procedure, specially constructed in a given context to answer one or more evaluative questions. Some evaluation methods are of low technical complexity, while others include the use of several tools.
Monitoring — a continuing function that uses systematic collection of data on specified indicators to provide management and the main stakeholders of an ongoing intervention with indications of the extent of progress, achievement of objectives and progress in the use of allocated funds.
Need — problem or difficulty affecting concerned groups, which the public intervention aims to solve or overcome.
Norm — level that the intervention has to reach to be judged successful, in terms of a given criterion. For example, the cost per job created was satisfactory compared with a national norm based on a sample of comparable interventions.
Outcomes — the likely or achieved short- and medium-term effects of an intervention’s outputs, relating to the aim of the intervention. Specific changes in programme participants’ behaviour, knowledge, skills, status and level of functioning.
Outputs — direct products of programme activities which may include types, levels and targets of services to be delivered by the programme.
Process evaluation — one that focuses on programme implementation and operation. A process evaluation could address programme operation and performance.
Programme logic model — picture of how a policy/intervention works — the theory and assumptions underlying the programme. A programme logic model links outcomes (both short- and long-term) with programme activities/processes and the theoretical assumptions/principles of the programme.
Public managers — public (sometimes private) organisations responsible for implementing an intervention.
Random assignment — making a comparison group as similar as possible to the intervention group, to rule out external influences; randomly allocating individuals to either the intervention group or the control group.
Randomised controlled trial (RCT) — an experiment in which two or more interventions, possibly including a control intervention or no intervention, are compared by being randomly allocated to participants.
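The random assignment idea behind an RCT can be sketched in a few lines of code. The following is a minimal, hypothetical illustration only: real trial randomisation adds stratification, allocation concealment and blinding, all of which are omitted here, and the function name and participant IDs are invented for the example.

```python
import random

def randomly_assign(participants, seed=None):
    """Randomly split participants into an intervention group and a
    control group of (near-)equal size.

    Illustrative sketch only: stratification and allocation concealment,
    which real trials require, are deliberately left out.
    """
    rng = random.Random(seed)  # seeded for a reproducible example
    shuffled = list(participants)
    rng.shuffle(shuffled)
    midpoint = len(shuffled) // 2
    return shuffled[:midpoint], shuffled[midpoint:]

# Hypothetical trial with 100 participant IDs
intervention, control = randomly_assign(range(100), seed=42)
print(len(intervention), len(control))  # 50 50
```

Because each participant has the same chance of landing in either group, external influences are, on average, balanced between the groups, which is what allows observed differences to be attributed to the intervention.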
Relevance — the extent to which an intervention’s objectives are pertinent to the needs, problems and issues to be addressed.
Scope — precise definition of the evaluation object, i.e. what is being evaluated.
Stakeholders — individuals, groups or organisations with an interest in the evaluated intervention or in the evaluation itself, particularly authorities that decided on and financed the intervention, managers, operators and spokespersons of the public concerned.
Steering group — the committee or group of stakeholders responsible for guiding the evaluation team.
Sustainability — the continuation of benefits from an intervention after major development assistance has been completed; the probability of continued long-term benefits.
Terms of reference — the terms of reference define the work and the schedule that must be carried out by the evaluation team. They recall the regulatory framework and specify the scope of an evaluation. They state the main motives for an evaluation and the questions asked. They sum up available knowledge and outline an evaluation method. They describe the distribution of the work and responsibilities among the people participating in an evaluation process. They fix the schedule and, if possible, the budget. They specify the qualifications required of candidate teams as well as the criteria to be used to select an evaluation team.
Tool — standardised procedure used to fulfil a function of evaluation (e.g. regression analysis or questionnaire survey). Evaluation tools serve to collect quantitative or qualitative data, synthesise judgement criteria, explain objectives, estimate impacts, and so on.
Validity — the extent to which the indicator accurately measures what it purports to measure.
Value for money — a value for money evaluation is a judgement as to whether the outcomes achieved are sufficient given the level of resources used to achieve them. It generally includes an assessment of the cost of running the programme, its efficiency (the outputs it achieves for its inputs) and its effectiveness (the extent to which it has achieved expected outcomes) and uses analytical approaches such as cost-effectiveness or cost–benefit analyses.
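The arithmetic at the core of a cost-effectiveness analysis is a simple ratio of cost to outcome. The sketch below illustrates that calculation only; the function name and the programme figures (EUR 500 000 spent, 250 treatment completions) are hypothetical, and a real analysis would also discount costs over time and compare the ratio against alternatives.

```python
def cost_effectiveness_ratio(total_cost, outcome_units):
    """Cost per unit of outcome achieved (e.g. cost per completed
    treatment episode).

    Hypothetical figures for illustration; real analyses discount costs
    and compare programmes against each other or a baseline.
    """
    if outcome_units <= 0:
        raise ValueError("outcome_units must be positive")
    return total_cost / outcome_units

# Hypothetical programme: EUR 500 000 spent, 250 treatment completions
print(cost_effectiveness_ratio(500_000, 250))  # 2000.0 (EUR per completion)
```

A lower ratio indicates that each unit of outcome was achieved more cheaply; cost–benefit analysis goes one step further by also expressing the outcomes themselves in monetary terms.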