
Evaluation

Key Public Health Questions

How effective was the intervention?

How could the intervention be improved?

Use this step if you:

  • Have started designing your intervention
  • Have started implementing your intervention
  • Need to know how you are going to keep track of your progress and successes
  • Need to measure your impact on your target audience and against your aims and objectives.

What is evaluation?

Evaluation is the process of determining the merit, worth and value of things, and evaluations are the products of that process.1a

Merit is defined as the intrinsic value of something (an intervention, program, project or activity). Does the item being evaluated do well at what it is supposed to do?1

Worth is defined as the value of the intervention, program, person or activities in relation to a specific purpose. Is the object of high quality and also something a target group needs?1

Importance of evaluation

Evaluation is important for a number of reasons. It can help2:

  • Assess if an intervention has achieved its intended aim
  • Understand how an intervention has achieved its intended aim, or why it may not have done so
  • Identify how efficient an intervention was in converting resources into activities to achieve its objectives and aims
  • Assess how sustainable and meaningful an intervention was for the target audience and participants
  • Inform decision makers about how to build on or improve the intervention.

The difference between monitoring and evaluation

Evaluating an intervention often includes two key components: monitoring and evaluation.

Monitoring – monitoring an intervention allows you to track its progress and helps you to identify issues early in its implementation that could be improved or corrected to make the intervention as effective as possible. Monitoring helps embed a cycle of continuous improvement into the implementation of an intervention and is a key component of the evaluation.

Evaluation – evaluating an intervention at the end of its implementation, or evaluating key components of an intervention, allows you to judge the success of the activities you have undertaken. It provides accountability in relation to funding, allows you to repeat activities that have been demonstrated to work, and lets you improve on, or let go of, activities that do not work. It also allows others to learn from the work that you have done.

Importance of evaluation and program planning

The quality of an evaluation is influenced by the quality of the intervention planning.3

Ideally, you should plan your evaluation at the same time as you plan your intervention. Setting up the evaluation alongside the intervention will save you considerable time and effort in the long run.

A clear intervention or program plan that details the following will provide a solid foundation for evaluation. It is important to clearly articulate:

  • Aim and objectives
  • Activities
  • Target audience
  • Context
  • Supporting evidence
  • Assumptions
  • Implementation or enabling strategies.

See Learn – Surveillance, Determinants, Intervention and Implementation sections for more details on these steps.

A clear evaluation plan will also provide a solid foundation for the evaluation.

Check out this document by the Department of Health WA for ideas on how to develop an evaluation plan.

Where to start?

Program logic

One way of outlining the intervention is through the development of a program logic model.

A logic model is a pictorial snapshot of the proposed intervention, which provides a visual representation of the assumed relationships between the elements of the intervention (discussed below), identifies any gaps that may exist in the plan and clarifies which elements will be measured.1

Logic models can be developed in a number of ways depending on the intervention; regardless of the approach, a logic model should never be overly complex.

A basic logic model includes the following elements1:

  • Context – this includes the policy and evidence context that the intervention will operate in.
  • Inputs – this includes the resources that are accessible to an intervention.
  • Activities – activities are the actions which are carried out during the implementation of the intervention.
  • Outputs – outputs are the direct results of the activities.
  • Impacts – corresponding with the objectives, impacts refer to the short and medium term changes that may result from the activities.
  • Outcomes – corresponding with the aim, outcomes are the long term changes that may result from the activities or make a contribution towards achieving them.

This template, modified from the Department of Health WA’s Research and Evaluation Framework, will assist in developing a program logic model.
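Purely as an illustration (and separate from the Department’s template), the elements above can also be drafted as simple structured data so that each element is stated explicitly and gaps are easy to spot. The sketch below uses Python and an entirely hypothetical helmet-promotion intervention; every entry is an invented example.

  # Minimal sketch (illustrative only): a program logic model captured as structured data.
  # The intervention and all entries below are hypothetical examples.

  logic_model = {
      "context":    ["State road-safety policy", "Evidence that helmets reduce head injury"],
      "inputs":     ["Project officer (0.5 FTE)", "Grant funding", "Partner bike shops"],
      "activities": ["School helmet-fitting sessions", "Local media campaign"],
      "outputs":    ["Number of sessions delivered", "Number of helmets fitted"],
      "impacts":    ["Increased helmet-wearing knowledge and intention (objectives)"],
      "outcomes":   ["Sustained increase in observed helmet use (aim)"],
  }

  # Print each element in order, which makes missing or thin elements easy to spot
  for element, items in logic_model.items():
      print(f"{element.title()}:")
      for item in items:
          print(f"  - {item}")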

Evaluation questions

An important part of the evaluation is determining what you want the evaluation to tell you, because the evaluation questions will shape what the rest of your evaluation looks like. Evaluation questions generally cluster around three ideas:

  • What happened?
  • Was the intervention successful?
  • What have we learnt?

What happened?

This question provides the opportunity to focus on the outcomes of your intervention, including4:

  • Were there any changes to the target audiences’ knowledge, attitudes, beliefs and behaviours?
  • Were there any changes to relationships in the community?
  • Were these consequences intended or unintended?

Was the intervention successful?

This question allows you to consider whether the intervention achieved what it set out to achieve. It focuses on the performance of the intervention after it has finished.4 Questions that are often asked under this heading include:

  • Did the intervention achieve its objectives (that is, its intended short and medium-term outcomes)?
  • Were the activities implemented in the best possible way?
  • What were the critical success factors for the intervention’s implementation?
  • In what ways did the intervention provide value for money?
  • Was the intervention efficient?
  • In what ways did the intervention’s delivered outcomes address the initial need or problem?

What have we learnt?

This question allows you to focus on the lessons that were learned, and it is often used to build the knowledge and evidence base for a particular area or type of intervention. Questions under this heading include4:

  • What works, for whom, in what circumstances?
  • What critical factors contributed to the successes and challenges of the intervention?
  • How easily and under what circumstances could the successes be replicated?

These core questions guide the choice of more specific evaluation questions and appropriate data collection and analysis methods.

However, be realistic as to what can be achieved. It may be useful to start by brainstorming all the questions that would be interesting to address, but then make sure you identify the priority questions that can be addressed with the resources available.

Evaluation design

Types of Evaluation

As you start to think about your evaluation questions, it becomes clear that an evaluation tends to focus on two aspects: the activities of your intervention and its impact. These correspond to process, impact and outcome evaluation. They allow different aspects of the intervention to be evaluated and collectively contribute to the evaluation of the overall intervention (which is sometimes called summative evaluation).

These three evaluation types intersect directly with intervention planning1, as summarised in Figure 1:

  1. Process evaluation measures the intervention activities and the extent to which the intervention has been implemented.
  2. Impact evaluation usually corresponds to the program objectives, as it is concerned with the immediate and short-term effects of the program.
  3. Outcome evaluation focuses on the longer-term effects of the program and usually relates to the program aims.

Making sure you have clearly defined aims, objectives and activities is important for achieving a sound process, impact and outcome evaluation.1

Figure 1: The relationship between process, impact and outcome evaluation


Having identified the key questions to be answered by the evaluation, you will need to identify what information is needed to answer these questions and the overall evaluation design that would generate this information.5

Evaluation designs include:

  • Quantitative designs, which collect numerical data (for example, pre/post surveys with or without a comparison group, trend analysis) and
  • Qualitative designs, which collect written or spoken data (for example, interviews, focus groups, case studies, document analysis and participatory action research).2

Often quantitative designs are used to measure impacts, while qualitative designs are useful in process evaluation, but this is not always the case.

The Australian Bureau of Statistics provides a great description of these two concepts here.

The most appropriate evaluation design for any intervention is influenced by a number of elements, including the program's complexity, duration and progression.1

After factoring in any practical and financial limitations, it is important to choose the design that provides the best level of evidence possible.2 You should try to make sure that your evaluation design is rigorous (whilst maintaining its real-world application) by using validated tools and incorporating a range of evaluation methods.2

To demonstrate that the intervention caused changes in outcomes or impacts, the evaluation design must demonstrate what would have happened in the absence of the intervention.3

Below are some commonly used evaluation designs that attempt to provide information on the link between an intervention and its impacts and outcomes2:

  • Randomised experimental designs are the most rigorous evaluation designs for collecting this information, as participants are randomly placed in either a group that receives the intervention or a control group that does not.
  • A quasi-experimental design is often used (particularly when randomised experiments are not feasible, as is often the case given the real-world nature of injury prevention interventions); it still compares an intervention group with a control group, but the allocation of participants to the groups is not random (a simple worked example follows this list).
  • Non-experimental evaluation designs attempt to determine the impact of the intervention without a control group. Without a control group it is difficult to determine what would have happened in the absence of the intervention; however, given the range of elements influencing an intervention's evaluation design, in some cases a non-experimental design is the most realistic practical option.
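The sketch below, using entirely invented pre/post scores, illustrates the basic logic of a quasi-experimental design: the change observed in the intervention group is compared with the change in a comparison group (a simple difference-in-differences). It is an illustration under assumed data only, not an analysis prescribed by the cited frameworks.

  # Minimal sketch (illustrative only): a quasi-experimental pre/post design with a
  # comparison group, summarised as a difference-in-differences.
  # All scores below are invented example data, not from any real intervention.

  from statistics import mean

  # Hypothetical knowledge scores (0-10) before and after the intervention
  intervention_pre  = [4, 5, 3, 6, 5]
  intervention_post = [7, 8, 6, 8, 7]
  comparison_pre    = [4, 5, 4, 6, 5]
  comparison_post   = [5, 5, 4, 6, 6]

  change_intervention = mean(intervention_post) - mean(intervention_pre)
  change_comparison   = mean(comparison_post) - mean(comparison_pre)

  # The change beyond what the comparison group shows
  difference_in_differences = change_intervention - change_comparison

  print(f"Change in intervention group: {change_intervention:.2f}")
  print(f"Change in comparison group:   {change_comparison:.2f}")
  print(f"Difference-in-differences:    {difference_in_differences:.2f}")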

Evaluation method and data collection

As noted above, the quality of any evaluation is influenced by the strength of evidence collected in response to the evaluation questions, making it vital that the evaluation methods used are appropriate to the intervention.3 A number of factors influence the selection of data collection methods, including the evaluation purpose, the evaluation questions, the available finances, and the capacity and skills of the evaluation team.2

Just to recap – if the evaluation is seeking to describe participants’ experiences, attitudes or beliefs, it is often appropriate to use qualitative methods, as they allow the ‘how’ and ‘why’ questions to be asked.3 Qualitative methods can be time consuming and subjective, but they can capture in-depth information and, in some cases, provide real-life context.

Alternatively, if the evaluation is aiming to collect data relating to the presence of health issues or behaviours, quantitative methods can collect statistical data.3 Quantitative methods are usually easy to administer and provide the opportunity for a large amount of data to be collected and analysed.

No matter what evaluation methods are used, forward planning is helpful, as it increases the chance of collecting the required data throughout the intervention.1 To achieve this, it is important that all required data are identified in the early stages of the intervention, along with the methods that will be used to collect the data and a timeline.1

Indicators

You will need to develop some indicators or measures that you will use to collect the information and data that answers your evaluation questions.

Click here for more information about evaluation indicators.

Evaluation instruments

After you have developed your indicators, you need to work out what instruments or data collection tools and processes you will use to capture this information. When selecting evaluation tools, it is important to consider their validity, reliability and practicality in relation to the proposed context of use.

Quantitative data can be collected through:

  • Archived data
  • Clinical tests
  • Close-ended questionnaires
  • Performance tests
  • Surveys.

Qualitative data can be collected through:

  • Case studies
  • Focus groups
  • Meeting minutes
  • Observational interviews
  • Open-ended questionnaires.

Data collection

This stage is where your evaluation plan is put into action by collecting the data. The way you collect data and the types of data you collect will depend on the evaluation design and data collection methods you selected in the previous step.

You need to coordinate data collection by specifying:

  • what tasks need to be completed
  • who should undertake the tasks
  • when the tasks should be undertaken
  • what resources are required

Access the Vic Health – Planning for effective health promotion evaluation for more information about data collection.

Data analysis and interpretation

Data analysis involves identifying and summarising the key findings, themes and information contained in the previous steps. This process allows you to identify processes, impacts and, in the longer-term, outcomes in order to answer your evaluation questions.

Data analysis and interpretation also informs the evidence base and helps to monitor and refine the intervention, so it is important to allocate sufficient time and resources to this step.

Effective analysis and interpretation of the intervention’s raw data forms the foundation for demonstrating the intervention’s effectiveness.1 Identifying and summarising the key findings and themes within the raw data allows the strengths and limitations of the intervention to be highlighted and recommendations to be formulated.3

Qualitative data

The analysis of qualitative data involves identifying themes in the data, that is, “broad categories of comments or information” or “big ideas”.3 You will need to study the data to identify what you consider the major themes to be, and then classify and group the data according to these themes. The following gives some examples of how this can be done (a simple coding sketch follows the list)3:

  • Physically cut out material from the transcribed data and paste it onto large sheets, with each sheet being used to group information or quotes on one of the themes.
  • Electronically cut and paste in any word processing software.
  • Go through hard copies of your data and using a series of different coloured highlighter pens, indicate material related to each theme.
  • There are also software packages, such as NVivo, which allow you to code the data electronically into multiple themes and help you present a synthesis of the information after it has been analysed.
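As a minimal illustration only (and no substitute for dedicated software such as NVivo), the sketch below groups invented, already-coded participant comments by theme and counts them; the themes, comments and the coding itself are hypothetical.

  # Minimal sketch (illustrative only): grouping coded qualitative comments by theme.
  # The themes and comments below are invented examples, not real evaluation data.

  from collections import defaultdict

  # Each entry is (theme assigned by the analyst, participant comment)
  coded_comments = [
      ("access",    "The venue was hard to reach by public transport."),
      ("awareness", "I didn't know the program existed until a friend told me."),
      ("access",    "Sessions clashed with my work hours."),
      ("awareness", "The flyers at the clinic caught my attention."),
  ]

  themes = defaultdict(list)
  for theme, comment in coded_comments:
      themes[theme].append(comment)

  # Summarise how many comments fall under each theme
  for theme, comments in sorted(themes.items()):
      print(f"{theme}: {len(comments)} comment(s)")
      for c in comments:
          print(f"  - {c}")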

Quantitative data

Quantitative data tends to be analysed and presented as frequencies, measurements or percentages, and may involve statistical calculations of averages or of differences over time within or between individuals or groups. The information is often presented in summary tables or relevant graphs, and this can often be done in a package like Microsoft Excel.
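As a minimal sketch, the example below tabulates invented survey responses as frequencies and percentages in Python (using the pandas library); the same summary could just as easily be produced in a spreadsheet such as Excel.

  # Minimal sketch (illustrative only): frequencies and percentages from survey responses.
  # The responses below are invented; a spreadsheet such as Excel could produce the same table.

  import pandas as pd

  # Hypothetical responses to "Do you always wear a helmet when cycling?"
  responses = pd.Series(
      ["Yes", "No", "Yes", "Yes", "Sometimes", "No", "Yes", "Sometimes"]
  )

  summary = responses.value_counts().to_frame(name="count")
  summary["percent"] = (summary["count"] / summary["count"].sum() * 100).round(1)

  print(summary)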

When the impact evaluation (rather than the process evaluation) is being measured, it may be necessary to use statistical tests to demonstrate that any difference observed is in fact ‘significant’. In these situations, you may need to enlist the help of a statistician or a colleague with some training in statistics, who will most likely use a particular statistical software package. The evaluation questions will be important in guiding a third party undertaking the analysis of quantitative data.
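Purely as an illustration of what such a test might look like, the sketch below applies a paired t-test (via the SciPy library) to invented pre/post scores; both the data and the choice of test are assumptions, and a statistician should advise on the appropriate test for a real evaluation.

  # Minimal sketch (illustrative only): testing whether a pre/post change is significant.
  # The scores are invented, and the paired t-test is just one example of a test -
  # seek statistical advice on the right test for a real analysis.

  from scipy import stats

  # Hypothetical knowledge scores for the same participants before and after
  pre_scores  = [4, 5, 3, 6, 5, 4, 6, 5]
  post_scores = [7, 8, 6, 8, 7, 6, 8, 7]

  t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

  print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
  if p_value < 0.05:
      print("The observed change is statistically significant at the 5% level.")
  else:
      print("The observed change is not statistically significant at the 5% level.")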

A note about bias

To minimise bias when interpreting the results, it may be appropriate to ensure that the person responsible for data analysis is not the person responsible for implementing the intervention. However, if the analysis is completed by someone who had no role in implementing the intervention, they will need to develop a detailed understanding of the intervention, for example by discussing it with the implementation team before formulating recommendations from the results. To support the validity of the intervention, the intervention team should also request a clear justification of the analysis methods used.3

Recommendations, dissemination and communication

No matter what data analysis techniques are used or what the results are, by developing recommendations, disseminating the findings and gaining an understanding of why the results occurred, the overall intervention evaluation can play a significant role in future intervention development.3

Dissemination of health promotion evaluation findings is important for a number of reasons:

  • It is crucial in establishing a strong evidence base for injury prevention.
  • It is important to document not only what worked, but what didn’t work and what some of the reasons for success and failure might be.

As noted above, the evaluation should have a clear purpose and should consider the potential audience for the recommendations. The nature of evaluation reports and other forms of dissemination will vary depending on this audience. Reports to funding bodies and committees of management may differ in detail and presentation format from reports for project staff, client groups or the wider community. It is therefore important that you develop a dissemination strategy.3

Some questions that can help shape your reporting and dissemination strategy include:

  • Who should have access to the results of the evaluation?
  • What is an ideal format for ensuring adequate and accessible information to your selected audience(s)?
  • How will evaluation data be used and stored within the agency to ensure that future interventions are able to build on the knowledge base achieved during the evaluation?
  • How could or should results be distributed more widely so that other practitioners are able to know about your work?

Avenues for wider dissemination of intervention details and evaluation results include organisational and regional newsletters, articles in professional journals, network meetings, workshops and conference presentations.

Resources

Ethical Consideration

WA Department of Health Research and Evaluation Framework Implementation Guide. This is a guide for not-for-profit organisations tendering for, delivering and reporting on health promotion programs funded by the Chronic Disease Prevention Directorate.

How to plan an evaluation by Dr Matt Merema, Senior Research Officer, Chronic Disease Prevention Directorate, Department of Health WA

References

1a Scriven, M. (1991). Evaluation thesaurus (4th ed.). Newbury Park, CA: Sage.

1 Department of Health Western Australia (2013). Research and Evaluation Framework and Implementation Guide. Chronic Disease Prevention Directorate, Department of Health, Western Australia, Perth, Australia. Retrieved from: http://www.public.health.wa.gov.au/cproot/5487/2/130902%20final-research-and-evaluation-framework_Implementation-guidex.pdf

2 Evaluation Toolbox (2010). Community Sustainability Engagement Evaluation Toolbox: Why evaluate? Victoria, Australia. Retrieved from: http://evaluationtoolbox.net.au/index.php?option=com_content&view=article&id=12&Itemid=18

3 Round, R., Marshall, B. and Horton, K. (2005). Planning for effective health promotion evaluation. Victorian Government Department of Human Services, Melbourne, Australia. Retrieved from: http://docs2.health.vic.gov.au/docs/doc/32F5DB093231F5D3CA257B27001E19D0/$FILE/planning_may05_2.pdf

4 Queensland Government (2011). Community engagement guidelines and factsheets. Step3: Identifying evaluation questions and information requirements. Retrieved from: http://www.qld.gov.au/web/community-engagement/guides-factsheets/evaluating/evaluation-framework-3.html