Evaluation Cookbook

Why Evaluate?


The Costs of Evaluating
Evaluations are costly. Even the simplest takes precious time from other activities. Beyond the all-too-familiar questionnaire that we usually ignore at the end of a workshop or presentation, a lot of careful thought has to go into constructing a worthwhile evaluation. Then there's the teaching time that has to be sacrificed to getting students to complete questionnaires, and the leisure time you have to devote to interpreting the results and writing them up in a form that others can understand. So why evaluate? Well, sometimes you'll be forced to, because future funding or support depends on it. But more often you'll evaluate to learn. If no action will be taken as a result of the evaluation then, unless you just want material for a paper, the effort is not worth pursuing.

Who Gains?
The starting point for any evaluation is to identify the stakeholders. In some cases the stake is hovering above your project and you're looking for evidence to prevent it being driven home. But usually the evaluation is being conducted to bring some benefit to one of the groups of stakeholders.

Let's start with students, since they are often an afterthought. What are the concerns of the group you're targeting? There are some obvious areas which interest them, from gaining a better education, through issues of accessibility, to passing the coming exam. As with all the stakeholders, don't plunge into constructing the evaluation without talking to them and exploring their concerns about the educational intervention on which you're focusing. The resulting evaluation will then be centred on discovering how the intervention can be improved to satisfy the real aims of the target audience, rather than the aims you decided they should have.

Evaluating for developers is more straightforward. Given that the content is appropriate, the developer is interested in how easy or difficult the user found it to access the material. Were there any bugs? Was the navigation instinctive? Was the text in a suitable font and was it presented in appropriate volume? Was the feedback provided at the right place and did it satisfy the user? And so on.

Lecturers want to know about learning gains and efficiency. Was this a better way of presenting the material than the tutorial or the lecture? Did it free up time for more effective contact with the student or to conduct research? Are there additions such as handouts which need to be considered to improve the effectiveness of the intervention?

Management need evidence that the time spent on development has led to greater efficiency while maintaining at least the same quality. Does the product justify the costs? Have the students welcomed the change and will the course continue to attract recruits? Have the exam results remained acceptable? Will it help with the TQA? Can fewer resources be devoted to the course than before?

There are usually other stakeholders who have a wider interest in the results of evaluations, especially of computer assisted learning. The Funding Councils, for example, wish to consider whether money is well spent in this area, and though a large external evaluation will usually be conducted to provide the answer, the sum of small local evaluations feeds into the decision.

Will it be Worth it?
So, before you embark on an evaluation, ask yourself "why bother?". Who is this for, what is it they want to find out, and what changes will be made when the results are gathered? If the answer to the question "why evaluate?" is that the results will lead to action to improve the teaching and learning within the course or the institution, then all the effort will be worthwhile.

Robin Shaw
TLTSN Consultant,
University of Glasgow


© All rights reserved LTDI and content authors.
Last modified: 26 March 1999.