Evaluation Cookbook

Directing Your Evaluation


Evaluation studies are fundamentally about asking questions, and then designing ways to try to find useful answers.

Studies may concern materials, projects, courses, methods, packages, or systems; in fact, anything that can be asked about in a detailed, structured fashion.

In formative evaluation, information can be fed back into the original work to strengthen it and move it forward. It is an ongoing, fluid process, used to gauge overall progress and to identify areas needing attention or change, helping to mould the final article. In summative evaluation, the information is intended to give an overall picture at the end of a stage, often measured against fixed criteria. Summative evaluation provides a fixed point of reference: it may measure success or otherwise against the original objectives and planned outcomes, or it may gather reactions from participants in a goal-free investigation.

It is crucial to take time at the very beginning to determine which are the "right" questions. Inappropriate or unrealistic questions will lead to unusable or irrelevant data, rather like setting up a computer to perform a complex calculation only to find it was given the wrong formula to start with. But it may also become apparent during a study that some questions are unhelpful and need to be changed, and others added, so build in enough flexibility and open-endedness.

Think about the framework of the proposed study, and how this fits in with the work it is intended to evaluate. The following headings are offered as a starting point, and include suggestions to help determine what aspects are most important in your particular study. The items are given in no special order, but are intended to provoke thought.

What will your evaluation do?
When you plan a meal you know if you want a sumptuous banquet or a snack to eat in front of the TV. You also consider your guests and their tastes as well as the budget and time you have available. Similarly, when you plan an evaluation you must consider the purposes, the interests of those involved and the practical limitations.

Are you:
 putting a trial software product in front of potential users?
 doing a preliminary survey to determine a need for a particular service or product?
 carrying out an information poll for a third party?
 testing a final system under its real everyday circumstances?

Are you looking at developing a comprehensive, multi-stage evaluation, requiring several smaller self-contained studies? Is there a need for several of these studies at different stages in development or will a single one do?

Who is the evaluation for?
There will probably be several interested parties, e.g.:
 those with a direct investment (the stakeholders);
 those who may be carrying out similar work in the future;
 those you may want to educate through your work.

In a new course, the key stakeholders may have different concerns.
 The students may be more interested in a formative evaluation that can address any problems before the end of their course;
 A lecturer trying out a new piece of software may want to evaluate its potential for transfer to other courses;
 Senior managers may be interested in comparisons between different courses in terms of completion rates and customer satisfaction;
 Employers may be interested in the demonstrable skills of those taking the course.

You may not be able to satisfy all the needs but you can try to explain what you see as the main purpose of the evaluation.

Remember too that YOU are making the investment in performing the study. What type of information is most important for you to meet your goals and objectives? What information will help you to convince key groups of the value of your work? What areas of your work would you like to examine more closely?

Performing an evaluation study is a good opportunity to stand back from the work you are doing and appraise it. There may be specific critical questions, such as "Are my students struggling with the new module design?", "Are we being cost effective?" or "Are there any specific gaps in the system we haven't noticed?" A well-designed study can draw all of these concerns together to provide an overall picture.

Can you deal with the practicalities?
What is the size and scale of your evaluation in terms of numbers involved and the timescale? If you have a large number of students you may want to sample their performance and views. If you are evaluating in a number of different contexts you may want to choose varying environments. You may need a quick answer to let you make a decision next week or you may want to analyse long term effects over time.
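The sampling suggestion above can be sketched in a few lines of code. This is a minimal illustration only, assuming you hold a simple list of student identifiers; the cohort, the identifiers, and the sample size are all invented for the example.

```python
import random

# Hypothetical cohort of 200 students; in practice this would be
# your real list of respondents.
students = [f"student_{i:03d}" for i in range(1, 201)]

# Fix the seed so the sample can be reproduced when the study is
# written up or repeated.
random.seed(42)

# Draw a 10% simple random sample, without replacement, so every
# student has an equal chance of being asked for their views.
sample = random.sample(students, k=20)

print(len(sample))  # 20 distinct respondents
```

A stratified sample (drawing separately from, say, each tutorial group) would be a natural refinement if some subgroups must be represented.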

You will need to consider who will carry out the evaluation. An internal evaluator will understand the nuances of the context, but an external person may be more objective. Can you get help? For example, you may be able to employ a research assistant for a few hours to do some interviews or a computer analysis of results.

Estimate the time needed for each stage - planning, designing instruments, collecting data, analysing information, making decisions and reporting findings. Make sure you choose the best time to carry out the evaluation - when enough has happened, but not when the respondents are busy with exams.

Also consider the timing of your study. Does it have to fit into an external schedule? For example, if you are working with a development team, what is their release calendar? If you are working with students, when is the course delivered? Is the release schedule compatible with the course schedule, and is either negotiable? Co-ordinate the focus of the study with the state of the work at the time it is actually going to be evaluated, rather than as it exists during the design period of the study.

Also consider the costs involved, e.g. paper and printing, post and phone, travel, and computer software, as well as the time of the personnel.

What methods are best?
The way information is presented can be crucial to how seriously key parties perceive the study. Different types of information convince different people. Equally, the form in which information is gathered restricts the ways in which it can be used. Quantitative measurements and hard facts may be of more use in demonstrating concrete achievement to funders and top management, but qualitative feedback is generally far more useful in establishing improvements necessary for users of a system, or to benefit students on a course.

Resource levels will restrict the amount of information you can usefully gather and process, but the most sensible method will be dictated by the driving force behind the study, its accountability requirements, and whether it is intended to be a formative or summative study. The information you choose to gather will ultimately affect the tools and techniques you adopt, with consequences for the resources you require to complete the study successfully.

A key part of the planning is to choose appropriate sources of information (e.g. students, staff, documents) and methods of collecting evidence. Much of this book is designed to help you select suitable approaches. The purposes of the evaluation and the practical constraints will have some impact on your methodology. Use a variety of methods, so that findings from one source can substantiate others, or so that findings from one method can inform the design of another: topics from a group discussion can lead to some of the questions in a survey, and comments from the survey can identify issues to be explored in interviews.

It is important to collect as much information as appropriate, but not to exceed the resource base available. The information gathered will need to be refined from one study to the next. Some material will be shown to be less useful than anticipated, while other areas will throw up gaps that would benefit from further examination. Methods of evaluation can also be changed or adapted to fit in with the practicalities of the situation. As each study develops, the process of defining the next study will become progressively easier.

What impact will it have?
Evaluation can be a delicate topic and should be handled sensitively. If you ask similar questions about an innovation of students, lecturers and technicians, you may get conflicting views, so you will need to decide how to cope with the situation. Do not ask questions that raise unrealistic hopes. How will you support a lecturer who gets a lot of negative comments from students? Some aspects may need to be confidential and anonymous. How will you monitor and deal with unintended outcomes? Many potentially difficult situations can be avoided if you explain the purpose of the evaluation in advance and if you share the outcomes with all involved.

The study is your opportunity to make contact with those people who can provide the best feedback on the area of work being evaluated. Who are the people who will be most affected by your work? Who will use what you are creating? What are their needs? How do you think they might be able to help you? Can you use the study to make contact with external groups by providing a common purpose? What information are you missing that has to be gained from other sources? Naturally, you do not want to alienate any of these groups, so thought about how you approach them will make your evaluation run more smoothly.

What are your deliverables?
How will the results of the study be distributed, and to whom? How will they be incorporated into your work? Will responses be fed directly back into the course, product, or system, or will a formal report of some type be required? Should you publish the results? Do you perhaps need several forms of presentation, depending on the group of people requiring the results?

As you consider each of the above questions, a structure or framework for the study should evolve. This may show that a series of studies would be more valuable. These may be divided into evaluation phases each building on the information generated by the previous phase, or you may design a series of smaller studies, each dealing with a different aspect of knowledge. You must keep the study design flexible to allow for adaptations as results are obtained or as requirements change. The process of evaluation is iterative, and each study must be based on both current needs and previous findings. Working within tight time and resource constraints makes it more and more important to get the initial question right each time.

Gaye Manwaring
Northern College

Gayle Calverley
Academic Services Learning Development
The University of Hull


© All rights reserved LTDI and content authors.
Last modified: 26 March 1999.