
Evaluation Cookbook

Example of Questionnaires and Group Interviews




Where and when was the study carried out?
During 1997 and 1998, a suite of computer-based learning software (Mathwise and Statwise) was used by several Scottish universities' mathematics departments. This was part of the SHEFC-funded SUMSMAN project, in which all the Scottish university mathematics departments shared resources and materials for (mainly) first-year service teaching.

Departments at different universities integrated the software into their teaching in different ways, and, indeed, integration methods varied between members of staff at the same university. Some had students work through the package in supervised lab sessions, either after or, in some cases, before meeting the topic in lectures. Some used the package in a similar way but did not staff the labs. Some replaced conventional lectures on a topic with directed study of part of the package. Some used the software as a remedial aid only, or purely as an optional extra resource whose existence was pointed out to students.

The software itself had already been extensively evaluated, but this study was designed to evaluate the various ways of integrating it into courses. Both staff and student reactions were sought, but only the aspects involving students are reported here.

What methods did we use?
We decided to survey all the students involved in the study by means of questionnaires. Although these included open questions to allow for unanticipated reactions, most of the questions were closed. This meant that we could gather and analyse large quantities of data quickly and cheaply, and would be able to spot patterns and gauge the extent of variety in student views. The questionnaire design and analysis package Sphinx was used to handle and analyse this large volume of data.
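
For illustration only, the short sketch below shows the kind of cross-tabulation that closed questions make straightforward. It is written in Python with pandas rather than in Sphinx (the package we actually used), and the file and column names are invented for the example.

    import pandas as pd

    # Each row is one returned questionnaire (file and column names are hypothetical).
    responses = pd.read_csv("questionnaires.csv")

    # Cross-tabulate integration method against usefulness rating to spot patterns.
    table = pd.crosstab(responses["integration_method"],
                        responses["usefulness_rating"],
                        normalize="index")   # proportions within each integration method
    print(table.round(2))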

To gain a deeper insight into student views, two structured group interviews were held with students. To encourage comparison of integration methods and to spark discussion, each of these was planned to involve a group of about ten students from each of two universities who had studied the same topic or topics. The sessions were run by video-conference so that no one had to travel.

How many staff and students were involved?
In all, several hundred students and nearly twenty staff from five universities were exposed to the software in some way. Only three universities and about fifteen staff actually issued questionnaires to their students, but nearly three hundred were returned, covering all the integration methods used and students from a wide range of faculties (arts, science, business and engineering!). Staff merely distributed and collected the questionnaires; all the analysis was done by one individual. Luckily, using the Sphinx software package made this relatively painless.

We expected ten or so students from each university to attend the two discussion interviews. However, on one occasion only five turned up at each site, and on the other no students from one site could be persuaded to attend at all. A member of the staff involved in the study was present at each site to lead and record the discussion, as well as to run the short ice-breaking session held prior to the actual interview.

What were the aims and objectives of the evaluation?
* To determine student reactions to the various ways of using the package, and to identify the 'best' ways of using it.
* To identify factors influencing how useful students found the package.

What did we find out?
The responses were remarkably consistent. Students overwhelmingly thought that computer-based learning was:
* best used in staffed labs
* most useful for:
 - practising material that they had just been taught,
 - trying to understand material that was unclear in a lecture,
 - revising for an assessment.

Factors found to influence how useful students found the package were:
* the imminence of an assessment,
* the enthusiasm of the staff and how well they sold it to their students,
* how well the content was perceived as matching their lectures,
* the speed at which the equipment allowed it to run,
* the amount of feedback provided by the package, and
* provision of accompanying paper-based support material.

Surprisingly, students who found the pace of the lectures 'too fast for me' were less likely than other students to find the CBL software useful.

What are our reflections on the evaluation methods used in this study?
We feel that the evaluation methods were very successful and achieved our aims.

The questionnaires yielded information in bulk and provided reassurance as to the reliability of the information. Despite the questionnaires being quite long (four pages), there was little evidence of carelessness in their completion. Almost all students were still willing to write sensible, and sometimes extensive, comments in the open questions ('Main drawbacks', 'Main benefits') at the end. This was helped, as always with questionnaires, by making the early part quick and easy to complete and by maintaining an interesting and logical progression of questions.

By getting students to write their matriculation numbers on the questionnaires, we left open the possibility of further analysis later, comparing responses to, say, student ability or success in the course.
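
A minimal sketch of such a linkage, again in Python with pandas rather than Sphinx, and again with hypothetical file and column names, might look like this:

    import pandas as pd

    responses = pd.read_csv("questionnaires.csv")    # includes a 'matric_no' column
    results = pd.read_csv("course_results.csv")      # matric_no and final exam mark

    # Join questionnaire responses to course results on matriculation number.
    linked = responses.merge(results, on="matric_no", how="inner")

    # For example: how does exam performance vary with rated usefulness of the package?
    print(linked.groupby("usefulness_rating")["exam_mark"].describe())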

The group interviews gave fuller information, and the group format allowed comments to be taken up by other people and explored from several angles. Five students and one staff member at each site actually worked very well, with all students feeling that they should say something. Having students from two different groups was also very successful: fresh ideas and differing experiences provided interest and allowed discussion of new points that had not occurred to one group or the other. Using video-conferencing was a bit intimidating at first, but people soon forgot about it. The staff involved felt that the benefits of the two-group sessions outweighed any hassle in setting up the video-conferences.

Having the software at hand to refer to was helpful, and a demonstration at the start helped jog memories and break the ice. An initial ice-breaking exercise helped as well, as, in retrospect, would the provision of name badges. Forty-five minutes was found to be ample time for a session.

In this study the interviews were conducted after at least some of the questionnaire data had been examined, which allowed queries raised by the questionnaires to be investigated more fully in the interviews.

Judy Goldfinch and Kirsty Davidson
Napier University




© All rights reserved LTDI and content authors.
Last modified: 26 March 1999.