
Evaluation Cookbook

Evaluation of Online Assessment Materials


Where and when was the study carried out?
The initial study was carried out during 1996/97 within the Institute of Biomedical and Life Sciences at the University of Glasgow, as part of the third year of the University's development of on-line formative and summative assessment materials, produced using Question Mark. A considerable amount of work has been carried out since then, and both the development and the evaluation processes are still ongoing.

How many staff and students were involved?
The main work has been with Level One Biology, the first-year course which all students take before selecting from a menu of options in Level Two, leading towards specialism in the third and fourth years. The class takes in around 700 students each year and provides four subject modules and a study project module. Four Associate Lecturers carry the main load of laboratory demonstration, teaching management and administration, with 19 module lecturers providing the learning content through readings, practical work, lectures and assignments.

Which evaluation techniques were involved?
During the first year of development, a set of self-assessment items was produced for just one of the subject modules, 'Molecules, Cells and Genes'. The items were authored by the subject lecturer in collaboration with an associate lecturer who had experience in using the software and expertise in assessment, objective testing in particular. The module ran in term one; the tests were trialled by 220 student volunteers after a revision lecture in term two, one or two weeks before the module exam.

User logs, post-trial questionnaires and an item in the standard post-course evaluation questionnaire provided some information (in the main, positive) about the pilot.

A focus meeting was held with six students a few days after the exam, at the end of an afternoon lab class; refreshments were provided.

Following this, further tests were developed for modules held in the second half of the year, and these were again evaluated by students through logs, questionnaires and two focus meetings. For 1997/98, students were provided with self-assessment tests for all four subject modules. Further focus group meetings were held, concentrating on item design, interactivity, timing and the issues raised by on-line summative assessment. Development continues through the current (1998/99) session, and tests for second-year modules are being designed and piloted.

What were the aims and objectives of the evaluation?
 To establish student use of and attitudes to on-line assessment materials.
 To test the authoring software against our need to improve practice in objective test provision.
 To inform lecturing staff about the value of this form of assessment, in terms of development effort and pedagogical and practical benefit.
 To advise institutional management about investment issues - software, support staff, equipment resources.
 To provide the grounds for reflective development of assessment practice.

What did we find out?
Some things we expected; some we didn't.

Students welcomed the tests as self-assessment and revision resources.

They particularly valued immediate and, where suitable, directive feedback.

The reasons they gave for their judgements reflected concerns beyond the practical: they felt that the tests not only 'helped them know where they were' but also 'gave a better understanding of the course content'. It was the strength of their feeling that all modules should have such resources that moved development forward earlier than planned.

They picked up differences in question style and rhetoric, confirming our expectation (hope?) that they would perceive, and indeed welcome, the interactivity enabled by the software and its potential to address 'deeper' learning.

The content of their discussion also indicated that attitudes to such uses of computer resources were shifting: these were becoming accepted as familiar and commonplace elements of the classroom. That more than half the students said they would have no objection to being summatively assessed in this way was a surprise. Because the method provided richer feedback, allowing argument and elaboration as part of the data, we realised that such objections as there were often had more to do with objective testing itself than with computer-based assessment. This echoed staff feeling closely, and was important for the design and development of the overall assessment procedures for the modules and the course as a whole.

What are our reflections on this study?
One of the most interesting outcomes of the study, from our point of view at least, was the change in staff attitudes. Having seen examples of the kind of objective testing that could be supported, and the serious and reflective nature of the student feedback, the lecturers began to realise the potential of the system. The further development of the new assessment procedures depended upon such realisation.

Rather than relying solely on the quantitative feedback from logs and questionnaires, or the more qualitative feedback from the few open question responses received from the questionnaire administration, we were able to 'play back' the transcripts of the focus meetings to the staff concerned. We felt that they would be the best interpreters of such feedback.

The methodology itself has now become an integral part of the long-term development of assessment procedures within the Level One class, and is becoming so for Level Two.

Erica McAteer and Liz Leonard
Glasgow University

