LEARNING TECHNOLOGY DISSEMINATION INITIATIVE
Evaluation Cookbook
Evaluation of Online Assessment Materials
Where and when was the study carried out? How many staff and students were involved? Which evaluation techniques were involved?

User logs, post-trial questionnaires and an item in the standard post-course evaluation questionnaire provided some information (in the main, positive) about the pilot. A focus meeting was held a few days after the exam with six students, at the end of an afternoon lab class; refreshments were provided. Following this, further tests were developed for modules held in the second half of the year, and these were again evaluated by students through logs, questionnaires and two focus meetings. For 1997/98, students were provided with self-assessment tests for all four subject modules. Further focus group meetings were held, concentrating on different issues of item design, interactivity and timing, and addressing questions of on-line summative assessment. Development continues through this (1998/99) session, and tests for second year modules are under design and pilot.

What were the aims and objectives of the evaluation? What did we find out?

Students welcomed the tests as self-assessment and revision resources. They particularly valued immediate and, where suitable, directive feedback. The reasons they gave for their judgements reflected concerns beyond the practical: they felt that the tests not only 'helped them know where they were' but also 'gave a better understanding of the course content'. It was the strength of their feeling that all modules should have such resources that moved development forward earlier than planned. They picked up differences in question style and rhetoric, confirming our expectation (hope?) that the interactivity enabled by the software, and its potential to address 'deeper' learning, would be perceived and welcomed by them. The content of their discussion also indicated that attitudes to such uses of computer resources were shifting towards acceptance as familiar and commonplace elements of the classroom.

That more than half the students said they would have no objections to being summatively assessed in this way was a surprise. Because the method's richer feedback allowed argument and elaboration as part of the data, we realised that such objections as there were often had more to do with objective testing itself than with computer-based assessment. This echoed staff feeling closely, and was important for the design and development of the overall assessment procedures for the modules and the course as a whole.

What are our reflections on this study?

Rather than relying solely on the quantitative feedback from logs and questionnaires, or the more qualitative feedback from the few open-question responses received from the questionnaire administration, we were able to 'play back' the transcripts of the focus meetings to the staff concerned, feeling that they would be the best interpreters of such feedback. The methodology itself has now become an integral part of the long-term development of assessment procedures within the Level One class, and is becoming so for Level Two.

Erica McAteer and Liz Leonard
© All rights reserved LTDI and content authors.
Last modified: 26 March 1999.