Evaluation Cookbook

Isolation or Integration




How to Evaluate Learning with Technology?
Recent developments in CAL evaluation methodology show a definite shift from isolation to integration, as the interdependence of content, context and individual characteristics has become increasingly recognized. This shift reflects concurrent developments in learning research and evaluation in non-technology related fields, although attempts to separate the strands seem redundant as technology continues to permeate all functions of higher education institutions. While the reasons for evaluation may remain the same, i.e.:
o to assess (and improve) the effectiveness of whole courses and their various components
o to identify the influences and effects of various contextual factors
the rationale, assumptions and methods have changed considerably during the relatively short history of the discipline.

When computer assisted learning first became popular in the 1960s, evaluation typically meant attempting to isolate the effects of a single resource, applying sampling methods designed to balance individual differences among the study population, and creating a ‘clean’ experimental situation where objective truth about the impact of a particular intervention could be revealed. Thankfully, CAL technology is not the only thing that has come a long way since the 60s. Learning evaluation as a discipline, and studies of CAL in particular, have developed through experience into something far more sensitive to the impact of innovations and appreciative of the influence of personal and contextual factors such as prior knowledge, learning style, integration into course structures, instructional strategy, design and support. In fact, the basis has shifted through 180 degrees, from a predictive, hypothesis-testing model to a responsive process from which hypothesis or theory generation is the outcome. A brief and approximate history of developments reveals the following milestone events.

1960s
CAL Types
Computer assisted instruction, programmed learning, branching programs

Evaluation
Controlled, experimental studies based on the behaviourist, measurement-oriented paradigm articulated in the 1930s by Ralph Tyler, and on Skinnerian stimulus-response assumptions about learning. Learning is still regarded as independent of subject or context.

Findings
Based on scores and outcomes, with no relevance attached to process or contextual factors.

1970s
CAL Types
Tutorial programs, simulations.

Evaluation
Still predominantly experimental, but with an emerging counter-culture, traceable to the 60s, that argued for process-oriented descriptions of programs in use in specific situations and recognized the importance of social, political and economic factors. Methods associated with the ‘new’ evaluation are varied and include interviews, questionnaires, profiles, think-aloud protocols and observations.

Findings
Descriptive and indicative of the many factors that contribute to effective learning outcomes, e.g. teaching and learning styles, prior knowledge, motivation, classroom culture and assessment. Initially case specific, though generalizable through grounded-theory-type development.

1980s
CAL Types
Microworlds, complex simulations, intelligent tutoring, generative programs

Evaluation
The need for responsive/evaluative methods is clear, but academic credibility for the qualitative methodology is still hard won. Naturalistic methods based on the interpretive and critical paradigms are increasingly popular as experimental methods consistently fail to produce sufficient detail for designers’ and evaluators’ purposes in formative and summative studies. Usability studies take precedence over learning evaluation, and CAL design guidelines and standards evolve.

Findings
Results of formative evaluation and various forms of user testing become important inputs to development, and the iterative design cycle is established. Case and situation specific factors are identified and reported as the shift away from large experimental studies and generalizable results on learning issues continues.

1990s
CAL Types
Online courses, user generated resources, full multimedia simulations and tutorial CAL

Evaluation
Integrative, responsive studies are conducted in authentic contexts using mixed methods and multiple data sources. Methods must accommodate situations where teachers and learners may never meet face to face. Evaluation is now accepted as an important and ongoing aspect of program and course improvement; the importance of context is undisputed, and attempts to isolate the effects of CAL are less relevant than assessment of how it works in conjunction with other resources.

Findings
Part of an ongoing process which feeds back into a plan-implement-evaluate-improve loop. Learning objectives, means of assessment and opportunities for data collection determine what findings will be sought and how they will be used. Studies involve qualitative and quantitative measures as appropriate.

Conclusions
A number of valid approaches to evaluation are currently in use, one common variation being how broadly the term is defined. A narrow perspective is one where the effectiveness of a particular program, or part of a program, is assessed in relative isolation from the wider context in which it is used. An example would be a tutorial program for teaching the economics concept of price that is evaluated immediately after students have used the program. Demonstrated understanding of the concept would be one measure of effectiveness; the ability to apply it in different situations might be another. It would also be useful to know, for example, whether students had any prior knowledge of the concept, had learned it from a textbook or other source and then reinforced it through use of the CAL program, and whether they would be able to transfer the new concept to other applicable subjects such as accounting or marketing. A broader perspective might include how well the CAL program is integrated into the whole course and assessment structure, and how CAL use in general is viewed by students, presented by lecturers and supported by the institution. All these factors can influence the effectiveness of learning outcomes, even though they may not relate directly to the design and use of a particular piece of courseware.

It may be concluded, then, that the purpose of an evaluation will define its scope. Courseware developers may be more concerned with design-related aspects, while organization, policy or staff developers may tend to look at the broader picture. However, all perspectives require some attention to contextual factors and the influence they have on students' use of courseware and on the effectiveness, or otherwise, of learning outcomes.

Cathy Gunn
Educational Technologies Advisor
CPD, University of Auckland


