There are two issues in evaluating multimedia authoring environments such as SHIVA: the effectiveness of the support for authoring, and the educational effectiveness of courses designed with the system. Few evaluation methods have been designed specifically for authoring systems, so our approach is to use a variety of methods. For the usability of the interface, a variant of Task-Action Grammar known as D-TAG was used to capture the display-oriented nature of the interaction, together with think-aloud protocols conducted with members of the design team. Detailed observational studies of experienced authors using SHIVA addressed the functional model of authoring underlying the system. One unexpected finding was that experienced users developed a common library of "cliches": logical sequences of material identifiable from their visual patterns. They were also able to parse flowcharts rapidly, even when unfamiliar with the teaching domain. The spatial layout enabled authors to create a visual course structure of concepts and frames, and the overall visual appearance of the concept network appeared to strongly affect authors' predictions of the teaching sequence. A major problem with such an evaluation is one of scale: meaningful evaluation requires realistic tasks for authoring courses entailing several hours of contact time, where real training needs have been identified, and realistic settings with professional authoring teams. Traditional task-analytic methods are too low-level, and we were unable to represent interface operations which appear at a higher level in the user's task. A thorough evaluation of the system should ideally incorporate a number of methods, each addressing a different aspect of the system's use at a different level of granularity, preferably over a period of at least two years. © 1991.