Decision support system (DSS) developers and users agree on the need for rigorous evaluations of performance and impact. Evaluating simple reminder systems is relatively easy because there is usually a gold standard of decision quality. However, when a system generates complex output (such as a critique or graphical report), it is much less obvious how to evaluate it. We discuss some generic problems and how one might resolve them, using as a case study Design-a-Trial, a DSS that helps clinicians write a lengthy trial protocol.
Potts, H. W. W., Wyatt, J. C., & Altman, D. G. (2001). Challenges in evaluating complex decision support systems: Lessons from Design-a-Trial. In Lecture Notes in Computer Science (Vol. 2101, pp. 453–456). Springer-Verlag. https://doi.org/10.1007/3-540-48229-6_61