Challenges in evaluating complex decision support systems: Lessons from Design-a-Trial


Abstract

Decision support system developers and users agree on the need for rigorous evaluations of performance and impact. Evaluating simple reminder systems is relatively easy because there is usually a gold standard of decision quality. However, when a system generates complex output (such as a critique or graphical report), it is much less obvious how to evaluate it. We discuss some generic problems and how one might resolve them, using as a case study Design-a-Trial, a DSS to help clinicians write a lengthy trial protocol.

Citation (APA)

Potts, H. W. W., Wyatt, J. C., & Altman, D. G. (2001). Challenges in evaluating complex decision support systems: Lessons from Design-a-Trial. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2101, pp. 453–456). Springer Verlag. https://doi.org/10.1007/3-540-48229-6_61
