Extrinsic summarization evaluation: A decision audit task


Abstract

In this work we describe a large-scale extrinsic evaluation of automatic speech summarization technologies for meeting speech. The particular task is a decision audit, wherein a user must satisfy a complex information need, navigating several meetings in order to gain an understanding of how and why a given decision was made. We compare the usefulness of extractive and abstractive technologies in satisfying this information need, and assess the impact of automatic speech recognition (ASR) errors on user performance. We employ several evaluation methods for participant performance, including post-questionnaire data, human subjective and objective judgments, and an analysis of participant browsing behaviour.

Citation (APA)

Murray, G., Kleinbauer, T., Poller, P., Renals, S., Kilgour, J., & Becker, T. (2008). Extrinsic summarization evaluation: A decision audit task. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5237 LNCS, pp. 349–361). Springer Verlag. https://doi.org/10.1007/978-3-540-85853-9_32
