Dealing with spoken requests in a multimodal question answering system

Abstract

This paper reports on experiments performed in the development of the QALL-ME system, a multilingual QA infrastructure capable of handling input requests in both written and spoken form. Our objective is to estimate the impact of dealing with automatically transcribed (i.e. noisy) requests on a specific question interpretation task, namely the extraction of relations from natural language questions. A number of experiments are presented, featuring different combinations of manually and automatically transcribed question datasets to train and evaluate the system. Results (ranging from 0.624 to 0.634 F-measure in the recognition of the relations expressed by a question) demonstrate that the impact of noisy data on question interpretation is negligible for all combinations of training/test data. This shows that the benefits of enabling speech access capabilities, allowing for a more natural human-machine interaction, outweigh the minimal loss in terms of performance. © Springer-Verlag Berlin Heidelberg 2008.
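For reference, the F-measure reported above is presumably the standard balanced F1 score (the abstract does not state the beta value), i.e. the harmonic mean of precision P and recall R:

F1 = (2 · P · R) / (P + R)

Under this assumption, an F-measure of 0.624–0.634 corresponds, for example, to precision and recall both falling in roughly that range when the two are balanced.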

Citation (APA)

Gretter, R., Kouylekov, M., & Negri, M. (2008). Dealing with spoken requests in a multimodal question answering system. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5253 LNAI, pp. 93–102). https://doi.org/10.1007/978-3-540-85776-1_9
