UNED@CL-SR CLEF 2005: Mixing different strategies to retrieve automatic speech transcriptions

Abstract

In this paper we describe UNED's participation in the CLEF CL-SR 2005 track. First, we explain the strategies we tried for cleaning up the automatic transcriptions. Then, we describe 84 different runs that combine these strategies with named entity recognition and different pseudo-relevance feedback approaches, in order to study the influence of each method on the retrieval process in both monolingual and cross-lingual settings. We observed that the influence of named entity recognition was higher in the cross-lingual setting, where MAP scores doubled when an entity recognizer was used. The best pseudo-relevance feedback approach was the one based on manual keywords. The effects of the different cleaning strategies were very similar, except for character 3-grams, which obtained poor scores compared with the other approaches.
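To illustrate the character 3-gram strategy mentioned in the abstract (the paper itself gives no code; the function name and sample token below are hypothetical), here is a minimal sketch of how a transcription term might be broken into overlapping character 3-grams before indexing:

```python
def char_ngrams(term: str, n: int = 3) -> list[str]:
    """Split a term into overlapping character n-grams (3-grams by default)."""
    term = term.lower()
    if len(term) <= n:
        return [term]
    return [term[i:i + n] for i in range(len(term) - n + 1)]

# Example: a (possibly misrecognized) token from an automatic transcription.
print(char_ngrams("holocaust"))
# ['hol', 'olo', 'loc', 'oca', 'cau', 'aus', 'ust']
```

The idea behind such a representation is that noisy or misrecognized words still share most of their 3-grams with the correct form; in the runs reported here, however, this approach scored worse than the other cleaning strategies.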

Citation (APA)

López-Ostenero, F., Peinado, V., Sama, V., & Verdejo, F. (2006). UNED@CL-SR CLEF 2005: Mixing different strategies to retrieve automatic speech transcriptions. In Lecture Notes in Computer Science (Vol. 4022, pp. 783–791). Springer-Verlag. https://doi.org/10.1007/11878773_86
