The University of Maryland participated in the English and Czech tasks. For English, one monolingual run using fields based on automatic transcription (the required condition) and one otherwise identical cross-language run using French queries were officially scored. Weighted use of alternative translations yielded an apparent, though not statistically significant, improvement over one-best translation, and statistical translation models trained on European Parliament proceedings were found to be poorly matched to this task. Three contrastive runs that indexed manually generated metadata from the English collection were also officially scored. Results for Czech were not informative in this first year of that task. © Springer-Verlag Berlin Heidelberg 2007.
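The "weighted use of alternative translations" contrasted with one-best translation can be illustrated with a minimal sketch. The translation table, its probabilities, and the normalization scheme below are all hypothetical illustrations of the general idea, not the paper's actual method or data: each source-language query term distributes unit weight across its candidate target-language translations, instead of keeping only the single most probable one.

```python
def translate_query(query_terms, translation_table):
    """Map each source-language term to target-language terms weighted by
    translation probability; out-of-vocabulary terms pass through unchanged."""
    translated = {}
    for term in query_terms:
        # Hypothetical fallback: keep the term itself if no translation exists.
        alternatives = translation_table.get(term, {term: 1.0})
        total = sum(alternatives.values())
        for target, prob in alternatives.items():
            # Normalize so each source term contributes unit total weight.
            translated[target] = translated.get(target, 0.0) + prob / total
    return translated

# Hypothetical French-to-English translation probabilities.
table = {
    "guerre": {"war": 0.9, "warfare": 0.1},
    "témoignage": {"testimony": 0.7, "witness": 0.3},
}

weights = translate_query(["guerre", "témoignage"], table)
# One-best translation would keep only "war" and "testimony";
# the weighted query also retains "warfare" and "witness" with lower weight.
```

The resulting weighted term vector can then be handed to any retrieval model that accepts per-term query weights.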
Citation: Wang, J., & Oard, D. W. (2007). CLEF-2006 CL-SR at Maryland: English and Czech. In Lecture Notes in Computer Science (Vol. 4730 LNCS, pp. 786–793). Springer Verlag. https://doi.org/10.1007/978-3-540-74999-8_99