Océ at CLEF 2003

Abstract

This report describes the work done at Océ Research for the Cross-Language Evaluation Forum (CLEF) 2003. This year we participated in seven monolingual tasks (all languages except Russian). We developed a generic probabilistic model for ranking documents that does not make use of global statistics from the document collection. The relevance of a document to a given query is calculated using the term frequencies of the query terms in the document and the length of the document. We used the BM25 model, our new probabilistic model and (for Dutch only) a statistical model to rank documents. Our main goals were to compare the BM25 model with our probabilistic model, and to evaluate the performance of a statistical model that uses 'knowledge' from relevance assessments from previous years. Furthermore, we give some comments on the standard performance measures used in CLEF. © Springer-Verlag 2004.
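The abstract contrasts BM25, which relies on collection-wide statistics such as document frequency, with a model that scores documents from within-document term frequencies and document length alone. As a rough illustration only, the sketch below shows a standard BM25 scorer next to a hypothetical collection-free scorer; the paper does not state its exact formula here, so the second function and all parameter values (k1, b, k) are assumptions.

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    """Standard BM25: needs corpus-wide statistics (document frequency, average length)."""
    n_docs = len(corpus)
    avg_len = sum(len(d) for d in corpus) / n_docs
    score = 0.0
    for term in query_terms:
        tf = doc.count(term)
        if tf == 0:
            continue
        df = sum(1 for d in corpus if term in d)            # document frequency
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
        norm = tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avg_len))
        score += idf * norm
    return score

def collection_free_score(query_terms, doc, k=1.2):
    """Hypothetical sketch in the spirit of the report's model: uses only the
    term frequencies in this document and its length, no collection statistics."""
    length = len(doc)
    score = 0.0
    for term in query_terms:
        tf = doc.count(term)
        # Saturating term-frequency weight, normalised by document length only.
        score += (tf * (k + 1)) / (tf + k * length)
    return score

if __name__ == "__main__":
    corpus = [
        "the cat sat on the mat".split(),
        "dogs and cats living together".split(),
        "the quick brown fox".split(),
    ]
    query = ["cat", "mat"]
    for d in corpus:
        print(bm25_score(query, d, corpus), collection_free_score(query, d))
```

Note that only the first scorer needs the whole corpus as an argument; the second can rank a document in isolation, which is the property the abstract highlights.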

Citation (APA)

Brand, R., Brünner, M., Driessen, S., Iljin, P., & Klok, J. (2004). Océ at CLEF 2003. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3237, 301–309. https://doi.org/10.1007/978-3-540-30222-3_28
