Consensus-based evaluation framework for cooperative information retrieval systems


Abstract

Multi-agent systems have been applied to the challenges of distributed information retrieval. In this paper, we propose a consensus-method-based framework to evaluate the performance of cooperative information retrieval tasks carried out by agents. Two well-known measures, precision and recall, are extended to handle consensual closeness (i.e., local and global consensus) between the retrieved results. A motivating example shows that the proposed criteria can address the rigidity of classical precision and recall. More importantly, the retrieved results can be ranked with respect to their consensual scores. © Springer-Verlag Berlin Heidelberg 2007.
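To make the "rigidity" point concrete, the sketch below contrasts classical set-based precision and recall with a hypothetical similarity-weighted relaxation in which near-matching results earn partial credit. The `soft_precision_recall` function and its `sim` parameter are illustrative assumptions only; the actual local and global consensus measures are defined in the paper itself and are not reproduced here.

```python
# Classical set-based precision/recall, plus a HYPOTHETICAL "soft" variant
# that credits near-matches via a user-supplied similarity function.
# The actual consensual measures of Jung & Jo (2007) are not reproduced here.

def precision_recall(retrieved, relevant):
    """Classical precision and recall over exact set membership."""
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

def soft_precision_recall(retrieved, relevant, sim):
    """Hypothetical relaxation: each retrieved item earns its best
    similarity to any relevant item (and vice versa), so results that
    are merely close to a relevant one still contribute partial credit."""
    if not retrieved or not relevant:
        return 0.0, 0.0
    p = sum(max(sim(r, d) for d in relevant) for r in retrieved) / len(retrieved)
    q = sum(max(sim(r, d) for r in retrieved) for d in relevant) / len(relevant)
    return p, q
```

With an exact-match similarity (`1.0` if equal, else `0.0`) the soft variant reduces to the classical scores; a graded similarity lets near-misses raise both numbers, which is the kind of flexibility the abstract attributes to the consensual criteria.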

Citation (APA)

Jung, J. J., & Jo, G. S. (2007). Consensus-based evaluation framework for cooperative information retrieval systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4496 LNAI, pp. 169–178). Springer Verlag. https://doi.org/10.1007/978-3-540-72830-6_18
