Multi-agent systems have been applied to the challenges of distributed information retrieval. In this paper, we propose a consensus-based framework to evaluate the performance of cooperative information retrieval tasks carried out by the agents. Two well-known measurements, precision and recall, are extended to account for consensual closeness (i.e., local and global consensus) between the retrieved results. We show in a motivating example that the proposed criteria can overcome the rigidity of classical precision and recall. More importantly, the retrieved results can be ranked with respect to their consensual scores. © Springer-Verlag Berlin Heidelberg 2007.
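The paper's exact formulation is not reproduced in this abstract, but the idea of softening precision and recall with a closeness measure can be sketched as follows. This is a minimal illustration under assumed definitions: documents are modeled as term sets, closeness is taken to be Jaccard similarity, and each retrieved item earns partial credit for its best match among the relevant items instead of the all-or-nothing score of the classical measures. All function names here are illustrative, not the paper's notation.

```python
# Illustrative sketch only: assumes documents are frozensets of terms and
# uses Jaccard similarity as a stand-in for the paper's consensual closeness.

def jaccard(d, r):
    """Similarity between two documents modeled as term sets."""
    if not d and not r:
        return 1.0
    return len(d & r) / len(d | r)

def precision(retrieved, relevant):
    """Classical precision: fraction of retrieved items that are relevant."""
    if not retrieved:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(retrieved)

def recall(retrieved, relevant):
    """Classical recall: fraction of relevant items that were retrieved."""
    if not relevant:
        return 0.0
    return len(set(retrieved) & set(relevant)) / len(relevant)

def consensual_precision(retrieved, relevant, sim=jaccard):
    """Soft precision: each retrieved item contributes its best
    similarity to any relevant item (1.0 for an exact match),
    so near-misses are rewarded instead of scored as zero."""
    if not retrieved:
        return 0.0
    return sum(max(sim(d, r) for r in relevant) for d in retrieved) / len(retrieved)

def consensual_recall(retrieved, relevant, sim=jaccard):
    """Soft recall: each relevant item is credited with the best
    similarity achieved by any retrieved item."""
    if not relevant:
        return 0.0
    return sum(max(sim(d, r) for d in retrieved) for r in relevant) / len(relevant)
```

For example, if an agent retrieves `{agent, consensus}` and `{retrieval, agent}` while only `{agent, consensus}` is relevant, classical precision is 0.5, but the consensual variant credits the near-miss with its Jaccard similarity of 1/3, yielding 2/3. The per-item similarity scores also give a natural key for ranking the retrieved results.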
CITATION STYLE
Jung, J. J., & Jo, G. S. (2007). Consensus-based evaluation framework for cooperative information retrieval systems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4496 LNAI, pp. 169–178). Springer Verlag. https://doi.org/10.1007/978-3-540-72830-6_18