Overview of the ImageCLEFmed 2006 medical retrieval and medical annotation tasks

Abstract

This paper describes the medical image retrieval and annotation tasks of ImageCLEF 2006. Both tasks are described with respect to goals, databases, topics, results, and techniques. The ImageCLEFmed retrieval task had 12 participating groups submitting 100 runs. Most runs were automatic, with only a few manual or interactive. Purely textual runs outnumbered purely visual runs, but most runs were mixed, using both visual and textual information. None of the manual or interactive techniques performed significantly better than the automatic runs. The best-performing systems combined visual and textual techniques, although combinations of visual and textual features often did not improve performance. Purely visual systems performed well only on visual topics. The medical automatic annotation task used a larger database of 10,000 training images from 116 classes, up from 9,000 images from 57 classes in 2005. Twelve groups submitted 28 runs. Despite the larger number of classes, results were almost as good as in 2005, which demonstrates a clear improvement in performance. The best system of 2005 would only have ranked mid-field in 2006.
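
The "mixed" runs mentioned above typically combine the ranked output of a text retrieval engine with a visual similarity search over the same collection. As an illustration only (the function names, normalization scheme, and fusion weight below are assumptions, not the method of any 2006 participant), a weighted late fusion of two per-document score lists might look like this:

```python
# Hypothetical sketch of late fusion of a textual and a visual retrieval run.
# All names and the default weight are illustrative assumptions.

def min_max_normalize(scores):
    """Rescale a {doc_id: score} map to [0, 1] so the two modalities are comparable."""
    lo, hi = min(scores.values()), max(scores.values())
    if hi == lo:
        return {d: 0.0 for d in scores}
    return {d: (s - lo) / (hi - lo) for d, s in scores.items()}

def late_fusion(textual_scores, visual_scores, alpha=0.7):
    """Linearly combine per-document scores from a text run and an image run.

    alpha weights the textual run; since purely textual runs dominated purely
    visual ones in 2006, a text-heavy weight is a plausible (assumed) default.
    """
    t = min_max_normalize(textual_scores)
    v = min_max_normalize(visual_scores)
    docs = set(t) | set(v)
    fused = {d: alpha * t.get(d, 0.0) + (1 - alpha) * v.get(d, 0.0) for d in docs}
    return sorted(fused.items(), key=lambda kv: kv[1], reverse=True)

# Tiny example: two runs over the same collection
text_run = {"img_001": 12.3, "img_002": 8.1, "img_007": 5.4}
visual_run = {"img_002": 0.91, "img_007": 0.88, "img_010": 0.65}
print(late_fusion(text_run, visual_run)[:3])
```

Incompatible score scales between the two modalities are one plausible reason why, as the abstract notes, combining visual and textual features often failed to improve over text alone; the normalization step above is where such combinations most easily go wrong.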

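The automatic annotation task, in contrast, is a straightforward image classification problem: predict one of the 116 classes for each test image, with systems compared by the fraction of misclassified images. A minimal sketch, assuming a toy pixel-based descriptor and a 1-nearest-neighbour classifier (both illustrative baselines, not any group's submitted system):

```python
# Minimal sketch of the annotation task as nearest-neighbour classification.
# The descriptor and classifier are assumed toy baselines for illustration.
import numpy as np

def extract_features(image):
    # Assumed toy descriptor: flatten a down-scaled grayscale image
    # (e.g. 32x32) into a vector; real systems used far richer features.
    return np.asarray(image, dtype=np.float32).reshape(-1)

def classify_1nn(train_feats, train_labels, query_feat):
    # Assign the class label of the closest training image (Euclidean distance).
    dists = np.linalg.norm(train_feats - query_feat, axis=1)
    return train_labels[int(np.argmin(dists))]

def error_rate(predictions, truth):
    # Fraction of misclassified test images.
    return float(np.mean(np.asarray(predictions) != np.asarray(truth)))

# Tiny synthetic example: 4 training "images", 2 classes, 1 query
rng = np.random.default_rng(0)
train_feats = rng.random((4, 1024), dtype=np.float32)
train_labels = ["01", "01", "02", "02"]
query = train_feats[2] + 0.01  # should land in class "02"
print(classify_1nn(train_feats, train_labels, query))
```
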
Citation (APA)

Müller, H., Deselaers, T., Deserno, T., Clough, P., Kim, E., & Hersh, W. (2007). Overview of the ImageCLEFmed 2006 medical retrieval and medical annotation tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4730 LNCS, pp. 595–608). Springer Verlag. https://doi.org/10.1007/978-3-540-74999-8_72
