Baseline results for the ImageCLEF 2006 medical automatic annotation task


Abstract

The ImageCLEF 2006 medical automatic annotation task comprises 11,000 images from 116 categories, compared to 10,000 images from 57 categories in the similar 2005 task. As a baseline for comparison, a run using the same classifiers with identical parameterization as in 2005 was submitted. In addition, the classifier parameterization was optimized on a 9,000/1,000 split of the 2006 training data. In particular, texture-based classifiers are combined in parallel with classifiers that use spatial intensity information to model common variabilities among medical images. However, all individual classifiers are based on global features, i.e. one feature vector describes the entire image. The 2005 parameterization yields an error rate of 21.7%, ranking 13th among the 28 submissions. The optimized classifier yields an error rate of 21.4% (rank 12), which is not a significant improvement. © Springer-Verlag Berlin Heidelberg 2007.
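The abstract describes a parallel combination of classifiers, each operating on a single global feature vector per image. A minimal sketch of such a scheme, assuming a weighted sum of per-feature Euclidean distances followed by a nearest-neighbor decision (the feature names, weights, and distance measure here are illustrative assumptions, not the authors' exact method):

```python
import numpy as np

def classify_parallel(query_features, reference_db, weights):
    """Parallel combination of global-feature classifiers (illustrative sketch).

    query_features: dict mapping feature name -> global feature vector
    reference_db:   list of (label, dict of feature name -> feature vector)
    weights:        dict mapping feature name -> combination weight

    Each feature type acts as one classifier; their distances are fused
    by a weighted sum, and the label of the nearest reference image wins.
    """
    best_label, best_dist = None, float("inf")
    for label, feats in reference_db:
        # Weighted sum of per-feature Euclidean distances.
        d = sum(w * np.linalg.norm(query_features[name] - feats[name])
                for name, w in weights.items())
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

Optimizing the parameterization, as done on the 9,000/1,000 split, would amount to tuning the combination weights (and each classifier's internal parameters) on the held-out 1,000 images.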

Citation (APA)

Güld, M. O., Thies, C., Fischer, B., & Deserno, T. M. (2007). Baseline results for the ImageCLEF 2006 medical automatic annotation task. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4730 LNCS, pp. 686–689). Springer Verlag. https://doi.org/10.1007/978-3-540-74999-8_84
