Automatic image annotation aims to label images with keywords. In this paper we investigate three annotation benchmark tasks used in the literature to evaluate the performance of annotation systems. We empirically compare the first two tasks, the 5000 Corel images task and the Corel categories task, by applying a family of annotation system configurations derived from our PicSOM image content analysis framework. We establish an empirical correspondence between the performance levels in the two tasks by studying the performance of our system configurations, along with figures reported in the literature. We also consider the ImageCLEF 2006 Object Annotation Task, which has previously been found difficult. By experimenting with the data, we gain insight into the reasons that make the ImageCLEF task difficult. In the course of our experiments, we demonstrate that in these three tasks the PicSOM system, which is based on the fusion of numerous global image features, outperforms the other annotation methods considered. © Springer-Verlag Berlin Heidelberg 2007.
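The abstract mentions fusion of numerous global image features as the basis of the PicSOM system. As a rough illustration of what such fusion can look like, the following minimal sketch performs score-level (late) fusion: each global feature produces per-keyword relevance scores for a test image, the scores are averaged, and keywords above a threshold become the annotation. The function, the feature names, and the threshold are hypothetical; the actual PicSOM system uses Self-Organizing Maps and is not reproduced here.

```python
import numpy as np

def fuse_and_annotate(feature_scores, keywords, threshold=0.5):
    """Average per-feature keyword scores and keep keywords above threshold.

    feature_scores: list of 1-D arrays, one per global image feature,
                    each holding a relevance score for every keyword.
    Illustrative late fusion only; not the SOM-based PicSOM method.
    """
    fused = np.mean(np.vstack(feature_scores), axis=0)
    return [kw for kw, score in zip(keywords, fused) if score >= threshold]

# Hypothetical example: three global features scoring four candidate keywords
keywords = ["sky", "water", "tree", "car"]
scores_color   = np.array([0.9, 0.7, 0.2, 0.1])
scores_texture = np.array([0.8, 0.6, 0.4, 0.2])
scores_shape   = np.array([0.7, 0.5, 0.3, 0.6])

print(fuse_and_annotate([scores_color, scores_texture, scores_shape], keywords))
# -> ['sky', 'water']
```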
CITATION STYLE
Viitaniemi, V., & Laaksonen, J. (2007). Empirical investigations on benchmark tasks for automatic image annotation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4781 LNCS, pp. 93–104). Springer Verlag. https://doi.org/10.1007/978-3-540-76414-4_10