Photographic Image Retrieval

  • Lestari Paramita M
  • Grubinger M

Abstract

CLEF was the first benchmarking campaign to organize an evaluation event for image retrieval: the ImageCLEF photographic ad hoc retrieval task in 2003. Since then, this task has become one of the most popular tasks of ImageCLEF, providing both the resources and a framework necessary to carry out comparative laboratory-style evaluation of multi-lingual visual information retrieval from photographic collections. Over the task's seven-year run, participants faced several challenges, including: retrieval from a collection of historic photographs; retrieval from a more generic collection with multi-lingual annotations; and retrieval from a large news archive, promoting result diversity. This chapter summarizes each of these tasks, describes the individual test collections and evaluation scenarios, analyzes the retrieval results, and discusses potential findings for a number of research questions.

8.1 Introduction

At the turn of the millennium, several calls (Goodrum, 2000; Leung and Ip, 2000) were made to develop a standardized test collection for Visual Information Retrieval (VIR). In 2003, ImageCLEF was the first evaluation event to answer these calls by providing a benchmark suite comprising an image collection, query topics, relevance assessments and performance measures for cross-language image retrieval, which encompasses two main domains of VIR: (1) image retrieval, and (2) Cross-Language Information Retrieval (CLIR).

Cite

Lestari Paramita, M., & Grubinger, M. (2010). Photographic Image Retrieval (pp. 141–162). https://doi.org/10.1007/978-3-642-15181-1_8
