SAPIR: Scalable and distributed image searching

ISSN: 1613-0073

Abstract

In this paper we present a scalable and distributed system for image retrieval based on visual features and annotated text. This system is the core of the SAPIR project. Its architecture uses Peer-to-Peer networks to achieve scalability and efficiency, allowing the management of huge amounts of data. For the presented demo we use 10 million images and accompanying text (tags, comments, etc.) taken from Flickr. Through the web interface it is possible to efficiently perform content-based similarity search, as well as traditional text search on the metadata annotated by the Flickr community. Fast complex query processing is also possible by combining visual features and text. We show that the combination of content-based and text search on a large scale can dramatically improve the capability of a multimedia search system to answer users' needs, and that the Peer-to-Peer based architecture can cope with the scalability issues (response time obtained for this demo over 10 million images is always below 500 milliseconds).
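The complex queries mentioned above fuse a visual-similarity score with a text-relevance score into a single ranking. The abstract does not specify SAPIR's fusion formula, so the sketch below is a generic, hypothetical linear combination; the function names, the distance-to-similarity mapping, and the weight `alpha` are all illustrative assumptions, not the project's actual API.

```python
# Hypothetical sketch of combined visual + text ranking.
# All names and the weighting scheme are illustrative; SAPIR's
# actual fusion method is not described in this abstract.

def combined_score(visual_dist, text_score, alpha=0.5):
    """Fuse a visual distance (lower is better) with a text relevance
    score (higher is better) into one score where higher is better."""
    visual_sim = 1.0 / (1.0 + visual_dist)  # map distance into (0, 1]
    return alpha * visual_sim + (1.0 - alpha) * text_score

def rank(candidates, alpha=0.5):
    """candidates: list of (image_id, visual_dist, text_score) tuples.
    Returns the list sorted best-first by the combined score."""
    return sorted(
        candidates,
        key=lambda c: combined_score(c[1], c[2], alpha),
        reverse=True,
    )

# Example: img2 is visually closest, but img1 wins on the combined score
# because of its much stronger text relevance.
results = rank([("img1", 0.2, 0.9), ("img2", 0.05, 0.1), ("img3", 1.5, 0.8)])
```

In a distributed setting, each peer would compute such scores locally over its partition of the 10 million images and return its top results for merging.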

Citation (APA):

Falchi, F., Kacimi, M., Mass, Y., Rabitti, F., & Zezula, P. (2007). SAPIR: Scalable and distributed image searching. In CEUR Workshop Proceedings (Vol. 300, pp. 11–12).
