In this paper we present a scalable, distributed system for image retrieval based on visual features and annotated text. The system is the core of the SAPIR project. Its architecture uses Peer-to-Peer networks to achieve scalability and efficiency, allowing the management of huge amounts of data. For the presented demo we use 10 million images and their accompanying text (tags, comments, etc.) taken from Flickr. Through the web interface it is possible to efficiently perform content-based similarity search, as well as traditional text search on the metadata annotated by the Flickr community. Fast complex query processing that combines visual features and text is also possible. We show that combining content-based and text search at large scale can dramatically improve the ability of a multimedia search system to answer users' needs, and that the Peer-to-Peer architecture can cope with the scalability issues: the response time obtained in this demo over 10 million images is always below 500 milliseconds.
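To make the idea of a combined visual-and-text query concrete, the following is a minimal sketch of one common way such combination is done: normalizing a visual feature distance and a text relevance score into comparable similarity values and merging them with a weighted sum. All function names, the weight `alpha`, and the scoring formulas here are illustrative assumptions, not the actual SAPIR implementation.

```python
import math


def euclidean_distance(a, b):
    """Distance between two visual feature vectors (smaller = more similar)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))


def visual_score(query_vec, doc_vec):
    """Map a feature distance into a similarity score in (0, 1]."""
    return 1.0 / (1.0 + euclidean_distance(query_vec, doc_vec))


def text_score(query_terms, doc_terms):
    """Toy text relevance: fraction of query terms found among the image's tags."""
    if not query_terms:
        return 0.0
    hits = sum(1 for t in query_terms if t in doc_terms)
    return hits / len(query_terms)


def combined_score(query_vec, doc_vec, query_terms, doc_terms, alpha=0.5):
    """Weighted linear combination of visual and textual relevance.

    alpha=1.0 gives pure content-based search, alpha=0.0 pure text search.
    """
    return (alpha * visual_score(query_vec, doc_vec)
            + (1.0 - alpha) * text_score(query_terms, doc_terms))


# Example: rank two images for a query with both a feature vector and terms.
query_vec = [0.1, 0.4, 0.8]
query_terms = {"sunset", "beach"}
images = {
    "img1": ([0.1, 0.4, 0.9], {"sunset", "beach", "sea"}),
    "img2": ([0.9, 0.1, 0.2], {"city", "night"}),
}
ranking = sorted(
    images,
    key=lambda k: combined_score(query_vec, images[k][0], query_terms, images[k][1]),
    reverse=True,
)
print(ranking)  # → ['img1', 'img2']
```

In a distributed setting like the one described above, each peer would typically compute such partial scores over its local data and the per-peer result lists would then be merged; the single-machine sketch only illustrates the score combination itself.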
Falchi, F., Kacimi, M., Mass, Y., Rabitti, F., & Zezula, P. (2007). SAPIR: Scalable and distributed image searching. In CEUR Workshop Proceedings (Vol. 300, pp. 11–12).