Terabyte-scale image similarity search: Experience and best practice


Abstract

While the past decade has witnessed an unprecedented growth in the volume of data generated and collected worldwide, existing data management approaches fall short of addressing the challenges of Big Data. One of the most promising tools for Big Data processing is the MapReduce paradigm. Despite its limitations, the MapReduce programming model has laid the foundations for answering some of these challenges. In this paper, we focus on Hadoop, the open-source implementation of MapReduce. Using a Hadoop-based application, image similarity search, as a case study, we report our experiences with the Hadoop framework when processing terabytes of data. The scale of the data and the application workload allowed us to test the limits of Hadoop and the efficiency of the tools it provides. We present a broad set of experiments and the practical lessons we have drawn from working with the Hadoop environment. Our findings can serve as best practices and recommendations for Big Data researchers and practitioners. © 2013 IEEE.
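To make the MapReduce model referenced in the abstract concrete, the following is a minimal, hypothetical Hadoop job in the spirit of the paper's workload: it builds an inverted index from lines of the form "<imageId> <clusterId>" (an image identifier paired with the cluster its descriptor quantizes to), grouping all image IDs per cluster. The class names, input format, and indexing scheme are illustrative assumptions, not the authors' actual implementation.

import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class InvertedIndex {

  // Mapper: for each input line "<imageId> <clusterId>", emit (clusterId, imageId).
  public static class IndexMapper extends Mapper<Object, Text, Text, Text> {
    private final Text cluster = new Text();
    private final Text image = new Text();

    @Override
    protected void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      String[] parts = value.toString().trim().split("\\s+");
      if (parts.length == 2) {
        image.set(parts[0]);
        cluster.set(parts[1]);
        context.write(cluster, image);
      }
    }
  }

  // Reducer: concatenate all image IDs that share a cluster into one posting list.
  public static class IndexReducer extends Reducer<Text, Text, Text, Text> {
    private final Text postings = new Text();

    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
        throws IOException, InterruptedException {
      StringBuilder sb = new StringBuilder();
      for (Text v : values) {
        if (sb.length() > 0) sb.append(' ');
        sb.append(v.toString());
      }
      postings.set(sb.toString());
      context.write(key, postings);
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "image inverted index");
    job.setJarByClass(InvertedIndex.class);
    job.setMapperClass(IndexMapper.class);
    job.setReducerClass(IndexReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(Text.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}

The framework shuffles and sorts all mapper output by key before the reduce phase, so each reducer sees every image ID for a given cluster at once; at terabyte scale, this shuffle is exactly the kind of stage whose behavior the paper's experiments probe.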

Citation (APA)

Moise, D., Shestakov, D., Gudmundsson, G., & Amsaleg, L. (2013). Terabyte-scale image similarity search: Experience and best practice. In Proceedings - 2013 IEEE International Conference on Big Data, Big Data 2013 (pp. 674–682). IEEE Computer Society. https://doi.org/10.1109/BigData.2013.6691637
