While the past decade has witnessed an unprecedented growth of data generated and collected all over the world, existing data management approaches lack the ability to address the challenges of Big Data. One of the most promising tools for Big Data processing is the MapReduce paradigm. Although it has its limitations, the MapReduce programming model has laid the foundations for answering some of the Big Data challenges. In this paper, we focus on Hadoop, the open-source implementation of the MapReduce paradigm. Using a Hadoop-based application, image similarity search, as a case study, we present our experiences with the Hadoop framework when processing terabytes of data. The scale of the data and the application workload allowed us to test the limits of Hadoop and the efficiency of the tools it provides. We present a wide collection of experiments and the practical lessons we have drawn from our experience with the Hadoop environment. Our findings can be shared as best practices and recommendations for Big Data researchers and practitioners. © 2013 IEEE.
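For readers unfamiliar with the paradigm the abstract refers to, a MapReduce job expresses a computation as a map phase that emits key-value pairs, a shuffle that groups values by key, and a reduce phase that aggregates each group. The sketch below is a minimal, single-machine illustration of that model (not the authors' Hadoop implementation, and not the Hadoop API); the `map_reduce` helper and the word-count mapper/reducer are illustrative names chosen here.

```python
from collections import defaultdict

def map_reduce(records, mapper, reducer):
    """Minimal in-memory sketch of the MapReduce model (illustrative only)."""
    # Map phase: each input record yields zero or more (key, value) pairs.
    # Grouping values by key here stands in for Hadoop's shuffle phase.
    intermediate = defaultdict(list)
    for record in records:
        for key, value in mapper(record):
            intermediate[key].append(value)
    # Reduce phase: aggregate all values that share a key.
    return {key: reducer(key, values) for key, values in intermediate.items()}

# Word count, the canonical MapReduce example.
def mapper(line):
    for word in line.split():
        yield word, 1

def reducer(word, counts):
    return sum(counts)

lines = ["big data big ideas", "big data tools"]
print(map_reduce(lines, mapper, reducer))
# → {'big': 3, 'data': 2, 'ideas': 1, 'tools': 1}
```

In Hadoop itself, the map and reduce functions run as distributed tasks over HDFS blocks, and the shuffle moves intermediate pairs across the network; it is this distributed machinery whose limits the paper's terabyte-scale experiments probe.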
CITATION STYLE
Moise, D., Shestakov, D., Gudmundsson, G., & Amsaleg, L. (2013). Terabyte-scale image similarity search: Experience and best practice. In Proceedings - 2013 IEEE International Conference on Big Data, Big Data 2013 (pp. 674–682). IEEE Computer Society. https://doi.org/10.1109/BigData.2013.6691637