Content Based Image Retrieval Using Local Feature Descriptors on Hadoop for Indoor Navigation

Abstract

This paper demonstrates the implementation of Content Based Image Retrieval (CBIR) algorithms on a large image set. The implementation matches query images against a previously stored geotagged image database for the purpose of vision-based indoor navigation. Feature extraction and matching are demonstrated using two well-known key-point detection CBIR algorithms: Scale Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF). The key-point matching results using Brute Force and FLANN (Fast Library for Approximate Nearest Neighbors) matchers at various KNN levels are compared for both SIFT and SURF. The algorithms are implemented on the Hadoop MapReduce framework integrated with the Hadoop Image Processing Interface (HIPI) and the Open Computer Vision Library (OpenCV). The experiments show that SIFT with KNN levels k = 4, 5, and 6 gives the highest matching accuracy among the compared methods.
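The matching step the abstract describes (k-nearest-neighbour search over local feature descriptors, as done by OpenCV's Brute Force and FLANN matchers) can be sketched in pure NumPy. This is a minimal illustrative sketch, not the paper's implementation: the toy 4-D descriptors and the 0.75 ratio threshold (Lowe's ratio test) are assumptions, standing in for 128-D SIFT or 64-D SURF vectors and whatever acceptance criterion the authors used.

```python
import numpy as np

def knn_match(query_desc, train_desc, k=2, ratio=0.75):
    """Brute-force k-nearest-neighbour descriptor matching with a
    ratio test. Each row of query_desc/train_desc is one local
    feature descriptor (e.g. 128-D SIFT or 64-D SURF)."""
    # Pairwise Euclidean distances between every query/train pair.
    dists = np.linalg.norm(
        query_desc[:, None, :] - train_desc[None, :, :], axis=2)
    matches = []
    for qi, row in enumerate(dists):
        nn = np.argsort(row)[:k]  # indices of k nearest train descriptors
        # Ratio test: accept only if the best match is clearly
        # better than the second best.
        if len(nn) >= 2 and row[nn[0]] < ratio * row[nn[1]]:
            matches.append((qi, int(nn[0])))
    return matches

# Toy 4-D "descriptors" standing in for SIFT/SURF vectors.
q = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
t = np.array([[1.0, 0.1, 0.0, 0.0],   # close to q[0]
              [0.0, 0.0, 1.0, 0.0],
              [5.0, 5.0, 5.0, 5.0]])
print(knn_match(q, t))  # only q[0] passes the ratio test -> [(0, 0)]
```

In a MapReduce setting such as the one described, each mapper would run this matching for one query image against a shard of the stored descriptor database, with the ratio-test survivors aggregated in the reduce phase.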

Citation (APA)

Gaber, H., Marey, M., Amin, S., Shedeed, H., & Tolba, M. F. (2019). Content Based Image Retrieval Using Local Feature Descriptors on Hadoop for Indoor Navigation. In Advances in Intelligent Systems and Computing (Vol. 845, pp. 614–623). Springer Verlag. https://doi.org/10.1007/978-3-319-99010-1_56
