GPU-based parallel optimization for real-time scale-invariant feature transform in binocular visual registration


Abstract

Scale-invariant feature transform (SIFT) is one of the most widely used interest point features. It has been successfully applied in computer vision tasks such as object detection, object tracking, robotic mapping, and large-scale image retrieval. Although the SIFT descriptor is highly robust to scale and rotation variations, the high computational complexity of the SIFT algorithm inhibits its use in applications demanding real-time response and in algorithms dealing with very large-scale databases. To make the image matching process effective in near real time, the SIFT method is accelerated using the Compute Unified Device Architecture (CUDA) application programming interface of a graphics processing unit (GPU). Experimental results show that the proposed GPU-based SIFT framework is suitable for real-time image applications: it improves the image matching process in both speed and accuracy compared with the conventional SIFT method.
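The paper's source code is not reproduced here; as a minimal sketch of the kind of per-pixel parallelism CUDA enables in the SIFT pipeline, the kernel below computes one difference-of-Gaussians (DoG) layer, the scale-space step that feeds SIFT keypoint detection, with one GPU thread per pixel. All identifiers, the frame size, and the launch configuration are illustrative assumptions, not the authors' implementation.

    // Minimal illustrative sketch (not the authors' code): compute one
    // difference-of-Gaussians (DoG) layer of the SIFT scale space on the GPU,
    // with one thread per pixel. Frame size and launch shape are assumptions.
    #include <cuda_runtime.h>
    #include <cstdio>

    // DoG(x, y) = fineBlur(x, y) - coarseBlur(x, y)
    __global__ void dogKernel(const float* fineBlur, const float* coarseBlur,
                              float* dog, int width, int height)
    {
        int x = blockIdx.x * blockDim.x + threadIdx.x;
        int y = blockIdx.y * blockDim.y + threadIdx.y;
        if (x < width && y < height) {
            int i = y * width + x;
            dog[i] = fineBlur[i] - coarseBlur[i];
        }
    }

    int main()
    {
        const int width = 640, height = 480;            // hypothetical frame size
        const size_t bytes = size_t(width) * height * sizeof(float);

        float *dFine, *dCoarse, *dDog;
        cudaMalloc(&dFine, bytes);
        cudaMalloc(&dCoarse, bytes);
        cudaMalloc(&dDog, bytes);
        // In a full pipeline these buffers would hold Gaussian-blurred images
        // produced by earlier kernels; they are zero-filled here for brevity.
        cudaMemset(dFine, 0, bytes);
        cudaMemset(dCoarse, 0, bytes);

        dim3 block(16, 16);                             // 256 threads per block
        dim3 grid((width + block.x - 1) / block.x,
                  (height + block.y - 1) / block.y);
        dogKernel<<<grid, block>>>(dFine, dCoarse, dDog, width, height);
        cudaDeviceSynchronize();

        printf("DoG layer computed on GPU (%d x %d)\n", width, height);
        cudaFree(dFine); cudaFree(dCoarse); cudaFree(dDog);
        return 0;
    }

Because each output pixel is independent, this step maps directly onto thousands of concurrent GPU threads, which is the general source of the speedup the abstract reports.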

Citation (APA)

Li, J., & Pan, Y. (2019). GPU-based parallel optimization for real-time scale-invariant feature transform in binocular visual registration. Personal and Ubiquitous Computing, 23(3–4), 465–474. https://doi.org/10.1007/s00779-019-01222-3
