Gradient-based descriptors have proven successful in a wide variety of applications. Their standard implementations usually assume that the input images have been acquired with classic perspective cameras. In practice, many real-world systems use wide-angle cameras, which provide a wider Field of View (FOV) but introduce radial distortion that breaks the rectilinear assumption. The most straightforward way to overcome this problem is to compensate for the distortion by unwarping the original image prior to computing the descriptor. The rectification process, however, is computationally expensive and introduces artefacts that can deceive the subsequent analysis (e.g., feature matching). We propose Distortion Adaptive Descriptors (DAD), a new paradigm for correctly computing local descriptors directly in the distorted domain. We combine DAD with existing techniques for correctly estimating the gradient of distorted images and hence derive a set of SIFT- and HOG-based descriptors. Experiments show that the DAD paradigm improves the matching ability of SIFT and HOG descriptors when they are computed directly in the distorted domain.
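For context on the rectification baseline mentioned above, the following is a minimal Python/OpenCV sketch of the conventional undistort-then-describe pipeline that the paper argues against. It is not the DAD method itself; the camera matrix, distortion coefficients, and file name are hypothetical placeholders that would normally come from a calibration step.

```python
import cv2
import numpy as np

# Hypothetical wide-angle camera intrinsics and radial distortion coefficients;
# in practice these are obtained via calibration (e.g. cv2.calibrateCamera).
K = np.array([[420.0,   0.0, 320.0],
              [  0.0, 420.0, 240.0],
              [  0.0,   0.0,   1.0]])
dist_coeffs = np.array([-0.35, 0.12, 0.0, 0.0, 0.0])  # strong radial distortion

# Placeholder input frame acquired with the wide-angle camera.
img = cv2.imread("wide_angle_frame.png", cv2.IMREAD_GRAYSCALE)

# Baseline pipeline: rectify (unwarp) the image, then run a standard descriptor.
rectified = cv2.undistort(img, K, dist_coeffs)
sift = cv2.SIFT_create()
kp_rect, desc_rect = sift.detectAndCompute(rectified, None)

# For comparison: an unmodified SIFT applied directly to the distorted image,
# with no distortion-aware handling of the gradients.
kp_dist, desc_dist = sift.detectAndCompute(img, None)
```

The rectification step here is what introduces the extra computational cost and interpolation artefacts that motivate computing descriptors directly in the distorted domain.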
Citation
Furnari, A., Farinella, G. M., Bruna, A. R., & Battiato, S. (2015). Distortion adaptive descriptors: Extending gradient-based descriptors to wide angle images. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9280, pp. 205–215). Springer Verlag. https://doi.org/10.1007/978-3-319-23234-8_20