Matching between different image domains

Abstract

Most image registration/matching methods apply to images acquired by identical or similar sensors from various positions. Simpler techniques assume some object-space relationship between sensor reference points, such as near-parallel image planes, a certain overlap, and comparable radiometric characteristics. More robust methods allow for larger variations in image orientation and texture, such as the Scale-Invariant Feature Transform (SIFT), a highly robust technique widely used in computer vision. The use of SIFT in mapping, however, has been quite limited so far, mainly because most of the imagery is acquired from airborne/spaceborne platforms and, consequently, the image orientation is better known, presenting a less general case for matching. The motivation for this study is to examine the feasibility of a particular case of matching between different image domains. In this investigation, the co-registration of satellite imagery and LiDAR intensity data is addressed. © 2011 Springer-Verlag.
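
As context for the SIFT-based cross-domain matching described above, the following is a minimal sketch (not the authors' pipeline) of matching an optical satellite image against a rasterized LiDAR intensity image with OpenCV, followed by a RANSAC homography for approximate co-registration. The file names and parameter values are placeholder assumptions.

```python
import cv2
import numpy as np

# Load both images as grayscale rasters; the LiDAR intensity grid is assumed
# to have been rasterized to an 8-bit image beforehand (placeholder names).
optical = cv2.imread("satellite.png", cv2.IMREAD_GRAYSCALE)
lidar_intensity = cv2.imread("lidar_intensity.png", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and compute descriptors in each image domain.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(optical, None)
kp2, des2 = sift.detectAndCompute(lidar_intensity, None)

# Match descriptors with a brute-force matcher and Lowe's ratio test
# to discard ambiguous correspondences.
matcher = cv2.BFMatcher(cv2.NORM_L2)
knn_matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in knn_matches if m.distance < 0.75 * n.distance]

# Estimate a homography with RANSAC to reject remaining outliers and obtain
# an approximate transform between the two image domains.
if len(good) >= 4:
    src = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    print(f"{int(inliers.sum())} inlier matches; homography:\n{H}")
else:
    print("Not enough reliable matches for co-registration.")
```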

Citation (APA)

Toth, C., Ju, H., & Grejner-Brzezinska, D. (2011). Matching between different image domains. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6952 LNCS, pp. 37–47). https://doi.org/10.1007/978-3-642-24393-6_4
