Remotely sensed hyperspectral scenes typically combine large area coverage with hundreds of spectral bands. These characteristics imply smooth transitions in the spatial and spectral domains: subtle differences in the scene become evident, which benefits precision applications, but values at neighboring locations and wavelengths are highly correlated. The result is nondiagonal covariance matrices and wide autocorrelation functions, which increase intraclass variation and decrease interclass variation in both the spectral and spatial domains. This lowers interpretation accuracy and makes it reasonable to investigate whether hyperspectral imagery suffers from the Curse of Dimensionality. Moreover, since this Curse can compromise the Euclidean-behavior assumption underlying linear methods, it is also relevant to compare the performance of linear and nonlinear dimensionality reduction. In this work we verify these two aspects empirically through multiple nonparametric statistical comparisons of Gaussian Mixture Model clustering performance in three cases: no feature extraction, linear unsupervised feature extraction, and nonlinear unsupervised feature extraction. Experimental results indicate the presence of the Curse of Dimensionality and the adequacy of the nonlinear approach.
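The experimental design described above can be sketched as follows. This is an illustrative assumption, not the authors' exact pipeline: it compares Gaussian Mixture Model clustering with no feature extraction, a linear reduction (PCA), and a nonlinear one (Isomap), scoring each against reference labels with the Adjusted Rand Index. Synthetic correlated data stands in for a real hyperspectral scene.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.decomposition import PCA
from sklearn.manifold import Isomap
from sklearn.mixture import GaussianMixture
from sklearn.metrics import adjusted_rand_score

# Many redundant features mimic the high band-to-band correlation
# of hyperspectral imagery (hypothetical stand-in data).
X, y = make_classification(n_samples=600, n_features=100,
                           n_informative=10, n_redundant=60,
                           n_classes=3, random_state=0)

def gmm_ari(features, labels, k=3, seed=0):
    """Cluster with a GMM and score agreement with reference labels."""
    pred = GaussianMixture(n_components=k, random_state=seed).fit_predict(features)
    return adjusted_rand_score(labels, pred)

scores = {
    "none":      gmm_ari(X, y),                                           # no feature extraction
    "linear":    gmm_ari(PCA(n_components=10).fit_transform(X), y),       # linear reduction
    "nonlinear": gmm_ari(Isomap(n_components=10).fit_transform(X), y),    # nonlinear reduction
}
for name, s in scores.items():
    print(f"{name}: ARI = {s:.3f}")
```

In the paper itself, such per-case performances would be compared with nonparametric statistical tests across multiple scenes rather than read off a single run.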
Nakao, E. K., & Levada, A. L. M. (2020). Unsupervised Learning and Feature Extraction in Hyperspectral Imagery. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12249 LNCS, pp. 792–806). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58799-4_57