Learning how to extract rotation-invariant and scale-invariant features from texture images

Abstract

Learning how to extract texture features from uncontrolled environments characterized by distorted images remains an open problem. A new texture recognition system is proposed that combines a rotation-invariant and scale-invariant image descriptor, based on steerable pyramid decomposition, with a novel multiclass recognition method based on optimum-path forest. By exploiting the discriminating power of this descriptor and classifier, the system characterizes texture images with small feature vectors without compromising overall classification rates. State-of-the-art recognition results are reported on the Brodatz data set, where high classification rates demonstrate the effectiveness of the proposed system.
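The descriptor described in the abstract is built from statistics of steerable pyramid subbands. As a rough illustration of the general idea only (not the authors' exact method), the sketch below approximates the oriented, multi-scale subbands with a Gabor filter bank from scikit-image and makes an energy-based feature vector rotation- and scale-invariant by circularly shifting it toward a dominant orientation and scale. The function name, filter parameters, and shifting strategy are illustrative assumptions.

```python
# Minimal sketch of a rotation/scale-invariant texture descriptor.
# Assumptions: Gabor filters stand in for steerable pyramid subbands,
# and invariance is obtained by circular shifts of the energy matrix.
import numpy as np
from skimage.filters import gabor

def texture_descriptor(image, n_scales=4, n_orientations=6):
    """Return a compact feature vector of n_scales * n_orientations energies."""
    energies = np.zeros((n_scales, n_orientations))
    for s in range(n_scales):
        frequency = 0.25 / (2 ** s)              # coarser scale -> lower frequency
        for o in range(n_orientations):
            theta = np.pi * o / n_orientations   # evenly spaced orientations
            real, imag = gabor(image, frequency=frequency, theta=theta)
            energies[s, o] = np.mean(np.hypot(real, imag))   # mean subband energy

    # Rotation invariance: shift the orientation axis so the most energetic
    # orientation comes first; scale invariance handled analogously (assumed).
    dominant_o = int(np.argmax(energies.sum(axis=0)))
    energies = np.roll(energies, -dominant_o, axis=1)
    dominant_s = int(np.argmax(energies.sum(axis=1)))
    energies = np.roll(energies, -dominant_s, axis=0)
    return energies.ravel()
```

Called on a grayscale texture image (a 2D NumPy array), this yields a small fixed-length vector that could then be fed to any multiclass classifier, such as the optimum-path forest classifier used in the paper.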

Citation (APA)

Montoya-Zegarra, J. A., Papa, J. P., Leite, N. J., da Silva Torres, R., & Falcão, A. X. (2008). Learning how to extract rotation-invariant and scale-invariant features from texture images. EURASIP Journal on Advances in Signal Processing, 2008. https://doi.org/10.1155/2008/691924
