Feature subset selection in an ICA space

Abstract

Given a number of samples possibly belonging to different classes, we say these samples live in an ICA space if all their class-conditional distributions are separable and can thus be expressed as products of unidimensional distributions. Since this hypothesis rarely holds for real-world problems, we also provide a framework, class-conditional Independent Component Analysis (CC-ICA), in which it can be assumed on stronger grounds. For this representation, we focus on the problem of feature subset selection for classification, observing that divergence arises as a simple and natural criterion for class separability. Since divergence is monotonic in the dimensionality, optimality can be ensured without an exhaustive search over feature subsets. We adapt the Bayes decision scheme to our independence assumptions and framework. A first experiment on Trunk's artificial dataset, where class-conditional independence holds by construction, illustrates the robustness and accuracy of our technique. A second experiment, on the UCI letter database, evaluates the importance of the representation when assuming independence. A third experiment, on the Corel database, illustrates the performance of our criterion on high-dimensional data.
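The abstract's argument rests on two facts that a short sketch can make concrete: under (approximate) independence the class-conditional density factorizes, p(x|C_k) = prod_i p(x_i|C_k), so the divergence between two classes becomes a sum of nonnegative per-component terms, and keeping the components with the largest terms maximizes the total without any subset search. The Python sketch below is only an illustration under simplifying assumptions, not the authors' exact CC-ICA procedure: it fits a single FastICA basis on the pooled two-class data (the paper fits one ICA per class), and it models each component's marginal as Gaussian so the symmetric Kullback-Leibler divergence has a closed form. The names gaussian_sym_kl, select_ica_features, and the parameter n_keep are hypothetical.

# Illustrative sketch only: pooled-data ICA basis and Gaussian marginals
# are simplifications; the paper's CC-ICA fits one ICA per class.
import numpy as np
from sklearn.decomposition import FastICA

def gaussian_sym_kl(m1, v1, m2, v2):
    # Symmetric KL divergence between N(m1, v1) and N(m2, v2),
    # evaluated elementwise over arrays of per-component moments.
    return 0.5 * ((v1 + (m1 - m2) ** 2) / v2
                  + (v2 + (m1 - m2) ** 2) / v1) - 1.0

def select_ica_features(X1, X2, n_keep):
    # Fit one ICA basis on the pooled data (a simplification) and
    # project both classes into it.
    ica = FastICA(random_state=0)
    ica.fit(np.vstack([X1, X2]))
    S1, S2 = ica.transform(X1), ica.transform(X2)
    # Per-component divergence between the two class-conditional
    # marginals; independence makes the total divergence their sum,
    # so the n_keep largest terms give the optimal subset directly.
    j = gaussian_sym_kl(S1.mean(0), S1.var(0), S2.mean(0), S2.var(0))
    return np.argsort(j)[::-1][:n_keep]

# Hypothetical usage on synthetic two-class data:
# rng = np.random.default_rng(0)
# X1 = rng.normal(0.0, 1.0, (500, 20))
# X2 = rng.normal(0.5, 1.5, (500, 20))
# keep = select_ica_features(X1, X2, n_keep=5)

With the kept components, the same independence assumption yields a naive-Bayes-style decision rule in the ICA space: assign a sample to the class maximizing log P(C_k) plus the sum of the per-component log-densities log p(s_i|C_k) over the selected components.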

Citation (APA)

Bressan, M., & Vitrià, J. (2002). Feature subset selection in an ICA space. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2504, pp. 196–206). Springer-Verlag. https://doi.org/10.1007/3-540-36079-4_17
