Sparse deep stacking network for image classification


Abstract

Sparse coding can learn representations that are robust to noise and can model higher-order structure for image classification. However, its inference algorithm is computationally expensive, even when supervised signals are used to learn compact and discriminative dictionaries. Fortunately, the simplified neural network module (SNNM) has been proposed to learn discriminative dictionaries directly, avoiding this expensive inference; the SNNM, however, ignores sparse representations. We therefore propose a sparse SNNM module by adding mixed-norm (l1/l2) regularization. The sparse SNNM modules are further stacked to build a sparse deep stacking network (S-DSN). In the experiments, we evaluate the S-DSN on four databases: Extended YaleB, AR, 15 Scene, and Caltech 101. Experimental results show that our model outperforms related classification methods using only a linear classifier. It is worth noting that we reach 98.8% recognition accuracy on 15 Scene.
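The mixed-norm (l1/l2) regularizer mentioned above can be sketched as follows. This is an illustrative assumption about the grouping, not the paper's exact formulation: treating each row of a hidden-representation matrix as a group, the l1/l2 (l2,1) norm sums the l2 norms of the rows, which encourages entire rows (hidden units) to be driven to zero.

```python
import numpy as np

def mixed_l1_l2_norm(H: np.ndarray) -> float:
    """Return the l2,1 mixed norm: sum over rows i of ||H[i, :]||_2.

    Rows are treated as groups (an illustrative choice); penalizing this
    quantity promotes row-wise (group) sparsity in the representation H.
    """
    return float(np.sum(np.linalg.norm(H, axis=1)))

# Example: a matrix with an all-zero row contributes nothing for that group.
H = np.array([[3.0, 4.0],   # ||(3, 4)||_2 = 5
              [0.0, 0.0],   # ||(0, 0)||_2 = 0
              [1.0, 0.0]])  # ||(1, 0)||_2 = 1
penalty = mixed_l1_l2_norm(H)  # 5 + 0 + 1 = 6.0
```

In training, such a penalty would be added to the module's reconstruction or classification loss, weighted by a regularization coefficient.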

Citation (APA):
Li, J., Chang, H., & Yang, J. (2015). Sparse deep stacking network for image classification. In Proceedings of the National Conference on Artificial Intelligence (Vol. 5, pp. 3804–3810). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9786
