Abstract
Unlike unsupervised approaches such as autoencoders, which learn to reconstruct their inputs, this paper introduces an alternative approach to unsupervised feature learning, called divergent discriminative feature accumulation (DDFA), that instead continually accumulates features that make novel discriminations among the training set. DDFA features are therefore inherently discriminative from the start, even though they are trained without knowledge of the ultimate classification problem. Interestingly, DDFA also continues to add new features indefinitely (so it does not depend on a fixed hidden-layer size), is not based on minimizing error, and is inherently divergent rather than convergent, thereby providing a unique direction of research for unsupervised feature learning. The quality of its learned features is demonstrated on the MNIST dataset, where its performance confirms that DDFA is indeed a viable technique for learning useful features.

Copyright (c) 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
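The core idea of the abstract — accumulate a feature only if the discrimination it makes among training samples is novel relative to everything accumulated so far — can be illustrated with a minimal sketch. This is not the paper's actual algorithm; the candidate generator (random linear projections), the binary activation signature, and the nearest-neighbor novelty threshold below are all simplifying assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def binary_signature(w, X):
    # A feature's "discrimination": which training samples it activates on.
    return (X @ w > 0.0).astype(np.int8)

def novelty(sig, archive_sigs, k=3):
    # Mean Hamming distance to the k nearest signatures accumulated so far.
    if not archive_sigs:
        return float("inf")  # the first feature is always novel
    dists = sorted(int(np.count_nonzero(sig != s)) for s in archive_sigs)
    return float(np.mean(dists[:k]))

def ddfa_sketch(X, n_candidates=200, threshold=5.0):
    # Hypothetical accumulation loop: keep any candidate feature whose
    # discrimination pattern over X is sufficiently far from the archive.
    features, sigs = [], []
    for _ in range(n_candidates):
        w = rng.normal(size=X.shape[1])  # assumed candidate generator
        sig = binary_signature(w, X)
        if novelty(sig, sigs) > threshold:
            features.append(w)
            sigs.append(sig)
    return features
```

Note that the loop never minimizes an error and has no fixed layer size: it simply keeps adding features while novel discriminations remain, which mirrors the divergent, open-ended character the abstract describes.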
Citation
Szerlip, P. A., Morse, G., Pugh, J. K., & Stanley, K. O. (2015). Unsupervised feature learning through divergent discriminative feature accumulation. In Proceedings of the National Conference on Artificial Intelligence (Vol. 4, pp. 2979–2985). AI Access Foundation. https://doi.org/10.1609/aaai.v29i1.9601