In this paper, we study the problem of large-scale Kernel Logistic Regression (KLR). A straightforward approach is to apply stochastic approximation to KLR. We refer to this approach as the non-conservative online learning algorithm because it updates the kernel classifier after every received training example, leading to a dense classifier. To improve the sparsity of the KLR classifier, we propose two conservative online learning algorithms that update the classifier in a stochastic manner and generate sparse solutions. With appropriately designed updating strategies, our analysis shows that the two conservative algorithms enjoy a theoretical guarantee similar to that of the non-conservative algorithm. Empirical studies on several benchmark data sets demonstrate that, compared to batch-mode algorithms for KLR, the proposed conservative online learning algorithms are able to produce sparse KLR classifiers and achieve similar classification accuracy with significantly shorter training time. Furthermore, both the sparsity and classification accuracy of our methods are comparable to those of the online kernel SVM.
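To make the distinction between the two update styles concrete, the following is a minimal sketch (not the authors' code) of online kernel logistic regression, assuming an RBF kernel, a fixed learning rate `eta`, and a Bernoulli-sampling rule for the conservative update; the paper's actual updating strategies and analysis may differ.

```python
# Hypothetical sketch of non-conservative vs. conservative online KLR updates,
# based only on the description in the abstract.
import numpy as np

def rbf_kernel(x, z, gamma=1.0):
    return np.exp(-gamma * np.sum((x - z) ** 2))

class OnlineKLR:
    def __init__(self, eta=0.1, gamma=1.0, conservative=False, seed=0):
        self.eta = eta
        self.gamma = gamma
        self.conservative = conservative
        self.rng = np.random.default_rng(seed)
        self.support = []   # stored training examples x_i
        self.alpha = []     # their coefficients alpha_i

    def decision(self, x):
        # f(x) = sum_i alpha_i * k(x_i, x)
        return sum(a * rbf_kernel(s, x, self.gamma)
                   for s, a in zip(self.support, self.alpha))

    def update(self, x, y):
        # y in {-1, +1}; the gradient of the logistic loss w.r.t. f(x) is -y * p,
        # where p = 1 / (1 + exp(y * f(x))).
        p = 1.0 / (1.0 + np.exp(y * self.decision(x)))
        if not self.conservative:
            # Non-conservative: add a support vector for every example -> dense model.
            self.support.append(x)
            self.alpha.append(self.eta * y * p)
        elif self.rng.random() < p:
            # Conservative (assumed Bernoulli sampling): add a support vector only
            # with probability p, rescaled so the update is unbiased in expectation
            # -> sparse model.
            self.support.append(x)
            self.alpha.append(self.eta * y)
```

Under this sampling rule the expected conservative update equals the non-conservative one, which is one simple way a conservative scheme can retain a comparable guarantee while skipping most updates.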
CITATION STYLE
Zhang, L., Jin, R., Chen, C., Bu, J., & He, X. (2012). Efficient Online Learning for Large-Scale Sparse Kernel Logistic Regression. In Proceedings of the 26th AAAI Conference on Artificial Intelligence, AAAI 2012 (pp. 1219–1225). AAAI Press. https://doi.org/10.1609/aaai.v26i1.8300