Online learning of a weighted selective naive Bayes classifier with non-convex optimization


Abstract

We study supervised classification for data streams with a high number of input variables. The basic naive Bayes classifier is attractive for its simplicity and performance when the strong assumption of conditional independence holds. Variable selection and model averaging are two common ways to improve this model, and both lead to manipulating a weighted naive Bayes classifier. We focus here on the direct estimation of weighted naive Bayes classifiers. We propose a sparse regularization of the model log-likelihood that takes into account prior knowledge relative to each input variable. Since the sparse regularized likelihood is non-convex, we propose an online gradient algorithm that uses mini-batches and random perturbations, following a metaheuristic, to avoid local minima. In our experiments, we first study the quality of the optimization, then the performance of the classifier under varying parameterizations. The results confirm the effectiveness of our approach.
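To make the approach described above concrete, here is a minimal sketch (not the authors' actual algorithm) of mini-batch gradient descent on the L1-regularized negative log-likelihood of a weighted naive Bayes model, with occasional random perturbations to escape local minima. All data, hyperparameters, and the perturbation schedule are hypothetical; in a weighted naive Bayes classifier the class score is log P(y) + Σ_v w_v · log P(x_v | y), and we optimize the per-variable weights w.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy setup: n instances, V variables, C classes. We assume the
# per-variable conditional log-probabilities log P(x_v | y) are precomputed
# (here simply drawn at random to stand in for real estimates).
n, V, C = 200, 5, 2
log_cond = rng.normal(size=(n, V, C))      # log P(x_v | y) per instance
log_prior = np.log(np.full(C, 1.0 / C))    # uniform class prior
y = rng.integers(0, C, size=n)             # labels

def neg_log_lik(w, idx):
    """Mean negative log-likelihood of the weighted NB posterior on a batch."""
    # score[i, c] = log P(c) + sum_v w_v * log P(x_v | c)
    scores = log_prior + np.einsum('bvc,v->bc', log_cond[idx], w)
    log_z = np.logaddexp.reduce(scores, axis=1)
    return -(scores[np.arange(len(idx)), y[idx]] - log_z).mean()

def grad(w, idx, lam):
    """Mini-batch gradient of the L1-regularized negative log-likelihood."""
    scores = log_prior + np.einsum('bvc,v->bc', log_cond[idx], w)
    p = np.exp(scores - np.logaddexp.reduce(scores, axis=1, keepdims=True))
    g_true = log_cond[idx][np.arange(len(idx)), :, y[idx]]    # (batch, V)
    g_exp = np.einsum('bvc,bc->bv', log_cond[idx], p)         # E_p[log P(x_v|c)]
    return (g_exp - g_true).mean(axis=0) + lam * np.sign(w)   # L1 subgradient

# Online optimization: mini-batches plus periodic random jumps.
w = np.ones(V)
lam, lr = 0.01, 0.1
all_idx = np.arange(n)
best_w, best_loss = w.copy(), neg_log_lik(w, all_idx)
for t in range(300):
    idx = rng.choice(n, size=20, replace=False)   # mini-batch
    w = np.clip(w - lr * grad(w, idx, lam), 0.0, 1.0)
    if t % 50 == 49:                              # random perturbation step
        cand = np.clip(w + rng.normal(scale=0.1, size=V), 0.0, 1.0)
        if neg_log_lik(cand, all_idx) < neg_log_lik(w, all_idx):
            w = cand
    loss = neg_log_lik(w, all_idx)
    if loss < best_loss:
        best_loss, best_w = loss, w.copy()
```

The perturbation here is a crude stand-in for the metaheuristic mentioned in the abstract: a candidate jump is accepted only if it lowers the full-data loss, which is one simple way to trade exploration against exploitation on a non-convex objective.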

Citation (APA)

Hue, C., Boullé, M., & Lemaire, V. (2017). Online learning of a weighted selective naive Bayes classifier with non-convex optimization. In Studies in Computational Intelligence (Vol. 665, pp. 3–17). Springer Verlag. https://doi.org/10.1007/978-3-319-45763-5_1
