Classification using Hierarchical Naïve Bayes models

73 citations · 136 Mendeley readers

This article is free to access.

Abstract

Classification problems have a long history in the machine learning literature. One of the simplest, yet most consistently well-performing, families of classifiers is the Naïve Bayes model. However, an inherent problem with these classifiers is the assumption that all attributes used to describe an instance are conditionally independent given the class of that instance. When this assumption is violated (which is often the case in practice), classification accuracy can suffer due to "information double-counting" and the omission of attribute interactions. In this paper we focus on a relatively new set of models, termed Hierarchical Naïve Bayes models. Hierarchical Naïve Bayes models extend the modeling flexibility of Naïve Bayes models by introducing latent variables to relax some of the independence statements in these models. We propose a simple algorithm for learning Hierarchical Naïve Bayes models in the context of classification. Experimental results show that the learned models can significantly improve classification accuracy as compared to other frameworks.
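The conditional-independence assumption the abstract refers to can be made concrete with a minimal sketch. The following categorical Naïve Bayes classifier (invented for illustration; it is not the paper's Hierarchical Naïve Bayes algorithm) multiplies one likelihood factor per attribute, so a duplicated attribute is "double-counted" exactly as the abstract describes:

```python
from collections import defaultdict
import math

def train_nb(rows, labels):
    """Estimate class counts and per-attribute value counts from training data."""
    class_counts = defaultdict(int)
    cond_counts = defaultdict(lambda: defaultdict(int))  # keyed by (attr index, class)
    values = defaultdict(set)                            # observed values per attribute
    for row, y in zip(rows, labels):
        class_counts[y] += 1
        for i, v in enumerate(row):
            cond_counts[(i, y)][v] += 1
            values[i].add(v)
    return class_counts, cond_counts, values

def predict_nb(model, row):
    """Pick the class maximizing log P(class) + sum_i log P(attr_i | class)."""
    class_counts, cond_counts, values = model
    n = sum(class_counts.values())
    best, best_lp = None, -math.inf
    for y, cy in class_counts.items():
        lp = math.log(cy / n)
        for i, v in enumerate(row):
            # Independence assumption: each attribute contributes its own
            # factor, even if it is a copy of another attribute.
            # Laplace (add-one) smoothing avoids zero probabilities.
            lp += math.log((cond_counts[(i, y)][v] + 1) / (cy + len(values[i])))
        if lp > best_lp:
            best, best_lp = y, lp
    return best

# Toy data: attribute 1 duplicates attribute 0, so its evidence is counted twice.
rows = [("sunny", "sunny"), ("rain", "rain"), ("sunny", "sunny"), ("rain", "rain")]
labels = ["play", "stay", "play", "stay"]
model = train_nb(rows, labels)
print(predict_nb(model, ("sunny", "sunny")))  # → play
```

A Hierarchical Naïve Bayes model would, roughly, place a latent variable above such correlated attributes so that they contribute one joint factor rather than two independent ones.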

Citation (APA)

Langseth, H., & Nielsen, T. D. (2006). Classification using Hierarchical Naïve Bayes models. Machine Learning, 63(2), 135–159. https://doi.org/10.1007/s10994-006-6136-2
