Naive Bayes is a well-known and well-studied algorithm in both statistics and machine learning. Bayesian learning algorithms represent each concept with a single probabilistic summary. In this paper we present an iterative approach to naive Bayes. Iterative Bayes begins with the distribution tables built by naive Bayes and then updates those tables iteratively in order to improve the class probability distribution associated with each training example. Experimental evaluation of Iterative Bayes on 25 benchmark datasets shows consistent gains in accuracy. An interesting side effect of our algorithm is that it proves robust to attribute dependencies.
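The abstract describes starting from naive Bayes frequency tables and nudging them so that each training example's true class gains probability mass. The following is a minimal sketch of that idea, not the paper's exact procedure: the additive update rule, the learning rate, and all function names are assumptions for illustration.

```python
import numpy as np

def fit_naive_bayes(X, y, n_values, n_classes):
    """Build per-attribute contingency tables, as naive Bayes does.
    tables[a][v, c] counts attribute a taking value v in class c (Laplace-smoothed)."""
    n_attrs = X.shape[1]
    tables = [np.ones((n_values, n_classes)) for _ in range(n_attrs)]
    priors = np.ones(n_classes)
    for xi, yi in zip(X, y):
        priors[yi] += 1
        for a in range(n_attrs):
            tables[a][xi[a], yi] += 1
    return tables, priors

def predict_proba(x, tables, priors):
    """Standard naive Bayes prediction: product of conditionals, in log space."""
    log_p = np.log(priors / priors.sum())
    for a, table in enumerate(tables):
        log_p += np.log(table[x[a]] / table.sum(axis=0))
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

def iterative_bayes(X, y, tables, priors, n_iters=10, lr=0.1):
    """Iteratively adjust the tables so the true class of each training
    example gains probability mass (this update rule is an assumption)."""
    for _ in range(n_iters):
        for xi, yi in zip(X, y):
            p = predict_proba(xi, tables, priors)
            delta = lr * (1.0 - p[yi])  # bigger step when the true class is uncertain
            for a in range(len(tables)):
                tables[a][xi[a], yi] += delta
    return tables
```

On a dataset where one attribute is predictive, a few passes of `iterative_bayes` sharpen the predicted class distribution of each training example relative to plain naive Bayes.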