Abstract
The effectiveness of Bayes methods, and of their generalization known as the aggregating strategy, has been demonstrated in statistics, game theory, and learning theory. In practice, however, these methods often suffer from computational difficulties: the exact values of the Bayes posterior probabilities and posterior means (or the corresponding quantities in the aggregating strategy) are not always analytically or computationally tractable and must be approximated. This paper introduces efficient approximation methods for Bayes methods (the aggregating strategy) and demonstrates their effectiveness in on-line prediction and discrimination scenarios. The proposed algorithms use randomization techniques based on the Markov chain Monte Carlo method, which has been extensively explored in computational statistics and statistical mechanics. We give a rigorous analysis of 1) how well the algorithms approximate the true Bayes methods (aggregating strategy) and 2) how efficiently they run, for both the prediction and discrimination scenarios. The trade-off between 1) and 2) is analyzed in terms of the number of random samples the algorithms draw.
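To make the idea concrete, the following is a minimal sketch (not the paper's algorithm) of replacing an exact posterior-weighted prediction with a Monte Carlo estimate. It assumes the common exponential-weights form of the aggregating strategy, where expert weights are proportional to exp(-eta * cumulative loss), and approximates the weighted mean by running a simple Metropolis chain over the expert index; the function names, the uniform proposal, and the parameter `n_samples` are illustrative assumptions, with `n_samples` playing the role of the paper's accuracy/efficiency trade-off knob.

```python
import math
import random

def approximate_aggregating_prediction(expert_preds, cum_losses, eta=1.0,
                                       n_samples=2000, burn_in=100, seed=0):
    """Approximate the aggregating strategy's prediction by MCMC.

    The exact strategy predicts the posterior-weighted mean of the
    experts' predictions, with weights proportional to
    exp(-eta * cumulative_loss).  Here that mean is estimated by
    averaging predictions along a Metropolis chain over the expert
    index; more samples give a better approximation at higher cost
    (a toy version of the trade-off the abstract describes).
    """
    rng = random.Random(seed)
    n = len(expert_preds)

    def log_weight(i):
        return -eta * cum_losses[i]

    # Metropolis chain on expert indices with a uniform proposal.
    state = rng.randrange(n)
    total, count = 0.0, 0
    for t in range(burn_in + n_samples):
        proposal = rng.randrange(n)
        # Accept with probability min(1, w(proposal) / w(state)).
        if rng.random() < math.exp(min(0.0, log_weight(proposal) - log_weight(state))):
            state = proposal
        if t >= burn_in:
            total += expert_preds[state]
            count += 1
    return total / count

def exact_aggregating_prediction(expert_preds, cum_losses, eta=1.0):
    """Exact posterior-weighted mean, for comparison (feasible only
    when the expert class is small enough to enumerate)."""
    ws = [math.exp(-eta * l) for l in cum_losses]
    z = sum(ws)
    return sum(w * p for w, p in zip(ws, expert_preds)) / z
```

For a small expert pool the exact mean is cheap and the sampler is unnecessary; the randomized version becomes useful precisely when the class of experts (or hypotheses) is too large or too unstructured for the normalizing sum to be computed directly.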
Citation
Yamanishi, K. (1995). Randomized approximate aggregating strategies and their applications to prediction and discrimination. In Proceedings of the 8th Annual Conference on Computational Learning Theory, COLT 1995 (Vol. 1995-January, pp. 83–90). Association for Computing Machinery. https://doi.org/10.1145/225298.225308