We introduce a new formal model in which a learning algorithm must combine a collection of potentially poor but statistically independent hypothesis functions in order to approximate an unknown target function arbitrarily well. Our motivation includes the question of how to make optimal use of multiple independent runs of a mediocre learning algorithm, as well as settings in which the many hypotheses are obtained by a distributed population of identical learning agents.
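One simple way to combine such a population is a majority vote over the individual hypotheses. The following is a minimal sketch, not the paper's algorithm: it assumes each hypothesis agrees with the target independently with probability 0.6, and shows empirically that the voted combination is far more accurate than any single member. All names (`majority_vote`, `make_noisy_hypothesis`) are illustrative.

```python
import random

def majority_vote(hypotheses, x):
    """Combine binary hypothesis predictions on x by simple majority vote."""
    votes = sum(h(x) for h in hypotheses)
    return 1 if votes > len(hypotheses) / 2 else 0

def make_noisy_hypothesis(target, accuracy, rng):
    """A hypothesis that agrees with the target independently with
    probability `accuracy` on each query (an idealization of a
    statistically independent, mediocre learner)."""
    def h(x):
        return target(x) if rng.random() < accuracy else 1 - target(x)
    return h

rng = random.Random(0)
target = lambda x: x % 2  # toy binary target concept

# A population of 201 independent hypotheses, each only 60% accurate.
hyps = [make_noisy_hypothesis(target, 0.6, rng) for _ in range(201)]

# Empirical accuracy of the voted combination on sample points.
points = range(1000)
correct = sum(majority_vote(hyps, x) == target(x) for x in points)
print(correct / len(points))
```

Because the individual errors are independent, the voted error shrinks exponentially in the population size (a Hoeffding-bound argument), which is the intuition behind driving the combined error arbitrarily low.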
Kearns, M., & Seung, H. S. (1995). Learning from a population of hypotheses. Machine Learning, 18(2–3), 255–276. https://doi.org/10.1007/bf00993412