Abstract
Background: Any family of learning machines can be combined into a single learning machine by various methods, with widely varying degrees of usefulness. Results: The paper introduces one such combination, the Optimal Crowd learning machine. For making predictions on an outcome, it is provably at least as good as the best machine in the family, given sufficient data. Moreover, if any machine in the family minimizes the probability of misclassification in the limit of large data, then the Optimal Crowd does also; that is, the Optimal Crowd is asymptotically Bayes optimal whenever any machine in the crowd is. Conclusions: The only assumption needed to prove optimality is that the outcome variable is bounded. The scheme is illustrated on real-world data from the UCI machine learning repository, and possible extensions are proposed.
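The general idea of combining a family of machines, where the combination performs at least as well as its best member on held-out data, can be sketched minimally as follows. This is only an illustration of that idea, not the paper's actual Optimal Crowd estimator: the toy `ThresholdLearner` base machines, the accuracy-weighted vote, and all names here are assumptions made for the sketch.

```python
class ThresholdLearner:
    """Toy base machine: predict 1 if x >= threshold, else 0."""
    def __init__(self, threshold):
        self.threshold = threshold

    def predict(self, x):
        return 1 if x >= self.threshold else 0


class Crowd:
    """Hypothetical combiner: weight each machine by its held-out
    accuracy, then predict by a weighted vote. (Illustrative only;
    not the paper's construction.)"""
    def __init__(self, machines):
        self.machines = machines
        self.weights = [1.0] * len(machines)

    def fit_weights(self, xs, ys):
        # Weight = fraction of held-out points each machine gets right.
        self.weights = [
            sum(m.predict(x) == y for x, y in zip(xs, ys)) / len(xs)
            for m in self.machines
        ]

    def predict(self, x):
        # Signed, accuracy-weighted vote over the family.
        score = sum(w * (1 if m.predict(x) == 1 else -1)
                    for w, m in zip(self.weights, self.machines))
        return 1 if score >= 0 else 0


# Toy held-out data: true label is 1 iff x >= 0.5.
xs = [0.1, 0.2, 0.4, 0.6, 0.8, 0.9]
ys = [0, 0, 0, 1, 1, 1]

crowd = Crowd([ThresholdLearner(t) for t in (0.3, 0.5, 0.7)])
crowd.fit_weights(xs, ys)
print(crowd.predict(0.55))  # → 1
```

With enough held-out data, a combiner of this general shape concentrates weight on the most accurate member of the family, which gives the "at least as good as the best machine" flavor of the result; the paper's asymptotic Bayes-optimality claim rests on its own construction and proof, not on this sketch.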
Battogtokh, B., Mojirsheibani, M., & Malley, J. (2017). The optimal crowd learning machine. BioData Mining, 10(1). https://doi.org/10.1186/s13040-017-0135-7