Model combination in the multiple-data-batches scenario

Abstract

The approach of combining models learned from multiple batches of data provides an alternative to the common practice of learning one model from all the available data (i.e., the data combination approach). This paper empirically examines the baseline behaviour of the model combination approach in this multiple-data-batches scenario. We find that model combination can lead to better performance even if the disjoint batches of data are drawn randomly from a larger sample, and we relate the relative performance of the two approaches to the learning curve of the classifier used. The practical implication of our results is that one should consider using model combination rather than data combination, especially when multiple batches of data for the same task are readily available. Another interesting result is the empirical demonstration that the near-asymptotic performance of a single model, in some classification tasks, can be significantly improved by combining multiple models (derived from the same algorithm) when the constituent models are substantially different and there is some regularity in the models that the combination method can exploit. Comparisons with known theoretical results are also provided.
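
To make the contrast concrete, here is a minimal sketch (not the authors' code) of the two approaches the abstract compares: data combination, where a single model is learned from all available data, and model combination, where one model is trained per disjoint random batch and the predictions are combined. The dataset, classifier, batch count, and majority-vote combination rule are all illustrative assumptions; the paper studies the general behaviour rather than any specific recipe.

```python
# Sketch contrasting "data combination" (one model on all data) with
# "model combination" (majority vote over models trained on disjoint
# random batches). All concrete choices here are assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data combination: a single model learned from all the available data.
single = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
acc_data = accuracy_score(y_test, single.predict(X_test))

# Model combination: one model per disjoint random batch, combined by
# unweighted majority vote (one simple combination method among many).
n_batches = 5  # assumed number of data batches
models = [
    DecisionTreeClassifier(random_state=i).fit(Xb, yb)
    for i, (Xb, yb) in enumerate(
        zip(np.array_split(X_train, n_batches),
            np.array_split(y_train, n_batches))
    )
]
votes = np.stack([m.predict(X_test) for m in models])  # (n_batches, n_test)
majority = np.apply_along_axis(lambda v: np.bincount(v).argmax(), 0, votes)
acc_model = accuracy_score(y_test, majority)

print(f"data combination:  {acc_data:.3f}")
print(f"model combination: {acc_model:.3f}")
```

Which approach wins in a run like this depends on where each batch size falls on the classifier's learning curve, which is exactly the relationship the paper examines.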

Citation (APA)

Ting, K. M., & Low, B. T. (1997). Model combination in the multiple-data-batches scenario. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1224, pp. 250–265). Springer Verlag. https://doi.org/10.1007/3-540-62858-4_90
