Slower can be faster: The iRetis incremental model tree learner


Abstract

Incremental learning is useful for processing streaming data, where data elements are produced at a high rate and cannot be stored. An incremental learner typically updates its model with each new instance that arrives. To avoid skipped instances, the model update must finish before the next element arrives, so it should be fast. However, there can be a trade-off between the efficiency of the update and how many updates are needed to get a good model. We investigate this trade-off in the context of model trees. We compare FIMT, a state-of-the-art incremental model tree learner developed for streaming data, with two alternative methods that use a more expensive update method. We find that for data with relatively low (but still realistic) dimensionality, the most expensive method often yields the best learning curve: the system converges faster to a smaller and more accurate model tree.
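The trade-off described in the abstract can be illustrated outside the model-tree setting with a generic streaming-regression sketch (this is an illustration only, not FIMT or iRetis): a cheap O(d) stochastic-gradient update versus a costlier O(d²) recursive-least-squares update, where the more expensive step typically needs far fewer instances to converge. All names below (`stream`, `sgd_update`, `rls_update`) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 5
w_true = rng.normal(size=d)  # hidden target weights of the stream

def stream(n):
    """Synthetic regression stream: y = w_true . x + noise."""
    for _ in range(n):
        x = rng.normal(size=d)
        yield x, w_true @ x + 0.1 * rng.normal()

def sgd_update(w, x, y, lr=0.05):
    """Cheap O(d) update: one gradient step per instance."""
    return w + lr * (y - w @ x) * x

def rls_update(w, P, x, y):
    """Costlier O(d^2) recursive-least-squares update; each step does
    more work, but convergence needs far fewer instances."""
    Px = P @ x
    k = Px / (1.0 + x @ Px)       # gain vector
    w = w + k * (y - w @ x)       # correct the model
    P = P - np.outer(k, Px)       # update the inverse-covariance estimate
    return w, P

w_sgd, w_rls, P = np.zeros(d), np.zeros(d), 100.0 * np.eye(d)
for x, y in stream(200):
    w_sgd = sgd_update(w_sgd, x, y)
    w_rls, P = rls_update(w_rls, P, x, y)
```

After the same 200 instances, both estimators approach `w_true`, but the expensive update is already close after a few dozen instances, mirroring the paper's point that a slower per-instance update can yield a faster learning curve overall.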

Citation (APA)

Verbeeck, D., & Blockeel, H. (2015). Slower can be faster: The iRetis incremental model tree learner. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9385, pp. 322–333). Springer Verlag. https://doi.org/10.1007/978-3-319-24465-5_28
