Lazy overfitting control

Abstract

A machine learning model is said to overfit the training data relative to a simpler model if the first model is more accurate on the training data but less accurate on the test data. Overfitting control, that is, selecting a model of appropriate complexity, is a central problem in machine learning. Previous overfitting control methods include penalty methods, which penalize a model for its complexity; cross-validation methods, which experimentally determine when overfitting occurs on the training data relative to the test data; and ensemble methods, which reduce the risk of overfitting by combining multiple models. These methods are all eager in that they attempt to control overfitting at training time, and they all aim to improve average accuracy, as computed over the test data. This paper presents an overfitting control method that is lazy: it attempts to control overfitting at prediction time, for each individual test case. Our results suggest that lazy methods perform well because they exploit the particulars of each test case at prediction time rather than averaging over all possible test cases at training time. © 2013 Springer-Verlag.
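The abstract does not spell out the authors' algorithm, but a minimal sketch of one plausible lazy scheme may help fix the idea: train a family of models of increasing complexity up front, then, for each test point, pick the complexity whose accuracy is best on that point's nearest held-out neighbors. Everything below (decision-tree depths as the complexity ladder, the neighborhood size, all names) is an illustrative assumption, not the method from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors
from sklearn.tree import DecisionTreeClassifier

# Synthetic data split into fit / validation / test portions.
X, y = make_classification(n_samples=600, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
X_fit, X_val, y_fit, y_val = train_test_split(X_train, y_train, random_state=0)

# Eager step: fit one model per candidate complexity, simplest first.
depths = [1, 2, 4, 8, None]
models = [DecisionTreeClassifier(max_depth=d, random_state=0).fit(X_fit, y_fit)
          for d in depths]

# Index of the held-out validation points, used to define "near this test case".
nn = NearestNeighbors(n_neighbors=25).fit(X_val)

def lazy_predict(x):
    """Lazy step: choose the complexity that is most accurate near x."""
    _, idx = nn.kneighbors(x.reshape(1, -1))
    local_X, local_y = X_val[idx[0]], y_val[idx[0]]
    # Local validation accuracy of each candidate model.
    scores = [m.score(local_X, local_y) for m in models]
    best = int(np.argmax(scores))  # argmax takes the first maximum, so ties favor the simpler model
    return models[best].predict(x.reshape(1, -1))[0]

preds = np.array([lazy_predict(x) for x in X_test])
print("lazy accuracy:", (preds == y_test).mean())
```

The contrast with an eager method is in where the choice happens: ordinary cross-validation would pick a single depth once, at training time, for all future test points, whereas the sketch above defers that choice to prediction time and can pick a different complexity for each test case.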

Citation (APA)

Prieditis, A., & Sapp, S. (2013). Lazy overfitting control. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7988 LNAI, pp. 481–491). https://doi.org/10.1007/978-3-642-39712-7_37
