Beyond Cross-Validation—Accuracy Estimation for Incremental and Active Learning Models


Abstract

For incremental machine-learning applications it is often important to robustly estimate the system accuracy during training, especially if humans perform the supervised teaching. Cross-validation and interleaved test/train error are the standard supervised approaches in this setting. We propose a novel semi-supervised accuracy estimation approach that clearly outperforms these two methods. We introduce the Configram Estimation (CGEM) approach to predict the accuracy of any classifier that delivers confidences. By calculating classification confidences for unseen samples, it is possible to train an offline regression model capable of predicting the classifier’s accuracy on novel data in a semi-supervised fashion. We evaluate our method with several diverse classifiers on analytical and real-world benchmark data sets for both incremental and active learning. The results show that our novel method improves accuracy estimation over standard methods and requires less supervised training data after deployment of the model. We demonstrate the application of our approach to a challenging robot object recognition task, where the human teacher can use our method to judge whether training has been sufficient.
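The abstract describes estimating accuracy from classification confidences via an offline regression model. The sketch below illustrates that idea under stated assumptions: the "configram" is treated here as a histogram of per-sample top-class confidences, the regressor is a random forest, and synthetic classifier snapshots stand in for real training data. These choices are illustrative, not the paper's exact method.

```python
"""Hedged sketch of confidence-based accuracy estimation (CGEM-style).

Assumptions (not from the paper): the configram is a normalised histogram of
top-class confidences on unseen samples; the offline model is a random forest;
synthetic snapshots replace real incremental-learning data.
"""
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def configram(confidences, n_bins=10):
    # Normalised histogram of per-sample confidences in [0, 1].
    hist, _ = np.histogram(confidences, bins=n_bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

# Offline phase: simulate snapshots of an incremental classifier. Each snapshot
# yields confidences on unseen samples plus its measured accuracy; better-trained
# snapshots produce higher, more peaked confidences.
X_meta, y_meta = [], []
for _ in range(200):
    acc = rng.uniform(0.5, 1.0)                              # true accuracy of this snapshot
    conf = np.clip(rng.normal(acc, 0.15, size=300), 0, 1)    # proxy confidences
    X_meta.append(configram(conf))
    y_meta.append(acc)

reg = RandomForestRegressor(n_estimators=100, random_state=0)
reg.fit(np.array(X_meta), np.array(y_meta))

# Deployment phase: estimate accuracy from confidences on new, unlabeled data,
# i.e. without requiring additional supervised test labels.
new_conf = np.clip(rng.normal(0.8, 0.15, size=300), 0, 1)
print("estimated accuracy:", reg.predict(configram(new_conf).reshape(1, -1))[0])
```

In this toy setup the regressor learns the mapping from confidence distributions to accuracy offline, so at deployment time only unlabeled samples are needed to track how well the incrementally trained classifier is doing.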

Citation (APA)

Limberg, C., Wersing, H., & Ritter, H. (2020). Beyond Cross-Validation—Accuracy Estimation for Incremental and Active Learning Models. Machine Learning and Knowledge Extraction, 2(3), 327–346. https://doi.org/10.3390/make2030018
