Abstract
In this paper, we compare two validation methods used to estimate the performance of classification algorithms when no problem-specific knowledge is available. One way to measure the performance of a classification algorithm is its prediction error rate. However, this value cannot be computed exactly and must be estimated. In this work, we apply and compare two common estimation methods: a held-out test set and cross-validation. Specifically, we analyze and compare the statistical properties of the K-fold cross-validation and test-data estimators of the prediction error rates of six classifiers: Naïve Bayes, KNN, Random Forest, SVM, J48, and OneR. The study shows that repeated cross-validation tends to stabilize the prediction error estimate, reducing the variance of the estimator compared with a held-out test set. The NIMS dataset, collected over a network, was employed in the experimental study.
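The comparison described above can be illustrated with a minimal sketch. This is not the authors' experimental setup: it uses a synthetic dataset as a stand-in for the NIMS traffic data and only one of the six classifiers (Naïve Bayes), and the split sizes and repetition counts are assumptions for illustration.

```python
# Hedged sketch: contrasting a single held-out test-set estimate with a
# repeated K-fold cross-validation estimate of a classifier's error rate.
# The synthetic data below is a placeholder for the NIMS dataset.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, RepeatedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = make_classification(n_samples=600, n_features=10, random_state=0)
clf = GaussianNB()

# Test-data estimator: one train/test split yields a single error estimate.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
holdout_error = 1 - clf.fit(X_tr, y_tr).score(X_te, y_te)

# Repeated 10-fold cross-validation: averaging over folds and repetitions
# stabilizes the estimate, i.e. lowers the variance of the estimator.
cv = RepeatedKFold(n_splits=10, n_repeats=5, random_state=0)
cv_errors = 1 - cross_val_score(clf, X, y, cv=cv)

print(f"held-out test error : {holdout_error:.3f}")
print(f"repeated 10-fold CV : {cv_errors.mean():.3f} (sd {cv_errors.std():.3f})")
```

Running the repeated cross-validation several times with different seeds would show its mean error fluctuating less than single holdout estimates, which is the variance-reduction effect the paper studies.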
Citation
Jonathan, O., Omoregbe, N., & Misra, S. (2019). Empirical Comparison of Cross-Validation and Test Data on Internet Traffic Classification Methods. In Journal of Physics: Conference Series (Vol. 1299). Institute of Physics Publishing. https://doi.org/10.1088/1742-6596/1299/1/012044