Empirical Comparison of Cross-Validation and Test Data on Internet Traffic Classification Methods


Abstract

In this paper, we compare two validation methods used to estimate the performance of classification algorithms when no problem-specific knowledge is available. One way to measure the performance of a classification algorithm is its prediction error rate; however, this value cannot be computed exactly and must instead be estimated. We apply and compare two common estimation methods: a held-out test set and cross-validation. Specifically, we analyze and compare the statistical properties of the K-fold cross-validation and test-data estimators of the prediction error rates of six classifiers: Naïve Bayes, KNN, Random Forest, SVM, J48, and OneR. The study shows that repeated cross-validation stabilizes the prediction-error estimate, which in turn reduces the variance of the estimator compared with a held-out test set. The experiments were conducted on the NIMS dataset, collected over a live network.
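The two estimators the abstract compares can be sketched in a few lines of plain Python. The snippet below is an illustrative toy, not the authors' experimental code: it uses a synthetic 1-D dataset and a 1-nearest-neighbour classifier (standing in for the KNN classifier from the paper) rather than the NIMS traffic data, and shows how a single hold-out split and a K-fold cross-validation each produce an estimate of the prediction error rate.

```python
import random

# Toy 1-D dataset: class 0 clusters near 0.0, class 1 near 1.0.
# (Illustrative only; the paper's experiments use the NIMS dataset.)
random.seed(0)
data = [(random.gauss(0.0, 0.5), 0) for _ in range(60)] + \
       [(random.gauss(1.0, 0.5), 1) for _ in range(60)]
random.shuffle(data)

def nn_error(train, test):
    """Error rate of a 1-nearest-neighbour classifier on `test`."""
    errors = 0
    for x, y in test:
        pred = min(train, key=lambda t: abs(t[0] - x))[1]  # closest training point
        errors += (pred != y)
    return errors / len(test)

def holdout_estimate(data, test_fraction=0.3):
    """Test-data (hold-out) estimate: one train/test split."""
    cut = int(len(data) * (1 - test_fraction))
    return nn_error(data[:cut], data[cut:])

def kfold_estimate(data, k=10):
    """K-fold cross-validation estimate: mean error over k folds,
    each example serving exactly once as test data."""
    folds = [data[i::k] for i in range(k)]
    errs = []
    for i in range(k):
        test = folds[i]
        train = [row for j, f in enumerate(folds) if j != i for row in f]
        errs.append(nn_error(train, test))
    return sum(errs) / k

print("hold-out estimate:", round(holdout_estimate(data), 3))
print("10-fold CV estimate:", round(kfold_estimate(data), 3))
```

Because the hold-out estimate depends on a single random split, rerunning it with different splits produces noticeably different values; the K-fold estimate averages over k test folds, which is the variance-reducing effect the paper measures (and repeating the whole K-fold procedure with reshuffled data reduces it further).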

Citation (APA)

Jonathan, O., Omoregbe, N., & Misra, S. (2019). Empirical Comparison of Cross-Validation and Test Data on Internet Traffic Classification Methods. In Journal of Physics: Conference Series (Vol. 1299). Institute of Physics Publishing. https://doi.org/10.1088/1742-6596/1299/1/012044
