ML vs DL: Accuracy and Testing Runtime Trade-offs in BCI

Abstract

Trade-offs between accuracy and runtime are common throughout computer science, but they pose a particular challenge for online brain-computer interface (BCI) applications, such as control interfaces for patients with paralysis. However, research evaluating the testing runtime of various classifiers is extremely limited. In this study, we assess trade-offs between accuracy and runtime (total and testing) for the classifiers benchmarked in Kastrati et al.'s introduction of EEGEyeNet, as well as sLDA, logistic regression, variants of SVM, and a shallow CNN. For simple BCI tasks requiring binary classification, we find that both simple and ensemble ML algorithms, especially tree-based models, can achieve accuracies comparable to those of DL networks while achieving remarkably faster total and testing runtimes. In particular, DecisionTree, RandomForest, and GradientBoost performed strongly, and we consider these highly efficient classifiers promising machine learning alternatives to slower deep learning classifiers such as CNNs for binary motor imagery classification.
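
Below is a minimal, illustrative sketch (not the authors' EEGEyeNet benchmarking pipeline) of how such an accuracy vs. testing-runtime comparison can be set up with scikit-learn. The synthetic features, classifier hyperparameters, and timing scheme are assumptions chosen only to demonstrate the idea of separately measuring training time and held-out inference ("testing") time.

```python
# Hedged sketch: compare accuracy, training time, and testing (inference) time
# for several classical ML classifiers on a binary task. Synthetic data stands
# in for preprocessed EEG features; settings are illustrative, not the paper's.
import time

from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Placeholder binary-classification data standing in for motor-imagery features.
X, y = make_classification(n_samples=2000, n_features=128, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

classifiers = {
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(n_estimators=100, random_state=0),
    "GradientBoost": GradientBoostingClassifier(random_state=0),
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "SVM (RBF)": SVC(kernel="rbf"),
}

for name, clf in classifiers.items():
    t0 = time.perf_counter()
    clf.fit(X_train, y_train)
    train_time = time.perf_counter() - t0

    t0 = time.perf_counter()
    accuracy = clf.score(X_test, y_test)   # inference on the held-out set
    test_time = time.perf_counter() - t0   # "testing runtime" in the paper's sense

    print(f"{name:>18}: acc={accuracy:.3f}  "
          f"train={train_time:.3f}s  test={test_time:.4f}s")
```

A deep learning baseline such as a shallow CNN would be timed the same way, with its forward pass over the test set measured as testing runtime, which is what makes the test-time comparison against tree-based models direct.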

Cite (APA)

Tuvshinjargal, A., & Kim, E. (2022). ML vs DL: Accuracy and Testing Runtime Trade-offs in BCI. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13519 LNCS, pp. 497–511). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-17618-0_35
