Comparing support vector machines and feed-forward neural networks with similar parameters

Abstract

From a computational point of view, the main differences between SVMs and FNNs are (1) how the number of elements of their respective solutions (support vectors for SVMs, hidden units for FNNs) is selected and (2) how the weights (both hidden-layer and output-layer) are found. Sequential FNNs, however, do not show all of these differences with respect to SVMs, since the number of hidden units is obtained as a consequence of the learning process (as for SVMs) rather than fixed a priori. In addition, there exist sequential FNNs where the hidden-layer weights are always a subset of the data, as is usual for SVMs. An experimental study on several benchmark data sets is presented, comparing several aspects of SVMs and the aforementioned sequential FNNs. The experiments were performed under conditions as similar as possible for both models. Accuracies were found to be very similar. Regarding solution size, sequential FNNs constructed models with fewer hidden units than the SVMs had support vectors. In addition, all the hidden-layer weights in the FNN models were also selected as support vectors by the SVMs. Computational times were lower for SVMs, which also showed no numerical problems. © Springer-Verlag Berlin Heidelberg 2006.
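The abstract's key point, that in sequential learners the solution size emerges from training and each solution element is itself a training example, can be illustrated with a kernel perceptron. This is only an illustrative analogue, not the sequential FNN algorithm studied in the paper: examples are stored as "units" only when the current model misclassifies them, so the final count is a consequence of learning rather than fixed a priori.

```python
import numpy as np

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel between two vectors."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def kernel_perceptron(X, y, epochs=10, gamma=1.0):
    """Sequential learner: a training example is stored as a 'unit'
    only when it is misclassified, so the solution size is not fixed
    in advance and every stored weight is a subset of the data."""
    sv_idx, alphas = [], []
    for _ in range(epochs):
        for i, (xi, yi) in enumerate(zip(X, y)):
            score = sum(a * y[j] * rbf(X[j], xi, gamma)
                        for a, j in zip(alphas, sv_idx))
            if yi * score <= 0:        # mistake: add this example
                sv_idx.append(i)
                alphas.append(1.0)
    return sv_idx, alphas

# Toy non-linearly-separable data (XOR-like), hypothetical example
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([-1., 1., 1., -1.])
sv_idx, alphas = kernel_perceptron(X, y, epochs=20, gamma=2.0)
print(len(set(sv_idx)))  # → 4: all points stored as units here
```

On this tiny XOR problem every training point ends up stored, but on larger, more redundant data sets the stored subset is typically much smaller than the data, which is the sense in which the solution size of both SVMs and sequential learners is data-driven.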

Citation (APA)

Romero, E., & Toppo, D. (2006). Comparing support vector machines and feed-forward neural networks with similar parameters. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4224 LNCS, pp. 90–98). Springer-Verlag. https://doi.org/10.1007/11875581_11
