Incremental learning and forgetting in RBF networks and SVMs with applications to financial problems

Abstract

Radial Basis Function Networks (RBFNs) have been widely applied to practical classification problems. In recent years, Support Vector Machines (SVMs) have been gaining popularity as promising methods for classification. This paper compares the two methods from the viewpoint of incremental learning and forgetting. The authors have previously reported that incremental learning with active forgetting in RBFNs yields good classification performance in changing environments. This paper first proposes a method for incremental learning and forgetting in SVMs, and then presents a comparative simulation of RBFNs and SVMs on a portfolio problem.
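The paper itself gives the actual incremental learning and forgetting schemes; purely as a rough illustration of the general idea, the sketch below retrains a scikit-learn SVC over a sliding window of recent samples, so that old samples are actively forgotten as new data from a changing environment arrive. The window size, kernel settings, and the drifting toy data stream are illustrative assumptions, not the authors' formulation.

# A minimal sketch of incremental SVM learning with forgetting via a sliding
# window, using scikit-learn's SVC. This is NOT the method proposed in the
# paper; it only illustrates discarding (forgetting) old samples and
# retraining as new data arrive. WINDOW and the toy stream are assumptions.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

WINDOW = 200           # assumed number of recent samples to keep (forgetting horizon)
X_buf, y_buf = [], []  # rolling buffers of features and labels
model = SVC(kernel="rbf", gamma="scale", C=1.0)

def update(x_new, y_new):
    """Add one labelled sample, forget the oldest ones, and retrain."""
    X_buf.append(x_new)
    y_buf.append(y_new)
    if len(X_buf) > WINDOW:        # active forgetting: drop the oldest sample
        X_buf.pop(0)
        y_buf.pop(0)
    if len(set(y_buf)) > 1:        # SVC needs at least two classes to fit
        model.fit(np.asarray(X_buf), np.asarray(y_buf))

# Toy stream with a drifting decision boundary (a stand-in for changing markets).
for t in range(500):
    x = rng.normal(size=2)
    shift = 0.0 if t < 250 else 1.0    # concept drift halfway through the stream
    y = int(x[0] + x[1] > shift)
    update(x, y)

print("prediction for [1.0, 0.5]:", model.predict([[1.0, 0.5]]))

Under this simplification, the classifier at any time reflects only the most recent WINDOW samples, which is the crudest possible forgetting rule; the scheme actually proposed for SVMs, and the RBFN counterpart it is compared against, should be taken from the paper itself.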

Citation (APA)

Nakayama, H., & Hattori, A. (2003). Incremental learning and forgetting in RBF networks and SVMs with applications to financial problems. In Lecture Notes in Artificial Intelligence (Subseries of Lecture Notes in Computer Science) (Vol. 2773, Part I, pp. 1109–1115). Springer-Verlag. https://doi.org/10.1007/978-3-540-45224-9_149
