In this paper, we propose a new multitask learning (MTL) model that can stably learn a series of multi-class pattern recognition problems. Knowledge transfer in the proposed MTL model is implemented by two mechanisms: (1) transfer through a shared internal representation of RBFs and (2) transfer of information on class subregions from related tasks. The proposed model can detect task changes on its own, based on its output errors, even when no task information is given by the environment. It can also learn training samples of different tasks that are presented one after another. In the experiments, recognition performance is evaluated on eight multitask pattern recognition (MTPR) problems defined from four UCI data sets. The experimental results demonstrate that the proposed MTL model outperforms a single-task learning model in terms of final classification accuracy. Furthermore, we show that the transfer of class subregions helps enhance the generalization performance on a new task with fewer training samples. © 2009 Springer Berlin Heidelberg.
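The abstract mentions that task changes are detected from the model's output errors alone, without any task labels from the environment. A minimal sketch of that general idea (not the authors' actual algorithm; the class name, window size, and threshold below are illustrative assumptions) is to flag a task change when the recent mean error jumps well above the long-run mean error:

```python
# Illustrative sketch only: detecting a task change from a sudden jump
# in per-sample output error. The windowing scheme and threshold are
# assumptions for illustration, not the method proposed in the paper.

from collections import deque

class TaskChangeDetector:
    """Flags a task change when the mean error over the recent window
    exceeds `threshold` times the long-run mean error."""

    def __init__(self, window=20, threshold=3.0):
        self.window = window        # size of the recent-error window
        self.threshold = threshold  # relative jump that signals a new task
        self.recent = deque(maxlen=window)
        self.total = 0.0
        self.count = 0

    def update(self, error):
        """Feed one per-sample output error; return True on a detected change."""
        self.recent.append(error)
        self.total += error
        self.count += 1
        if self.count < self.window:
            return False  # not enough history yet
        long_run = self.total / self.count
        recent_mean = sum(self.recent) / len(self.recent)
        return recent_mean > self.threshold * long_run

# Usage: low errors while samples come from one task, then a jump
# when samples from a new, unlearned task start arriving.
det = TaskChangeDetector(window=5, threshold=2.0)
flags = [det.update(e) for e in [0.1] * 10 + [0.9] * 5]
```

Once a change is flagged, a sequential learner of this kind would typically switch to (or allocate) the knowledge associated with the new task before continuing training.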
CITATION STYLE
Nishikawa, H., Ozawa, S., & Roy, A. (2009). A neural network model for sequential multitask pattern recognition problems. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 5506 LNCS, pp. 821–828). https://doi.org/10.1007/978-3-642-02490-0_100