A formal definition of task relatedness that theoretically justifies multi-task learning (MTL) improvements has remained elusive. The implementation of MTL with multi-layer perceptron (MLP) neural networks gave rise to the notion of related tasks sharing an underlying representation; this assumption, however, can hurt training when the tasks are not related in that way. In this paper we present a novel single-layer perceptron (SLP) approach that selectively achieves knowledge transfer in a multi-task scenario by using a different notion of task relatedness. Experimental results show that the proposed scheme largely outperforms single-task learning (STL) with single-layer perceptrons, and remains robust even when the tasks present are not closely related. © Springer-Verlag Berlin Heidelberg 2007.
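The abstract does not specify the paper's selective-transfer rule, so the following is only a hypothetical sketch of the general idea: two single-layer perceptrons are trained on related binary tasks, and an update computed for one task is reused for the other only when it also helps on the other task's label. The sample-level "only if helpful" criterion is an assumption introduced here for illustration, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two toy binary tasks whose decision boundaries are similar (related tasks).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y_a = np.sign(X @ w_true)
y_b = np.sign(X @ (w_true + 0.1 * rng.normal(size=d)))  # perturbed, related task

def perceptron_acc(w, X, y):
    """Fraction of samples whose sign prediction matches the label."""
    return float(np.mean(np.sign(X @ w) == y))

w_a = np.zeros(d)
w_b = np.zeros(d)
for epoch in range(20):
    for i in range(n):
        x = X[i]
        if np.sign(w_a @ x) != y_a[i]:
            w_a += y_a[i] * x
            # Hypothetical selective transfer: reuse task A's update for
            # task B only if the updated w_b would classify x correctly for B.
            if np.sign((w_b + y_a[i] * x) @ x) == y_b[i]:
                w_b += y_a[i] * x
        if np.sign(w_b @ x) != y_b[i]:
            w_b += y_b[i] * x

print(perceptron_acc(w_a, X, y_a), perceptron_acc(w_b, X, y_b))
```

Because the toy labels are generated by linear separators, the classical perceptron convergence argument applies to each task's own updates; the selective cross-task updates merely add steps that, by construction of the check, do not misclassify the current sample for the receiving task.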
CITATION STYLE
Madrid-Sánchez, J., Lázaro-Gredilla, M., & Figueiras-Vidal, A. R. (2007). A single layer perceptron approach to selective multi-task learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4527 LNCS, pp. 272–281). Springer Verlag. https://doi.org/10.1007/978-3-540-73053-8_27