A single layer perceptron approach to selective multi-task learning

Abstract

A formal definition of task relatedness that theoretically justifies multi-task learning (MTL) improvements has remained elusive. Implementations of MTL with multi-layer perceptron (MLP) neural networks gave rise to the notion of related tasks sharing an underlying representation, but this assumption of relatedness can hurt training when tasks are not actually related in that way. In this paper we present a novel single-layer perceptron (SLP) approach that achieves selective knowledge transfer in a multi-task scenario by using a different notion of task relatedness. The experimental results show that the proposed scheme largely outperforms single-task learning (STL) with single-layer perceptrons and remains robust even when tasks that are not closely related are present. © Springer-Verlag Berlin Heidelberg 2007.
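The abstract does not spell out the selection criterion, so as an illustration only, here is a minimal sketch of selective transfer between single-layer perceptrons. Everything in it is an assumption: relatedness is approximated by the cosine similarity of independently trained per-task weight vectors, and "transfer" is realized by averaging the weights of sufficiently similar tasks. The paper's actual mechanism may differ substantially.

```python
import numpy as np

def train_slp(X, y, lr=0.1, epochs=100):
    """Train a single-layer perceptron (bias absorbed into X) with the
    classic perceptron update rule; labels y are in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (w @ xi) <= 0:        # misclassified sample
                w += lr * yi * xi
    return w

def selective_transfer(weights, threshold=0.8):
    """Hypothetical transfer step: replace each task's weights with the
    mean over tasks whose weight vectors are cosine-similar above a
    threshold, leaving unrelated tasks' solutions untouched."""
    W = np.array(weights)
    norms = np.linalg.norm(W, axis=1, keepdims=True)
    norms[norms == 0] = 1.0
    S = (W / norms) @ (W / norms).T       # pairwise cosine similarities
    return [W[S[i] >= threshold].mean(axis=0) for i in range(len(W))]

# Toy usage: two related tasks plus one unrelated task.
rng = np.random.default_rng(0)

def make_task(w_true, n=200):
    X = rng.normal(size=(n, 3))
    X[:, -1] = 1.0                        # last column acts as the bias
    y = np.sign(X @ w_true)
    y[y == 0] = 1
    return X, y

tasks = [make_task(np.array([1.0, 1.0, 0.1])),
         make_task(np.array([1.1, 0.9, 0.0])),
         make_task(np.array([-1.0, 1.0, 0.5]))]
weights = [train_slp(X, y) for X, y in tasks]
shared = selective_transfer(weights)
```

Under this (assumed) criterion, the first two tasks pool their weights while the third keeps its own, which is the kind of robustness to unrelated tasks the abstract claims for the proposed scheme.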

Citation (APA)

Madrid-Sánchez, J., Lázaro-Gredilla, M., & Figueiras-Vidal, A. R. (2007). A single layer perceptron approach to selective multi-task learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4527 LNCS, pp. 272–281). Springer Verlag. https://doi.org/10.1007/978-3-540-73053-8_27
