Most multitask deep learning systems today improve performance by training on different but correlated tasks that share common features. What happens if we use outlier tasks instead of related ones? Will they deteriorate performance? In this paper, we explore the influence of outlier tasks on multitask deep learning through carefully designed experiments. We compare the accuracy and convergence rate of a single-task convolutional neural network (STCNN) with those of an outlier multitask convolutional neural network (OMTCNN) on facial attribute recognition and hand-written digit recognition. We show that, in a multitask network without parameter redundancy, outlier tasks constrain each other and degrade performance. We also find that tasks that both involve image recognition, such as facial attribute recognition and hand-written digit recognition, may not truly be outlier tasks: they share common features in the bottom layers, since each can use the other's first convolutional layer in place of its own without any loss of accuracy.
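The abstract does not specify the paper's architecture, so the following is only a minimal sketch of the multitask setup it describes: a network whose first convolutional layer is shared across the two tasks (the layer the abstract reports can be swapped between tasks without accuracy loss), followed by task-specific heads. All layer sizes and the class name are illustrative assumptions, not the authors' OMTCNN.

```python
# Hedged sketch of a two-task CNN in PyTorch; sizes are assumptions,
# not the architecture used by Cai, Fang, and Ma.
import torch
import torch.nn as nn

class OutlierMultitaskCNN(nn.Module):
    def __init__(self):
        super().__init__()
        # First conv layer, shared by both tasks: the abstract's finding is
        # that this bottom layer learns features common to both tasks.
        self.shared_conv = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.pool = nn.MaxPool2d(2)
        # Task-specific heads (hypothetical sizes):
        # 10 classes for hand-written digits, 40 binary facial attributes.
        self.digit_head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 14 * 14, 10))
        self.face_head = nn.Sequential(nn.Flatten(), nn.Linear(16 * 14 * 14, 40))

    def forward(self, x):
        h = self.pool(torch.relu(self.shared_conv(x)))
        return self.digit_head(h), self.face_head(h)

model = OutlierMultitaskCNN()
digits, faces = model(torch.zeros(2, 1, 28, 28))
print(digits.shape, faces.shape)
```

The layer-swap experiment the abstract mentions would amount to copying `shared_conv.weight` from a model trained on one task into a model trained on the other and re-evaluating accuracy.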
Cai, S., Fang, Y., & Ma, Z. (2017). Will Outlier Tasks Deteriorate Multitask Deep Learning? In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10635 LNCS, pp. 246–255). Springer Verlag. https://doi.org/10.1007/978-3-319-70096-0_26