In all previous work on deep multi-task learning that we are aware of, all task supervision happens at the same (outermost) layer. We present a multi-task learning architecture with deep bi-directional RNNs in which supervision for different tasks can happen at different layers. We present experiments on syntactic chunking and CCG supertagging, coupled with the additional task of POS-tagging. We show that it is consistently better to have POS supervision at the innermost rather than the outermost layer. We argue that this is because "low-level" tasks are better kept at the lower layers, enabling the higher-level tasks to make use of the shared representation of the lower-level tasks. Finally, we also show how this architecture can be used for domain adaptation.
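The following is a minimal PyTorch sketch of the kind of cascaded architecture the abstract describes: a stack of bi-directional RNN layers in which the low-level task (POS-tagging) is supervised from the innermost layer and the higher-level task (here, chunking) from the outermost layer. All names, dimensions, and the choice of LSTM cells are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class CascadedTagger(nn.Module):
    """Deep bi-RNN with task supervision at different layers (sketch)."""

    def __init__(self, vocab_size, n_pos_tags, n_chunk_tags,
                 emb_dim=100, hidden_dim=100, n_layers=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # One bi-LSTM per layer, so intermediate outputs can be tapped.
        self.layers = nn.ModuleList()
        in_dim = emb_dim
        for _ in range(n_layers):
            self.layers.append(nn.LSTM(in_dim, hidden_dim,
                                       batch_first=True,
                                       bidirectional=True))
            in_dim = 2 * hidden_dim
        # POS head reads the *innermost* (first) layer's states;
        # the chunking head reads the *outermost* (last) layer's states.
        self.pos_head = nn.Linear(2 * hidden_dim, n_pos_tags)
        self.chunk_head = nn.Linear(2 * hidden_dim, n_chunk_tags)

    def forward(self, token_ids):
        x = self.embed(token_ids)               # (batch, seq, emb_dim)
        states = []
        for rnn in self.layers:
            x, _ = rnn(x)                       # (batch, seq, 2*hidden_dim)
            states.append(x)
        pos_logits = self.pos_head(states[0])       # innermost layer
        chunk_logits = self.chunk_head(states[-1])  # outermost layer
        return pos_logits, chunk_logits
```

In training, each task's batches would contribute a cross-entropy loss on that task's head only; gradients from the POS loss then reach just the first bi-LSTM and the embeddings, while the chunking loss trains the whole stack, so the higher layers build on a representation shaped by the low-level task.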
Søgaard, A., & Goldberg, Y. (2016). Deep multi-task learning with low level tasks supervised at lower layers. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers) (pp. 231–235). Association for Computational Linguistics. https://doi.org/10.18653/v1/P16-2038