We present a new approach to learning semantic parsers from multiple datasets, even when the target semantic formalisms are drastically different and the underlying corpora do not overlap. We handle such "disjoint" data by treating annotations for unobserved formalisms as latent structured variables. Building on state-of-the-art baselines, we show improvements both in frame-semantic parsing and semantic dependency parsing by modeling them jointly. Our code is open-source and available at https://github.com/Noahs-ARK/NeurboParser.
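The core idea, treating the unobserved formalism's annotation as a latent structured variable, can be sketched in miniature. The following is an illustrative sketch only, not the NeurboParser implementation: it assumes a toy setting where each formalism's candidate structures can be enumerated and scored by a shared joint model, and the latent structure is marginalized out with a log-sum-exp; the actual parsers use structured inference rather than enumeration, and all names below are hypothetical.

# Illustrative sketch only (not the authors' code): training on "disjoint" data
# by treating the unobserved formalism's structure as a latent variable.
# Assumes a toy setting where candidate structures can be enumerated and scored.
import math
from typing import Dict, List, Optional, Tuple

def log_sum_exp(scores: List[float]) -> float:
    """Numerically stable log-sum-exp over a list of scores."""
    m = max(scores)
    return m + math.log(sum(math.exp(s - m) for s in scores))

def joint_log_likelihood(
    scores: Dict[Tuple[str, str], float],   # joint score(frame_struct, dep_struct)
    observed_frame: Optional[str] = None,   # gold frame-semantic structure, or None if unannotated
    observed_dep: Optional[str] = None,     # gold semantic dependency structure, or None
) -> float:
    """Log-likelihood of whatever annotation is observed.

    If a corpus annotates only one formalism, the other formalism's structure
    is latent: the numerator sums over all of its candidate values.
    """
    def matches(pair: Tuple[str, str]) -> bool:
        frame, dep = pair
        return ((observed_frame is None or frame == observed_frame) and
                (observed_dep is None or dep == observed_dep))

    numerator = log_sum_exp([s for pair, s in scores.items() if matches(pair)])
    denominator = log_sum_exp(list(scores.values()))
    return numerator - denominator

# Toy usage: two candidate structures per formalism; only the frame annotation is observed,
# so the semantic dependency structure is marginalized out as a latent variable.
toy_scores = {
    ("f1", "d1"): 2.0, ("f1", "d2"): 0.5,
    ("f2", "d1"): 0.1, ("f2", "d2"): -1.0,
}
print(joint_log_likelihood(toy_scores, observed_frame="f1"))

Because the same objective applies whether one or both formalisms are annotated, corpora with non-overlapping annotations can be mixed in a single training run.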
CITATION STYLE
Peng, H., Thomson, S., Swayamdipta, S., & Smith, N. A. (2018). Learning joint semantic parsers from disjoint data. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL-HLT), Volume 1 (Long Papers) (pp. 1492–1502). Association for Computational Linguistics. https://doi.org/10.18653/v1/n18-1135