Multiple treebanks annotated under heterogeneous standards raise the research question of how best to exploit multiple resources for improving statistical models. Prior research has focused on discrete models, leveraging stacking and multi-view learning to address the problem. In this paper, we empirically investigate heterogeneous annotations using neural network models, building neural counterparts to discrete stacking and multi-view learning, and find that neural models have unique advantages thanks to their freedom from manual feature engineering. The neural model not only achieves larger accuracy improvements, but also runs an order of magnitude faster than its discrete baseline, adding little time cost over a neural model trained on a single treebank.
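As a rough illustration of the neural stacking idea the abstract refers to, a guide model trained on one annotation standard predicts a label for each token, and that prediction is embedded and concatenated into the target model's input, replacing the discrete guide features of classical stacking. The sketch below is a minimal, hypothetical version of this input construction; the vocabulary size, tag set size, and embedding dimensions are invented for illustration and do not come from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented sizes for illustration only.
VOCAB, TAGS_A = 100, 10      # target vocabulary; guide model's tag set
WORD_DIM, TAG_DIM = 8, 4     # word and guide-tag embedding dimensions

word_emb = rng.normal(size=(VOCAB, WORD_DIM))
tag_emb = rng.normal(size=(TAGS_A, TAG_DIM))  # embeds the guide model's output


def stacked_input(word_ids, guide_tag_ids):
    """Concatenate each token's word embedding with the embedding of the
    guide model's predicted tag -- the neural analogue of discrete
    stacking, where guide predictions were added as indicator features."""
    return np.concatenate([word_emb[word_ids], tag_emb[guide_tag_ids]], axis=-1)


# Three tokens with hypothetical word ids and guide-model tag predictions.
x = stacked_input(np.array([3, 7, 42]), np.array([1, 0, 5]))
print(x.shape)  # each token is now WORD_DIM + TAG_DIM dimensional
```

The target network then consumes `x` in place of plain word embeddings, so the guide model's view of the data is available without any hand-crafted feature templates.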
CITATION STYLE
Chen, H., Zhang, Y., & Liu, Q. (2016). Neural network for heterogeneous annotations. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing (pp. 731–741). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1070