Neural network for heterogeneous annotations

26 citations · 86 Mendeley readers

Abstract

Multiple treebanks annotated under heterogeneous standards raise the research question of how best to exploit multiple resources to improve statistical models. Prior research has focused on discrete models, leveraging stacking and multi-view learning to address the problem. In this paper, we empirically investigate heterogeneous annotations using neural network models, building neural counterparts to discrete stacking and multi-view learning, respectively, and find that neural models have unique advantages thanks to their freedom from manual feature engineering. The neural models achieve not only larger accuracy improvements but also an order-of-magnitude speedup over their discrete baselines, adding little time cost compared with a neural model trained on a single treebank.
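
To make the multi-view idea concrete, the sketch below shows one common way to learn from two heterogeneously annotated treebanks with a single network: a shared encoder with a separate output layer per annotation standard. This is an illustrative assumption, not the paper's exact architecture; all class, parameter, and scheme names here are hypothetical.

```python
import torch
import torch.nn as nn

class SharedEncoderTagger(nn.Module):
    """Illustrative multi-view tagger (hypothetical sketch):
    one shared BiLSTM encoder, one output layer per annotation standard."""

    def __init__(self, vocab_size, num_tags_a, num_tags_b,
                 emb_dim=100, hidden_dim=200):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # Shared representation trained on batches from both treebanks.
        self.encoder = nn.LSTM(emb_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Separate prediction heads for the two annotation schemes.
        self.head_a = nn.Linear(hidden_dim, num_tags_a)
        self.head_b = nn.Linear(hidden_dim, num_tags_b)

    def forward(self, token_ids, scheme):
        h, _ = self.encoder(self.embed(token_ids))
        return self.head_a(h) if scheme == "a" else self.head_b(h)

# Training would alternate mini-batches from the two treebanks, computing
# the loss only on the head that matches each batch's annotation scheme.
model = SharedEncoderTagger(vocab_size=10000, num_tags_a=34, num_tags_b=40)
logits = model(torch.randint(0, 10000, (2, 7)), scheme="a")
print(logits.shape)  # torch.Size([2, 7, 34])
```

Because the encoder parameters are shared, predicting under both standards adds only a second linear layer at inference time, which is consistent with the abstract's point that the neural model adds little time cost over a single-treebank model.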

Citation (APA)

Chen, H., Zhang, Y., & Liu, Q. (2016). Neural network for heterogeneous annotations. In EMNLP 2016 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 731–741). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d16-1070
