Abstract
Distant supervision, a paradigm of relation extraction in which training data is created by aligning facts in a database with a large unannotated corpus, is an attractive approach for training relation extractors. Various models have been proposed in recent literature to align the facts in the database to their mentions in the corpus. In this paper, we discuss and critically analyse a popular alignment strategy called the "at least one" heuristic. We provide a simple yet effective relaxation of this strategy. We formulate the inference procedures in training as integer linear programming (ILP) problems and implement the relaxation of the "at least one" heuristic via a soft constraint in this formulation. Empirically, we demonstrate that this simple strategy outperforms existing approaches under certain settings.
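The relaxation the abstract describes can be illustrated with a toy sketch (not the paper's exact model). For one entity pair, binary variables z_i mark which mentions express the knowledge-base relation; the hard "at least one" constraint sum(z) >= 1 is softened with a binary slack variable s that is penalized in the objective. The mention scores and penalty below are assumed purely for illustration, and the tiny search space is solved by brute force rather than an ILP solver:

```python
from itertools import product

def assign_mentions(scores, penalty=1.0):
    """Choose binary labels z_i for the mentions of an entity pair,
    softly enforcing that at least one mention expresses the relation:

        maximize    sum_i score_i * z_i  -  penalty * s
        subject to  sum_i z_i + s >= 1,   z_i, s in {0, 1}

    When s = 1, the "at least one" heuristic is violated at a cost,
    allowing an all-zero labeling for noisy entity pairs.
    """
    best, best_obj = None, float("-inf")
    for z in product([0, 1], repeat=len(scores)):
        for s in (0, 1):
            if sum(z) + s < 1:  # soft constraint (infeasible point)
                continue
            obj = sum(sc * zi for sc, zi in zip(scores, z)) - penalty * s
            if obj > best_obj:
                best, best_obj = (list(z), s), obj
    return best

# Every mention looks unlikely: the slack fires and no mention is labeled,
# which a hard "at least one" constraint would forbid.
print(assign_mentions([-0.8, -0.5], penalty=0.3))  # → ([0, 0], 1)
```

In a full-scale system the same objective and constraint would be handed to an ILP solver rather than enumerated; the point here is only how the slack variable turns the hard heuristic into a soft one.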
Nagesh, A., Haffari, G., & Ramakrishnan, G. (2014). Noisy Or-based model for relation extraction using distant supervision. In EMNLP 2014 - 2014 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 1937–1941). Association for Computational Linguistics (ACL). https://doi.org/10.3115/v1/d14-1208