Effectively creating weakly labeled training examples via approximate domain knowledge

Abstract

One of the challenges in information extraction is the requirement for human-annotated examples, commonly called gold-standard examples. Many successful approaches alleviate this problem by employing some form of distant supervision, i.e., looking into knowledge bases such as Freebase as a source of supervision for creating more examples. While this is perfectly reasonable, most distant supervision methods rely on hand-coded background knowledge that explicitly looks for patterns in text. For example, they assume that all sentences containing Person X and Person Y are positive examples of the relation married(X, Y). In this work, we take a different approach: we infer weakly supervised examples for relations from models learned using knowledge outside the natural language task. We argue that this method creates more robust examples that are particularly useful when learning the entire information-extraction model (the structure and the parameters). We demonstrate on three domains that this form of weak supervision yields superior results when learning structure compared to using distant-supervision labels or a smaller set of gold-standard labels.
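
To make the critique concrete, below is a minimal sketch of the pattern-based distant-supervision heuristic the abstract describes, not the authors' method. The toy knowledge base, sentences, and function names are illustrative assumptions; the point is that every co-occurrence of a known pair is labeled positive, which yields noisy examples.

```python
# A minimal sketch (not the paper's code) of the distant-supervision
# heuristic critiqued in the abstract: label every sentence mentioning
# both entities of a known fact as a positive example of that relation.

# Hypothetical toy knowledge base of known married(X, Y) pairs.
KNOWN_MARRIED = {("barack obama", "michelle obama")}

def distant_labels(sentences, kb=KNOWN_MARRIED):
    """Return (sentence, x, y) triples weakly labeled as married(x, y)."""
    examples = []
    for sent in sentences:
        lowered = sent.lower()
        for x, y in kb:
            # Any co-occurrence is taken as a positive example,
            # even when the sentence does not express the relation.
            if x in lowered and y in lowered:
                examples.append((sent, x, y))
    return examples

sents = [
    "Barack Obama and Michelle Obama attended the gala.",  # true positive
    "Barack Obama praised Michelle Obama's speech.",       # false positive
]
print(distant_labels(sents))  # both sentences get the married(X, Y) label
```

The second sentence shows the failure mode: co-occurrence alone does not imply the relation, which is the noise the paper's model-based weak labeling aims to avoid.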

Citation (APA)

Natarajan, S., Picado, J., Khot, T., Kersting, K., Ré, C., & Shavlik, J. (2015). Effectively creating weakly labeled training examples via approximate domain knowledge. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9046, pp. 92–107). Springer. https://doi.org/10.1007/978-3-319-23708-4_7
