Bias Modeling for Distantly Supervised Relation Extraction



Distant supervision (DS) automatically annotates free text with relation mentions from existing knowledge bases (KBs), providing a way to alleviate the problem of insufficient training data for relation extraction in natural language processing (NLP). However, the heuristic annotation process does not guarantee the correctness of the generated labels, raising the open research question of how to make efficient use of the noisy training data. In this paper, we model two types of biases to reduce noise: (1) bias-dist, which models the relative distance between points (instances) and classes (relation centers); and (2) bias-reward, which models the probability that each heuristically generated label is incorrect. Based on these biases, we propose three noise-tolerant models: MIML-dist, MIML-dist-classify, and MIML-reward, built on top of a state-of-the-art distantly supervised learning algorithm. Experimental comparisons with three landmark methods on the KBP dataset validate the effectiveness of the proposed methods.
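To make the two bias notions concrete, the following is a minimal illustrative sketch, not the paper's actual MIML-based formulation: bias-dist is rendered here as a normalized Euclidean distance from an instance vector to each relation-class center, and bias-reward as one minus the softmax score of the DS-assigned label. All function names, the distance metric, and the softmax-based noise estimate are assumptions for illustration only.

```python
import math

def bias_dist(instance, centers):
    """Illustrative bias-dist: relative distance from an instance vector
    to each relation-class center (smaller value = stronger affinity).
    The Euclidean metric is an assumption, not the paper's definition."""
    d = [math.dist(instance, c) for c in centers]  # distance per class
    total = sum(d)
    return [di / total for di in d]  # normalize to relative distances

def bias_reward(scores, observed_label):
    """Illustrative bias-reward: estimated probability that the
    heuristically generated DS label is incorrect, derived here from
    a softmax over per-class model scores (an assumption)."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]  # numerically stable softmax
    z = sum(exps)
    return 1.0 - exps[observed_label] / z  # chance the DS label is wrong

# Toy example: three relation centers in a 2-d feature space.
centers = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
rel = bias_dist((0.1, 0.1), centers)
nearest = min(range(len(rel)), key=lambda i: rel[i])  # index 0 is closest
```

In a noise-tolerant training loop, quantities like these could down-weight or relabel instances whose DS label disagrees with the model's current belief; the paper's MIML-dist and MIML-reward variants realize that idea within a multi-instance multi-label framework.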




Xiang, Y., Zhang, Y., Wang, X., Qin, Y., & Han, W. (2015). Bias Modeling for Distantly Supervised Relation Extraction. Mathematical Problems in Engineering, 2015.
