Refining Automatically Extracted Knowledge Bases Using Crowdsourcing

Citations: 1
Mendeley readers: 21

Abstract

Machine-constructed knowledge bases often contain noisy and inaccurate facts. There is significant prior work on automated algorithms for knowledge base refinement; these approaches improve the quality of knowledge bases but are far from perfect. In this paper, we leverage crowdsourcing to improve the quality of automatically extracted knowledge bases. Because human labelling is costly, an important research challenge is how to use limited human resources to maximize the quality improvement of a knowledge base. To address this problem, we first introduce the concept of semantic constraints, which can be used to detect potential errors and to perform inference among candidate facts. Then, based on these semantic constraints, we propose rank-based and graph-based algorithms for crowdsourced knowledge refinement, which judiciously select the most beneficial candidate facts for crowdsourcing and prune unnecessary questions. Our experiments show that our method significantly improves the quality of knowledge bases and outperforms state-of-the-art automatic methods at a reasonable crowdsourcing cost.
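To make the select-and-prune idea from the abstract concrete, here is a minimal sketch in Python. Everything in it is an illustrative assumption rather than the paper's actual formulation: the toy facts, the confidence scores, the mutual-exclusion constraint on `capitalOf`, and the uncertainty-plus-conflict benefit heuristic are all hypothetical stand-ins for the rank-based selection and constraint-based pruning the abstract describes.

```python
# Sketch: rank candidate facts by expected benefit of a crowd question,
# ask within a budget, then use a semantic constraint to prune questions
# whose answers can be inferred. All names and numbers are illustrative.

from dataclasses import dataclass

@dataclass
class Fact:
    subject: str
    relation: str
    obj: str
    confidence: float  # extractor's confidence in the fact

# Candidate facts from a hypothetical automatic extractor.
facts = [
    Fact("Paris", "capitalOf", "France", 0.92),
    Fact("Lyon", "capitalOf", "France", 0.41),
    Fact("Paris", "locatedIn", "France", 0.88),
]

def conflicts(a: Fact, b: Fact) -> bool:
    # Assumed semantic constraint: a country has exactly one capital,
    # so two different capitals of the same country are mutually exclusive.
    return (a.relation == b.relation == "capitalOf"
            and a.obj == b.obj and a.subject != b.subject)

def benefit(f: Fact) -> float:
    # A crowd label is most informative when confidence is near 0.5,
    # and more valuable when confirming f would resolve conflicts.
    uncertainty = 1.0 - abs(f.confidence - 0.5) * 2
    conflict_bonus = sum(conflicts(f, g) for g in facts if g is not f)
    return uncertainty + conflict_bonus

budget = 1  # number of crowd questions we can afford
to_ask = sorted(facts, key=benefit, reverse=True)[:budget]

# Suppose the crowd confirms the selected fact; the mutual-exclusion
# constraint then rejects its conflicting candidates without asking,
# pruning those questions from the crowdsourcing workload.
confirmed = to_ask[0]
for f in facts:
    if conflicts(confirmed, f):
        f.confidence = 0.0  # inferred false via the constraint

for f in facts:
    print(f.subject, f.relation, f.obj, f.confidence)
```

With a budget of one question, the sketch asks only about the low-confidence conflicting fact and infers the other side of the conflict for free, which is the cost-saving behavior the abstract attributes to constraint-based pruning.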

Citation (APA)

Li, C., Zhao, P., Sheng, V. S., Xian, X., Wu, J., & Cui, Z. (2017). Refining Automatically Extracted Knowledge Bases Using Crowdsourcing. Computational Intelligence and Neuroscience, 2017. https://doi.org/10.1155/2017/4092135
