Query driven hypothesis generation for answering queries over NLP graphs

Abstract

It has become common to use RDF to store the results of Natural Language Processing (NLP) as a graph of the entities mentioned in the text, with the relationships mentioned in the text as links between them. These NLP graphs can be measured with Precision and Recall against a ground truth graph representing what the documents actually say. When asking conjunctive queries over NLP graphs, the Recall of the query is expected to be roughly the product of the Recall of the relations in each conjunct. Since Recall is typically less than one, conjunctive query Recall on NLP graphs degrades geometrically with the number of conjuncts. We present an approach to address this Recall problem by hypothesizing links in the graph that would improve query Recall, and then attempting to find more evidence to support them. Using this approach, we confirm that in the context of answering queries over NLP graphs, we can use lower-confidence results from NLP components if they complete a query result. © 2012 Springer-Verlag Berlin Heidelberg.
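To illustrate the geometric degradation described in the abstract, here is a minimal sketch (illustrative only, not code from the paper): if each conjunct's relation is extracted with some Recall, and extraction errors are assumed independent, the expected Recall of the full conjunctive query is roughly the product of the per-conjunct Recalls.

```python
# Illustrative sketch (not from the paper): expected Recall of a conjunctive
# query over an NLP graph, assuming per-conjunct Recalls are independent.
from math import prod

def expected_query_recall(conjunct_recalls):
    """Expected query Recall is roughly the product of the Recall
    of the relation in each conjunct."""
    return prod(conjunct_recalls)

# Example: three conjuncts, each extracted with 0.7 Recall,
# give an expected query Recall of about 0.34.
print(expected_query_recall([0.7, 0.7, 0.7]))  # 0.343
```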

Citation (APA)

Welty, C., Barker, K., Aroyo, L., & Arora, S. (2012). Query driven hypothesis generation for answering queries over NLP graphs. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 7650 LNCS, pp. 228–242). Springer Verlag. https://doi.org/10.1007/978-3-642-35173-0_15
