Evaluating semantic parsing against a simple web-based question answering model

14 citations · 103 Mendeley readers

Abstract

Semantic parsing shines at analyzing complex natural language that involves composition and computation over multiple pieces of evidence. However, datasets for semantic parsing contain many factoid questions that can be answered from a single web document. In this paper, we propose to evaluate semantic parsing-based question answering models by comparing them to a question answering baseline that queries the web and extracts the answer only from web snippets, without access to the target knowledge-base. We investigate this approach on COMPLEXQUESTIONS, a dataset designed to focus on compositional language, and find that our model obtains reasonable performance (∼35 F1 compared to 41 F1 of state-of-the-art). We find in our analysis that our model performs well on complex questions involving conjunctions, but struggles on questions that involve relation composition and superlatives.
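The snippet-only baseline described above can be caricatured in a few lines: retrieve web snippets for the question, extract candidate answer spans, and rank them by how often they recur across snippets. Everything below is an illustrative assumption — the function names, the crude capitalized-n-gram candidate extractor, and the toy snippets are placeholders, not the paper's actual retrieval or answer-extraction model:

```python
import re
from collections import Counter

STOPWORDS = frozenset({"the", "a", "an", "of", "in", "is", "was"})

def extract_candidates(snippets):
    """Count capitalized n-gram spans across snippets — a crude stand-in
    for a real candidate-extraction step (hypothetical, for illustration)."""
    candidates = Counter()
    for snippet in snippets:
        for match in re.finditer(r"\b([A-Z][a-z]+(?: [A-Z][a-z]+)*)\b", snippet):
            cand = match.group(1)
            if cand.lower() not in STOPWORDS:
                candidates[cand] += 1
    return candidates

def answer_from_snippets(snippets):
    """Return the candidate that recurs most often across snippets."""
    candidates = extract_candidates(snippets)
    return candidates.most_common(1)[0][0] if candidates else None

# Toy snippets standing in for real search-engine results for
# the question "What is the capital of France?".
snippets = [
    "Paris is the capital and most populous city of France.",
    "The capital of France is Paris, on the river Seine.",
    "The French capital Paris hosted the 1900 Summer Olympics.",
]
print(answer_from_snippets(snippets))  # → Paris
```

The redundancy-based ranking is what makes such a baseline competitive on single-evidence factoid questions, and is also why it would be expected to fail on relation composition and superlatives, where no single snippet states the answer outright.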

Citation (APA)

Talmor, A., Geva, M., & Berant, J. (2017). Evaluating semantic parsing against a simple web-based question answering model. In *SEM 2017 - 6th Joint Conference on Lexical and Computational Semantics, Proceedings (pp. 161–167). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/s17-1020
