Multi-Document Answer Generation for Non-Factoid Questions

Abstract

This research will be devoted to the challenging and under-investigated task of multi-source answer generation for complex non-factoid questions. We will start by experimenting with generative models on one particular type of non-factoid question: instrumental/procedural questions, which often begin with "how to". For this, we will use a new dataset of more than 100,000 QA pairs crawled from a dedicated web resource, where each answer includes a set of references to the articles it was written from. We will also compare different methods of model evaluation in order to choose a metric that correlates better with human assessment. To do this, we need to understand how people evaluate answers to non-factoid questions and to set formal criteria for what makes a high-quality answer. We will employ eye-tracking and crowdsourcing methods to study how users interact with answers and evaluate them, and how answer features correlate with task complexity. We hope that our research will help redefine the way users interact and work with search engines, finally transforming IR into the answer-retrieval systems that users have always desired.
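To make the metric-selection step concrete, the sketch below shows one common way to check how well an automatic metric tracks human assessment: score each generated answer against a gold answer, then compute a rank correlation with human ratings. This is not from the paper; the answers, ratings, and the token-level F1 metric are illustrative stand-ins for the metrics (e.g., ROUGE) a study like this would compare.

    # Minimal sketch: correlate an automatic metric with human ratings.
    # All data is hypothetical; token-level F1 stands in for metrics
    # such as ROUGE that would actually be compared in this setting.
    from collections import Counter
    from scipy.stats import spearmanr

    def token_f1(candidate: str, reference: str) -> float:
        """Token-overlap F1 between a generated answer and a gold answer."""
        cand, ref = candidate.lower().split(), reference.lower().split()
        overlap = sum((Counter(cand) & Counter(ref)).values())
        if overlap == 0:
            return 0.0
        precision = overlap / len(cand)
        recall = overlap / len(ref)
        return 2 * precision * recall / (precision + recall)

    # Hypothetical generated answers, gold answers, and human ratings (1-5).
    generated = ["boil water and add the pasta",
                 "click settings then privacy",
                 "restart it"]
    gold = ["boil salted water, then add the pasta",
            "open settings and select privacy",
            "restart the router and wait two minutes"]
    human_ratings = [4.5, 4.0, 2.0]

    metric_scores = [token_f1(c, g) for c, g in zip(generated, gold)]
    rho, p_value = spearmanr(metric_scores, human_ratings)
    print(f"Spearman correlation with human judgments: {rho:.3f} (p={p_value:.3f})")

A metric whose scores rank answers in the same order as the human ratings (correlation close to 1) would be preferred; the same procedure applies unchanged to any candidate metric.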

Citation (APA)

Baranova-Bolotova, V. (2020). Multi-Document Answer Generation for Non-Factoid Questions. In SIGIR 2020 - Proceedings of the 43rd International ACM SIGIR Conference on Research and Development in Information Retrieval (p. 2477). Association for Computing Machinery, Inc. https://doi.org/10.1145/3397271.3401449
