Fine-Grained Relevance Annotations for Multi-Task Document Ranking and Question Answering

Abstract

There are many existing retrieval and question answering datasets. However, most of them focus either on ranked-list evaluation or on single-candidate question answering. This divide makes it challenging to properly evaluate approaches that both rank documents and provide snippets or answers for a given query. In this work, we present FiRA: a novel dataset of Fine-Grained Relevance Annotations. We extend the ranked retrieval annotations of the Deep Learning track of TREC 2019 with passage- and word-level graded relevance annotations for all relevant documents. We use our newly created data to study the distribution of relevance in long documents, as well as the attention of annotators to specific positions in the text. As an example, we evaluate the recently introduced TKL document ranking model. We find that although TKL exhibits state-of-the-art retrieval results for long documents, it misses many relevant passages.
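To make the idea of passage- and word-level graded relevance concrete, the sketch below shows one plausible way such annotations could be represented in code. This is purely illustrative: the field names, identifiers, and record layout are hypothetical and are not taken from the released FiRA files; only the 0-3 graded relevance convention follows the TREC Deep Learning track.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical record layout for a FiRA-style annotation.
# The actual released file format may differ; grades follow the
# TREC DL convention of 0 (not relevant) to 3 (perfectly relevant).

@dataclass
class PassageAnnotation:
    start_char: int  # passage start offset within the document text
    end_char: int    # passage end offset within the document text
    grade: int       # graded relevance of the passage (0-3)
    # word-level annotations as (start_char, end_char) spans the
    # annotator marked as relevant inside this passage
    relevant_word_spans: List[Tuple[int, int]] = field(default_factory=list)

@dataclass
class FiRARecord:
    query_id: str  # query identifier (placeholder value below)
    doc_id: str    # document identifier (placeholder value below)
    passages: List[PassageAnnotation] = field(default_factory=list)

# Example: one annotated document with a single graded passage
record = FiRARecord(
    query_id="q_42",
    doc_id="doc_001",
    passages=[
        PassageAnnotation(
            start_char=420,
            end_char=760,
            grade=2,
            relevant_word_spans=[(455, 480)],
        )
    ],
)
print(record.doc_id, len(record.passages))
```

A structure like this would support both use cases the abstract describes: the passage grades feed ranked-list evaluation, while the word-level spans allow checking whether a model such as TKL actually attends to the relevant positions in a long document.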

Citation (APA)
Hofstätter, S., Zlabinger, M., Sertkan, M., Schröder, M., & Hanbury, A. (2020). Fine-Grained Relevance Annotations for Multi-Task Document Ranking and Question Answering. In International Conference on Information and Knowledge Management, Proceedings (pp. 3031–3038). Association for Computing Machinery. https://doi.org/10.1145/3340531.3412878
