Using local knowledge graph construction to scale Seq2seq models to multi-document inputs

69 citations · 225 Mendeley readers

Abstract

Query-based open-domain NLP tasks require information synthesis from long and diverse web results. Current approaches extractively select portions of web text as input to Sequence-to-Sequence models using methods such as TF-IDF ranking. We propose constructing a local graph structured knowledge base for each query, which compresses the web search information and reduces redundancy. We show that by linearizing the graph into a structured input sequence, models can encode the graph representations within a standard Sequence-to-Sequence setting. For two generative tasks with very long text input, long-form question answering and multi-document summarization, feeding graph representations as input can achieve better performance than using retrieved text portions.
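The core idea in the abstract — compressing multi-document web results into a local knowledge graph, then linearizing that graph into a token sequence a standard Seq2seq encoder can consume — can be sketched as follows. This is a minimal illustration, not the paper's exact scheme: the `<sub>`, `<pred>`, and `<obj>` markers and the `linearize_graph` helper are hypothetical, and the deduplication step only stands in for the redundancy reduction the authors describe.

```python
# Hypothetical sketch: linearize (subject, predicate, object) triples
# from a local knowledge graph into one flat, structured input string
# for a Sequence-to-Sequence model. Marker tokens are illustrative.

def linearize_graph(triples):
    """Drop exact-duplicate triples, then emit a structured token sequence."""
    seen = []
    for t in triples:
        if t not in seen:  # duplicates across web pages add no information
            seen.append(t)
    parts = [f"<sub> {s} <pred> {p} <obj> {o}" for s, p, o in seen]
    return " ".join(parts)

triples = [
    ("Eiffel Tower", "located in", "Paris"),
    ("Eiffel Tower", "located in", "Paris"),  # repeated in a second document
    ("Paris", "capital of", "France"),
]
print(linearize_graph(triples))
```

The resulting string is shorter than the concatenated source documents because repeated facts collapse to a single triple, which is the compression effect the abstract attributes to the graph representation.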

Citation (APA)

Fan, A., Gardent, C., Braud, C., & Bordes, A. (2019). Using local knowledge graph construction to scale Seq2seq models to multi-document inputs. In EMNLP-IJCNLP 2019 - 2019 Conference on Empirical Methods in Natural Language Processing and 9th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 4186–4196). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-1428
