Keyphrase generation aims to generate important phrases (keyphrases) that best describe a given document. In scholarly domains, current approaches have largely relied on only the title and abstract of an article to generate keyphrases. In this paper, we comprehensively explore whether integrating additional information from the full text of an article, or from semantically similar articles, can help a neural keyphrase generation model. We find that adding sentences from the full text, particularly in the form of an extractive summary of the article, can significantly improve the generation of both present keyphrases (those that appear in the text) and absent keyphrases (those that do not). Experimental results with three widely used keyphrase generation models, as well as the Longformer Encoder-Decoder (LED), one of the latest transformer models suited to longer documents, validate this observation. We also present FULLTEXTKP, a new large-scale scholarly dataset for keyphrase generation. Unlike prior large-scale datasets, FULLTEXTKP includes the full text of the articles along with the title and abstract. We release the source code at https://github.com/kgarg8/FullTextKP.
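The sketch below is a minimal illustration of the input-augmentation idea described above, not the authors' exact pipeline: sentences from the full text are scored against the title and abstract, the top-scoring ones form a simple extractive summary, and the augmented input is passed to an LED model. The checkpoint name (allenai/led-base-16384, not fine-tuned for keyphrases), the word-overlap scoring heuristic, and the ';'-separated keyphrase output convention are all illustrative assumptions rather than details taken from the paper.

    import re

    from transformers import LEDTokenizer, LEDForConditionalGeneration


    def extractive_summary(body_sentences, query_text, k=10):
        # Score each full-text sentence by word overlap with the title+abstract.
        # This is a simple stand-in for any preferred extractive summarizer.
        query_words = set(re.findall(r"\w+", query_text.lower()))

        def score(sent):
            words = re.findall(r"\w+", sent.lower())
            return sum(1 for w in words if w in query_words) / (len(words) + 1)

        return sorted(body_sentences, key=score, reverse=True)[:k]


    def build_input(title, abstract, body_sentences, k=10):
        # Concatenate the title, the abstract, and the extractive summary of the full text.
        summary = extractive_summary(body_sentences, title + " " + abstract, k=k)
        return " ".join([title, abstract] + summary)


    def generate_keyphrases(text, model_name="allenai/led-base-16384"):
        # LED handles long augmented inputs; a checkpoint fine-tuned on a keyphrase
        # dataset would be needed for meaningful output.
        tokenizer = LEDTokenizer.from_pretrained(model_name)
        model = LEDForConditionalGeneration.from_pretrained(model_name)
        inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=4096)
        # Put global attention on the first token, as recommended for LED.
        global_attention_mask = inputs["input_ids"].new_zeros(inputs["input_ids"].shape)
        global_attention_mask[:, 0] = 1
        output_ids = model.generate(
            inputs["input_ids"],
            attention_mask=inputs["attention_mask"],
            global_attention_mask=global_attention_mask,
            max_length=64,
            num_beams=4,
        )
        decoded = tokenizer.decode(output_ids[0], skip_special_tokens=True)
        # Assume keyphrases are emitted as a ';'-separated sequence (a common convention).
        return [kp.strip() for kp in decoded.split(";") if kp.strip()]

For example, generate_keyphrases(build_input(title, abstract, full_text_sentences)) would produce keyphrases from the summary-augmented input rather than from the title and abstract alone.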
Garg, K., Chowdhury, J. R., & Caragea, C. (2022). Keyphrase Generation Beyond the Boundaries of Title and Abstract. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 5838–5850). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.427