In this work, we study the problem of unsupervised open-domain keyphrase generation, where the objective is to build a keyphrase generation model without human-labeled data that performs consistently across domains. To solve this problem, we propose a seq2seq model consisting of two modules, a phraseness module and an informativeness module, both of which can be built in an unsupervised and open-domain fashion. The phraseness module generates phrases, while the informativeness module steers generation toward phrases that represent the core concepts of the text. We thoroughly evaluate our proposed method on eight benchmark datasets from different domains. Results on in-domain datasets show that our approach achieves state-of-the-art results among existing unsupervised models, narrowing the overall gap between supervised and unsupervised methods to about 16%. Furthermore, we demonstrate that our model performs consistently across domains, overall surpassing the baselines on out-of-domain datasets.
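To make the two-module idea concrete, the sketch below shows one way such a decomposition could be combined at decoding time: a phraseness score (is the candidate a fluent phrase continuation?) interpolated with an informativeness score (does the candidate reflect the document's core concepts?). This is a minimal illustrative sketch under assumed scoring functions; all names, the toy scores, and the interpolation weight `lam` are assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch of combining a phraseness module and an informativeness
# module when scoring candidate tokens during decoding. The scoring functions
# below are stand-ins, not the paper's models.
from typing import Dict, List


def phraseness_logprob(prefix: List[str], token: str) -> float:
    """Stand-in for a phrase generator's next-token log-probability."""
    # A real module would be an unsupervised seq2seq model over phrases.
    toy_lm: Dict[str, float] = {"keyphrase": -0.5, "generation": -0.7, "the": -2.0}
    return toy_lm.get(token, -5.0)


def informativeness_score(document: str, token: str) -> float:
    """Stand-in for a salience signal (e.g., similarity to the document)."""
    # Crude proxy: reward tokens that actually appear in the document.
    return 0.0 if token in document.lower().split() else -3.0


def combined_score(document: str, prefix: List[str], token: str, lam: float = 0.5) -> float:
    """Interpolate the two modules; lam trades off fluency vs. salience."""
    return (1.0 - lam) * phraseness_logprob(prefix, token) + lam * informativeness_score(document, token)


if __name__ == "__main__":
    doc = "unsupervised keyphrase generation across domains"
    candidates = ["keyphrase", "generation", "the"]
    best = max(candidates, key=lambda t: combined_score(doc, [], t))
    print(best)  # "keyphrase": both fluent under the toy LM and present in the document
```

In practice the same combined score would be applied at each step of beam search rather than to a flat candidate list; the snippet only illustrates how the two signals can be traded off with a single weight.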
Citation
Do, L. T., Akash, P. S., & Chang, K. C. C. (2023). Unsupervised Open-domain Keyphrase Generation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 10614–10627). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-long.592