Multi-Task Knowledge Distillation with Embedding Constraints for Scholarly Keyphrase Boundary Classification

Abstract

The task of scholarly keyphrase boundary classification aims at identifying keyphrases from scientific papers and classifying them with their types from a set of predefined classes (e.g., task, process, or material). Despite the importance of keyphrases and their types in many downstream applications including indexing, searching, and question answering over scientific documents, scholarly keyphrase boundary classification is still an under-explored task. In this work, we propose a novel embedding constraint on multi-task knowledge distillation which enforces similarity between the teachers (single-task models) and the student (multi-task model) in the embedding space. Specifically, we enforce that the student model is trained not only to imitate the teachers' output distribution over classes, but also to produce language representations that are similar to those produced by the teachers. Our results show that the proposed approach outperforms previous works and strong baselines on three datasets of scientific documents.
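To make the described objective concrete, the sketch below shows one way such a combined loss could be written, assuming a frozen single-task teacher per task and PyTorch-style tensors. The loss weights, the softening temperature, and the use of mean squared error for the embedding constraint are illustrative assumptions, not the paper's exact formulation.

# Minimal sketch (not the authors' code) of a multi-task distillation loss
# with an embedding constraint between teacher and student representations.
import torch
import torch.nn.functional as F

def distillation_loss_with_embedding_constraint(
    student_logits,   # [batch, seq_len, num_labels] student outputs for the current task
    teacher_logits,   # same shape, from the frozen single-task teacher
    student_hidden,   # [batch, seq_len, hidden] student token representations
    teacher_hidden,   # [batch, seq_len, hidden] teacher token representations
    gold_labels,      # [batch, seq_len] gold keyphrase boundary labels
    temperature=2.0,  # assumed softening temperature
    alpha=0.5,        # assumed weight between hard-label CE and soft-label KD
    beta=0.1,         # assumed weight for the embedding constraint
):
    # (1) Standard cross-entropy against the gold labels.
    ce = F.cross_entropy(student_logits.flatten(0, 1), gold_labels.flatten())

    # (2) Distillation: student imitates the teacher's output distribution over classes.
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2

    # (3) Embedding constraint: student representations stay close to the teacher's
    #     (MSE is one possible similarity choice in the embedding space).
    emb = F.mse_loss(student_hidden, teacher_hidden)

    return alpha * ce + (1.0 - alpha) * kd + beta * emb

In a multi-task setting, one such loss would be computed per task against that task's teacher and the terms summed (or averaged) to train the single student model.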

Cite

APA

Park, S. Y., & Caragea, C. (2023). Multi-Task Knowledge Distillation with Embedding Constraints for Scholarly Keyphrase Boundary Classification. In EMNLP 2023 - 2023 Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 13026–13042). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.emnlp-main.805
