Measuring fine-grained domain relevance of terms: A hierarchical core-fringe approach


Abstract

We propose to measure fine-grained domain relevance: the degree to which a term is relevant to a broad (e.g., computer science) or narrow (e.g., deep learning) domain. Such measurement is crucial for many downstream tasks in natural language processing. To handle long-tail terms, we build a core-anchored semantic graph, which uses core terms with rich description information to semantically bridge the vast remaining fringe terms. To support a fine-grained domain without relying on a matching corpus for supervision, we develop hierarchical core-fringe learning, which learns core and fringe terms jointly in a semi-supervised manner, contextualized in the hierarchy of the domain. To reduce expensive human effort, we employ automatic annotation and hierarchical positive-unlabeled learning. Our approach applies to big or small domains, covers head or tail terms, and requires little human effort. Extensive experiments demonstrate that our methods outperform strong baselines and even surpass professional human performance.
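To make the core-fringe idea concrete, here is a minimal, illustrative sketch (not the paper's actual model): core terms carry seed relevance labels, and fringe terms receive scores propagated from their core neighbors over the semantic graph. The graph construction, edge weighting, hierarchical learning, and PU-learning components of the paper are all omitted; the term names and the simple neighbor-averaging rule below are assumptions for illustration only.

```python
# Toy label propagation on a hypothetical core-fringe graph.
# Core terms keep their seed relevance; fringe terms take the
# average score of their neighbors, iterated to convergence.

def propagate(graph, core_scores, iterations=10):
    """Spread relevance scores from labeled core terms to fringe terms."""
    scores = {term: core_scores.get(term, 0.0) for term in graph}
    for _ in range(iterations):
        updated = {}
        for term, neighbors in graph.items():
            if term in core_scores:
                # Core terms are anchored to their seed labels.
                updated[term] = core_scores[term]
            elif neighbors:
                # Fringe terms: average over neighbor scores.
                updated[term] = sum(scores[n] for n in neighbors) / len(neighbors)
            else:
                updated[term] = 0.0
        scores = updated
    return scores

# Hypothetical toy domain: two core terms with known relevance to
# "deep learning", one fringe term linked to a relevant core term.
graph = {
    "neural network": ["backpropagation"],
    "opera": [],
    "backpropagation": ["neural network"],
}
seed = {"neural network": 1.0, "opera": 0.0}
result = propagate(graph, seed)
# The fringe term inherits relevance from its core neighbor.
```

In this toy run, "backpropagation" ends up with score 1.0 because its only neighbor is a relevant core term, illustrating how long-tail terms can be scored without their own description text.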

Citation (APA)

Huang, J., Chang, K. C. C., Xiong, J., & Hwu, W. M. (2021). Measuring fine-grained domain relevance of terms: A hierarchical core-fringe approach. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 3641–3651). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.282
