Seeking Clozure: Robust Hypernym Extraction from BERT with Anchored Prompts


Abstract

The automatic extraction of hypernym knowledge from large language models like BERT is an open problem, and it is unclear whether methods fail due to a lack of knowledge in the model or due to shortcomings of the extraction methods themselves. In particular, methods fail on challenging cases involving rare or abstract concepts, and perform inconsistently under paraphrased prompts. In this study, we revisit the long line of work on pattern-based hypernym extraction and use it as a diagnostic tool to thoroughly examine the hypernymy knowledge encoded in BERT and the limitations of hypernym extraction methods. We propose to construct prompts from established pattern structures: definitional (X is a Y); lexico-syntactic (Y such as X); and their anchored versions (Y such as X or Z). We devise an automatic method for anchor prediction, and compare the different patterns on: (i) their effectiveness for hypernym retrieval from BERT across six English data sets; (ii) challenge sets of rare and abstract concepts; and (iii) consistency under paraphrasing. We show that anchoring is particularly useful for abstract concepts and for enhancing consistency across paraphrases, demonstrating how established methods in the field can inform prompt engineering.
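The three pattern structures named in the abstract can be illustrated as cloze templates with a `[MASK]` slot for the hypernym. The exact template wordings below are illustrative assumptions, not the paper's verified prompts; the anchor term Z would in practice come from the paper's automatic anchor-prediction method.

```python
def build_prompts(hyponym, anchor=None):
    """Build cloze prompts for hypernym extraction from a masked LM.

    Implements the three pattern types from the abstract:
    definitional (X is a Y), lexico-syntactic (Y such as X),
    and anchored (Y such as X or Z). [MASK] marks the hypernym
    slot to be filled by the model. Template wordings are
    illustrative assumptions.
    """
    prompts = {
        "definitional": f"A {hyponym} is a [MASK].",
        "lexico-syntactic": f"[MASK]s such as {hyponym}.",
    }
    if anchor is not None:
        # Anchored variant: a co-hyponym Z steers the model toward
        # the shared hypernym of X and Z.
        prompts["anchored"] = f"[MASK]s such as {hyponym} or {anchor}."
    return prompts
```

Each prompt would then be scored with a masked language model (e.g. a fill-mask head over BERT), ranking vocabulary candidates for the `[MASK]` position; `build_prompts("robin", anchor="sparrow")` yields the anchored prompt `"[MASK]s such as robin or sparrow."`.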

Citation (APA)

Cohn, T., Liu, C., & Frermann, L. (2023). Seeking Clozure: Robust Hypernym Extraction from BERT with Anchored Prompts. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 193–206). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.starsem-1.18
