A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification

Abstract

In recent years, large language models (LLMs) have achieved strong performance on benchmark tasks, especially in zero- or few-shot settings. However, these benchmarks often do not adequately address the challenges posed in the real world, such as hierarchical classification. To address this challenge, we propose refactoring conventional tasks on hierarchical datasets into a more indicative long-tail prediction task. We observe that LLMs are more prone to failure in these cases. To address these limitations, we propose the use of entailment-contradiction prediction in conjunction with LLMs, which allows for strong performance in a strict zero-shot setting. Importantly, our method does not require any parameter updates, a resource-intensive process, and achieves strong performance across multiple datasets.
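To give a concrete picture of the kind of approach the abstract describes, the following is a minimal sketch of entailment-based strict zero-shot classification over a label hierarchy. It is not the authors' implementation: the NLI model (facebook/bart-large-mnli), the toy label hierarchy, and the top-down traversal are all assumptions chosen for illustration. It only shows how an off-the-shelf entailment model can score labels without any parameter updates.

```
# Minimal sketch: zero-shot hierarchical classification via an entailment (NLI) model.
# Not the paper's released code; the model, labels, and traversal strategy are
# illustrative assumptions.
from transformers import pipeline

# Off-the-shelf NLI model used as a zero-shot classifier (no fine-tuning / parameter updates).
classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

# Hypothetical two-level label hierarchy (parent -> children).
hierarchy = {
    "science": ["physics", "biology"],
    "sports": ["soccer", "tennis"],
}

def classify_hierarchical(text: str) -> tuple[str, str]:
    """Pick the best parent label first, then the best child under that parent."""
    # Level 1: score the text against the parent labels via entailment.
    parent_result = classifier(text, candidate_labels=list(hierarchy.keys()))
    parent = parent_result["labels"][0]  # labels are returned sorted by score

    # Level 2: restrict candidates to the children of the chosen parent,
    # so rare (long-tail) leaf labels remain reachable without any training.
    child_result = classifier(text, candidate_labels=hierarchy[parent])
    child = child_result["labels"][0]
    return parent, child

if __name__ == "__main__":
    print(classify_hierarchical("The team won the match with a last-minute goal."))
```

In this sketch the entailment model scores each candidate label as a hypothesis against the input text, and the hierarchy is traversed top-down one level at a time; the paper's actual framework combines such entailment-contradiction predictions with LLMs rather than replacing them.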

Cite

APA

Bhambhoria, R., Chen, L., & Zhu, X. (2023). A Simple and Effective Framework for Strict Zero-Shot Hierarchical Classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 1782–1792). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-short.152
