Self-Training using Rules of Grammar for Few-Shot NLU

Abstract

We tackle the problem of self-training networks for NLU in low-resource environments, where labeled data are scarce and unlabeled data are plentiful. Self-training is effective because it increases the amount of training data during training, but it becomes less effective in low-resource settings because the labels the teacher model predicts on unlabeled data are unreliable. Rules of grammar, which describe the grammatical structure of data, have been used in NLU to improve explainability. We propose using rules of grammar in self-training as a more reliable pseudo-labeling mechanism, especially when few labeled data are available. We design an effective algorithm that constructs and expands rules of grammar without human involvement, and we integrate the constructed rules into self-training as a pseudo-labeling mechanism. There are two possible scenarios regarding the data distribution: it is either unknown or known prior to training. We empirically demonstrate that our approach substantially outperforms state-of-the-art methods on three benchmark datasets in both scenarios.
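To make the pseudo-labeling mechanism concrete, the sketch below shows one way rule-based pseudo-labels might slot into a plain self-training loop for intent classification. The regex-style rules, the single-match heuristic in rule_label, and the train_model/expand_rules interfaces are illustrative assumptions for exposition only; the paper's automatic rule-construction algorithm is more involved.

```python
# A minimal sketch of self-training where rules, not the teacher model's
# raw predictions, supply pseudo-labels for the unlabeled pool.
# RULES, rule_label, train_model, and expand_rules are hypothetical.
import re
from typing import Callable, List, Optional, Tuple

# Each "rule of grammar" is modeled here as a regex pattern -> intent label.
RULES = {
    r"\b(play|put on)\b.*\b(song|music)\b": "PlayMusic",
    r"\b(book|reserve)\b.*\b(table|restaurant)\b": "BookRestaurant",
    r"\b(weather|forecast)\b": "GetWeather",
}

def rule_label(utterance: str) -> Optional[str]:
    """Pseudo-label an utterance only if exactly one rule fires."""
    hits = {label for pattern, label in RULES.items()
            if re.search(pattern, utterance, re.IGNORECASE)}
    return hits.pop() if len(hits) == 1 else None

def self_train(train_model: Callable,
               labeled: List[Tuple[str, str]],
               unlabeled: List[str],
               rounds: int = 3,
               expand_rules: Optional[Callable] = None):
    """Self-training loop with rules as the pseudo-labeling mechanism."""
    model = train_model(labeled)
    for _ in range(rounds):
        # Pseudo-label only the utterances a rule matches unambiguously.
        pseudo = [(x, y) for x in unlabeled
                  if (y := rule_label(x)) is not None]
        if not pseudo:                  # no rule fired on anything left
            break
        labeled = labeled + pseudo      # grow the training set
        unlabeled = [x for x in unlabeled if rule_label(x) is None]
        model = train_model(labeled)    # retrain the student model
        if expand_rules is not None:    # hypothetical hook: grow RULES
            expand_rules(pseudo)        # from the newly labeled data
    return model
```

Any classifier exposing a fit-on-pairs interface can play train_model here; the key design point the paper argues for is that the pseudo-labels come from the rules rather than from thresholded teacher predictions, which are unreliable when the teacher was trained on only a handful of examples.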

Citation (APA)
Hahn, J., Cheon, H., Han, K., Lee, C., Kim, J., & Han, Y. S. (2021). Self-training using rules of grammar for few-shot NLU. In Findings of the Association for Computational Linguistics: EMNLP 2021 (pp. 4576–4581). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.findings-emnlp.389
