Consistency Training with Virtual Adversarial Discrete Perturbation

Citations: 8
Mendeley readers: 36

Abstract

Consistency training regularizes a model by enforcing the predictions for original and perturbed inputs to be similar. Previous studies have proposed various augmentation methods for the perturbation, but they are limited in that they are agnostic to the training model. As a result, the perturbed samples may contribute little to regularization because the model can classify them easily. In this context, we propose an augmentation method that adds the discrete noise incurring the highest divergence between predictions. This virtual adversarial discrete noise, obtained by replacing a small portion of tokens while preserving the original semantics as much as possible, efficiently pushes the training model's decision boundary. Experimental results show that our proposed method outperforms other consistency training baselines that use text editing, paraphrasing, or continuous noise on semi-supervised text classification tasks and a robustness benchmark.
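The core idea in the abstract can be illustrated with a toy sketch: search for a small discrete edit (a token replacement) that maximizes the divergence between the model's predictions on the original and perturbed inputs. The `ToyModel`, the candidate set, and the greedy exhaustive search below are all illustrative assumptions; the paper's actual method uses gradient information and semantic constraints on the replacements rather than brute-force search.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    # KL(p || q); both are strictly positive softmax outputs
    return float(np.sum(p * np.log(p / q)))

class ToyModel:
    """Toy bag-of-embeddings classifier standing in for the real network."""
    def __init__(self, emb, w, b):
        self.emb, self.w, self.b = emb, w, b
    def predict(self, token_ids):
        x = self.emb[list(token_ids)].mean(axis=0)  # average token embeddings
        return softmax(self.w @ x + self.b)

def virtual_adversarial_tokens(model, token_ids, candidates, n_replace=1):
    """Greedily replace up to n_replace tokens with candidates that maximize
    KL(p_orig || p_perturbed). A brute-force sketch of the 'highest
    divergence' criterion; the paper constrains edits to preserve semantics."""
    p_orig = model.predict(token_ids)
    ids = list(token_ids)
    best_div = 0.0
    for _ in range(n_replace):
        best = (best_div, None, None)
        for pos in range(len(ids)):
            for cand in candidates:
                if cand == ids[pos]:
                    continue
                trial = ids.copy()
                trial[pos] = cand
                d = kl(p_orig, model.predict(trial))
                if d > best[0]:
                    best = (d, pos, cand)
        if best[1] is None:  # no replacement increases the divergence
            break
        best_div, pos, cand = best
        ids[pos] = cand
    return ids, best_div
```

In full consistency training, the KL term between `p_orig` and the prediction on the returned adversarial token sequence would be added to the loss on unlabeled data, so the decision boundary is pushed away from these hardest nearby discrete inputs.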

Citation (APA)

Park, J., Kim, G., & Kang, J. (2022). Consistency Training with Virtual Adversarial Discrete Perturbation. In NAACL 2022 - 2022 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Proceedings of the Conference (pp. 5646–5656). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.naacl-main.414


Readers' Seniority

PhD / Post grad / Masters / Doc: 7 (64%)
Researcher: 3 (27%)
Lecturer / Post doc: 1 (9%)

Readers' Discipline

Computer Science: 12 (75%)
Linguistics: 2 (13%)
Neuroscience: 1 (6%)
Agricultural and Biological Sciences: 1 (6%)
