Adversarial Training for Machine Reading Comprehension with Virtual Embeddings

4 citations · 53 Mendeley readers

Abstract

Adversarial training (AT) has proven effective as a regularization method across a variety of tasks. Although AT has been applied successfully to some NLP tasks, the distinguishing characteristics of these tasks have not been exploited. In this paper, we apply AT to machine reading comprehension (MRC) tasks. Furthermore, we adapt AT to MRC by proposing a novel adversarial training method, PQAT, which perturbs the embedding matrix rather than individual word vectors. To differentiate the roles of passages and questions, PQAT uses additional virtual P/Q-embedding matrices to gather the global perturbations of words from passages and questions separately. We evaluate the method on a wide range of MRC tasks, including span-based extractive RC and multiple-choice RC. The results show that adversarial training is universally effective and that PQAT further improves performance.
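
To make the high-level description above concrete, here is a minimal PyTorch-style sketch of the general idea: adversarial perturbations are computed on whole embedding matrices rather than on the word vectors of a single batch, and two zero-initialized "virtual" P/Q matrices gather the perturbations of passage tokens and question tokens separately. Everything here is an illustrative assumption based only on the abstract: the class and function names (PQATWrapper, pqat_step), the p_mask/q_mask inputs, the epsilon hyperparameter, and the single-step FGM-style perturbation are not taken from the authors' implementation.

import torch
import torch.nn as nn


class PQATWrapper(nn.Module):
    """Word embedding plus two zero-initialized virtual P/Q embedding matrices.

    The virtual matrices hold only adversarial perturbations: one collects
    perturbations for words appearing in passages, the other for words
    appearing in questions.
    """

    def __init__(self, word_emb: nn.Embedding):
        super().__init__()
        self.word_emb = word_emb
        self.p_emb = nn.Embedding(word_emb.num_embeddings, word_emb.embedding_dim)
        self.q_emb = nn.Embedding(word_emb.num_embeddings, word_emb.embedding_dim)
        nn.init.zeros_(self.p_emb.weight)
        nn.init.zeros_(self.q_emb.weight)

    def forward(self, input_ids, p_mask, q_mask):
        # p_mask / q_mask mark which positions belong to the passage / question.
        x = self.word_emb(input_ids)
        x = x + self.p_emb(input_ids) * p_mask.unsqueeze(-1).float()
        x = x + self.q_emb(input_ids) * q_mask.unsqueeze(-1).float()
        return x


def pqat_step(forward_fn, emb, optimizer, epsilon=1.0):
    """One training step with an FGM-style perturbation of embedding matrices.

    forward_fn() runs the model on the current batch (routing token
    embeddings through `emb`) and returns the task loss.
    """
    optimizer.zero_grad()
    clean_loss = forward_fn()
    clean_loss.backward()                 # gradients land on the embedding matrices

    perturbed = []
    for table in (emb.word_emb, emb.p_emb, emb.q_emb):
        grad = table.weight.grad
        if grad is None:
            continue
        delta = epsilon * grad / (grad.norm() + 1e-12)
        table.weight.data.add_(delta)     # perturb the whole matrix, not one batch
        perturbed.append((table, delta))

    adv_loss = forward_fn()               # forward pass on perturbed embeddings
    adv_loss.backward()                   # accumulates gradients for the model

    for table, delta in perturbed:
        table.weight.data.sub_(delta)     # restore the original matrices

    optimizer.step()                      # update model weights only
    return clean_loss.item(), adv_loss.item()

The optimizer is assumed to exclude p_emb and q_emb, so the virtual matrices are never trained directly; how (or whether) their perturbations persist across training steps in the published PQAT method is not stated in the abstract, so the loop above should be read as a sketch rather than a reproduction.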

Citation (APA)

Yang, Z., Cui, Y., Si, C., Che, W., Liu, T., Wang, S., & Hu, G. (2021). Adversarial Training for Machine Reading Comprehension with Virtual Embeddings. In Proceedings of *SEM 2021: The Tenth Joint Conference on Lexical and Computational Semantics (pp. 308–313). Association for Computational Linguistics. https://doi.org/10.18653/v1/2021.starsem-1.30

Readers over time (2021–2025): chart not reproduced.

Readers' Seniority

PhD / Post grad / Masters / Doc: 14 (70%)
Researcher: 4 (20%)
Professor / Associate Prof.: 1 (5%)
Lecturer / Post doc: 1 (5%)

Readers' Discipline

Computer Science: 19 (76%)
Linguistics: 4 (16%)
Neuroscience: 1 (4%)
Social Sciences: 1 (4%)
