In recent years, the Sq-MRC task (machine reading comprehension for separate questions), in which the questioner poses an independent question each time, has developed rapidly. However, we often encounter situations in which the questioner poses a continuous series of questions. The corresponding task, called Cq-MRC (machine reading comprehension for continuous questions), has rarely been investigated and is nontrivial due to (1) the incomplete expression of subsequent questions, such as anaphora or subject ellipsis, and (2) the shortage of annotated samples for model training and performance evaluation. In this article, we explore this challenging task and propose a completion-then-prediction approach. To tackle the first issue, we identify the key entity in the first question and use it to complete the subsequent questions, thereby decomposing the task into a set of Sq-MRC tasks, which are then solved with a BERT-based model. To mitigate the shortage of annotated samples, we train a meta-learner on the Sq-MRC task, which is related to Cq-MRC and has abundant labeled data, and then perform few-shot learning by fine-tuning the model on our target task, viz. Cq-MRC, using the limited number of samples available. Extensive experiments on a multi-document Chinese machine reading comprehension dataset demonstrate the effectiveness and superiority of our method, which achieves 2.3% to 3.7% absolute improvements on the major metrics.
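To make the completion step concrete, the following is a minimal Python sketch of the idea, not the authors' implementation: the paper targets Chinese text and presumably uses learned components, whereas this toy works on English examples with hand-written heuristics. The names extract_key_entity and complete_question, the pronoun list, and the capitalization-based entity rule are all hypothetical stand-ins for the paper's key-entity identification and question completion.

```python
import re

# Assumed set of anaphoric pronouns; purely illustrative.
PRONOUNS = {"it", "he", "she", "they", "this", "that"}

def extract_key_entity(first_question: str) -> str:
    """Toy key-entity extractor: pick the longest run of capitalized tokens.
    The paper presumably uses a learned tagger; this is a placeholder rule."""
    spans = re.findall(r"(?:[A-Z][\w-]*)(?:\s+[A-Z][\w-]*)*", first_question)
    return max(spans, key=len) if spans else ""

def complete_question(question: str, key_entity: str) -> str:
    """Rewrite a follow-up question so it stands alone (Sq-MRC form)."""
    tokens = question.split()
    # Case 1: anaphora -- replace the first pronoun with the key entity.
    for i, tok in enumerate(tokens):
        if tok.lower().strip("?,.") in PRONOUNS:
            tokens[i] = key_entity
            return " ".join(tokens)
    # Case 2: subject ellipsis -- the question lacks a subject entirely,
    # so prepend the key entity.
    return f"{key_entity} {question}"

if __name__ == "__main__":
    first = "When was Marie Curie born?"
    followups = ["Where did she study?", "won which prizes?"]
    entity = extract_key_entity(first)  # -> "Marie Curie" under this toy rule
    for q in followups:
        # Each completed question can now be answered independently.
        print(complete_question(q, entity))
```

The point of the sketch is the decomposition itself: once every follow-up has been rewritten into a self-contained question, each one can be handed to an ordinary Sq-MRC model on its own.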
Yang, K., Zhang, X., & Chen, D. (2021). Exploring Machine Reading Comprehension for Continuous Questions via Subsequent Question Completion. IEEE Access, 9, 12622–12634. https://doi.org/10.1109/ACCESS.2021.3050490