Transformer-Based Neural Network for Answer Selection in Question Answering

103 Citations · 103 Mendeley Readers

This article is free to access.

Abstract

Answer selection is a crucial subtask in question answering (QA) systems. Conventional approaches to this task mainly concentrate on developing linguistic tools, which are limited in both performance and practicability. With the tremendous success of deep learning in natural language processing, answer selection approaches based on deep learning have been investigated extensively. However, the traditional neural networks employed in existing answer selection models, i.e., the recurrent neural network or the convolutional neural network, typically struggle to capture global text information due to their operating mechanisms. The recent Transformer network is considered good at extracting global information by relying solely on the self-attention mechanism. Thus, in this paper, we design a Transformer-based neural network for answer selection, where a bidirectional long short-term memory (BiLSTM) network is placed behind the Transformer to acquire both global information and sequential features of the question or answer sentence. Unlike the original Transformer, our Transformer-based network focuses on sentence embedding rather than the seq2seq task. In addition, rather than using the positional encoding as the universal Transformer does, we employ the BiLSTM to incorporate sequential features. Furthermore, we apply three aggregation strategies to generate sentence embeddings for the question and the answer, i.e., weighted mean pooling, max pooling, and attentive pooling, leading to three corresponding Transformer-based models: QA-TFWP, QA-TFMP, and QA-TFAP, respectively. Finally, we evaluate our proposals on the popular QA dataset WikiQA. The experimental results demonstrate that our proposed Transformer-based answer selection models outperform several competitive baselines. Specifically, our best model outperforms the state-of-the-art baseline by up to 2.37%, 2.83%, and 3.79% in terms of MAP, MRR, and accuracy, respectively.
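To make the described architecture concrete, the following PyTorch sketch shows one way such an encoder could be assembled: Transformer encoder layers for global context, a BiLSTM for sequential features, and a choice of max, weighted mean, or attentive pooling to produce the sentence embedding. This is an illustrative sketch, not the authors' implementation; the class name, hyperparameters, and pooling details are assumptions, and the attentive pooling shown here is a simplified self-attentive variant (the paper's attentive pooling operates jointly on the question-answer pair).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TransformerBiLSTMEncoder(nn.Module):
    """Sentence encoder sketch: Transformer layers for global context,
    a BiLSTM for sequential features, then one of three pooling strategies."""

    def __init__(self, vocab_size, d_model=300, nhead=6, num_layers=2,
                 lstm_hidden=150, pooling="max"):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model, padding_idx=0)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=num_layers)
        # The BiLSTM supplies word-order information in place of positional encoding.
        self.bilstm = nn.LSTM(d_model, lstm_hidden, batch_first=True,
                              bidirectional=True)
        self.pooling = pooling
        out_dim = 2 * lstm_hidden
        if pooling == "weighted":                    # QA-TFWP (weighted mean pooling)
            self.weight_proj = nn.Linear(out_dim, 1)
        elif pooling == "attentive":                 # QA-TFAP (simplified self-attentive form)
            self.att_query = nn.Parameter(torch.randn(out_dim))

    def forward(self, tokens):                       # tokens: (batch, seq_len) ids, 0 = padding
        pad_mask = tokens.eq(0)
        h = self.embed(tokens)
        h = self.transformer(h, src_key_padding_mask=pad_mask)   # global context
        h, _ = self.bilstm(h)                                     # sequential features
        if self.pooling == "max":                                 # QA-TFMP
            return h.masked_fill(pad_mask.unsqueeze(-1), float("-inf")).max(dim=1).values
        if self.pooling == "weighted":                            # QA-TFWP
            scores = self.weight_proj(h).squeeze(-1).masked_fill(pad_mask, float("-inf"))
        else:                                                     # QA-TFAP (simplified)
            scores = (h @ self.att_query).masked_fill(pad_mask, float("-inf"))
        weights = F.softmax(scores, dim=-1)                       # (batch, seq_len)
        return torch.bmm(weights.unsqueeze(1), h).squeeze(1)      # weighted sum of states


# Usage sketch: rank candidate answers by cosine similarity to the question.
enc = TransformerBiLSTMEncoder(vocab_size=30000, pooling="max")
q = torch.randint(1, 30000, (4, 20))   # 4 questions, 20 tokens each
a = torch.randint(1, 30000, (4, 40))   # 4 candidate answers, 40 tokens each
scores = F.cosine_similarity(enc(q), enc(a), dim=-1)
```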

Cite (APA)

Shao, T., Guo, Y., Chen, H., & Hao, Z. (2019). Transformer-Based Neural Network for Answer Selection in Question Answering. IEEE Access, 7, 26146–26156. https://doi.org/10.1109/ACCESS.2019.2900753
