A hybrid network model for Tibetan question answering

Abstract

Research on question answering (QA) with deep learning methods is currently a hot topic in natural language processing. Most of this research has focused on English or Chinese, for which large-scale open corpora such as WikiQA or DoubanQA are available. However, applying deep learning methods to QA for low-resource languages such as Tibetan remains a challenge. In this paper, we propose a hybrid network model for Tibetan QA, which combines a convolutional neural network (CNN) with a long short-term memory (LSTM) network to extract effective features from small-scale corpora. Meanwhile, because Tibetan has strong grammatical rules, we use a language model to decode the output of the LSTM layer, which makes the answers more accurate and fluent. In addition, we add batch normalization to accelerate deep network training and prevent overfitting. Finally, the experiments show that the ACC@1 value of the proposed model in Tibetan QA is 126.2% higher than that of the baseline model.
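The sketch below is not the authors' implementation; it is a minimal illustration of how a hybrid CNN + LSTM text encoder with batch normalization, as described in the abstract, could be assembled in PyTorch. All layer sizes, the vocabulary size, and the cosine-similarity answer scoring are illustrative assumptions, and the language-model decoding step is omitted.

```python
# Minimal sketch (assumed architecture, not the paper's code) of a CNN + LSTM
# sentence encoder with batch normalization for QA answer ranking.
import torch
import torch.nn as nn


class HybridCnnLstmEncoder(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, conv_channels=128,
                 kernel_size=3, lstm_hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Convolution extracts local n-gram features from the embedded tokens.
        self.conv = nn.Conv1d(embed_dim, conv_channels, kernel_size,
                              padding=kernel_size // 2)
        # Batch normalization to speed up training and reduce overfitting.
        self.bn = nn.BatchNorm1d(conv_channels)
        self.relu = nn.ReLU()
        # LSTM captures longer-range dependencies over the convolved features.
        self.lstm = nn.LSTM(conv_channels, lstm_hidden, batch_first=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len)
        x = self.embed(token_ids)             # (batch, seq_len, embed_dim)
        x = x.transpose(1, 2)                 # (batch, embed_dim, seq_len) for Conv1d
        x = self.relu(self.bn(self.conv(x)))  # (batch, conv_channels, seq_len)
        x = x.transpose(1, 2)                 # (batch, seq_len, conv_channels)
        _, (h_n, _) = self.lstm(x)
        return h_n[-1]                        # final hidden state as sentence vector


# Toy usage: score a question against a candidate answer by cosine similarity.
if __name__ == "__main__":
    encoder = HybridCnnLstmEncoder()
    question = torch.randint(1, 20000, (2, 16))  # placeholder token ids
    answer = torch.randint(1, 20000, (2, 20))
    q_vec, a_vec = encoder(question), encoder(answer)
    score = torch.nn.functional.cosine_similarity(q_vec, a_vec)
    print(score.shape)  # torch.Size([2])
```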

Cite

APA

Sun, Y., & Xia, T. (2019). A hybrid network model for Tibetan question answering. IEEE Access, 7, 52769–52777. https://doi.org/10.1109/ACCESS.2019.2911320
