Machine Comprehension (MC) tests a machine's ability to answer questions about a given passage, which requires modeling complex interactions between the passage and the question. Recently, attention mechanisms have been successfully extended to machine comprehension. In this work, the question and the passage are encoded with BERT language embeddings to better capture their representations at a semantic level. Attention and fusion are then conducted between the question and the passage both horizontally, within a layer, and vertically, across layers, at different levels of granularity. Our experiments were performed on the datasets provided in the MRQA 2019 shared task.
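To make the described pipeline concrete, below is a minimal sketch in PyTorch using the HuggingFace `transformers` library. It is not the authors' exact model: the layer indices, the single-hop attention, and the concatenation fusion are illustrative assumptions standing in for the paper's hierarchical attention-and-fusion scheme over per-layer BERT features.

```python
# A minimal sketch (not the authors' exact model): encode the question and the
# passage with BERT, keep every layer's hidden states, and fuse them with a
# simple question-to-passage attention applied at two layers of granularity.
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def encode(text):
    """Return all hidden layers for `text`: a tuple of (1, seq_len, hidden)."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.hidden_states  # embedding layer + one tensor per BERT layer

def attend(question_h, passage_h):
    """Soft-align each passage token to the question tokens (one attention hop)."""
    # scores[i, j]: similarity of passage token i to question token j
    scores = torch.matmul(passage_h, question_h.transpose(-1, -2))
    weights = torch.softmax(scores, dim=-1)
    # question-aware passage representation
    return torch.matmul(weights, question_h)

q_layers = encode("Where was BERT introduced?")
p_layers = encode("BERT was introduced by researchers at Google AI Language.")

# "Vertical" hierarchy: attend at a lower and an upper BERT layer (the indices
# 4 and 12 are illustrative), then fuse by concatenating hidden dimensions.
fused = torch.cat([attend(q_layers[k], p_layers[k]) for k in (4, 12)], dim=-1)
print(fused.shape)  # (1, passage_len, 2 * hidden_size)
```

A span-prediction head for extractive QA would then score start and end positions over `fused`; that head, and the exact fusion operator, are where the paper's hierarchical design would replace the simple concatenation shown here.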
Osama, R., El-Makky, N., & Torki, M. (2019). Question answering using hierarchical attention on top of BERT features. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering (MRQA@EMNLP 2019) (pp. 191–195). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-5825