We present BlockBERT, a lightweight and efficient BERT model for better modeling long-distance dependencies. Our model extends BERT by introducing sparse block structures into the attention matrix, which reduces both memory consumption and training/inference time and enables attention heads to capture either short- or long-range contextual information. We conduct experiments on language model pre-training and several benchmark question-answering datasets with various paragraph lengths. BlockBERT uses 18.7-36.1% less memory and 12.0-25.1% less time to learn the model. During testing, BlockBERT saves 27.8% inference time while achieving prediction accuracy comparable to, and sometimes better than, RoBERTa, an advanced BERT-based model.
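To make the idea of sparse block structures in the attention matrix concrete, below is a minimal NumPy sketch of blockwise masked attention: the sequence is split into blocks and each query block attends to a single key block chosen by a permutation, so the identity permutation yields short-range (local) heads while a shifted permutation yields long-range heads. The function names, toy sizes, and masking details here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def block_mask(seq_len: int, num_blocks: int, perm: list) -> np.ndarray:
    """Boolean mask: query block b may attend only to key block perm[b]."""
    assert seq_len % num_blocks == 0
    blk = seq_len // num_blocks
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for b, pb in enumerate(perm):
        mask[b * blk:(b + 1) * blk, pb * blk:(pb + 1) * blk] = True
    return mask

def blockwise_attention(Q, K, V, mask):
    """Scaled dot-product attention restricted by the block mask."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)
    scores = np.where(mask, scores, -1e9)  # suppress out-of-block pairs
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V

# Toy usage: 8 tokens, 2 blocks. The identity permutation [0, 1] gives a
# local head; the shifted permutation [1, 0] gives a long-range head.
rng = np.random.default_rng(0)
Q, K, V = (rng.standard_normal((8, 4)) for _ in range(3))
local_out = blockwise_attention(Q, K, V, block_mask(8, 2, [0, 1]))
long_out = blockwise_attention(Q, K, V, block_mask(8, 2, [1, 0]))
```

Because only one key block per query block ever contributes, the attention mask is block-diagonal up to a permutation, which is what allows the memory and compute savings reported above.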
CITATION STYLE
Qiu, J., Ma, H., Levy, O., Yih, W. T., Wang, S., & Tang, J. (2020). Blockwise self-attention for long document understanding. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 2555–2565). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.findings-emnlp.232