Efficient Classification of Long Documents Using Transformers

40 citations · 82 Mendeley readers

Abstract

Several methods have been proposed for classifying long textual documents using Transformers. However, there is a lack of consensus on a benchmark that would enable a fair comparison among the different approaches. In this paper, we provide a comprehensive evaluation of their relative efficacy against various baselines and across diverse datasets, both in terms of accuracy and in terms of time and space overheads. Our datasets cover binary, multi-class, and multi-label classification tasks and represent the various ways information can be organized in a long text (e.g., information that is critical to the classification decision may appear at the beginning or toward the end of the document). Our results show that more complex models often fail to outperform simple baselines and yield inconsistent performance across datasets. These findings emphasize the need for future studies to consider comprehensive baselines, and datasets that better represent the task of long document classification, in order to develop robust models.
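
The simplest baseline in this literature, and the one that more complex long-document models are most often measured against, is a vanilla BERT classifier that sees only the first 512 tokens of each document. Below is a minimal sketch of that truncation baseline, assuming the HuggingFace transformers library; the model name, binary label count, and example document are illustrative assumptions, not the authors' released code.

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

# Vanilla BERT classifier; num_labels=2 assumes a binary task
# (adjust for multi-class or multi-label setups).
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)

document = "A long document whose decisive content may sit anywhere..."

# Truncate to BERT's 512-token limit: everything past that point is
# discarded, which is why documents whose key information appears
# toward the end are the hard case for this baseline.
inputs = tokenizer(document, truncation=True, max_length=512,
                   return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits
predicted_class = logits.argmax(dim=-1).item()

The abstract's central finding is that such simple baselines are often not outperformed by more complex models, which is what motivates the call for more comprehensive baselines and datasets.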

Citation (APA)

Park, H. H., Vyas, Y., & Shah, K. (2022). Efficient Classification of Long Documents Using Transformers. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 2, pp. 702–709). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.acl-short.79
