Self Supervised Bert for Legal Text Classification

Abstract

Critical BERT-based text classification tasks, such as legal text classification, require large amounts of accurately labeled data. Legal text classification faces two non-trivial problems: labeling legal data is a sensitive process that can only be carried out by skilled professionals, and legal text is prone to privacy issues, so not all of the data can be made available in the public domain. This limits the diversity of the textual data, and to account for this data paucity we propose a self-supervision approach for training Legal-BERT classifiers. We exploit the BERT text classifier's knowledge of the class boundaries and perform gradient ascent with respect to the class logits, generating synthetic latent texts through activation maximization. The main advantages over existing state-of-the-art methods are that our model is easy to train; requires little data, instead using the synthesized data as fake samples; and has lower variance, which helps it generate texts with good sample quality and diversity. We show the efficacy of the proposed method on the ECHR Violation (Multi-Label) Dataset and the Overruling Task Dataset.
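To make the core idea concrete, below is a minimal sketch of activation maximization against a BERT classifier, assuming a Hugging Face `transformers` model: a latent embedding sequence is optimized by gradient ascent on a target class logit to produce a synthetic "latent text" sample. The checkpoint name, sequence length, step count, and learning rate are illustrative assumptions, not the authors' exact setup.

```python
import torch
from transformers import AutoModelForSequenceClassification

# Illustrative classifier; the paper uses a Legal-BERT classifier.
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
model.eval()

seq_len, hidden = 64, model.config.hidden_size
target_class = 1

# Trainable latent embeddings stand in for token embeddings.
latent = torch.randn(1, seq_len, hidden, requires_grad=True)
optimizer = torch.optim.Adam([latent], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    logits = model(inputs_embeds=latent).logits  # shape (1, num_labels)
    # Gradient ascent on the target class logit == minimize its negative.
    loss = -logits[0, target_class]
    loss.backward()
    optimizer.step()

# `latent` now activates the classifier strongly for `target_class` and can
# serve as a synthetic (fake) sample in the self-supervised training loop.
```

Because the optimization happens in embedding space rather than over discrete tokens, the synthesized samples remain differentiable, which is what makes plain gradient ascent applicable here.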

Citation (APA)

Pal, A., Rajanala, S., Phan, R. C. W., & Wong, K. (2023). Self Supervised Bert for Legal Text Classification. In ICASSP, IEEE International Conference on Acoustics, Speech and Signal Processing - Proceedings (Vol. 2023-June). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/ICASSP49357.2023.10095308
