Towards Cross-Domain Transferability of Text Generation Models for Legal Text

Abstract

Legalese is often filled with verbose, domain-specific jargon, which can make legal text challenging for non-experts to understand and use. Succinct summaries of legal documents generally make them easier to comprehend. However, obtaining labeled data for every domain of legal text is challenging, which makes cross-domain transferability of text generation models for legal text an important area of research. In this paper, we explore the ability of existing state-of-the-art T5- and BART-based summarization models to transfer across legal domains. We leverage publicly available datasets across four domains for this task, one of which is a new resource for summarizing privacy policies that we curate and release for academic research. Our experiments demonstrate the low cross-domain transferability of these models, while also highlighting the benefits of combining different domains. Further, we compare the effectiveness of standard metrics for this task and illustrate the vast differences in their performance.
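
Illustrative example

As a rough sketch of the kind of pipeline the paper evaluates, the snippet below summarizes a legal passage with a pretrained BART checkpoint and scores the output with ROUGE, one of the standard summarization metrics the paper compares. This is a minimal illustration under stated assumptions, not the authors' setup: the checkpoint (facebook/bart-large-cnn), the input passage, the reference summary, and the generation parameters are all invented for demonstration.

# Minimal sketch: abstractive summarization with a pretrained BART
# checkpoint, standing in for the T5/BART summarizers the paper studies.
# Requires: pip install transformers torch rouge-score
from transformers import pipeline
from rouge_score import rouge_scorer

# Checkpoint choice is an assumption, not the paper's configuration.
summarizer = pipeline("summarization", model="facebook/bart-large-cnn")

# Invented legal passage, purely for illustration.
legal_text = (
    "The licensee shall indemnify, defend, and hold harmless the licensor "
    "from and against any and all claims, damages, liabilities, costs, and "
    "expenses arising out of or relating to the licensee's use of the "
    "software, except to the extent caused by the licensor's negligence."
)

# Generate a short summary; length limits are in tokens.
result = summarizer(legal_text, max_length=60, min_length=10, do_sample=False)
generated = result[0]["summary_text"]
print(generated)

# Score against a hand-written reference summary (also invented) using
# ROUGE-1 and ROUGE-L F-measure.
scorer = rouge_scorer.RougeScorer(["rouge1", "rougeL"], use_stemmer=True)
reference = (
    "The licensee must protect the licensor from claims arising from use "
    "of the software, unless the licensor was negligent."
)
scores = scorer.score(reference, generated)
print(scores["rouge1"].fmeasure, scores["rougeL"].fmeasure)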

Citation (APA)
Kumar, V. B., Bhattacharjee, K., & Gangadharaiah, R. (2022). Towards Cross-Domain Transferability of Text Generation Models for Legal Text. In NLLP 2022 - Natural Legal Language Processing Workshop 2022, Proceedings of the Workshop (pp. 111–118). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.nllp-1.9
