We consider Automated List Inspection (ALI), a content-based text recommendation system that assists auditors in matching relevant text passages from the notes of financial statements to specific legal requirements. ALI follows a ranking paradigm in which a fixed number of requirements is shown to the user for each textual passage. Despite impressive ranking performance, the user experience can still be improved by showing a dynamic number of recommendations. In addition, existing models rely on a feature-based language model that must be pre-trained on a large corpus of domain-specific data, and they cannot be trained end-to-end by jointly optimizing the language model parameters. In this work, we address these concerns with a multi-label classification approach that predicts dynamic requirement sequences. We base our model on pre-trained BERT, which allows us to fine-tune the whole model end-to-end and thereby avoids the need to train a separate language representation model. We conclude with a detailed evaluation of the proposed model on two German financial datasets.
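To make the approach concrete, the sketch below shows how a multi-label classifier on top of pre-trained BERT could be fine-tuned end-to-end and then thresholded at inference time to return a dynamic number of requirements per passage. It is an illustrative example only, not the authors' implementation: the checkpoint name, label count, decision threshold, and sample passage are assumptions.

```python
# Illustrative sketch: multi-label fine-tuning of BERT with Hugging Face
# Transformers. Checkpoint, label count, and threshold are assumptions.
import torch
from transformers import BertTokenizerFast, BertForSequenceClassification

NUM_REQUIREMENTS = 50   # hypothetical number of legal requirements (labels)
THRESHOLD = 0.5         # hypothetical per-label decision threshold

tokenizer = BertTokenizerFast.from_pretrained("bert-base-german-cased")
model = BertForSequenceClassification.from_pretrained(
    "bert-base-german-cased",
    num_labels=NUM_REQUIREMENTS,
    problem_type="multi_label_classification",  # uses BCE-with-logits loss
)

# One toy passage with two (hypothetical) relevant requirements.
passage = "Die Rückstellungen wurden nach vernünftiger kaufmännischer Beurteilung bewertet."
labels = torch.zeros(1, NUM_REQUIREMENTS)
labels[0, [3, 17]] = 1.0

inputs = tokenizer(passage, return_tensors="pt", truncation=True, padding=True)

# End-to-end fine-tuning step: the gradient flows through all BERT layers,
# so no separately trained language representation model is required.
model.train()
outputs = model(**inputs, labels=labels)
outputs.loss.backward()

# Inference: thresholding per-label probabilities yields a *dynamic* number
# of recommended requirements per passage, instead of a fixed top-k list.
model.eval()
with torch.no_grad():
    probs = torch.sigmoid(model(**inputs).logits)
predicted = (probs > THRESHOLD).nonzero(as_tuple=True)[1].tolist()
print(predicted)
```

In this setup the threshold, rather than a fixed cut-off rank, controls how many requirements are surfaced, which is one simple way to realize the dynamic recommendation behavior described above.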
Citation:
Ramamurthy, R., Pielka, M., Stenzel, R., Bauckhage, C., Sifa, R., Khameneh, T. D., … Loitz, R. (2021). ALiBERT: Improved automated list inspection (ALI) with BERT. In DocEng 2021 - Proceedings of the 2021 ACM Symposium on Document Engineering. Association for Computing Machinery. https://doi.org/10.1145/3469096.3474928