A Novel Perspective to Look At Attention: Bi-level Attention-based Explainable Topic Modeling for News Classification

Citations: 5 · Mendeley readers: 42

Abstract

Many recent deep learning-based solutions adopt the attention mechanism across a variety of NLP tasks. However, the inherent characteristics of deep learning models, combined with the flexibility of the attention mechanism, increase model complexity and thus create challenges for model explainability. To address this challenge, we propose a novel, practical framework that uses a two-tier attention architecture to decouple explanation from the decision-making process, and we apply it to a news article classification task. Experiments on two large-scale news corpora demonstrate that the proposed model achieves performance competitive with many state-of-the-art alternatives, and we illustrate its suitability from an explainability perspective. We release the source code here.
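The abstract describes the two-tier attention architecture only at a high level. As a rough, hedged illustration of what such a bi-level attention pipeline can look like, the NumPy sketch below pools word embeddings into topic representations with a first attention layer and then pools topics into a document vector with a second attention layer, whose weights can be inspected as per-document topic importance. All dimensions, the dot-product scoring, and the variable names are assumptions for illustration only; this is not the authors' exact model.

```python
# Minimal sketch of a generic two-tier (bi-level) attention pooling scheme.
# Illustrative only: dimensions, scoring function, and parameters are assumed,
# not taken from the paper's implementation.
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

rng = np.random.default_rng(0)
num_words, num_topics, emb_dim, num_classes = 50, 8, 64, 4

# Word embeddings for one document (stand-in for pre-trained inputs).
words = rng.normal(size=(num_words, emb_dim))

# Tier 1: each topic attends over the words to form a topic representation.
topic_queries = rng.normal(size=(num_topics, emb_dim))       # learnable in practice
word_scores = topic_queries @ words.T / np.sqrt(emb_dim)     # (topics, words)
word_attn = softmax(word_scores, axis=-1)                    # per-topic word weights
topic_reprs = word_attn @ words                               # (topics, emb_dim)

# Tier 2: a document-level query attends over the topic representations.
doc_query = rng.normal(size=(emb_dim,))                       # learnable in practice
topic_scores = topic_reprs @ doc_query / np.sqrt(emb_dim)    # (topics,)
topic_attn = softmax(topic_scores)                            # per-document topic weights
doc_repr = topic_attn @ topic_reprs                           # (emb_dim,)

# Final linear classifier over the document representation.
W = rng.normal(size=(emb_dim, num_classes))
logits = doc_repr @ W
print("topic weights:", np.round(topic_attn, 3))
print("predicted class:", int(logits.argmax()))
```

The point of the separation is that the second-tier weights (`topic_attn`) give a compact, human-readable explanation of the classification decision, while the first tier absorbs most of the representational complexity.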

Cite (APA)

Liu, D., Greene, D., & Dong, R. (2022). A Novel Perspective to Look At Attention: Bi-level Attention-based Explainable Topic Modeling for News Classification. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (pp. 2280–2290). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-acl.178
