The opacity of deep NLP models has motivated efforts to explain how these models make their predictions. Recent work has introduced hierarchical attribution explanations, which compute attribution scores for groups of text organized hierarchically in order to capture compositional semantics. Existing work on hierarchical attributions tends to restrict each text group to a continuous text span, a constraint we call the connecting rule. While such explanations are easy for humans to read, restricting the attribution unit to a continuous span can miss long-distance feature interactions that are important for reflecting model predictions. In this work, we introduce a novel strategy for capturing feature interactions and use it to build hierarchical explanations without the connecting rule. The proposed method can convert ubiquitous non-hierarchical explanations (e.g., LIME) into corresponding hierarchical versions. Experimental results show the effectiveness of our approach in building high-quality hierarchical explanations.
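The abstract does not spell out the detection procedure, so the following is only a minimal sketch of the general idea it describes: greedily merging token groups into a hierarchy based on a detected feature-interaction score, without requiring merged groups to be adjacent. All names here (`toy_model`, `occlusion_attr`, `interaction_score`, `build_hierarchy`) are illustrative assumptions, and the interaction measure used (non-additivity of occlusion effects) is a generic stand-in rather than the paper's actual detector or its LIME-based variant.

```python
from itertools import combinations

def toy_model(active_tokens):
    """Toy stand-in for a black-box predictor f(masked input) -> scalar.
    It contains a deliberate long-distance interaction: "not" flips "bad"."""
    weights = {"not": -2.0, "bad": -1.5, "good": 1.5, "movie": 0.1}
    score = sum(weights.get(t, 0.0) for t in active_tokens)
    if "not" in active_tokens and "bad" in active_tokens:
        score += 3.5  # negation interaction
    return score

def occlusion_attr(tokens, group, model):
    """Attribution of a token group = drop in score when the group is removed."""
    full = model(tokens)
    without = model([t for i, t in enumerate(tokens) if i not in group])
    return full - without

def interaction_score(tokens, g1, g2, model):
    """Non-additivity of two groups: joint effect minus the sum of individual effects."""
    joint = occlusion_attr(tokens, g1 | g2, model)
    return abs(joint
               - occlusion_attr(tokens, g1, model)
               - occlusion_attr(tokens, g2, model))

def build_hierarchy(tokens, model):
    """Greedy agglomerative grouping driven by interaction strength.
    Any two groups may merge, regardless of adjacency (no connecting rule)."""
    groups = [frozenset([i]) for i in range(len(tokens))]
    levels = [[(g, occlusion_attr(tokens, g, model)) for g in groups]]
    while len(groups) > 1:
        # Merge the pair with the strongest detected interaction.
        g1, g2 = max(combinations(groups, 2),
                     key=lambda p: interaction_score(tokens, p[0], p[1], model))
        groups = [g for g in groups if g not in (g1, g2)] + [g1 | g2]
        levels.append([(g, occlusion_attr(tokens, g, model)) for g in groups])
    return levels

if __name__ == "__main__":
    sentence = ["not", "a", "bad", "movie"]
    for depth, level in enumerate(build_hierarchy(sentence, toy_model)):
        print(f"level {depth}:",
              [(tuple(sentence[i] for i in sorted(g)), round(a, 2))
               for g, a in level])
```

On this toy input the first merge joins the non-adjacent pair ("not", "bad"), which a contiguous-span constraint would not allow until a much coarser level of the hierarchy; that is the kind of long-distance interaction the abstract argues the connecting rule can lose.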