Analytical reasoning is an essential yet challenging task that requires a system to analyze a scenario involving a particular set of circumstances and reason over it to draw conclusions. Current neural models, which reason only implicitly, struggle to solve this task. In this paper, we study analytical reasoning over text and collect a new dataset of questions from the Law School Admission Test (LSAT) administered from 1991 to 2016. We analyze what knowledge understanding and reasoning abilities are required to do well on this task, and present an approach dubbed ARM. ARM extracts knowledge, such as participants and facts, from the context; this knowledge is fed into an inference engine that deduces the legitimate solutions, from which conclusions are drawn. In our experiments, we find that widely used pre-trained models struggle with this task, performing close to random guessing, while ARM outperforms them significantly. Moreover, ARM's reasoning is explicit and therefore more interpretable.
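To make the abstract's pipeline concrete, here is a minimal toy sketch of the deduction step: participants and facts (constraints) extracted from a logic-game-style puzzle are handed to a brute-force "inference engine" that enumerates every legitimate assignment. This is an illustration of the general idea only, not the paper's actual ARM implementation; the puzzle, constraint predicates, and function names are invented for the example.

```python
from itertools import permutations

# Toy puzzle (hypothetical): A, B, C are scheduled on days 1-3;
# "A is before B" and "C is not on day 2" are the extracted facts.
participants = ["A", "B", "C"]

# Each fact is a predicate over a mapping {participant: day}.
facts = [
    lambda pos: pos["A"] < pos["B"],   # A is scheduled before B
    lambda pos: pos["C"] != 2,         # C is not on day 2
]

def legitimate_solutions(participants, facts):
    """Return all assignments of days 1..n that satisfy every fact."""
    n = len(participants)
    solutions = []
    for order in permutations(range(1, n + 1)):
        pos = dict(zip(participants, order))
        if all(fact(pos) for fact in facts):
            solutions.append(pos)
    return solutions

solutions = legitimate_solutions(participants, facts)
print(solutions)

# A conclusion such as "A must be on day 1" holds only if it is
# true in every legitimate solution.
print(all(pos["A"] == 1 for pos in solutions))
```

Answering a question then reduces to checking candidate conclusions against the set of legitimate solutions, which is what gives this style of approach its explicit, interpretable reasoning trace.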
Zhong, W., Wang, S., Tang, D., Xu, Z., Guo, D., Chen, Y., … Duan, N. (2022). Analytical Reasoning of Text. In Findings of the Association for Computational Linguistics: NAACL 2022 - Findings (pp. 2306–2319). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-naacl.177