Abstract
Reasoning about events and their relations has attracted surging research interest, since it is regarded as an indispensable ability for various event-centric and commonsense reasoning tasks. However, these tasks often suffer from limited data availability due to the labor-intensive nature of their annotations. Consequently, recent studies have explored knowledge transfer approaches within a multi-task learning framework to address this challenge. Although these methods achieve acceptable results, such brute-force solutions struggle to effectively transfer event-relational knowledge because of the wide variety of inter-event relations (e.g., temporal, causal, conditional) and reasoning formulations (e.g., discriminative, abductive, ending prediction). To enhance knowledge transfer and enable zero-shot generalization across combinations of the two, in this work we propose a novel unified framework called UNIEVENT. Inspired by prefix-based multi-task learning, our approach organizes event-relational reasoning tasks into a coordinate system with multiple axes, representing inter-event relations and reasoning formulations. We then train a unified text-to-text generative model that utilizes coordinate-assigning prefixes for each task. Extensive experiments demonstrate that, by leveraging the adapted prefixes, our unified model achieves state-of-the-art or competitive performance on both zero-shot and supervised reasoning tasks.
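To make the coordinate-assigning prefix idea concrete, below is a minimal sketch (not the authors' released code) of how a prefix might be composed along two axes, inter-event relation and reasoning formulation, and prepended to a text-to-text model's input embeddings. All names here (RELATIONS, FORMULATIONS, MultiDimPrefix, the prefix length and hidden size) are illustrative assumptions rather than details taken from the paper.

# A hypothetical sketch of a multi-dimensional, coordinate-assigning prefix.
import torch
import torch.nn as nn

RELATIONS = ["temporal", "causal", "conditional"]          # relation axis (assumed set)
FORMULATIONS = ["discriminative", "abductive", "ending"]   # formulation axis (assumed set)

class MultiDimPrefix(nn.Module):
    def __init__(self, d_model: int, prefix_len: int = 8):
        super().__init__()
        # One learnable prefix table per axis; a task's prefix is the sum of
        # the embeddings indexed by its coordinates on each axis.
        self.relation_prefix = nn.Embedding(len(RELATIONS), prefix_len * d_model)
        self.formulation_prefix = nn.Embedding(len(FORMULATIONS), prefix_len * d_model)
        self.prefix_len = prefix_len
        self.d_model = d_model

    def forward(self, token_embeds: torch.Tensor, relation: str, formulation: str):
        # token_embeds: (batch, seq_len, d_model) from the generative model's embedder.
        batch = token_embeds.size(0)
        rel_id = torch.tensor([RELATIONS.index(relation)])
        form_id = torch.tensor([FORMULATIONS.index(formulation)])
        prefix = self.relation_prefix(rel_id) + self.formulation_prefix(form_id)
        prefix = prefix.view(1, self.prefix_len, self.d_model).expand(batch, -1, -1)
        # Prepend the coordinate-derived prefix so the shared model is steered
        # toward the (relation, formulation) combination of the current task.
        return torch.cat([prefix, token_embeds], dim=1)

# Usage: compose a prefix for a causal, abductive task and prepend it.
prefixer = MultiDimPrefix(d_model=512)
dummy_inputs = torch.randn(2, 16, 512)   # stand-in for T5-style token embeddings
augmented = prefixer(dummy_inputs, relation="causal", formulation="abductive")
print(augmented.shape)  # torch.Size([2, 24, 512])

Under this reading, an unseen (relation, formulation) pair can still be assigned a prefix by combining coordinates learned from other tasks, which is one way the framework could support zero-shot generalization.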
Citation
Tao, Z., Jin, Z., Zhao, H., Dou, C., Zhao, Y., Shen, T., & Tao, C. (2023). Unified Generative Model with Multi-Dimensional Prefix for Zero-Shot Event-Relational Reasoning. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (Vol. 1, pp. 7088–7102). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.acl-industry.58