The meaning of natural language text is supported by cohesion among various kinds of entities, including coreference relations, predicate-argument structures, and bridging anaphora relations. However, predicate-argument structures for nominal predicates and bridging anaphora relations have not been studied well, and their analysis remains very difficult. Recent advances in neural networks, in particular self-supervised pretrained language models such as BERT (Devlin et al., 2019), have significantly improved many natural language processing tasks, making it feasible to study the analysis of cohesion across a whole text. In this study, we tackle an integrated analysis of cohesion in Japanese texts. Our results significantly outperform existing studies on each task, with improvements of roughly 10 to 20 points for both zero anaphora resolution and coreference resolution. Furthermore, we show that coreference resolution differs in nature from the other tasks and should be treated specially.
CITATION STYLE
Ueda, N., Kawahara, D., & Kurohashi, S. (2020). BERT-based Cohesion Analysis of Japanese Texts. In Proceedings of the 28th International Conference on Computational Linguistics (COLING 2020) (pp. 1323–1333). Association for Computational Linguistics. https://doi.org/10.5715/jnlp.28.705