What can Neural Referential Form Selectors Learn?


Abstract

Despite achieving encouraging results, neural Referring Expression Generation (REG) models are often thought to lack transparency. We probed neural Referential Form Selection (RFS) models to determine to what extent the linguistic features that influence referential form are learnt and captured by state-of-the-art RFS models. The results of 8 probing tasks show that all the defined features were learnt to some extent. The probing tasks pertaining to referential status and syntactic position exhibited the highest performance; the lowest performance was achieved by the probing models designed to predict discourse-structure properties beyond the sentence level.
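The abstract describes probing: training a simple diagnostic classifier on a model's frozen representations to test whether a linguistic feature is linearly recoverable from them. The sketch below illustrates that general idea only; the toy embeddings, the feature labels, and the logistic-regression probe are illustrative assumptions, not the paper's actual models, data, or tasks.

```python
# Minimal sketch of a probing (diagnostic) classifier: a logistic-regression
# probe trained on frozen "hidden states" to predict a binary linguistic
# feature. All data below is hypothetical toy data for illustration.
import math

def train_probe(embeddings, labels, epochs=200, lr=0.5):
    """Train a logistic-regression probe: sigmoid(w.x + b) -> feature."""
    dim = len(embeddings[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(embeddings, labels):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            p = 1.0 / (1.0 + math.exp(-z))   # predicted probability
            g = p - y                        # gradient of the log loss
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Binary decision: feature present if the probe's logit is positive."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# Toy frozen representations: dimension 0 loosely encodes the (hypothetical)
# feature "discourse-new" (1) vs. "discourse-old" (0).
X = [[0.9, 0.1], [0.8, 0.3], [0.1, 0.7], [0.2, 0.9]]
y = [1, 1, 0, 0]

w, b = train_probe(X, y)
acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(y)
```

High probe accuracy is taken as evidence that the representation encodes the feature; in the paper's terms, features like referential status would yield accurate probes, while discourse-level features beyond the sentence would probe less well.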

Citation (APA)

Chen, G., Same, F., & van Deemter, K. (2021). What can Neural Referential Form Selectors Learn? In INLG 2021 - 14th International Conference on Natural Language Generation, Proceedings (pp. 154–166). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.inlg-1.15
