Recent models achieve promising results in visually grounded dialogues. However, existing datasets often contain undesirable biases and lack sophisticated linguistic analyses, which make it difficult to understand how well current models recognize the precise linguistic structures of such dialogues. To address this problem, we make two design choices: first, we focus on the OneCommon Corpus (Udagawa and Aizawa, 2019, 2020), a simple yet challenging common grounding dataset which contains minimal bias by design. Second, we analyze the linguistic structures of its dialogues based on spatial expressions and provide comprehensive and reliable annotation for 600 dialogues. We show that our annotation captures important linguistic structures including predicate-argument structure, modification and ellipsis. In our experiments, we assess the models' understanding of these structures through reference resolution. We demonstrate that our annotation can reveal both the strengths and weaknesses of baseline models at essential levels of detail. Overall, we propose a novel framework and resource for investigating fine-grained language understanding in visually grounded dialogues.
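To make the reference resolution setting concrete, the sketch below shows one way such an evaluation could be scored: each referring expression (markable) is mapped to a set of candidate dots in the speaker's view, and a prediction counts as correct only if the full referent set is recovered. The record format, dot identifiers, and exact-match metric here are illustrative assumptions, not the corpus's actual annotation schema or the paper's evaluation code.

```python
# Minimal sketch of reference resolution scoring on OneCommon-style data.
# All field names and dot ids below are hypothetical placeholders.
from typing import Dict, FrozenSet, List

# A hypothetical annotated utterance: each markable (referring expression)
# is paired with the set of dot ids it denotes in the speaker's view.
gold_annotation: List[Dict] = [
    {"markable": "the large dark dot", "referents": frozenset({"dot_3"})},
    {"markable": "two small ones below it", "referents": frozenset({"dot_1", "dot_5"})},
]

# Hypothetical model output, one predicted referent set per markable.
predicted: List[FrozenSet[str]] = [
    frozenset({"dot_3"}),
    frozenset({"dot_1", "dot_2"}),
]

def exact_match_accuracy(gold: List[Dict], pred: List[FrozenSet[str]]) -> float:
    """Count a prediction as correct only if the full referent set matches."""
    correct = sum(1 for g, p in zip(gold, pred) if g["referents"] == p)
    return correct / len(gold)

if __name__ == "__main__":
    print(f"exact-match accuracy: {exact_match_accuracy(gold_annotation, predicted):.2f}")
    # -> 0.50: the first markable's referents are recovered, the second's are not.
```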
Udagawa, T., Yamazaki, T., & Aizawa, A. (2020). A linguistic analysis of visually grounded dialogues based on spatial expressions. In Findings of the Association for Computational Linguistics: EMNLP 2020 (pp. 750–765). Association for Computational Linguistics. https://doi.org/10.18653/v1/2020.findings-emnlp.67