Abstract
For vision-and-language (VL) reasoning tasks, both fully connectionist, end-to-end methods and hybrid, neuro-symbolic methods have achieved high in-distribution performance. In which out-of-distribution settings does each paradigm excel? We investigate this question on both single-image and multi-image visual question answering through four types of generalization tests: a novel segment-combine test for multi-image queries, contrast sets, compositional generalization, and cross-benchmark transfer. Vision-and-language end-to-end (VLE2E) trained systems exhibit sizeable performance drops across all of these tests. Neuro-symbolic (NS) methods suffer even more on cross-benchmark transfer from GQA to VQA, but they show smaller accuracy drops on the other generalization tests, and their performance improves quickly with few-shot training. Overall, our results demonstrate the complementary benefits of these two paradigms and emphasize the importance of using a diverse suite of generalization tests to fully characterize model robustness to distribution shift.
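As a rough illustration of the evaluation protocol the abstract describes (not the paper's code), the sketch below compares a model's in-distribution accuracy against its accuracy on each of the four generalization tests and reports the per-test drop. All function names and accuracy values are hypothetical placeholders, not results from the paper.

```python
# Illustrative sketch only: measuring per-test accuracy drops relative to
# in-distribution accuracy. The test names mirror the four generalization
# tests named in the abstract; the numbers are placeholders, not reported results.

def accuracy(predictions, labels):
    """Fraction of predictions that exactly match the gold labels."""
    assert len(predictions) == len(labels)
    return sum(p == l for p, l in zip(predictions, labels)) / len(labels)

def generalization_drops(in_dist_acc, ood_accs):
    """Absolute accuracy drop (in-distribution minus OOD) for each test."""
    return {test: in_dist_acc - acc for test, acc in ood_accs.items()}

if __name__ == "__main__":
    # Placeholder accuracies for a single hypothetical model.
    in_dist_acc = 0.60
    ood_accs = {
        "segment-combine": 0.48,
        "contrast set": 0.45,
        "compositional generalization": 0.50,
        "cross-benchmark transfer (GQA -> VQA)": 0.35,
    }
    for test, drop in generalization_drops(in_dist_acc, ood_accs).items():
        print(f"{test}: drop of {drop:.2f} from in-distribution accuracy")
```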
Citation
Zhu, W., Thomason, J., & Jia, R. (2022). Generalization Differences between End-to-End and Neuro-Symbolic Vision-Language Reasoning Systems. In Findings of the Association for Computational Linguistics: EMNLP 2022 (pp. 4726–4740). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-emnlp.344