We address the problem of interactively learning perceptually grounded word meanings in a multimodal dialogue system. We design a semantic and a visual processing system to support this and illustrate how the two can be integrated. We then focus on comparing the performance (Precision, Recall, F1, AUC) of three state-of-the-art attribute classifiers (MLKNN, DAP, and SVMs) for the purpose of interactive language grounding, on the aPascal-aYahoo datasets. In prior work, these methods were used for attribute labelling only as a step towards object classification, and results were reported for the object classification task; here we focus on their performance for attribute labelling itself. We find that while these methods can perform well for some of the attributes (e.g. head, ears, furry), none of the models performs well over the whole attribute set, and none supports incremental learning. This leads us to suggest directions for future work.
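The evaluation setup implied above is per-attribute binary classification scored with the four listed metrics. The sketch below illustrates that setup with a linear SVM baseline only; the feature matrix, label matrix, and attribute names are placeholder assumptions, not the actual aPascal-aYahoo data pipeline or the authors' code.

```python
# Illustrative sketch: one binary classifier per attribute, scored with
# Precision, Recall, F1, and AUC. Random data stands in for image
# features and attribute annotations (assumed placeholders).
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.metrics import precision_score, recall_score, f1_score, roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_samples, n_features = 500, 64
attributes = ["head", "ears", "furry"]  # assumed subset of the attribute vocabulary

X = rng.normal(size=(n_samples, n_features))                # stand-in image features
Y = rng.integers(0, 2, size=(n_samples, len(attributes)))   # binary attribute labels

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.3, random_state=0)

for i, attr in enumerate(attributes):
    clf = LinearSVC(C=1.0).fit(X_tr, Y_tr[:, i])   # one SVM per attribute
    pred = clf.predict(X_te)
    score = clf.decision_function(X_te)            # continuous score for AUC
    print(f"{attr:>6}: "
          f"P={precision_score(Y_te[:, i], pred, zero_division=0):.2f} "
          f"R={recall_score(Y_te[:, i], pred, zero_division=0):.2f} "
          f"F1={f1_score(Y_te[:, i], pred, zero_division=0):.2f} "
          f"AUC={roc_auc_score(Y_te[:, i], score):.2f}")
```

Swapping in MLKNN or DAP would replace the per-attribute SVM loop with a multi-label or attribute-prediction model, but the scoring loop over attributes stays the same.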
Yu, Y., Eshghi, A., & Lemon, O. (2015). Comparing attribute classifiers for interactive language grounding. In Proceedings of the 2015 Workshop on Vision and Language (VL 2015): Vision and Language Meet Cognitive Systems, a workshop of EMNLP 2015 (pp. 60–69). Association for Computational Linguistics. https://doi.org/10.18653/v1/w15-2811