Answering visual what-if questions: From actions to predicted scene descriptions


Abstract

In-depth scene descriptions and question answering tasks have greatly increased the scope of today's definition of scene understanding. While such tasks are in principle open-ended, current formulations primarily focus on describing only the current state of the scenes under consideration. In contrast, in this paper we focus on future scene states, which are additionally conditioned on actions. We pose this as a question answering task, where an answer has to be given about a future scene state, given observations of the current scene and a question that includes a hypothetical action. Our solution is a hybrid model that integrates a physics engine into a question answering architecture in order to anticipate future scene states resulting from object-object interactions caused by an action. We demonstrate first results on this challenging new problem and compare against baselines, outperforming fully data-driven end-to-end learning approaches.
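To make the hybrid idea concrete, the sketch below illustrates the pipeline the abstract describes: a hypothetical action from the question is injected into a forward physics simulation, and the answer is produced by comparing the predicted future scene state with the observed one. This is a minimal, self-contained toy in 1D; all names (SceneObject, apply_action, answer_what_if) are hypothetical, and the paper's actual system uses a full rigid-body simulator coupled with a learned question answering architecture, not this simplification.

```python
# Toy sketch of "physics engine inside a QA loop". Hypothetical names; 1D only.
from dataclasses import dataclass, replace
from typing import Dict

@dataclass
class SceneObject:
    name: str
    position: float        # 1D position, for illustration only
    velocity: float = 0.0
    radius: float = 0.1

def apply_action(objects: Dict[str, SceneObject], target: str, impulse: float) -> None:
    """Inject the hypothetical action from the question as an initial velocity."""
    objects[target].velocity += impulse

def step(objects: Dict[str, SceneObject], dt: float = 0.05, friction: float = 0.9) -> None:
    """One crude forward-dynamics step: integrate, damp, swap velocities on overlap."""
    for obj in objects.values():
        obj.position += obj.velocity * dt
        obj.velocity *= friction
    objs = list(objects.values())
    for i in range(len(objs)):
        for j in range(i + 1, len(objs)):
            a, b = objs[i], objs[j]
            if abs(a.position - b.position) < a.radius + b.radius:
                a.velocity, b.velocity = b.velocity, a.velocity  # object-object interaction

def predict_future_scene(objects, target, impulse, steps=40):
    """Roll the hypothetical action forward to obtain the predicted scene state."""
    apply_action(objects, target, impulse)
    for _ in range(steps):
        step(objects)
    return objects

def answer_what_if(before, after, queried: str) -> str:
    """Answer by describing the predicted state change of the queried object."""
    moved = abs(after[queried].position - before[queried].position)
    return f"{queried} moves" if moved > 0.05 else f"{queried} stays put"

if __name__ == "__main__":
    scene = {
        "box": SceneObject("box", position=0.0),
        "mug": SceneObject("mug", position=0.3),
    }
    before = {k: replace(v) for k, v in scene.items()}   # snapshot of the observed scene
    predict_future_scene(scene, target="box", impulse=2.0)  # "What if the box is pushed?"
    print(answer_what_if(before, scene, queried="mug"))     # -> "mug moves"
```

In this toy, the simulated push on the box propagates through a contact to the mug, so the answer about the mug's future state falls out of the simulation rather than from end-to-end regression, which is the core contrast the abstract draws against fully data-driven baselines.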

Citation (APA)

Wagner, M., Basevi, H., Shetty, R., Li, W., Malinowski, M., Fritz, M., & Leonardis, A. (2019). Answering visual what-if questions: From actions to predicted scene descriptions. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11129 LNCS, pp. 521–537). Springer Verlag. https://doi.org/10.1007/978-3-030-11009-3_32
