Abstract
This paper reports on the progress of a co-creative pretend play agent designed to interact with users by recognizing and responding to playful actions in a 2D virtual environment. In particular, we describe the design and evaluation of a classifier that recognizes 2D motion trajectories produced by the user’s actions. The classifier is evaluated on a publicly available dataset of labeled actions that are highly relevant to the domain of pretend play, and we show that deep convolutional neural networks perform significantly better at recognizing these actions than previously employed methods. We also describe our plan for implementing a virtual play environment around this classifier, in which users and the agent can collaboratively construct narratives during improvisational pretend play.
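The abstract does not detail the network architecture, but the core technique it names, a convolutional classifier over 2D motion trajectories, can be sketched concisely. The following Python/PyTorch snippet is a minimal illustrative sketch, not the authors' implementation: it assumes each trajectory is resampled to a fixed number of (x, y) points and treated as a two-channel 1D signal, and the layer sizes, sequence length, and number of action classes are placeholder assumptions.

    # Illustrative sketch only: a small convolutional network that classifies
    # fixed-length 2D motion trajectories. Architecture, SEQ_LEN, and
    # NUM_CLASSES are assumptions, not taken from the paper.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 6   # assumed number of action labels
    SEQ_LEN = 64      # assumed number of (x, y) samples per trajectory

    class TrajectoryCNN(nn.Module):
        def __init__(self, num_classes=NUM_CLASSES):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(2, 32, kernel_size=5, padding=2),  # 2 channels: x and y
                nn.ReLU(),
                nn.MaxPool1d(2),
                nn.Conv1d(32, 64, kernel_size=5, padding=2),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
            self.classifier = nn.Linear(64 * (SEQ_LEN // 4), num_classes)

        def forward(self, x):
            # x: (batch, 2, SEQ_LEN), trajectories resampled to a fixed length
            h = self.features(x)
            return self.classifier(h.flatten(1))

    # Usage example: classify a batch of 8 random trajectories.
    model = TrajectoryCNN()
    logits = model(torch.randn(8, 2, SEQ_LEN))
    print(logits.shape)  # torch.Size([8, NUM_CLASSES])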
Citation
Singh, K. Y., Davis, N., Hsiao, C. P., Jacob, M., Patel, K., & Magerko, B. (2016). Recognizing Actions in Motion Trajectories Using Deep Neural Networks. In Proceedings of the AAAI Conference on Artificial Intelligence and Interactive Digital Entertainment (AIIDE) (pp. 211–217). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aiide.v12i1.12881