Abstract
Most semantic models employed in human-robot interaction concern how a robot can understand commands, but in this article the aim is to present a framework that allows dialogic interaction. The key idea is to use events as the fundamental structures for the semantic representations of a robot. Events are modeled in terms of conceptual spaces and mappings between spaces. It is shown how the semantics of the major word classes can be described with the aid of conceptual spaces in a way that is amenable to computer implementation. An event is represented by two vectors: a force vector representing an action and a result vector representing the effect of the action. The two-vector model is then extended with thematic roles, so that an event is built up from an agent, an action, a patient, and a result. It is shown how the components of an event can be combined into semantic structures that represent the meanings of sentences. It is argued that a semantic framework based on events can provide a general representational framework for human-robot communication. An implementation of the framework involving communication with an iCub robot is described.
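The two-vector event model described in the abstract can be sketched as a simple data structure. This is a minimal illustration, not the paper's actual implementation: the names (`Event`, `force`, `result`) and the two-dimensional vectors are assumptions chosen for clarity.

```python
from dataclasses import dataclass
import math

@dataclass
class Event:
    """Hypothetical sketch of the two-vector event model:
    an agent applies a force (the action) to a patient,
    producing a result vector (the change in the patient's state)."""
    agent: str                      # entity exerting the force
    patient: str                    # entity the force acts on
    force: tuple[float, float]      # force vector representing the action
    result: tuple[float, float]     # result vector representing the effect

def magnitude(v: tuple[float, float]) -> float:
    """Euclidean length of a vector in the conceptual space."""
    return math.sqrt(sum(x * x for x in v))

# Example: "The robot pushes the box" — a forward force on the box,
# and a resulting forward displacement of the box.
push = Event(agent="robot", patient="box",
             force=(3.0, 4.0), result=(2.0, 0.0))
print(magnitude(push.force))   # prints 5.0
```

In this sketch, the force and result vectors live in (possibly different) conceptual spaces, echoing the paper's idea that an event is a mapping from an action to its effect on the patient.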
Gärdenfors, P. (2019). Using Event Representations to Generate Robot Semantics. ACM Transactions on Human-Robot Interaction, 8(4), 1–21. https://doi.org/10.1145/3341167