Improving human behavior using POMDPs with gestures and speech recognition

Abstract

This work proposes a decision-theoretic approach to problems involving interaction between robot systems and human users, with the goal of estimating the human state from observations of their behavior and taking actions that encourage desired behaviors. The approach is based on the Partially Observable Markov Decision Process (POMDP) framework, which determines an optimal policy (mapping beliefs onto actions) under uncertainty in the effects of actions and in state observations, extended with information rewards (POMDP-IR) to optimize the information-gathering capabilities of the system. The POMDP observations consist of human gestures and spoken sentences, while the actions are split into robot behaviors (such as speaking to the human) and information-reward actions that gather more information about the human state. Under the proposed framework, the robot system is able to actively gain information and react to its belief about the human's state (expressed as a probability mass function over the discrete state space), effectively encouraging the human to improve their behavior in a socially acceptable manner. Results from applying the method in a real robot-human interaction scenario are presented, supporting its practical use.
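To make the belief-tracking idea concrete, the sketch below shows a discrete Bayesian belief update of the kind a POMDP maintains: the belief (a probability mass function over human states) is predicted forward with a transition model and corrected with the likelihood of a gesture/speech observation. The state names, observation labels, and probability values are purely illustrative assumptions, not taken from the paper, and the information-reward machinery of POMDP-IR is omitted.

```python
import numpy as np

# Hypothetical discrete human states and observations (illustrative only).
STATES = ["engaged", "distracted", "non_compliant"]
OBSERVATIONS = ["wave_gesture", "affirmative_speech", "no_response"]

# Observation model P(o | s): rows are states, columns are observations.
# Values are made up for this sketch.
OBS_MODEL = np.array([
    [0.6, 0.3, 0.1],   # engaged
    [0.2, 0.3, 0.5],   # distracted
    [0.1, 0.2, 0.7],   # non_compliant
])

# Transition model P(s' | s) for a single assumed robot action, e.g. prompting
# the human; rows are current states, columns are next states.
TRANS_MODEL = np.array([
    [0.8, 0.15, 0.05],
    [0.4, 0.50, 0.10],
    [0.2, 0.30, 0.50],
])

def belief_update(belief, obs_index):
    """Bayesian belief update: predict with the transition model,
    then correct with the likelihood of the received observation."""
    predicted = TRANS_MODEL.T @ belief        # P(s') = sum_s P(s'|s) b(s)
    likelihood = OBS_MODEL[:, obs_index]      # P(o | s') for the observed o
    unnormalized = likelihood * predicted
    return unnormalized / unnormalized.sum()

# Start from a uniform belief and observe an affirmative spoken reply.
belief = np.full(len(STATES), 1.0 / len(STATES))
belief = belief_update(belief, OBSERVATIONS.index("affirmative_speech"))
print(dict(zip(STATES, belief.round(3))))
```

In the full POMDP-IR setting described in the abstract, a policy would map each such belief to either a behavior-shaping action or an information-reward action, rather than the belief simply being printed as here.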

Cite

CITATION STYLE

APA

Garcia, J. A., & Lima, P. U. (2019). Improving human behavior using POMDPs with gestures and speech recognition. In Intelligent Systems, Control and Automation: Science and Engineering (Vol. 94, pp. 145–163). Springer Netherlands. https://doi.org/10.1007/978-3-319-97550-4_10
