This paper describes a vision-based computational model of mind-reading that infers complex mental states from head and facial expressions in real time. The system's generalization ability is evaluated on videos posed by lay people in a relatively uncontrolled recording environment, covering six mental states: agreeing, concentrating, disagreeing, interested, thinking, and unsure. The results show that the system's accuracy is comparable to that of humans on the same corpus. © Springer-Verlag Berlin Heidelberg 2005.
CITATION
El Kaliouby, R., & Robinson, P. (2005). Generalization of a vision-based computational model of mind-reading. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3784 LNCS, pp. 582–589). Springer Verlag. https://doi.org/10.1007/11573548_75