A neural network model for a view independent extraction of reach-to-grasp action features

Abstract

The aim of this paper is to introduce a novel, biologically inspired approach for extracting visual features relevant to controlling and understanding reach-to-grasp actions. One of the most relevant such features is grip size, defined as the distance between the tips of the index finger and thumb, and it is the feature on which this paper focuses. The human visual system naturally recognizes many hand configurations, e.g. gestures or different types of grasp, without being substantially affected by the observer's viewpoint. The proposed computational model preserves this ability, which very likely plays a crucial role in action understanding in primates (and thus in human beings). More specifically, a family of neurons in the macaque's ventral premotor area F5 has been discovered whose activity correlates strongly with a series of grasp-like movements. These findings triggered a fierce debate about imitation and learning, and inspired several computational models, the most detailed of which is the MNS model of Oztop and Arbib. As a variant of the MNS model, in a previous paper we proposed the MEP model, which relies on an expected-perception mechanism. Both models, however, assume the existence of a mechanism that extracts visual features in a viewpoint-independent way, yet neither addresses how such a mechanism can be realized in a biologically plausible manner. In this paper we propose a neural network model, based on the work of Poggio and Riesenhuber, for the extraction of visual features in a viewpoint-independent manner. © 2007 Springer-Verlag Berlin Heidelberg.
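The two technical ingredients named in the abstract can be sketched briefly: grip size is just the Euclidean distance between the index finger-tip and the thumb-tip, and the Poggio–Riesenhuber (HMAX) route to viewpoint invariance pools, via a maximum, the responses of template-matching units across transformed versions of the input. The sketch below is illustrative only, not the paper's implementation; the function names and the use of NumPy are our own assumptions.

```python
import numpy as np

def grip_size(index_tip, thumb_tip):
    """Grip size: Euclidean distance between index finger-tip and thumb-tip.

    Inputs are 2-D or 3-D coordinates (units arbitrary).
    """
    return float(np.linalg.norm(np.asarray(index_tip, dtype=float)
                                - np.asarray(thumb_tip, dtype=float)))

def c_layer_response(s_responses):
    """HMAX-style C-layer unit: max over S-layer (template-matching) responses.

    Pooling with a max across positions, scales, or viewpoints makes the
    output invariant to which of those transformed inputs contained the
    preferred stimulus -- the core idea borrowed from Poggio and Riesenhuber.
    """
    return float(np.max(np.asarray(s_responses, dtype=float)))

# Example: a 3-4-5 triangle in the x-y plane gives a grip size of 5.0,
# and pooling keeps only the strongest view's response.
d = grip_size([0.0, 0.0, 0.0], [3.0, 4.0, 0.0])   # -> 5.0
c = c_layer_response([0.2, 0.9, 0.5])             # -> 0.9
```

Because the max is taken after template matching, the pooled unit fires whenever its preferred hand configuration appears under any of the pooled viewpoints, which is the sense in which the extracted feature is viewpoint independent.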

Prevete, R., Santoro, M., Catanzariti, E., & Tessitore, G. (2007). A neural network model for a view independent extraction of reach-to-grasp action features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4729 LNCS, pp. 124–133). https://doi.org/10.1007/978-3-540-75555-5_12
