A model for visio-haptic attention for efficient resource allocation in multimodal environments

Abstract

Sequences of visual and haptic exploration were obtained from human subjects on surfaces of different curvature. We then extracted regions of interest (ROIs) from the data as a function of the number of times a subject fixated on a given location on the object and the amount of time spent at each such location. Simple surface models, namely a plane, cone, cylinder, paraboloid, hyperboloid, ellipsoid, simple saddle and monkey saddle, were generated. The Gaussian curvature at each point on every surface was pre-computed. The surfaces had previously been tested for haptic and visual realism and distinctness by human subjects in a separate experiment. Both visual and haptic renderings were subsequently explored by human subjects to study whether there is a similarity between the visual and haptic ROIs. Additionally, we wanted to see whether there is a correlation between curvature values and the ROIs thus obtained. A multiple regression model was further developed to test whether these data can be used to predict the visual exploration path from haptic curvature saliency measures. © Springer-Verlag Berlin Heidelberg 2007.
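
To make the curvature pre-computation step concrete, the following is a minimal sketch, not the authors' implementation. It assumes each test surface is sampled as a height field z = f(x, y); the paraboloid, simple-saddle and monkey-saddle parameterizations below are illustrative choices, not the exact stimuli used in the study.

```python
import numpy as np

def gaussian_curvature_monge(f, x, y, h=1e-3):
    """Gaussian curvature of a Monge patch z = f(x, y) via
    K = (f_xx * f_yy - f_xy**2) / (1 + f_x**2 + f_y**2)**2,
    with partial derivatives estimated by central differences."""
    fx  = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy  = (f(x, y + h) - f(x, y - h)) / (2 * h)
    fxx = (f(x + h, y) - 2 * f(x, y) + f(x - h, y)) / h**2
    fyy = (f(x, y + h) - 2 * f(x, y) + f(x, y - h)) / h**2
    fxy = (f(x + h, y + h) - f(x + h, y - h)
           - f(x - h, y + h) + f(x - h, y - h)) / (4 * h**2)
    return (fxx * fyy - fxy**2) / (1 + fx**2 + fy**2)**2

# Illustrative (assumed) parameterizations of three of the named surfaces:
paraboloid    = lambda x, y: x**2 + y**2          # K > 0 everywhere
simple_saddle = lambda x, y: x**2 - y**2          # K < 0 everywhere
monkey_saddle = lambda x, y: x**3 - 3 * x * y**2  # K <= 0, zero at the origin

xs, ys = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
K = gaussian_curvature_monge(monkey_saddle, xs, ys)  # per-vertex curvature map
```

A per-vertex curvature map of this kind is the sort of pre-computed curvature representation that the extracted visual and haptic ROIs can then be correlated against, or fed as a saliency predictor into a multiple regression model.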

Citation (APA)

Tripathi, P., Kahol, K., Sridaran, A., & Panchanathan, S. (2007). A model for visio-haptic attention for efficient resource allocation in multimodal environments. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4565 LNAI, pp. 329–336). Springer Verlag. https://doi.org/10.1007/978-3-540-73216-7_37
