Research on multimodal displays and information processing has reported several benefits of distributing information across multiple sensory channels (in particular, vision, audition, and touch). However, with few exceptions, studies of multimodal information processing risk confounding modality with other factors, such as cue salience, because no cross-modal matching is performed before the experiment. To date, no agreed-upon cross-modal matching method has been developed. The goal of our research is to develop various approaches to cross-modal matching and to compare their feasibility and validity. In this paper, we present the findings for one particular technique that employs cue adjustments and bidirectional matches. Six participants were asked to perform a series of 216 matching tasks for combinations of visual, auditory, and tactile cues. The results show that participants' matches differed from one another, were inconsistent across trials, and depended on the intensity level of the initial cue. These findings further highlight the need for careful matching of multimodal cues in research on multisensory information processing and will inform refinements of the proposed technique.
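The full matching procedure is described in the paper itself; purely as an illustration, the sketch below shows what one bidirectional matching trial based on the method of adjustment might look like. Everything in it (the normalized 0 to 100 intensity scale, the step sizes, the simulated participant, and the bias term modeling a cross-modal offset) is an assumption made for this sketch, not the authors' procedure or apparatus.

```python
import random

def simulate_adjustment(current: float, perceived_match: float) -> float:
    """Stand-in for real participant input: step toward the point of
    subjective equality, with some trial-to-trial noise; return 0.0 to
    accept the current setting as a match."""
    error = perceived_match - current
    if abs(error) < 1.0:
        return 0.0  # close enough: participant accepts the match
    return (1.0 if error > 0 else -1.0) + random.uniform(-0.3, 0.3)

def match_cue(ref_intensity: float, start_intensity: float,
              bias: float = 0.0) -> float:
    """Method of adjustment: tune a comparison cue in one modality until it
    subjectively equals a fixed reference cue in another. `bias` is a
    hypothetical cross-modal offset (e.g., a tone judged more intense than
    an equally calibrated light)."""
    perceived_match = ref_intensity + bias
    intensity = start_intensity
    while True:
        step = simulate_adjustment(intensity, perceived_match)
        if step == 0.0:
            return intensity
        intensity = min(100.0, max(0.0, intensity + step))

def bidirectional_trial(ref_intensity: float, start_intensity: float,
                        bias: float = 5.0) -> tuple:
    """Match in both directions (e.g., visual-to-auditory, then
    auditory-to-visual). A gap between the two resulting matches is one
    symptom of an unreliable cross-modal equivalence."""
    a_to_b = match_cue(ref_intensity, start_intensity, bias=+bias)
    b_to_a = match_cue(ref_intensity, start_intensity, bias=-bias)
    return a_to_b, b_to_a

if __name__ == "__main__":
    # Run trials from several initial cue intensities.
    for start in (20.0, 50.0, 80.0):
        print(start, bidirectional_trial(ref_intensity=50.0,
                                         start_intensity=start))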
Citation
Pitts, B. J., Lu, S. A., & Sarter, N. B. (2013). Cross-modal matching. Proceedings of the Human Factors and Ergonomics Society Annual Meeting, 57(1), 1760–1764. https://doi.org/10.1177/1541931213571393