For many years, the working/short-term memory literature has been dominated by the study of phonological codes; consequently, insufficient attention has been devoted to visual codes. In the present study, we attempt to remedy this situation by exploring a critical principle of modern models of working memory: that responses depend not primarily on what kinds of materials are presented, but on what kinds of codes are generated from those materials. More specifically, we used the visual similarity effect as a tool to ask whether visual codes are generated even when information is not presented visually. In two immediate serial recall experiments, we manipulated visual similarity (similar words, dissimilar words), presentation modality (visual, auditory), and concurrent articulation (none, concurrent articulation). We observed a visual similarity effect independent of presentation modality. Comparable results were obtained with two different sets of stimuli and with or without concurrent articulation. Thus, for the first time, we demonstrate that visual codes in working/short-term memory are generated from acoustically presented word lists, producing a visual similarity effect. It is now clear that recoding is bidirectional: visually presented material can be recoded acoustically, and acoustically presented material can be recoded visually.
Guitard, D., & Cowan, N. (2020). Do we use visual codes when information is not presented visually? Memory & Cognition, 48(8), 1522–1536. https://doi.org/10.3758/s13421-020-01054-0