Visual phonemic ambiguity and speechreading

Abstract

Purpose: To study the role of visual perception of phonemes in the visual perception of sentences and words among normal-hearing individuals. Method: Twenty-four normal-hearing adults identified consonants, words, and sentences spoken by either a human or a synthetic talker. The synthetic talker was programmed with identical parameters within phoneme groups, hypothetically resulting in simplified articulation. Proportions of correctly identified phonemes per participant, condition, and task were measured, as well as sensitivity to single consonants and to clusters of consonants. Groups of mutually exclusive consonants were used for the sensitivity analyses and hierarchical cluster analyses. Results: Consonant identification performance did not differ as a function of talker, nor did average sensitivity to single consonants. The bilabial and labiodental clusters were the most readily identified and the most cohesive for both talkers. Word and sentence identification was better for the human talker than for the synthetic talker. Participants were more sensitive to the clusters of the least visible consonants with the human talker than with the synthetic talker. Conclusions: It is suggested that the ability to distinguish between clusters of the least visually distinct phonemes is important in speechreading. Specifically, it reduces the number of candidates and thereby facilitates lexical identification. © American Speech-Language-Hearing Association.
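
The abstract refers to hierarchical cluster analyses over groups of mutually exclusive consonants. As a point of reference, below is a minimal, hypothetical sketch of that kind of analysis: average-linkage hierarchical clustering of a consonant confusion matrix using SciPy. The consonant set, the confusion values, and the three-cluster cut are illustrative assumptions, not data or parameters from the study.

```python
# Hypothetical sketch (not the authors' actual analysis): hierarchical
# clustering of a consonant confusion matrix, treating consonants that
# are often confused as visually similar (visemically close).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

consonants = ["p", "b", "m", "f", "v", "t", "d", "s"]

# Illustrative symmetric confusion proportions: higher values mean the
# two consonants were more often confused with each other.
confusion = np.array([
    [1.0, 0.8, 0.7, 0.1, 0.1, 0.0, 0.0, 0.0],
    [0.8, 1.0, 0.7, 0.1, 0.1, 0.0, 0.0, 0.0],
    [0.7, 0.7, 1.0, 0.1, 0.1, 0.0, 0.0, 0.0],
    [0.1, 0.1, 0.1, 1.0, 0.8, 0.1, 0.1, 0.1],
    [0.1, 0.1, 0.1, 0.8, 1.0, 0.1, 0.1, 0.1],
    [0.0, 0.0, 0.0, 0.1, 0.1, 1.0, 0.7, 0.4],
    [0.0, 0.0, 0.0, 0.1, 0.1, 0.7, 1.0, 0.4],
    [0.0, 0.0, 0.0, 0.1, 0.1, 0.4, 0.4, 1.0],
])

# Convert similarity to distance, then run average-linkage clustering
# on the condensed (upper-triangular) distance vector.
distance = 1.0 - confusion
np.fill_diagonal(distance, 0.0)
Z = linkage(squareform(distance), method="average")

# Cut the dendrogram into three clusters; with these illustrative data
# the bilabials, labiodentals, and alveolars should separate.
labels = fcluster(Z, t=3, criterion="maxclust")
for consonant, label in sorted(zip(consonants, labels), key=lambda x: x[1]):
    print(f"{consonant}: cluster {label}")
```

Cohesive clusters in such an analysis correspond to viseme groups: consonants that speechreaders cannot reliably tell apart but can, as a group, distinguish from other groups.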

Citation (APA)

Lidestam, B., & Beskow, J. (2006). Visual phonemic ambiguity and speechreading. Journal of Speech, Language, and Hearing Research, 49(4), 835–847. https://doi.org/10.1044/1092-4388(2006/059)
