Which acoustic cues are important for understanding spoken language? Traditionally, the speech signal is described mainly in spectral terms (i.e., the distribution of energy across the acoustic frequency axis). In contrast, temporal properties are often ignored. However, there is mounting evidence that low-frequency energy modulations play a crucial role, particularly those below 16 Hz (e.g., Christiansen and Greenberg 2005; Drullman, Festen and Plomp 1994; Greenberg and Arai 2004; Houtgast and Steeneken 1985). Modulations higher than 16 Hz may also contribute under certain conditions (Apoux and Bacon 2004; Christiansen and Greenberg 2005; Greenberg and Arai 2004; Silipo, Greenberg and Arai 1999). Currently lacking is a detailed understanding of how amplitude-modulation cues are combined across the acoustic frequency spectrum, as well as how spectral and temporal information interact. Such knowledge could enhance our understanding of how spoken language is processed in noisy and reverberant environments by both normal-hearing and hearing-impaired individuals.
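The "low-frequency energy modulations" referred to above are fluctuations of the signal's temporal envelope. A minimal numpy sketch of the idea (not taken from the paper; the 1 kHz carrier, 4 Hz modulation rate, and 8 kHz sampling rate are hypothetical choices) extracts the envelope via an FFT-based Hilbert transform and measures what fraction of its modulation power falls at or below 16 Hz:

```python
import numpy as np

# Hypothetical test signal: a 1 kHz carrier amplitude-modulated at 4 Hz,
# roughly the syllable rate that dominates the speech envelope.
fs = 8000                                          # sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
carrier = np.sin(2 * np.pi * 1000 * t)
envelope = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)
signal = envelope * carrier

# Temporal envelope via the analytic signal (FFT-based Hilbert transform).
n = len(signal)
h = np.zeros(n)
h[0] = 1.0
h[1:n // 2] = 2.0
h[n // 2] = 1.0                                    # n is even here
env = np.abs(np.fft.ifft(np.fft.fft(signal) * h))

# Modulation spectrum of the DC-removed envelope.
power = np.abs(np.fft.rfft(env - env.mean())) ** 2
freqs = np.fft.rfftfreq(n, 1.0 / fs)
low = power[(freqs > 0) & (freqs <= 16)].sum()
total = power[freqs > 0].sum()
print(f"fraction of modulation power at or below 16 Hz: {low / total:.2f}")
```

For this synthetic signal essentially all of the envelope's modulation power sits at the 4 Hz modulation rate, i.e., well inside the below-16 Hz region the abstract highlights; for natural speech the envelope would be broadband but still dominated by this range.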
Christiansen, T. U., Dau, T., & Greenberg, S. (2007). Spectro-temporal Processing of Speech – An Information-Theoretic Framework. In Hearing – From Sensory Processing to Perception (pp. 517–523). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-73009-5_55