Dynamic subtitle authoring method based on audio analysis for the hearing impaired


Abstract

Broadcasting and the Internet are such integral parts of modern society that life without media is now unimaginable. However, hearing-impaired people have difficulty understanding media content because the audio information is lost to them. Where subtitles are available, presenting them alongside the video can help. In this paper, we propose a dynamic subtitle authoring method based on audio analysis for the hearing impaired. We analyze the audio signal and explore a set of audio features that includes STE, ZCR, pitch, and MFCC. Using these features, we align the subtitle with the speech and map the extracted speech features to subtitle attributes such as text color, size, and weight. Furthermore, the method highlights the text by aligning it with the voice and tags each line with a speaker ID obtained through speaker recognition. © 2014 Springer International Publishing.
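The abstract only names the features it relies on. As an illustration of the two simplest ones, the sketch below computes short-time energy (STE) and zero-crossing rate (ZCR) for a single audio frame; the function names, frame construction, and sample rate are assumptions for this example, not the paper's implementation.

```python
import numpy as np

def short_time_energy(frame):
    # STE: mean squared amplitude of the frame (high for speech, low for silence)
    return np.mean(frame.astype(np.float64) ** 2)

def zero_crossing_rate(frame):
    # ZCR: fraction of adjacent samples whose signs differ
    signs = np.sign(frame)
    return np.mean(np.abs(np.diff(signs)) > 0)

# Toy frames: a 440 Hz tone vs. silence, one second at a 16 kHz sample rate
sr = 16000
t = np.arange(sr) / sr
tone = np.sin(2 * np.pi * 440 * t)
silence = np.zeros(sr)

print(short_time_energy(tone) > short_time_energy(silence))  # True
```

Thresholding features like these per frame is a common way to separate speech from silence, which is the kind of information the alignment step needs.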

CITATION STYLE

APA

Lim, W., Jang, I., & Ahn, C. (2014). Dynamic subtitle authoring method based on audio analysis for the hearing impaired. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8547 LNCS, pp. 53–60). Springer Verlag. https://doi.org/10.1007/978-3-319-08596-8_9
