Automatic annotation of an ultrasound corpus for studying tongue movement


Abstract

Silent speech interfaces offer an alternative means of interaction in situations where the acoustic speech signal is absent (e.g., due to a speech impairment) or unsuited to the current context (e.g., environmental noise); the goal is to use non-acoustic data to drive or improve speech recognition. Surface electromyography (sEMG) is one modality used to gather such data, but its applicability still needs further exploration, including methods that provide reference data about the phenomena under study. A notable example is the use of sEMG to detect tongue movements. For that purpose, a modality that allows observing the tongue, such as ultrasound imaging, must be acquired synchronously with the sEMG. In such experiments, manually annotating tongue movement in the ultrasound sequences, as required for systematic analysis of the sEMG signals, is largely infeasible, mainly because of the amount of data involved and the need to maintain uniform annotation criteria. To address this, we present an automatic method for detecting and annotating tongue movement in ultrasound sequences. A preliminary evaluation comparing its output with 72 manual annotations shows good agreement.
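The abstract does not describe the detection algorithm itself, but the task it addresses — flagging the portions of an ultrasound image sequence where the tongue is moving — can be illustrated with a minimal sketch based on inter-frame difference energy. Everything below (function names, the threshold value, the mean-absolute-difference criterion) is an illustrative assumption, not the method of Silva & Teixeira (2014):

```python
import numpy as np

def annotate_movement(frames, threshold=0.05):
    """Label each frame-to-frame transition as movement / no movement.

    frames: array of shape (T, H, W) with pixel values in [0, 1].
    Returns a boolean array of length T-1, True where movement is detected.
    Illustrative only: uses mean absolute inter-frame difference, not the
    annotation method of the cited paper.
    """
    frames = np.asarray(frames, dtype=float)
    diffs = np.abs(np.diff(frames, axis=0))   # (T-1, H, W) per-pixel change
    energy = diffs.mean(axis=(1, 2))          # mean change per transition
    return energy > threshold

def movement_intervals(labels):
    """Collapse a boolean per-transition sequence into (start, end) pairs."""
    intervals, start = [], None
    for i, moving in enumerate(labels):
        if moving and start is None:
            start = i
        elif not moving and start is not None:
            intervals.append((start, i))
            start = None
    if start is not None:
        intervals.append((start, len(labels)))
    return intervals
```

A real system would add smoothing and region-of-interest masking, since ultrasound speckle noise makes raw per-pixel differences unreliable; the sketch only conveys the shape of the annotation output (movement intervals over a frame sequence).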

Citation (APA)

Silva, S., & Teixeira, A. (2014). Automatic annotation of an ultrasound corpus for studying tongue movement. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8814, pp. 469–476). Springer Verlag. https://doi.org/10.1007/978-3-319-11758-4_51
