Study of the load balancing in the parallel training for automatic speech recognition

Citations: 0 · Mendeley readers: 5

This article is free to access.
Abstract

In this paper we propose a parallelization technique for the training phase of automatic speech recognition based on Hidden Markov Models (HMMs), which improves the load balancing of previously proposed parallel implementations [1]. The technique relies on an efficient distribution of the vocabulary across processors that takes into account not only the size of the vocabulary but also the length of each word, thereby reducing processor idle time. Experimental results show that good performance can be obtained with this distribution.
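The abstract does not spell out the distribution algorithm itself; as a purely illustrative sketch, a greedy longest-first assignment of words to processors, weighted by word length as a proxy for training cost, captures the idea of balancing work rather than word counts. The function name `distribute_vocabulary` and the cost model below are assumptions for illustration, not the authors' implementation.

```python
import heapq

def distribute_vocabulary(words, num_procs):
    """Assign each word to a processor so that the total word length
    (an assumed proxy for HMM training cost) per processor is roughly balanced.

    Greedy longest-first heuristic: sort words by decreasing length and
    always give the next word to the currently least-loaded processor.
    """
    # Min-heap of (current_load, processor_id)
    loads = [(0, p) for p in range(num_procs)]
    heapq.heapify(loads)
    assignment = {p: [] for p in range(num_procs)}

    for word in sorted(words, key=len, reverse=True):
        load, proc = heapq.heappop(loads)
        assignment[proc].append(word)
        heapq.heappush(loads, (load + len(word), proc))

    return assignment

# Toy example: 4 processors, a small vocabulary
vocab = ["recognition", "speech", "automatic", "training",
         "load", "balancing", "model", "hidden"]
for proc, assigned in distribute_vocabulary(vocab, 4).items():
    print(proc, assigned, sum(len(w) for w in assigned))
```

Balancing on word length rather than word count is what reduces idle time: a processor holding a few long words finishes at roughly the same time as one holding many short words.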

Citation (APA)

Daoudi, E. M., Manneback, P., Meziane, A., & Hadj, Y. O. M. E. (2000). Study of the load balancing in the parallel training for automatic speech recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1900, pp. 506–510). Springer Verlag. https://doi.org/10.1007/3-540-44520-x_67
