Most error correction interfaces for speech recognition applications on smartphones require the user to first mark an error region and then choose the correct word from a candidate list. We propose a simple multimodal interface that makes this process more efficient. We develop Long Context Match (LCM) to obtain candidates that complement the conventional word confusion network (WCN). Assuming that users validate not only the preceding words but also the succeeding words of the error region, we use both contexts to search higher-order n-gram corpora for matching word sequences. For this purpose, we also utilize Web text data. Furthermore, we propose a combination of LCM and WCN ("LCM + WCN") to provide users with candidate lists that are more relevant than those yielded by WCN alone. We compare our interface with the WCN-based interface on the Corpus of Spontaneous Japanese (CSJ). Our proposed "LCM + WCN" method improved the 1-best accuracy by 23% and the Mean Reciprocal Rank (MRR) by 28%, and our interface reduced the user's load by 12%.
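The core LCM idea described above can be sketched in a few lines of Python. The snippet below is only an illustrative approximation, not the authors' implementation: the function names, the fixed n-gram order, the single-word error region, and the simple interleaving with WCN candidates are all assumptions made for this example.

    from collections import defaultdict
    from itertools import zip_longest

    def build_ngram_index(corpus_sentences, n=5):
        # Count n-grams in a tokenized text corpus; in the paper,
        # large Web text data plays this role.
        counts = defaultdict(int)
        for sentence in corpus_sentences:
            words = sentence.split()
            for i in range(len(words) - n + 1):
                counts[tuple(words[i:i + n])] += 1
        return counts

    def lcm_candidates(ngram_counts, left_context, right_context):
        # Rank candidate words for the error region by how often they
        # appear in the corpus between the user-validated left and
        # right contexts (a single-word error region is assumed here).
        left, right = tuple(left_context), tuple(right_context)
        scores = defaultdict(int)
        for gram, count in ngram_counts.items():
            if (len(gram) == len(left) + 1 + len(right)
                    and gram[:len(left)] == left
                    and gram[len(left) + 1:] == right):
                scores[gram[len(left)]] += count
        return sorted(scores, key=scores.get, reverse=True)

    def combine_with_wcn(lcm_list, wcn_list):
        # Naive interleaving of LCM and WCN candidates; the paper's
        # "LCM + WCN" combination is more elaborate than this.
        merged, seen = [], set()
        for a, b in zip_longest(lcm_list, wcn_list):
            for w in (a, b):
                if w is not None and w not in seen:
                    seen.add(w)
                    merged.append(w)
        return merged

For instance, given validated contexts such as ["went", "to"] and ["station", "yesterday"] around a misrecognized word, lcm_candidates returns the words that most frequently fill that slot in the 5-gram counts, and combine_with_wcn merges them with the WCN list shown to the user.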
Liang, Y., Iwano, K., & Shinoda, K. (2015). Error correction using long context match for smartphone speech recognition. IEICE Transactions on Information and Systems, E98D(11), 1932–1942. https://doi.org/10.1587/transinf.2015EDP7179