Human understanding of spoken language appears to integrate contextual expectations with acoustic-level perception in a tightly coupled, sequential fashion. Yet computer speech understanding systems typically pass the transcript produced by a speech recognizer into a natural language parser with no integration of acoustic and grammatical constraints. One reason for this is the complexity of implementing that integration. To address this issue we have created a robust, semantic parser as a single finite-state machine (FSM). As such, its run-time action is less complex than that of other robust parsers based on either chart or generalized left-right (GLR) architectures. Therefore, we believe it is ultimately more amenable to direct integration with a speech decoder.
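To make the idea concrete, the following is a minimal, hypothetical sketch (not the paper's actual grammar or implementation) of robust semantic parsing as a single finite-state machine: one left-to-right pass over the recognizer's word string, with semantic slots filled at designated states and out-of-grammar words skipped via self-loops. The travel-domain vocabulary and slot names are illustrative assumptions.

```python
# Toy FSM semantic parser: states IDLE, EXPECT_SRC, EXPECT_DST.
# Words that match no transition are ignored (self-loop), which gives the
# "robust" skipping of disfluencies and out-of-vocabulary material.

CITIES = {"portland", "boston", "seattle", "denver"}  # assumed toy lexicon


def parse(words):
    """Single left-to-right pass of the FSM, filling travel slots."""
    state = "IDLE"
    slots = {}
    for w in words:
        if state == "IDLE" and w == "from":
            state = "EXPECT_SRC"          # next city fills the source slot
        elif state == "IDLE" and w == "to":
            state = "EXPECT_DST"          # next city fills the destination slot
        elif state == "EXPECT_SRC" and w in CITIES:
            slots["source"] = w
            state = "IDLE"
        elif state == "EXPECT_DST" and w in CITIES:
            slots["destination"] = w
            state = "IDLE"
        # any other word: stay in the current state (robust skipping)
    return slots


if __name__ == "__main__":
    utterance = "uh i want to fly from portland to boston tomorrow please".split()
    print(parse(utterance))
    # -> {'source': 'portland', 'destination': 'boston'}
```

Because the entire parser is a single deterministic state machine over the word stream, each input token triggers at most one constant-time transition, which is the kind of simple run-time action that makes tight coupling with a speech decoder plausible.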