Abstract
Automatic analysis of impaired speech for screening or diagnosis is a growing research field; however, there are still many barriers to a fully automated approach. When automatic speech recognition is used to obtain the speech transcripts, sentence boundaries must be inserted before most measures of syntactic complexity can be computed. In this paper, we consider how language impairments can affect segmentation methods, and compare the results of computing syntactic complexity metrics on automatically and manually segmented transcripts. We find that the important boundary indicators and the resulting segmentation accuracy can vary depending on the type of impairment observed, but that results on patient data are generally similar to those on control data. We also find that a number of syntactic complexity metrics are robust to the types of segmentation errors that are typically made.
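As a minimal illustration of why segmentation matters (not taken from the paper), the sketch below computes a simple complexity proxy, mean sentence length in words, for the same token sequence under a manual and an automatic segmentation; the transcript and boundary positions are hypothetical, and the metric is only a stand-in for the syntactic measures studied in the paper.

def mean_sentence_length(tokens, boundaries):
    """Mean number of tokens per sentence.

    `boundaries` holds the index just past the last token of each
    sentence, so the final entry should equal len(tokens).
    """
    lengths, start = [], 0
    for end in boundaries:
        lengths.append(end - start)
        start = end
    return sum(lengths) / len(lengths)

# Hypothetical transcript tokens.
tokens = ("the man is washing the dishes and the water "
          "is overflowing the sink").split()

manual_boundaries = [6, 13]     # human-inserted sentence breaks
automatic_boundaries = [13]     # an automatic segmenter that missed a boundary

print(mean_sentence_length(tokens, manual_boundaries))     # 6.5
print(mean_sentence_length(tokens, automatic_boundaries))  # 13.0

Even this toy example shows how a missed boundary inflates a length-based complexity score, which is why the paper compares metrics computed on automatic versus manual segmentations.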
Citation
Fraser, K. C., Ben-David, N., Hirst, G., Graham, N. L., & Rochon, E. (2015). Sentence segmentation of aphasic speech. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies (NAACL HLT 2015) (pp. 862–871). Association for Computational Linguistics. https://doi.org/10.3115/v1/n15-1087