Automatic snore sound extraction from sleep sound recordings via auditory image modeling

Abstract

One ability of the human auditory system is the differentiation between sounds with slightly different frequencies. The auditory image model (AIM) was developed to explain this auditory phenomenon numerically. Acoustic analyses of snore sounds recorded with non-contact microphones have recently been performed, and snore/non-snore classification is required at the front end of such analyses. The performance of a sound classification method can be evaluated against human hearing, which is considered the gold standard. In this paper, we propose a novel method for automatically extracting snore sounds from sleep sound recordings using an AIM-based snore/non-snore classification system. The proposed automatic classification method achieved a sensitivity of 97.2% and a specificity of 96.3% when analyzing snore and non-snore sounds from 40 subjects. We anticipate that these findings will contribute to the development of an automated snore analysis system for use in sleep studies.
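The abstract evaluates the classifier against human hearing as the gold standard and reports sensitivity and specificity. As a minimal sketch (not the authors' implementation), the Python snippet below shows how these two figures are computed from paired snore/non-snore labels; the function name and the example labels are assumptions for illustration only.

# Minimal sketch: sensitivity and specificity for a binary snore/non-snore
# classifier, treating "snore" as the positive class. The labels below are
# invented for illustration and do not come from the paper's dataset.

def sensitivity_specificity(y_true, y_pred, positive="snore"):
    """Return (sensitivity, specificity) from paired label sequences."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    tn = sum(t != positive and p != positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity

# Hypothetical human-labelled ("gold standard") and classifier outputs:
gold      = ["snore", "snore", "non-snore", "snore", "non-snore", "non-snore"]
predicted = ["snore", "snore", "non-snore", "non-snore", "non-snore", "snore"]

sens, spec = sensitivity_specificity(gold, predicted)
print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")

In this setting, sensitivity is the fraction of human-labelled snore episodes the system recovers, and specificity is the fraction of non-snore episodes it correctly rejects; the paper's reported figures of 97.2% and 96.3% correspond to these two quantities.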

Citation (APA)

Nonaka, R., Emoto, T., Abeyratne, U. R., Jinnouchi, O., Kawata, I., Ohnishi, H., … Kinouchi, Y. (2016). Automatic snore sound extraction from sleep sound recordings via auditory image modeling. Biomedical Signal Processing and Control, 27, 7–14. https://doi.org/10.1016/j.bspc.2015.12.009
