A multi-modal classifier for heart sound recordings


Abstract

Information extracted from heart sound signals is associated with valvular heart diseases and other cardiovascular disorders. This study aims to develop a computational framework for the classification of a given heart sound recording. Different techniques have their respective strengths in classifying heart sound recordings with various patterns, and it is difficult to find one technique that outperforms all the others. We hence propose a multi-modal classifier that fuses the classification results of several techniques based on different features. Using the data from the 2016 PhysioNet/CinC Challenge, we generated two feature sets: one calculated from segmentation results produced by a peak-finding method, and the other extracted by audio signal analysis. We then assess the performance of two classification techniques - support vector machines (SVMs) and extreme learning machines (ELMs) - by feeding them the best subset of features selected from these two feature sets. The final heart sound classification result (normal / abnormal) is determined by combining the two classifiers through voting. The best performance out of five online entries achieved an overall score of 0.83, with sensitivity = 0.70 and specificity = 0.96.
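The pipeline described above - two feature sets, two classifiers, and a vote - can be sketched as follows. This is a minimal illustration, not the authors' implementation: the data here is random placeholder input, the ELM is a bare-bones version (random hidden layer with a least-squares readout), and the tie-breaking rule for the two-voter ensemble is an assumption, since the abstract does not specify it.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data: in the paper, one feature set comes from
# peak-finding segmentation and the other from audio signal analysis.
rng = np.random.default_rng(0)
X_seg = rng.normal(size=(200, 10))    # segmentation-based features
X_audio = rng.normal(size=(200, 12))  # audio-analysis features
y = rng.integers(0, 2, size=200)      # 0 = normal, 1 = abnormal

# Classifier 1: an SVM on the segmentation features.
svm = SVC().fit(X_seg, y)

# Classifier 2: a minimal extreme learning machine on the audio
# features (fixed random hidden layer, pseudo-inverse readout).
class ELM:
    def __init__(self, n_hidden=50, seed=0):
        self.n_hidden, self.seed = n_hidden, seed

    def fit(self, X, y):
        rng = np.random.default_rng(self.seed)
        self.W = rng.normal(size=(X.shape[1], self.n_hidden))
        H = np.tanh(X @ self.W)            # random hidden-layer outputs
        self.beta = np.linalg.pinv(H) @ y  # least-squares output weights
        return self

    def predict(self, X):
        return (np.tanh(X @ self.W) @ self.beta > 0.5).astype(int)

elm = ELM().fit(X_audio, y)

# Ensemble by voting. With only two voters, ties must be broken somehow;
# here we label a recording abnormal only when both classifiers agree
# (a conservative assumption - the paper's exact rule may differ).
pred = ((svm.predict(X_seg) + elm.predict(X_audio)) >= 2).astype(int)
```

Fusing classifiers trained on complementary feature sets is the standard motivation for this design: errors made by the segmentation-based view and the audio-analysis view are less likely to coincide than errors within a single view.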

Citation (APA)

Yang, X., Yang, F., Gobeawan, L., Yeo, S. Y., Leng, S., Zhong, L., & Su, Y. (2016). A multi-modal classifier for heart sound recordings. In Computing in Cardiology (Vol. 43, pp. 1165–1168). IEEE Computer Society. https://doi.org/10.22489/cinc.2016.339-225
