PocketEAR: An assistive sound classification system for the hearing-impaired

Abstract

This paper describes the design and operation of an assistive system called PocketEAR, which is primarily targeted at hearing-impaired users. It helps them orient themselves in acoustically active environments by continuously monitoring and classifying incoming sounds and displaying the detected sound classes to the user. The environmental sound recognizer is designed as a two-stage deep convolutional neural network classifier (consisting of a so-called superclassifier and a set of so-called subclassifiers) fed with sequences of MFCC vectors. It is wrapped in a distributed client-server system in which sound capture in the field, (pre)processing, and display of the classification results are performed by instances of a mobile client application, while the actual classification and maintenance are carried out by two cooperating servers. The paper discusses in detail the architecture of the environmental sound classifier as well as the task-specific sound processing employed.
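To picture the two-stage design, the sketch below routes an incoming MFCC sequence first through a superclassifier that picks a coarse sound superclass, and then through the subclassifier selected by that result, which refines it into a concrete sound class. This is a minimal illustrative sketch, not the paper's implementation: the class names, network sizes, and MFCC parameters (13 coefficients, 128 frames, librosa defaults) are assumptions, and the networks are untrained.

```python
# Hypothetical sketch of a two-stage (superclassifier -> subclassifier) routing
# over MFCC sequences; sizes and class names are assumptions, not the paper's.
import numpy as np
import librosa
import torch
import torch.nn as nn

N_MFCC = 13      # assumed number of MFCC coefficients per frame
N_FRAMES = 128   # assumed fixed-length MFCC sequence fed to the CNNs

class SoundCNN(nn.Module):
    """A small 2-D CNN over an (n_mfcc x n_frames) 'image' of MFCCs."""
    def __init__(self, n_classes):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (N_MFCC // 4) * (N_FRAMES // 4), n_classes),
        )

    def forward(self, x):
        return self.head(self.features(x))

# Illustrative class hierarchy: each superclass has its own subclassifier.
SUPERCLASSES = ["household", "traffic", "alarm"]
SUBCLASSES = {
    "household": ["doorbell", "vacuum", "running water"],
    "traffic":   ["car horn", "engine", "siren passing"],
    "alarm":     ["fire alarm", "phone ringing"],
}

superclassifier = SoundCNN(len(SUPERCLASSES))
subclassifiers = {name: SoundCNN(len(labels)) for name, labels in SUBCLASSES.items()}

def mfcc_sequence(wave, sr):
    """Compute an MFCC matrix and pad/crop it to a fixed number of frames."""
    m = librosa.feature.mfcc(y=wave, sr=sr, n_mfcc=N_MFCC)
    if m.shape[1] < N_FRAMES:
        m = np.pad(m, ((0, 0), (0, N_FRAMES - m.shape[1])))
    return torch.from_numpy(m[:, :N_FRAMES]).float()[None, None]  # (1, 1, n_mfcc, n_frames)

def classify(wave, sr):
    """Stage 1 picks the superclass; stage 2 picks the class within it."""
    x = mfcc_sequence(wave, sr)
    with torch.no_grad():
        sup = SUPERCLASSES[superclassifier(x).argmax(dim=1).item()]
        sub = SUBCLASSES[sup][subclassifiers[sup](x).argmax(dim=1).item()]
    return sup, sub

# Example: classify one second of (here, synthetic) audio at 16 kHz.
print(classify(np.random.randn(16000).astype(np.float32), sr=16000))
```

In the distributed setup described above, the MFCC extraction would run on the mobile client and the two classification stages on the server side; the monolithic layout here is only for readability.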

Citation (APA)

Ekštein, K. (2019). Pocketear: An assistive sound classification system for hearing-impaired. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11658 LNAI, pp. 82–92). Springer Verlag. https://doi.org/10.1007/978-3-030-26061-3_9
