Increasing the number of chews per bite can help reduce obesity. Nevertheless, it is difficult for a person to keep track of their mastication rate without the help of an automatic mastication counting device. Such devices do exist, but they are bulky and non-portable, making them unsuitable for daily use. In our previous work, we proposed an optimized model for classifying chewing, swallowing, and speaking activities using sound data collected by a bone conduction microphone in a natural eating environment. In this paper, we aim to implement a system that can automatically recognize a person's eating gestures (e.g., mastication, swallowing, and utterance) in real time. To realize this, it is necessary to add other sounds, such as environmental noise, to the model so that it is more robust to natural meal environments. Therefore, in this study, we propose an optimized classification method that adds an "other sounds" class to the three eating activities.
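As a rough illustration of the four-class setup described above (the paper's actual model and features are not specified here), the sketch below classifies short audio clips into chewing, swallowing, speaking, or "other sounds" using toy frame-level features and nearest-centroid matching. All function names, features, and data are hypothetical, not the authors' implementation.

```python
# Hypothetical sketch: four-class eating-sound classification
# (chewing, swallowing, speaking, other) via nearest-centroid
# matching over simple audio features. Not the paper's method.
import numpy as np

CLASSES = ["chewing", "swallowing", "speaking", "other"]

def extract_features(clip: np.ndarray) -> np.ndarray:
    """Toy per-clip features: mean energy, zero-crossing rate,
    and spectral centroid (FFT-bin index)."""
    energy = float(np.mean(clip ** 2))
    zcr = float(np.mean(np.abs(np.diff(np.sign(clip)))) / 2)
    spectrum = np.abs(np.fft.rfft(clip))
    freqs = np.arange(spectrum.size)
    centroid = float((spectrum * freqs).sum() / (spectrum.sum() + 1e-12))
    return np.array([energy, zcr, centroid])

def fit_centroids(clips, labels):
    """Mean feature vector per class label."""
    feats = np.array([extract_features(c) for c in clips])
    labels = np.array(labels)
    return {cls: feats[labels == cls].mean(axis=0) for cls in set(labels)}

def classify(clip, centroids):
    """Assign the class whose centroid is nearest in feature space."""
    f = extract_features(clip)
    return min(centroids, key=lambda cls: np.linalg.norm(f - centroids[cls]))
```

In practice a real-time system of this kind would slide a window over the bone-conduction microphone stream and classify each window; the "other" class absorbs noise that would otherwise be forced into one of the three eating-gesture classes.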
CITATION STYLE
Kondo, T., Kamachi, H., Ishii, S., Yokokubo, A., & Lopez, G. (2019). Robust classification of eating sound collected in natural meal environment. In Adjunct Proceedings of the 2019 ACM International Joint Conference on Pervasive and Ubiquitous Computing and Proceedings of the 2019 ACM International Symposium on Wearable Computers (UbiComp/ISWC '19 Adjunct) (pp. 105–108). Association for Computing Machinery. https://doi.org/10.1145/3341162.3343780