Analysis of chewing sounds for dietary monitoring

Abstract

The paper reports the results of the first stage of our work on an automatic dietary monitoring system. The work is part of a large European project on using ubiquitous systems to support a healthy lifestyle and cardiovascular disease prevention. We demonstrate that sound from the user's mouth can be used to detect that he/she is eating. The paper also shows how different kinds of food can be recognized by analyzing chewing sounds. The sounds are acquired with a microphone located inside the ear canal. This is an unobtrusive location widely accepted in other applications (hearing aids, headsets). To validate our method we present experimental results containing 3500 seconds of chewing data from four subjects on four different food types typically found in a meal. Up to 99% accuracy is achieved on eating recognition and between 80% and 100% on food type classification. © Springer-Verlag Berlin Heidelberg 2005.
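The abstract does not describe the signal-processing chain in detail, so the following is only a minimal sketch of the general idea: frame an ear-canal audio recording, compute simple short-time spectral features, and train a generic classifier to separate chewing from non-chewing frames. The frame lengths, feature set, band edges, and the choice of a random-forest classifier are illustrative assumptions, not the authors' method.

```python
# Illustrative sketch (NOT the paper's actual pipeline): short-time spectral
# features from an ear-canal audio signal fed to an off-the-shelf classifier.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def frame_signal(x, sr, frame_ms=50, hop_ms=25):
    """Split a 1-D audio signal into overlapping frames (assumed sizes)."""
    frame = int(sr * frame_ms / 1000)
    hop = int(sr * hop_ms / 1000)
    n = 1 + max(0, (len(x) - frame) // hop)
    return np.stack([x[i * hop : i * hop + frame] for i in range(n)])

def frame_features(frames, sr):
    """Per-frame log-energy, spectral centroid, and coarse band energies."""
    spec = np.abs(np.fft.rfft(frames * np.hanning(frames.shape[1]), axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / sr)
    energy = np.log(np.sum(spec ** 2, axis=1) + 1e-12)
    centroid = np.sum(spec * freqs, axis=1) / (np.sum(spec, axis=1) + 1e-12)
    # Band edges (0-1 kHz, 1-4 kHz, >4 kHz) are arbitrary placeholders.
    bands = [np.log(np.sum(spec[:, (freqs >= lo) & (freqs < hi)] ** 2, axis=1) + 1e-12)
             for lo, hi in [(0, 1000), (1000, 4000), (4000, sr / 2)]]
    return np.column_stack([energy, centroid] + bands)

# Usage sketch with placeholder data; real use would substitute labelled
# ear-canal recordings of chewing vs. non-chewing (or of food types).
sr = 16000
audio = np.random.randn(sr * 10)              # placeholder 10 s signal
X = frame_features(frame_signal(audio, sr), sr)
y = np.random.randint(0, 2, size=len(X))      # placeholder frame labels
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
print("predicted labels for first frames:", clf.predict(X[:5]))
```

The same feature matrix could in principle be reused for food-type classification by swapping the binary labels for per-food labels; again, this is an assumption for illustration, not the evaluation protocol reported in the paper.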

Citation (APA)

Amft, O., Stäger, M., Lukowicz, P., & Tröster, G. (2005). Analysis of chewing sounds for dietary monitoring. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3660 LNCS, pp. 56–72). Springer Verlag. https://doi.org/10.1007/11551201_4
