Recognizing daily activities in realistic environments through depth-based user tracking and hidden conditional random fields for MCI/AD support

Abstract

This paper presents a novel framework for the automatic recognition of Activities of Daily Living (ADLs), such as cooking, eating, dishwashing, and watching TV, based on depth video processing and Hidden Conditional Random Fields (HCRFs). Depth video is provided by low-cost RGB-D sensors unobtrusively installed in the house. The user's location, posture, and point-cloud-based features related to gestures are extracted; a standing/sitting posture detector and novel features expressing head and hand gestures are introduced herein. To model the target activities, we employed discriminative HCRFs and compared them to HMMs. In the experimental evaluation, HCRFs outperformed HMMs in ADL detection based on location trajectories. Fusing trajectory data with posture and the proposed gesture features further improved ADL detection performance, leading to a recognition rate of 90.5% for five target activities in a naturalistic home environment.
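The pipeline summarized above amounts to per-frame feature fusion (location, posture flag, gesture descriptors) followed by sequence classification with HCRFs, benchmarked against an HMM baseline. The sketch below is a rough illustration of such a baseline, not the authors' implementation: it assumes the hmmlearn package, a hypothetical six-dimensional per-frame feature vector, and synthetic training data in place of the depth-derived features.

```python
# Minimal sketch of a per-class HMM baseline for ADL sequence classification.
# Illustrative only: feature extraction from depth video and the HCRF models
# described in the paper are not reproduced here.
import numpy as np
from hmmlearn.hmm import GaussianHMM

ACTIVITIES = ["cooking", "eating", "dishwashing", "watching_tv", "other"]
N_FEATURES = 6  # hypothetical: x/z location, sitting flag, 3 gesture descriptors

def make_sequence(rng, n_frames=120):
    """Placeholder for a per-frame feature sequence extracted from depth video."""
    return rng.normal(size=(n_frames, N_FEATURES))

rng = np.random.default_rng(0)

# Train one generative model per target activity on that activity's sequences.
models = {}
for activity in ACTIVITIES:
    train_seqs = [make_sequence(rng) for _ in range(10)]
    X = np.concatenate(train_seqs)
    lengths = [len(s) for s in train_seqs]
    hmm = GaussianHMM(n_components=4, covariance_type="diag", n_iter=50)
    hmm.fit(X, lengths)
    models[activity] = hmm

def classify(sequence):
    """Assign the activity whose HMM yields the highest log-likelihood."""
    return max(models, key=lambda a: models[a].score(sequence))

print(classify(make_sequence(rng)))
```

A discriminative HCRF replaces the per-class likelihood comparison with a single model trained to discriminate between activity labels over hidden state sequences, which is the comparison reported in the abstract.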

Citation (APA)

Giakoumis, D., Stavropoulos, G., Kikidis, D., Vasileiadis, M., Votis, K., & Tzovaras, D. (2015). Recognizing daily activities in realistic environments through depth-based user tracking and hidden conditional random fields for MCI/AD support. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8927, pp. 822–838). Springer Verlag. https://doi.org/10.1007/978-3-319-16199-0_57
