Vision-based event detection of the sit-to-stand transition


Abstract

Sit-to-stand (STS) motions are among the most important activities of daily living, as they serve as a precursor to mobility and walking. However, no standard method exists for segmenting STS motions. This is due in part to the variety of sensors and modalities used to study the STS motion, such as force plates, vision, and accelerometers, each providing a different type of data, and to the variability of the STS motion in video data. In this work, we present a method that uses motion capture to detect events in the STS motion by estimating ground reaction forces, thereby eliminating the variability in joint angles from visual data. We demonstrate the accuracy of this method on 10 subjects, with an average difference of 16.5 ms between event times obtained via motion capture and via force plate. This method serves as a proof of concept for detecting events in the STS motion from video that are comparable to those obtained via force plate.
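The abstract describes detecting STS events from (estimated) ground reaction forces. The paper does not specify its detection rule, but a common approach in the literature is to mark an event when the vertical GRF crosses a threshold relative to body weight. The following is a minimal illustrative sketch of that idea; the `detect_sts_onset` function, the 1.05 × body-weight threshold, and the synthetic signal are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def detect_sts_onset(grf_z, body_weight, fs, threshold_ratio=1.05):
    """Return the time (s) at which vertical GRF first exceeds
    threshold_ratio * body_weight -- a simple proxy for an STS
    event such as seat-off. Hypothetical rule, not from the paper."""
    above = grf_z > threshold_ratio * body_weight
    idx = int(np.argmax(above))        # index of first True, or 0 if none
    if not above[idx]:
        return None                    # signal never crossed the threshold
    return idx / fs

# Synthetic example: a 100 Hz vertical GRF trace ramping from
# 0.6 x body weight (seated) to 1.2 x body weight (rising).
fs = 100
bw = 700.0                             # body weight in newtons (assumed)
t = np.arange(0.0, 2.0, 1.0 / fs)
grf = bw * (0.6 + 0.6 * np.clip((t - 0.5) / 0.8, 0.0, 1.0))
onset = detect_sts_onset(grf, bw, fs)
```

With the synthetic ramp above, the crossing falls at roughly t ≈ 1.1 s; in practice the same routine could be run on forces estimated from motion capture, which is the substitution the paper evaluates against a force plate.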


Citation (APA)

Shia, V., & Bajcsy, R. (2015). Vision-based event detection of the sit-To-stand transition. In MOBIHEALTH 2015 - 5th EAI International Conference on Wireless Mobile Communication and Healthcare - Transforming Healthcare through Innovations in Mobile and Wireless Technologies. ICST. https://doi.org/10.4108/eai.14-10-2015.2261631
