Child video dataset tool to develop object tracking simulates babysitter vision robot

Abstract

This study presents a Child Video Dataset (CVDS) containing numerous videos of children of different ages in a variety of situations. To simulate a babysitter's vision, an application was developed to track objects in a scene, with the main goal of building a reliable and effective moving child-object detection system. The aim of this study is to explore novel algorithms for tracking a child-object in indoor and outdoor background videos. It focuses on tracking the whole child-object while simultaneously tracking that object's body parts to produce a robust system. The proposed approach labels three body sections, i.e., the head, upper and lower sections, then detects a specific area within each section and tracks it with a Gaussian Mixture Model (GMM) algorithm according to the labeling technique. The system is applied in three situations: child-object walking, crawling and seated moving. During system experimentation, walking-object tracking gave the best performance, achieving 91.932% for body-part tracking and 96.235% for whole-object tracking. Crawling-object tracking achieved 90.832% for body-part tracking and 96.231% for whole-object tracking. Finally, seated-moving-object tracking achieved 89.7% for body-part tracking and 93.4% for whole-object tracking.
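As a rough illustration of the GMM-based detection and body-section labeling described in the abstract, the sketch below uses OpenCV's MOG2 background subtractor (a Gaussian-mixture method) as a stand-in for the paper's GMM step: it segments the moving child-object, takes the largest foreground blob as the whole object, and splits its bounding box into head, upper and lower sections. The video file name, area threshold and equal-thirds split are illustrative assumptions, not details taken from the paper.

import cv2

# Minimal sketch, assuming OpenCV's MOG2 subtractor as the GMM component.
cap = cv2.VideoCapture("child_walking.avi")   # hypothetical CVDS clip
gmm = cv2.createBackgroundSubtractorMOG2(history=500, varThreshold=16)

while True:
    ok, frame = cap.read()
    if not ok:
        break

    # Foreground mask from the Gaussian-mixture background model
    mask = gmm.apply(frame)
    mask = cv2.morphologyEx(
        mask, cv2.MORPH_OPEN,
        cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if contours:
        # Largest moving blob is taken as the whole child-object
        c = max(contours, key=cv2.contourArea)
        if cv2.contourArea(c) > 500:              # illustrative size filter
            x, y, w, h = cv2.boundingRect(c)
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

            # Label three body sections by splitting the box vertically;
            # the equal 1/3 proportions are an assumption for illustration.
            third = h // 3
            for i, name in enumerate(("head", "upper", "lower")):
                y0 = y + i * third
                cv2.rectangle(frame, (x, y0), (x + w, y0 + third),
                              (255, 0, 0), 1)
                cv2.putText(frame, name, (x, y0 + 15),
                            cv2.FONT_HERSHEY_SIMPLEX, 0.5, (255, 0, 0), 1)

    cv2.imshow("tracking", frame)
    if cv2.waitKey(30) & 0xFF == 27:              # Esc to quit
        break

cap.release()
cv2.destroyAllWindows()

In the paper, the labeling technique determines the section boundaries and the tracked area within each section; the equal split above only shows the overall pipeline structure.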

Citation (APA)
Aljuaid, H., & Mohamad, D. (2014). Child video dataset tool to develop object tracking simulates babysitter vision robot. Journal of Computer Science, 10(2), 296–304. https://doi.org/10.3844/jcssp.2014.296.304
