Adaptive Retraining of Visual Recognition-Model in Human Activity Recognition by Collaborative Humanoid Robots


Abstract

We present a vision-based activity recognition system for centrally connected humanoid robots. The robots interact with several human participants who exhibit varying behavioral styles and inter-activity variability. A cloud server provides and updates the recognition model on all robots, continuously fetching the new activity videos recorded by the robots along with the corresponding recognition results and the ground truths supplied by the interacting humans. An evolving, performance-based logic decides when to retrain the recognition model. In this article, we present this adaptive recognition system with special emphasis on the partitioning logic used to divide the new videos into the training, cross-validation, and test groups of the next retraining instance. This operating logic is driven by the class-wise recognition inaccuracies of the existing model. We compare this approach with a probabilistic partitioning baseline in which the videos are partitioned with no performance considerations.
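The contrast between the two partitioning strategies can be sketched as below. This is a minimal illustration only: the function names, the split ratios, and the error-weighting factor are assumptions for exposition, not the scheme actually used in the paper. The baseline splits videos at random, while the performance-based variant routes a larger share of each poorly recognized class into the next training set.

```python
import random

def partition_probabilistic(videos, ratios=(0.6, 0.2, 0.2), seed=0):
    """Baseline: random split with no performance considerations."""
    rng = random.Random(seed)
    shuffled = videos[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(ratios[0] * n)
    n_cv = int(ratios[1] * n)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_cv],
            shuffled[n_train + n_cv:])

def partition_by_class_error(videos, class_error, base_train=0.5, seed=0):
    """Performance-based split (illustrative): classes the current model
    recognizes poorly contribute a larger fraction of their videos to the
    training group of the next retraining instance."""
    rng = random.Random(seed)
    train, cv, test = [], [], []
    by_class = {}
    for v in videos:
        by_class.setdefault(v["label"], []).append(v)
    for label, vids in by_class.items():
        rng.shuffle(vids)
        err = class_error.get(label, 0.0)  # class-wise inaccuracy in [0, 1]
        # Assumed weighting: boost the training fraction for weak classes.
        train_frac = min(0.8, base_train + 0.3 * err)
        n_train = int(train_frac * len(vids))
        rest = vids[n_train:]
        half = len(rest) // 2
        train += vids[:n_train]
        cv += rest[:half]
        test += rest[half:]
    return train, cv, test
```

For example, with class-wise errors `{"walk": 0.9, "wave": 0.1}`, the second function places more "walk" videos than "wave" videos into the training group, whereas the probabilistic baseline treats both classes identically.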

Citation (APA)

Nagrath, V., Hariz, M., & Yacoubi, M. A. E. (2021). Adaptive Retraining of Visual Recognition-Model in Human Activity Recognition by Collaborative Humanoid Robots. In Advances in Intelligent Systems and Computing (Vol. 1251 AISC, pp. 124–143). Springer. https://doi.org/10.1007/978-3-030-55187-2_12
