Deep Transfer Learning Approach for Robust Hand Detection


Abstract

Human hand detection in uncontrolled environments is a challenging visual recognition task due to the numerous variations of hand poses and background image clutter. To achieve highly accurate results while providing real-time execution, we propose a deep transfer learning approach built on a state-of-the-art deep learning object detector. Our method, denoted YOLOHANDS, is built on top of the You Only Look Once (YOLO) deep learning architecture, modified to adapt it to the single-class hand detection task. The model transfer is performed by modifying the higher convolutional layers, including the last fully connected layer, while initializing the lower, non-modified layers with generic pretrained weights. To address robustness issues, we introduce a comprehensive augmentation procedure over the training image dataset, specifically adapted to the hand detection problem. Experimental evaluation of the proposed method on a challenging public dataset demonstrates highly accurate results, comparable to state-of-the-art methods.
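The paper's code is not reproduced here, but the transfer scheme the abstract describes (freeze generic lower layers, retrain adapted higher layers and a single-class detection head) can be sketched in PyTorch. Everything below is illustrative: the toy backbone, layer sizes, anchor count, and weight-file path are assumptions, not the authors' exact YOLOHANDS configuration.

    # Minimal sketch of the transfer-learning setup, assuming a generic
    # PyTorch YOLO-style detector. Layer names and sizes are illustrative.
    import torch
    import torch.nn as nn

    NUM_CLASSES = 1   # single-class task: "hand"
    NUM_ANCHORS = 5   # assumed anchor count per grid cell

    class TinyYoloLike(nn.Module):
        """Toy YOLO-style detector: conv backbone + 1x1 detection head."""
        def __init__(self):
            super().__init__()
            def block(cin, cout):
                return nn.Sequential(
                    nn.Conv2d(cin, cout, 3, padding=1, bias=False),
                    nn.BatchNorm2d(cout),
                    nn.LeakyReLU(0.1),
                    nn.MaxPool2d(2),
                )
            # Lower layers: left unmodified and initialized from generic
            # pretrained weights, as described in the abstract.
            self.lower = nn.Sequential(block(3, 16), block(16, 32), block(32, 64))
            # Higher layers: modified and retrained for hand detection.
            self.higher = nn.Sequential(block(64, 128), block(128, 256))
            # Head predicts (x, y, w, h, objectness, class scores) per anchor.
            self.head = nn.Conv2d(256, NUM_ANCHORS * (5 + NUM_CLASSES), 1)

        def forward(self, x):
            return self.head(self.higher(self.lower(x)))

    model = TinyYoloLike()

    # 1) Initialize the non-modified lower layers with pretrained weights
    #    (hypothetical file path; in practice taken from a generic model).
    # model.lower.load_state_dict(torch.load("pretrained_lower.pt"))

    # 2) Freeze the lower layers so only the adapted higher layers and the
    #    new single-class head are trained on the hand dataset.
    for p in model.lower.parameters():
        p.requires_grad = False

    optimizer = torch.optim.SGD(
        [p for p in model.parameters() if p.requires_grad],
        lr=1e-3, momentum=0.9,
    )

The augmentation procedure is likewise not specified in the abstract. A plausible bounding-box-aware pipeline, sketched here with the Albumentations library, keeps hand boxes consistent with the transformed image; the particular transforms and parameters are assumptions rather than the authors' published procedure.

    # Hypothetical detection-aware augmentation pipeline (Albumentations).
    # "yolo" box format: normalized (x_center, y_center, width, height).
    import albumentations as A

    train_aug = A.Compose(
        [
            A.HorizontalFlip(p=0.5),
            A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.2,
                               rotate_limit=15, p=0.7),
            A.RandomBrightnessContrast(p=0.5),
            A.HueSaturationValue(hue_shift_limit=10, sat_shift_limit=20,
                                 val_shift_limit=20, p=0.5),
            A.MotionBlur(blur_limit=5, p=0.2),
        ],
        bbox_params=A.BboxParams(format="yolo",
                                 label_fields=["class_labels"],
                                 min_visibility=0.3),
    )

    # Usage: augmented = train_aug(image=image, bboxes=bboxes,
    #                              class_labels=labels)

Geometric transforms (flip, shift/scale/rotate) cover pose variation, while photometric ones (brightness, hue, blur) target background clutter and imaging conditions; min_visibility drops boxes that a crop or shift pushes mostly out of frame.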

Citation (APA)

Cvetkovic, S., Savic, N., & Ciric, I. (2023). Deep Transfer Learning Approach for Robust Hand Detection. Intelligent Automation and Soft Computing, 36(1), 967–979. https://doi.org/10.32604/iasc.2023.032526
