Data efficient approaches on deep action recognition in videos

ISSN: 2249-8958

Abstract

In this paper, we propose an efficient visual tracker that localizes the target object in a video with a bounding box produced by a sequence of actions learned using deep neural networks. The proposed deep neural network that controls the tracking actions is pre-trained on various training video sequences and fine-tuned during actual tracking for online adaptation to changes in the target and background. Pre-training is performed with deep reinforcement learning as well as supervised learning; the use of RL enables even partially labeled data to be used effectively for semi-supervised learning. Evaluated on an object tracking benchmark data set, the proposed tracker achieves competitive performance at three times the speed of existing deep-network-based trackers.
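The action-driven tracking loop described in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the action set, step sizes, and the toy policy are assumptions standing in for the learned deep network, which in the paper is pre-trained with reinforcement learning and fine-tuned online.

```python
# Hypothetical sketch of action-based tracking: the tracker repeatedly
# applies discrete bounding-box adjustments chosen by a policy until a
# "stop" action is emitted. In the paper the policy is a deep network;
# here it is a stub so the control loop itself is visible.

ACTIONS = ["left", "right", "up", "down", "scale_up", "scale_down", "stop"]

def apply_action(box, action, step=5, scale=1.1):
    """Move or rescale a bounding box (x, y, w, h) by one discrete action."""
    x, y, w, h = box
    if action == "left":
        x -= step
    elif action == "right":
        x += step
    elif action == "up":
        y -= step
    elif action == "down":
        y += step
    elif action == "scale_up":
        w, h = w * scale, h * scale
    elif action == "scale_down":
        w, h = w / scale, h / scale
    return (x, y, w, h)

def track_frame(box, policy, max_steps=20):
    """Iteratively adjust the box until the policy emits 'stop'."""
    for _ in range(max_steps):
        action = policy(box)
        if action == "stop":
            break
        box = apply_action(box, action)
    return box

# Toy policy standing in for the learned network: move right until x >= 50.
toy_policy = lambda box: "right" if box[0] < 50 else "stop"
final = track_frame((30, 40, 20, 20), toy_policy)
print(final)  # (50, 40, 20, 20)
```

Because each frame requires only a short sequence of cheap box adjustments rather than scoring many candidate regions, this action-decision formulation is what allows the reported speedup over conventional deep trackers.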

Citation (APA)

Sathya, R., Rugveda Muralidhar, I., Sai Harsha Vardhan, K., Sri Karan, R., & Arun Reddy, B. (2019). Data efficient approaches on deep action recognition in videos. International Journal of Engineering and Advanced Technology, 8(4), 385–391.
