Video captioning in Vietnamese using deep learning

Citations: 6 · Mendeley readers: 8

Abstract

With the development of modern society, demand for applications using digital cameras grows year by year. However, analyzing large amounts of video data remains one of the most challenging problems. Besides storing the data captured by cameras, intelligent systems are required to analyze that data quickly in order to detect important situations. In this paper, we use deep learning techniques to build models that automatically describe actions in video. To solve the problem, we use three deep learning models: a sequence-to-sequence model based on a recurrent neural network, a sequence-to-sequence model with attention, and a transformer model. We evaluate the effectiveness of the approaches based on the results of the three models. To train these models, we use the Microsoft research video description corpus (MSVD), a dataset of 1,970 videos and 85,550 captions translated into Vietnamese. To ensure fluent descriptions in Vietnamese, we also combine the models with a natural language processing (NLP) model for Vietnamese.
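Two of the three models (the sequence-to-sequence model with attention and the transformer) rely on an attention mechanism that weights per-frame video features when generating each output word. A minimal sketch of scaled dot-product attention, with all dimensions and variable names assumed for illustration rather than taken from the paper:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(query, keys, values):
    # Scaled dot-product attention: a decoder query attends over
    # per-frame video features (keys/values) and returns a context
    # vector plus the attention weights over frames.
    d_k = keys.shape[-1]
    scores = query @ keys.T / np.sqrt(d_k)   # (1, n_frames)
    weights = softmax(scores, axis=-1)
    return weights @ values, weights

rng = np.random.default_rng(0)
frame_feats = rng.normal(size=(8, 64))  # 8 frames, 64-dim features (assumed)
query = rng.normal(size=(1, 64))        # decoder state at one decoding step
context, w = attention(query, frame_feats, frame_feats)
```

At each decoding step the context vector is concatenated with (or fed into) the decoder state, letting the model focus on the frames most relevant to the word being generated.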

Citation (APA)

Phuc, D. T., Trieu, T. Q., Van Tinh, N., & Hieu, D. S. (2022). Video captioning in Vietnamese using deep learning. International Journal of Electrical and Computer Engineering, 12(3), 3092–3103. https://doi.org/10.11591/ijece.v12i3.pp3092-3103
