CNN-based visual/auditory feature fusion method with frame selection for classifying video events

Abstract

In recent years, personal videos have increasingly been shared online owing to the popularity of portable devices such as smartphones and action cameras. A recent report[1] predicted that 80% of Internet traffic would be video content by 2021. Several studies have addressed the detection of main video events in order to manage large-scale video collections, and they show fairly good performance in certain genres. However, the methods used in previous studies have difficulty detecting events in personal videos, because the characteristics and genres of personal videos vary widely. In our research, we found that adding a dataset with the right perspective improved performance, and that performance also depends on how keyframes are extracted from the video. We therefore selected frame segments that can represent a video, taking the characteristics of personal videos into account. From each frame segment, object, location, food, and audio features were extracted, and representative vectors were generated through a CNN-based recurrent model and a fusion module. The proposed method achieved 78.4% mAP in experiments on the LSVC[2] dataset.
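The pipeline described above (per-modality CNN features over selected frame segments, a recurrent model, and a fusion module) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' exact architecture: the feature dimensions, the use of an LSTM, the concatenation-based fusion, and the 500-class output are all assumptions.

# Minimal PyTorch sketch of segment-level multi-modal fusion for video event
# classification. Dimensions, module choices, and class count are assumptions.
import torch
import torch.nn as nn

class SegmentFusionClassifier(nn.Module):
    def __init__(self, feat_dims, hidden_dim=512, num_classes=500):
        # feat_dims: per-modality feature sizes, e.g.
        # {"object": 2048, "location": 2048, "food": 2048, "audio": 128}
        super().__init__()
        fused_dim = sum(feat_dims.values())
        # Recurrent model over the selected frame segments (assumed LSTM).
        self.rnn = nn.LSTM(fused_dim, hidden_dim, batch_first=True)
        # Fusion/classification head: a simple MLP over the final hidden state.
        self.head = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, feats):
        # feats: dict of modality -> tensor of shape (batch, segments, dim),
        # produced by pretrained CNN feature extractors for each frame segment.
        x = torch.cat([feats[k] for k in sorted(feats)], dim=-1)
        _, (h, _) = self.rnn(x)    # summarize the segment sequence
        return self.head(h[-1])    # event scores (logits) per video

# Example: 8 selected frame segments per video, batch of 2 videos.
dims = {"object": 2048, "location": 2048, "food": 2048, "audio": 128}
model = SegmentFusionClassifier(dims)
batch = {k: torch.randn(2, 8, d) for k, d in dims.items()}
print(model(batch).shape)  # torch.Size([2, 500])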

Cite

APA

Choe, G., Lee, S., & Nang, J. (2019). CNN-based visual/auditory feature fusion method with frame selection for classifying video events. KSII Transactions on Internet and Information Systems, 13(3), 1689–1701. https://doi.org/10.3837/tiis.2019.03.033
