RN-VID: A feature fusion architecture for video object detection

Abstract

Consecutive frames in a video are highly redundant. Therefore, to perform the task of video object detection, executing single-frame detectors on every frame without reusing any information is quite wasteful. It is with this idea in mind that we propose RN-VID (standing for RetinaNet-VIDeo), a novel approach to video object detection. Our contributions are twofold. First, we propose a new architecture that allows the use of information from nearby frames to enhance feature maps. Second, we propose a novel module to merge feature maps of the same dimensions using a re-ordering of channels and 1 × 1 convolutions. We then demonstrate that RN-VID achieves better mean average precision (mAP) than corresponding single-frame detectors with little additional cost during inference.
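The fusion module described above can be sketched in a few lines. The following is a minimal NumPy illustration of the general idea (channel re-ordering followed by a 1 × 1 convolution, which acts as a per-pixel linear map over channels); it is not the authors' implementation, and the random weights stand in for parameters that would be learned during training:

```python
import numpy as np

def fuse_feature_maps(maps, out_channels, rng=None):
    """Sketch of the described fusion module: interleave channels of
    same-sized feature maps, then apply a 1x1 convolution.
    Weights are random here; in practice they would be learned."""
    rng = rng if rng is not None else np.random.default_rng(0)
    c, h, w = maps[0].shape
    # Re-order channels so that channel i of every frame ends up adjacent
    # (interleaved), rather than naively concatenating whole frame blocks.
    stacked = np.stack(maps, axis=1)            # (C, n_frames, H, W)
    reordered = stacked.reshape(c * len(maps), h, w)
    # A 1x1 convolution is a linear map over channels at each pixel.
    weight = rng.standard_normal((out_channels, c * len(maps)))
    return np.einsum('oc,chw->ohw', weight, reordered)

# Example: merge feature maps from three nearby frames back to 8 channels.
frames = [np.random.default_rng(i).standard_normal((8, 4, 4)) for i in range(3)]
fused = fuse_feature_maps(frames, out_channels=8)
print(fused.shape)  # (8, 4, 4)
```

The interleaving step matters because a 1 × 1 convolution mixes channels locally; placing the corresponding channels of each frame next to one another makes it easy for the learned weights to combine them.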

Citation (APA)

Perreault, H., Heritier, M., Gravel, P., Bilodeau, G.-A., & Saunier, N. (2020). RN-VID: A feature fusion architecture for video object detection. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12131 LNCS, pp. 125–138). Springer. https://doi.org/10.1007/978-3-030-50347-5_12
