Multi-scale hierarchical residual network for dense captioning


Abstract

Recent research on dense captioning based on recurrent neural networks and convolutional neural networks has made great progress. However, mapping from an image feature space to a description space is a nonlinear, multimodal task, which makes it difficult for current methods to obtain accurate results. In this paper, we propose a novel approach to dense captioning based on hourglass-structured residual learning. Our model obtains discriminative feature maps by incorporating densely connected networks and residual learning. Finally, we demonstrate the performance of the approach on the Visual Genome V1.0 dataset and a region-labelled MS-COCO (Microsoft Common Objects in Context) dataset. The experimental results show that our approach outperforms most current methods.
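The abstract combines two ideas: residual learning (output = skip branch + transformed branch) and an hourglass structure (features are processed at a coarser scale and then upsampled back). The sketch below is a minimal NumPy illustration of that combination; the pooling, nearest-neighbour upsampling, and per-channel linear maps are simplifying assumptions, not the paper's actual architecture.

```python
import numpy as np

def downsample(x):
    # 2x average pooling along the spatial axis (hourglass contracting path).
    return x.reshape(x.shape[0] // 2, 2, x.shape[1]).mean(axis=1)

def upsample(x):
    # Nearest-neighbour 2x upsampling (hourglass expanding path).
    return np.repeat(x, 2, axis=0)

def hourglass_residual(x, w_low, w_skip):
    # Hourglass-structured residual learning (illustrative sketch):
    # a coarse-scale branch is upsampled and added to a
    # full-resolution skip branch, giving a residual combination.
    low = np.maximum(downsample(x) @ w_low, 0)   # coarse-scale branch (ReLU)
    skip = np.maximum(x @ w_skip, 0)             # full-resolution skip branch
    return skip + upsample(low)                  # residual addition

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 16))        # 8 spatial positions, 16 channels
w_low = rng.standard_normal((16, 16))   # hypothetical weights for the sketch
w_skip = rng.standard_normal((16, 16))
y = hourglass_residual(x, w_low, w_skip)
print(y.shape)  # output keeps the input resolution
```

Because the coarse branch is added rather than substituted, the block preserves full-resolution detail while still injecting larger-scale context, which is the property the multi-scale design relies on.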

Citation (APA)

Tian, Y., Wang, X., Wu, J., Wang, R., & Yang, B. (2019). Multi-scale hierarchical residual network for dense captioning. Journal of Artificial Intelligence Research, 64, 181–196. https://doi.org/10.1613/jair.1.11338
