Multi-scale bidirectional FCN for object skeleton extraction

Abstract

Object skeleton detection is a challenging problem with a wide range of applications. Recently, deep Convolutional Neural Networks (CNNs) have substantially advanced the state of the art in this task. However, most existing CNN-based methods rely on a skip-layer structure in which low-level and high-level features are combined and learned jointly so as to gather multi-level contextual information. Because shallow features are noisy and lack semantic knowledge, they may introduce errors and inaccuracy. Therefore, we propose a novel network architecture, the Multi-Scale Bidirectional Fully Convolutional Network (MSB-FCN), to better capture and consolidate multi-scale high-level contextual information for object skeleton detection. Our network uses only deep features to build multi-scale feature representations, and employs a bidirectional structure to collect contextual knowledge. Hence the proposed MSB-FCN is able to learn semantic-level information from different sub-regions. Furthermore, we introduce dense connections into the bidirectional structure of our MSB-FCN so that the learning process at each scale can directly encode information from all other scales. Extensive experiments on commonly used benchmarks demonstrate that the proposed MSB-FCN achieves significant improvements over state-of-the-art algorithms.
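To make the information flow concrete, the dense bidirectional multi-scale aggregation described above can be sketched in a few lines of pure Python. This is an illustrative toy, not the authors' implementation: the real MSB-FCN operates on CNN feature maps, whereas here each "feature" is a single number, and the residual-style update with weight 0.1 is an assumption chosen purely so the scale interactions are easy to trace.

```python
# Toy sketch (NOT the authors' code) of the dense, bidirectional
# multi-scale aggregation pattern described in the abstract.

def dense_multiscale_pass(features, order):
    """One directional pass: each scale is updated with information from
    every other scale (dense connections), visiting scales in the given
    order (coarse-to-fine or fine-to-coarse)."""
    fused = list(features)
    for i in order:
        # Dense connection: scale i directly aggregates all other scales.
        context = sum(fused[j] for j in range(len(fused)) if j != i)
        fused[i] = fused[i] + 0.1 * context  # illustrative residual update
    return fused

def msb_fcn_sketch(deep_features):
    """Bidirectional structure: a forward (coarse-to-fine) pass followed
    by a backward (fine-to-coarse) pass, then a fusion of all scales
    (here: a simple average standing in for the prediction layer)."""
    scales = list(range(len(deep_features)))
    forward = dense_multiscale_pass(deep_features, scales)
    backward = dense_multiscale_pass(forward, list(reversed(scales)))
    return sum(backward) / len(backward)

# Stand-ins for deep features extracted at three scales.
feats = [1.0, 2.0, 4.0]
print(msb_fcn_sketch(feats))
```

Note that after the two passes every scale has received a direct contribution from every other scale, which is the property the dense connections are meant to guarantee.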

Citation (APA)

Yang, F., Li, X., Cheng, H., Guo, Y., Chen, L., & Li, J. (2018). Multi-scale bidirectional FCN for object skeleton extraction. In 32nd AAAI Conference on Artificial Intelligence, AAAI 2018 (pp. 7461–7468). AAAI press. https://doi.org/10.1609/aaai.v32i1.12288
