Abstract
Semantic segmentation has long been a fundamental and critical task in scene understanding. Current deep convolutional neural networks (DCNNs) can successfully learn context over very large receptive fields thanks to deeply stacked convolutional layers. However, standard convolutions in DCNNs do not account for local object boundaries, i.e., the borders between different semantic regions. Convolving with equal contributions from pixels on both sides of a boundary may lead to inferior segmentation results. In this paper, a novel boundary-aware convolution is proposed that fuses features effectively by adaptively assigning contributions to the pixels within the receptive field according to a boundary similarity map. A new semantic segmentation network based on the classical FCN8s is then designed by employing multi-scale boundary-aware convolutions. The whole network is trained end-to-end and evaluated with heterogeneous RGB and depth inputs. Experiments conducted on multiple datasets show that our boundary-aware CNN can effectively improve semantic segmentation performance.
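To make the mechanism concrete, the following is a minimal PyTorch sketch of the core idea described in the abstract: each neighbor inside a convolution window is re-weighted by its similarity to the window center on a boundary/guidance map before shared convolution weights are applied, so pixels across a boundary contribute less. The Gaussian similarity, the sigma parameter, and the unfold-based formulation are illustrative assumptions for exposition, not the authors' implementation.

```python
# Illustrative boundary-aware convolution sketch (not the paper's code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class BoundaryAwareConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3, sigma=1.0):
        super().__init__()
        self.k = kernel_size
        self.pad = kernel_size // 2
        self.sigma = sigma  # assumed bandwidth of the similarity kernel
        # shared weights applied after neighbors are re-weighted
        self.weight = nn.Parameter(
            torch.randn(out_ch, in_ch * kernel_size * kernel_size) * 0.01)
        self.bias = nn.Parameter(torch.zeros(out_ch))

    def forward(self, x, boundary):
        # x: (B, C, H, W) features; boundary: (B, 1, H, W) boundary/guidance map
        B, C, H, W = x.shape
        k2 = self.k * self.k
        # gather k x k neighborhoods of both the features and the boundary map
        x_u = F.unfold(x, self.k, padding=self.pad)         # (B, C*k2, H*W)
        b_u = F.unfold(boundary, self.k, padding=self.pad)  # (B, k2,  H*W)
        b_c = boundary.reshape(B, 1, H * W)                 # center values
        # similarity: neighbors on the same side of a boundary get weight ~1,
        # neighbors across a boundary (large difference) get weight ~0
        sim = torch.exp(-(b_u - b_c) ** 2 / (2 * self.sigma ** 2))
        sim = sim / (sim.sum(dim=1, keepdim=True) + 1e-6)   # normalize per window
        x_u = x_u.view(B, C, k2, H * W) * sim.unsqueeze(1)  # re-weight neighbors
        out = torch.einsum('oc,bcl->bol',
                           self.weight, x_u.reshape(B, C * k2, H * W))
        return out.reshape(B, -1, H, W) + self.bias.view(1, -1, 1, 1)

# usage (shapes only): x = torch.randn(1, 64, 32, 32); b = torch.rand(1, 1, 32, 32)
# y = BoundaryAwareConv(64, 128)(x, b)   # -> (1, 128, 32, 32)
```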
Zou, N., Xiang, Z., Chen, Y., Chen, S., & Qiao, C. (2019). Boundary-aware CNN for semantic segmentation. IEEE Access, 7, 114520–114528. https://doi.org/10.1109/ACCESS.2019.2935816