The trade-off between feature representation capability and spatial localization accuracy is crucial for dense classification, i.e., semantic segmentation, of remote sensing images. To better balance the low-level spatial details captured by shallow layers with the high-level abstract semantics captured by deep layers, we introduce BARNet, a bilateral attention refinement lightweight network. BARNet uses fine-grained features from the shallow layers to supplement the high-level semantic features and capture deeper contextual information. The network adopts an asymmetric encoder-decoder architecture for real-time semantic segmentation. The encoder is built from a lightweight residual unit with a split-concatenate-split bottleneck structure, yielding lighter, more efficient, yet powerful feature extraction. In the decoder, a local attention enhancement module adaptively strengthens feature representation. In addition, a global context embedding module divides the high-level features into two branches: one produces a weight vector that guides low-level feature learning, and the other produces a semantic vector used to compute a multi-label category loss, which is incorporated into the overall loss function to better regulate the training process. The effectiveness and efficiency of the network are verified on the ISPRS Potsdam and CCF datasets, respectively. The results show that models using these strategies outperform the baseline network in MIoU, PA, and F1, with gains of 18.86%, 16.21%, and 15.64% on the Potsdam dataset and 10.51%, 6.53%, and 8.19% on the CCF dataset.
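The multi-label category loss described above can be sketched as an image-level binary cross-entropy over the semantic vector, added to the pixel-wise segmentation loss. This is a minimal pure-Python illustration, not the paper's implementation: the function names, the sigmoid activation, and the weighting factor `lam` are assumptions introduced for clarity.

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def multilabel_bce(logits, targets):
    """Image-level multi-label category loss (sketch).

    logits  -- raw per-class scores from the semantic vector
    targets -- 1.0 if the class appears anywhere in the image, else 0.0
    """
    eps = 1e-7  # clamp probabilities to avoid log(0)
    total = 0.0
    for z, t in zip(logits, targets):
        p = min(max(sigmoid(z), eps), 1.0 - eps)
        total += -(t * math.log(p) + (1.0 - t) * math.log(1.0 - p))
    return total / len(logits)

def total_loss(seg_loss, logits, targets, lam=1.0):
    # Overall objective: pixel-wise segmentation loss regularized by the
    # image-level multi-label category loss. The weight lam is an assumption;
    # the paper only states that the category loss is added to the total loss.
    return seg_loss + lam * multilabel_bce(logits, targets)
```

When the semantic vector confidently matches the true set of categories present in the image, the auxiliary term is near zero and the total loss reduces to the segmentation loss alone.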
Citation
Cai, J., Liu, C., Yan, H., Wu, X., Lu, W., Wang, X., & Sang, C. (2021). Real-Time Semantic Segmentation of Remote Sensing Images Based on Bilateral Attention Refined Network. IEEE Access, 9, 28349–28360. https://doi.org/10.1109/ACCESS.2021.3058571