Abstract
Down-sampling is widely adopted in deep convolutional neural networks (DCNNs) to reduce the number of network parameters while preserving transformation invariance. However, because it follows a fixed-stride strategy, it cannot exploit information effectively, which may cause information loss and poor generalization. In this paper, we propose a novel random strategy that alleviates these problems by embedding random shifting into the down-sampling layers during training. Random shifting can be applied universally to diverse DCNN models: it dynamically adjusts receptive fields by shifting kernel centers on feature maps in different directions, thereby producing more robust features and further enhancing the transformation invariance of down-sampling operators. Moreover, random shifting can not only be integrated into all down-sampling layers, including strided convolutional layers and pooling layers, but also improves DCNN performance at negligible additional computational cost. We evaluate our method on different tasks (e.g., image classification and segmentation) with various network architectures (i.e., AlexNet, FCN, and DFN-MR). Experimental results demonstrate the effectiveness of the proposed method.
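To make the idea concrete, here is a minimal NumPy sketch of random shifting applied to 2×2 max pooling: during training, each pooling window's top-left corner is jittered by up to one pixel before the maximum is taken, while inference falls back to standard strided pooling. The function name, shift range, and single-channel interface are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def random_shift_pool(x, k=2, stride=2, max_shift=1, rng=None, train=True):
    """Max-pool a 2-D feature map, randomly shifting each window's
    top-left corner by up to `max_shift` pixels during training.

    A sketch of the random-shifting idea; at inference (train=False)
    this reduces to ordinary strided max pooling.
    """
    rng = rng or np.random.default_rng()
    H, W = x.shape
    out_h, out_w = H // stride, W // stride
    out = np.empty((out_h, out_w), dtype=x.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Sample a shift direction per window during training only.
            di = rng.integers(-max_shift, max_shift + 1) if train else 0
            dj = rng.integers(-max_shift, max_shift + 1) if train else 0
            # Clip so the shifted k x k window stays inside the map.
            r = int(np.clip(i * stride + di, 0, H - k))
            c = int(np.clip(j * stride + dj, 0, W - k))
            out[i, j] = x[r:r + k, c:c + k].max()
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
print(random_shift_pool(x, train=False))  # standard 2x2 max pool: [[5. 7.] [13. 15.]]
```

The same jittering of the window origin could equally be wrapped around a strided convolution, which is why the paper describes the strategy as applicable to both pooling and strided convolutional layers.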
Citation
Zhao, G., Wang, J., & Zhang, Z. (2017). Random shifting for CNN: A solution to reduce information loss in down-sampling layers. In IJCAI International Joint Conference on Artificial Intelligence (pp. 3476–3482). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/486