This paper aims to visualize deep convolutional neural network interpretations for aerial imagery and to understand how these interpretations change across datasets or when network weights are damaged. Our visualization results offer insights into the generalization power and resilience of commonly used networks, such as VGG16, ResNet50, and DenseNet121. Our experiments on the AID and UCM aerial datasets demonstrate the emergence of object and texture detectors in convolutional networks commonly used for classification. We further analyze these interpretations when the network is trained on one dataset and tested on another to demonstrate the robustness of feature learning across aerial datasets. We also explore the shift in interpretations when performing transfer learning from an aerial dataset (AID) to a generic object dataset (MS-COCO). These results illustrate how transfer learning benefits the network's internal representations. To analyze the effects of damage on activation maps, we propose simulating damage by randomly zeroing network weights at different levels of the network. We then carry out retraining experiments to check whether the network can recover the lost interpretations. Visualizing changes in the network's interpretations as the undamaged weights are updated allows us to assess the resilience of a network visually. Finally, we propose a new metric for the quantitative assessment of network resilience.
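The abstract does not include code, but the damage-simulation step it describes is concrete enough to sketch. Below is a minimal, hypothetical PyTorch illustration of zeroing a random fraction of weights at chosen depths of VGG16; the function name `damage_layer`, the `fraction` parameterization, and the specific layer indices are assumptions for illustration, not the authors' implementation.

```python
import torch
import torchvision.models as models

def damage_layer(layer: torch.nn.Module, fraction: float) -> None:
    """Simulate damage by zeroing a random fraction of a layer's weights.

    `fraction` is an assumed parameterization: the proportion of weight
    entries set to zero. The paper damages weights "at different levels
    of the network", which here corresponds to picking different layers.
    """
    with torch.no_grad():
        weight = layer.weight
        # Boolean mask selecting roughly `fraction` of the entries.
        mask = torch.rand_like(weight) < fraction
        weight[mask] = 0.0

# Example: damage an early and a late convolutional layer of VGG16.
model = models.vgg16(weights="IMAGENET1K_V1")
damage_layer(model.features[0], fraction=0.5)   # first conv layer
damage_layer(model.features[28], fraction=0.5)  # last conv layer
```

The recovery experiments described above would then correspond, presumably, to fine-tuning this damaged model while leaving the zeroed entries fixed, and comparing visualizations of the interpretations before and after retraining.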
Vasu, B., & Savakis, A. (2020). Resilience and Plasticity of Deep Network Interpretations for Aerial Imagery. IEEE Access, 8, 127491–127506. https://doi.org/10.1109/ACCESS.2020.3008323