This article investigates 3D radar echo extrapolation for precipitation nowcasting, drawing on recent AI advances and a Computer Vision perspective. Although Deep Learning methods, especially convolutional recurrent neural networks, have been developed for extrapolation, most work uses 2D rather than 3D radar images, and the few studies that try 3D data do not report their results clearly. Through this study, we found a potential problem in convolution-based prediction of 3D data, similar to the cross-talk effect in multi-channel radar processing but not well documented in the literature, and identified its root cause. The problem is that when different output channels are generated from a single receptive field, information in one channel, especially observation errors, may affect other channels unexpectedly. We found that when the early-stopping technique is used to avoid over-fitting, the receptive field does not learn enough to cancel this unwanted information; increasing the number of training iterations reduces the effect but may worsen over-fitting. We therefore propose a new output generation block that generates each channel separately, and we demonstrate the resulting improvement. Moreover, we found that common image augmentation techniques from Computer Vision are helpful for radar echo extrapolation, reducing the test mean squared error of the employed models by at least 20% in our experiments.
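The contrast between a shared output layer and per-channel generation can be sketched in a few lines. This is a minimal illustration with toy shapes, not the paper's exact architecture: the feature map, the 1x1-convolution heads, and all dimensions here are hypothetical stand-ins for the outputs of a convolutional recurrent cell.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical hidden feature map from a convolutional recurrent cell:
# 8 feature maps of size 4x4, to be turned into 3 output channels
# (e.g. three altitude levels of a 3D radar volume).
features = rng.normal(size=(8, 4, 4))

def conv1x1(feat, weights):
    """1x1 convolution: a weighted sum over the feature dimension."""
    # weights: (out_channels, in_channels); feat: (in_channels, H, W)
    return np.tensordot(weights, feat, axes=([1], [0]))  # -> (out_channels, H, W)

# (a) Shared generation: one head emits every output channel from the same
# receptive field, so all channels mix the same learned features -- the
# setting in which the cross-talk-like effect was observed.
w_shared = rng.normal(size=(3, 8))
out_shared = conv1x1(features, w_shared)

# (b) Per-channel generation (the proposed idea, sketched): each output
# channel is produced by its own head, limiting how information fitted
# for one channel's target can leak into the others at the output stage.
heads = [rng.normal(size=(1, 8)) for _ in range(3)]
out_separate = np.concatenate([conv1x1(features, w) for w in heads], axis=0)

assert out_shared.shape == out_separate.shape == (3, 4, 4)
```

In a full model, the per-channel heads would typically also have their own small stacks of convolutional layers rather than a single 1x1 weight each; the sketch only shows where the output paths split.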
Tran, Q. K., & Song, S. K. (2019). Multi-channel weather radar echo extrapolation with convolutional recurrent neural networks. Remote Sensing, 11(19). https://doi.org/10.3390/rs11192303