A Deeper Look into ‘Deep Learning of Aftershock Patterns Following Large Earthquakes’: Illustrating First Principles in Neural Network Physical Interpretability

Abstract

In recent years, deep learning has solved seemingly intractable problems, raising hopes that (approximate) solutions may be found to problems currently considered unsolvable. Earthquake prediction, a recognized moonshot challenge, is an obvious candidate for exploration with deep learning. Although encouraging results have been obtained recently, deep neural networks (DNNs) may sometimes create the illusion that the patterns hidden in data are complex when this is not necessarily the case. We investigate the results of DeVries et al. [Nature, vol. 560, 2018], who trained a DNN of 6 hidden layers with 50 nodes each, with an input layer of 12 stress features, to predict aftershock patterns in space. The performance of their DNN was assessed with the receiver operating characteristic (ROC) curve, yielding an area under the curve (AUC) of 0.85. We first show that a simple artificial neural network (ANN) with a single hidden layer yields similar performance, suggesting that aftershock patterns are not necessarily highly abstract objects. Following first-principles guidance, we then bypass the computation of the elastic stress change tensor, taking advantage of the tensorial nature of neural networks. An AUC of 0.85 is again reached with an ANN, now with only two geometric and kinematic features. Not only does deep learning appear “excessive” in the present case; the simpler ANN also streamlines the process of aftershock forecasting, limits model bias, and provides better insight into aftershock physics and possible model improvements. Complexification is a controversial trend throughout science, and first principles should be applied wherever possible to obtain physical interpretations of neural networks.
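The comparison at the heart of the abstract (a DNN of 6 hidden layers with 50 nodes each versus a one-hidden-layer ANN, both scored by ROC AUC) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' code: the synthetic two-feature dataset and its labeling rule are assumptions standing in for the geometric and kinematic features of the paper, and the shallow layer width and training settings are illustrative choices.

```python
# Minimal sketch (NOT the authors' code): compare a deep MLP (6 x 50 nodes,
# as described in the abstract) against a one-hidden-layer ANN on a
# synthetic binary task, scoring both with ROC AUC.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two geometric/kinematic features of the
# simplified model; the labeling rule below is an assumption for illustration.
X = rng.normal(size=(5000, 2))
y = (X[:, 0] + 0.5 * X[:, 1] + 0.3 * rng.normal(size=5000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Deep network: 6 hidden layers of 50 nodes each.
deep = MLPClassifier(hidden_layer_sizes=(50,) * 6, max_iter=1000,
                     random_state=0).fit(X_train, y_train)

# Shallow network: a single hidden layer (width chosen arbitrarily here).
shallow = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                        random_state=0).fit(X_train, y_train)

for name, model in [("deep (6x50)", deep), ("shallow (1 layer)", shallow)]:
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

On a decision boundary this simple, both architectures should reach nearly identical AUC values, mirroring the abstract's point that matching a deep model's score with a shallow one suggests the underlying pattern is not highly abstract.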

Citation (APA)

Mignan, A., & Broccardo, M. (2019). A Deeper Look into ‘Deep Learning of Aftershock Patterns Following Large Earthquakes’: Illustrating First Principles in Neural Network Physical Interpretability. In Lecture Notes in Computer Science (Vol. 11506, pp. 3–14). Springer. https://doi.org/10.1007/978-3-030-20521-8_1
