Training algorithm matters for the performance of neural network potential: A case study of Adam and the Kalman filter optimizers

Abstract

One hidden yet important issue in developing neural network potentials (NNPs) is the choice of training algorithm. In this article, we compare the performance of two popular training algorithms, the adaptive moment estimation algorithm (Adam) and the extended Kalman filter algorithm (EKF), using the Behler-Parrinello neural network and two publicly accessible datasets of liquid water [Morawietz et al., Proc. Natl. Acad. Sci. U. S. A. 113, 8368–8373 (2016) and Cheng et al., Proc. Natl. Acad. Sci. U. S. A. 116, 1110–1115 (2019)]. This comparison is made possible by implementing EKF in TensorFlow. We find that NNPs trained with EKF are more transferable and less sensitive to the value of the learning rate than those trained with Adam. In both cases, error metrics on the validation set do not always serve as a good indicator of the actual performance of NNPs. Instead, we show that their performance correlates well with a Fisher-information-based similarity measure.
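For context, the EKF treats training as a state-estimation problem: the network weights are the state, and each reference value (e.g., a total energy) is a measurement. Below is a minimal sketch of one global EKF weight update for a scalar-output TensorFlow model, assuming a single energy measurement per step; the function name ekf_step, the forgetting factor lam, and the weight-flattening scheme are illustrative assumptions, not the authors' implementation.

```python
import tensorflow as tf

def ekf_step(model, x, y_ref, P, lam=0.998):
    # Illustrative global EKF update (sketch, not the authors' code).
    # The flattened weights are the filter state; the Jacobian H of the
    # scalar prediction w.r.t. the weights acts as the measurement
    # matrix; lam is a forgetting factor close to 1.
    with tf.GradientTape() as tape:
        y_pred = model(x)                        # scalar prediction, e.g. energy
    grads = tape.gradient(y_pred, model.trainable_variables)
    H = tf.concat([tf.reshape(g, [-1]) for g in grads], axis=0)  # shape (n,)
    PH = tf.linalg.matvec(P, H)                  # P @ H, shape (n,)
    denom = lam + tf.tensordot(H, PH, axes=1)    # scalar innovation covariance
    K = PH / denom                               # Kalman gain, shape (n,)
    dw = K * (y_ref - tf.squeeze(y_pred))        # state (weight) correction
    offset = 0                                   # scatter dw back into variables
    for v in model.trainable_variables:
        n = v.shape.num_elements()
        v.assign_add(tf.reshape(dw[offset:offset + n], v.shape))
        offset += n
    # Covariance update: P <- (P - K (P H)^T) / lam, using the symmetry of P.
    P_new = (P - tf.tensordot(K, PH, axes=0)) / lam
    return P_new
```

In this kind of scheme, P is typically initialized as a scaled identity matrix over the n flattened weights; when forces are also fitted, H generalizes from a vector to a Jacobian matrix and the scalar denominator becomes a small matrix inverse. In contrast, Adam needs only first-order gradients of the loss, which is why its sensitivity to the learning rate is the more delicate issue the abstract highlights.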

Citation


Shao, Y., Dietrich, F. M., Nettelblad, C., & Zhang, C. (2021). Training algorithm matters for the performance of neural network potential: A case study of Adam and the Kalman filter optimizers. Journal of Chemical Physics, 155(20). https://doi.org/10.1063/5.0070931
