KAdam: Using the Kalman Filter to Improve Adam algorithm


Abstract

Nowadays, the Adam algorithm has become one of the most popular optimizers for training feed-forward neural networks because it combines the best features of other gradient-based optimizers: it works well with sparse gradients and in online and non-stationary settings, and it is robust to rescaling of the gradient. These properties make Adam the best choice for problems with non-stationary objectives, very noisy gradients, and large data inputs. In this work, we enhance the Adam algorithm with the Kalman filter; the resulting proposal is called KAdam. Instead of using the gradients computed from the cost function directly, we first pass them through a Kalman filter. The filtered gradients allow the algorithm to explore new (and potentially better) solutions on the cost function. Results obtained by applying our proposal and other state-of-the-art optimizers to classification problems show that KAdam achieves better accuracy than its competitors in the same execution time.
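To illustrate the idea described in the abstract, the following is a minimal sketch of a KAdam-style update: each raw gradient is smoothed by a simple per-parameter scalar Kalman filter before the standard Adam update is applied. The class name, hyperparameters (process_var, meas_var), and the elementwise filter model are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

class KAdamSketch:
    """Illustrative KAdam-style optimizer (not the paper's reference code):
    gradients are filtered elementwise by a scalar Kalman filter, then fed
    into the usual Adam moment estimates and bias-corrected update."""

    def __init__(self, shape, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8,
                 process_var=1e-5, meas_var=1e-2):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        # Adam state
        self.m = np.zeros(shape)
        self.v = np.zeros(shape)
        self.t = 0
        # Kalman filter state: one independent scalar filter per parameter
        self.x = np.zeros(shape)   # filtered gradient estimate
        self.P = np.ones(shape)    # estimate variance
        self.Q = process_var       # process noise variance (assumed)
        self.R = meas_var          # measurement noise variance (assumed)

    def _filter_gradient(self, grad):
        # Predict step (identity state-transition model)
        P_pred = self.P + self.Q
        # Update step: treat the raw gradient as a noisy measurement
        K = P_pred / (P_pred + self.R)           # Kalman gain
        self.x = self.x + K * (grad - self.x)    # filtered gradient
        self.P = (1.0 - K) * P_pred
        return self.x

    def step(self, params, grad):
        g = self._filter_gradient(grad)          # filter gradients first
        self.t += 1
        self.m = self.beta1 * self.m + (1 - self.beta1) * g
        self.v = self.beta2 * self.v + (1 - self.beta2) * g * g
        m_hat = self.m / (1 - self.beta1 ** self.t)   # bias correction
        v_hat = self.v / (1 - self.beta2 ** self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)
```

In this reading, the Kalman filter acts as an adaptive smoother on the measured gradients, so the moment estimates in Adam are built from a denoised signal rather than the raw (possibly very noisy) gradients.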

Citation (APA)

Camacho, J. D., Villaseñor, C., Alanis, A. Y., Lopez-Franco, C., & Arana-Daniel, N. (2019). KAdam: Using the Kalman Filter to Improve Adam algorithm. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11896 LNCS, pp. 429–438). Springer. https://doi.org/10.1007/978-3-030-33904-3_40
