A note on factor normalization for deep neural network models

This article is free to access.

Abstract

Deep neural network (DNN) models often involve high-dimensional features. In most cases, these features can be decomposed into two parts: a low-dimensional factor and residual features with much-reduced variability and inter-feature correlation. This decomposition has several interesting theoretical implications for DNN training, and based on them we develop a novel factor normalization method. The proposed method leads to a new deep learning model with two important characteristics: first, it allows factor-related feature extraction; second, it applies adaptive learning rates to the factor and residual parts. Together, these features improve convergence speed on both training and testing datasets. Multiple empirical experiments demonstrate the model's superior performance.
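To make the decomposition concrete, the sketch below illustrates a PCA-style factor/residual split of the kind the abstract describes, together with per-part learning rates via optimizer parameter groups. The function name, the choice of k, the two hypothetical branches, and the learning-rate values are all illustrative assumptions, not the authors' implementation; consult the full paper for the actual method.

```python
# Minimal sketch of a factor/residual feature decomposition, assuming a
# PCA-style factorization. Names, k, and the learning rates below are
# illustrative assumptions, not the paper's actual implementation.
import torch

def factor_decompose(X, k):
    """Split features X (n x p) into k factor scores and residual features."""
    Xc = X - X.mean(dim=0, keepdim=True)    # center each feature
    # The top-k right singular vectors span the low-dimensional factor subspace.
    _, _, Vh = torch.linalg.svd(Xc, full_matrices=False)
    B = Vh[:k].T                            # p x k loading matrix
    factors = Xc @ B                        # n x k factor scores
    residuals = Xc - factors @ B.T          # n x p residuals: much-reduced
    return factors, residuals               # variance and correlation

# Toy data: 100 correlated features driven by 2 latent factors.
torch.manual_seed(0)
Z = torch.randn(512, 2)
X = Z @ torch.randn(2, 100) + 0.1 * torch.randn(512, 100)
factors, residuals = factor_decompose(X, k=2)

# The abstract's second ingredient, adaptive learning rates for the factor
# and residual parts, could be mimicked with optimizer parameter groups:
factor_branch = torch.nn.Linear(2, 10)      # hypothetical factor branch
residual_branch = torch.nn.Linear(100, 10)  # hypothetical residual branch
optimizer = torch.optim.SGD(
    [
        {"params": factor_branch.parameters(), "lr": 1e-1},    # faster for factors
        {"params": residual_branch.parameters(), "lr": 1e-3},  # slower for residuals
    ],
    lr=1e-2,  # default for any group without its own lr
)
```

The parameter-group pattern is standard PyTorch; how the paper actually sets the two learning rates is specified in the full text, so the values above are placeholders.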

Cite (APA)

Qi, H., Zhou, J., & Wang, H. (2022). A note on factor normalization for deep neural network models. Scientific Reports, 12(1). https://doi.org/10.1038/s41598-022-09910-6
