Complex Valued Recurrent Neural Network: From Architecture to Training

  • Minin, A.
  • Knoll, A.
  • Zimmermann, H.-G.
Citations: N/A
Readers: 21 (Mendeley users who have this article in their library)

Abstract

Recurrent Neural Networks were invented a long time ago, and dozens of different architectures have been published. In this paper we generalize recurrent architectures to a state space model, and we also generalize the numbers the network can process to the complex domain. We show how to train the recurrent network in the complex valued case, and we present the theorems and procedures to make the training stable. We also show that the complex valued recurrent neural network is a generalization of the real valued counterpart and that it has specific advantages over the latter. We conclude the paper with a discussion of possible applications and scenarios for using these networks.
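The abstract describes a recurrent network cast as a state-space model over complex numbers. A minimal sketch of that idea follows, assuming a generic formulation s_{t+1} = f(A s_t + B x_t), y_t = C s_t with complex-valued weights and a split-complex activation (tanh applied separately to real and imaginary parts); the paper's exact architecture, activation, and stability procedures may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def complex_weights(shape, scale=0.1):
    """Random complex matrix; the small scale is an illustrative choice
    to keep the recurrence bounded, not the paper's stability condition."""
    return scale * (rng.standard_normal(shape) + 1j * rng.standard_normal(shape))

def split_tanh(z):
    """Split-complex activation: tanh applied to real and imaginary parts.
    One common choice for complex networks; assumed here for illustration."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

# State-space dimensions (arbitrary for the sketch)
state_dim, in_dim, out_dim = 4, 2, 1
A = complex_weights((state_dim, state_dim))  # state transition matrix
B = complex_weights((state_dim, in_dim))     # input map
C = complex_weights((out_dim, state_dim))    # output map

def run(xs):
    """Unroll the complex-valued recurrence over an input sequence."""
    s = np.zeros(state_dim, dtype=complex)
    ys = []
    for x in xs:
        s = split_tanh(A @ s + B @ x)  # s_{t+1} = f(A s_t + B x_t)
        ys.append(C @ s)               # y_t = C s_t
    return np.array(ys)

# A short complex-valued input sequence
xs = [rng.standard_normal(in_dim) + 1j * rng.standard_normal(in_dim)
      for _ in range(5)]
ys = run(xs)
```

Setting the imaginary parts of all inputs and weights to zero recovers an ordinary real-valued recurrent network, which is the sense in which the complex-valued model generalizes its real-valued counterpart.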

Citation (APA)

Minin, A., Knoll, A., & Zimmermann, H.-G. (2012). Complex Valued Recurrent Neural Network: From Architecture to Training. Journal of Signal and Information Processing, 3(2), 192–197. https://doi.org/10.4236/jsip.2012.32026
