Training neural networks with implicit variance

Abstract

We present a novel method to train predictive Gaussian distributions p(z|x) for regression problems with neural networks. While most approaches either ignore the variance or model it explicitly as an additional response variable, our method trains it implicitly. Stochasticity is established by injecting noise into the input and hidden units, and the outputs are approximated with a Gaussian distribution using the forward propagation method introduced for fast dropout [1]. The loss function is designed to respect this probabilistic interpretation of the output units. The method is evaluated on a synthetic task and an inverse robot dynamics task, yielding better likelihoods than plain neural networks, Gaussian processes, and LWPR. © Springer-Verlag 2013.
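To make the moment-propagation idea concrete, the following minimal sketch (in NumPy/SciPy; all names, shapes, and the keep probability are our own illustration, not the authors' code) propagates a mean and variance through a dropout-perturbed linear layer and a nonlinearity in the style of fast dropout, then scores a target under the resulting output Gaussian with the negative log-likelihood. ReLU is used here only because its Gaussian moments have a simple closed form; the fast dropout paper itself works with sigmoid approximations.

    import numpy as np
    from scipy.stats import norm

    def linear_dropout_moments(m, v, W, b, keep=0.9):
        """Propagate input mean m and variance v through a linear layer whose
        inputs are multiplied by Bernoulli(keep) dropout noise. Returns the
        Gaussian moments of the pre-activations (fast-dropout style)."""
        mu = keep * (W @ m) + b
        var = (W ** 2) @ (keep * v + keep * (1.0 - keep) * m ** 2)
        return mu, var

    def relu_moments(mu, var):
        """Exact mean and variance of relu(s) for s ~ N(mu, var)."""
        sd = np.sqrt(var) + 1e-12
        a = mu / sd
        mean = mu * norm.cdf(a) + sd * norm.pdf(a)
        second = (mu ** 2 + var) * norm.cdf(a) + mu * sd * norm.pdf(a)
        return mean, np.maximum(second - mean ** 2, 1e-12)

    def gaussian_nll(t, mu, var):
        """Negative log-likelihood of target t under N(mu, var)."""
        return 0.5 * (np.log(2 * np.pi * var) + (t - mu) ** 2 / var)

    # Tiny forward pass: x -> hidden -> scalar Gaussian output p(z | x).
    rng = np.random.default_rng(0)
    x = rng.normal(size=5)
    W1, b1 = 0.3 * rng.normal(size=(8, 5)), np.zeros(8)
    W2, b2 = 0.3 * rng.normal(size=(1, 8)), np.zeros(1)

    m, v = linear_dropout_moments(x, np.zeros_like(x), W1, b1)  # inputs are deterministic
    m, v = relu_moments(m, v)
    mu_out, var_out = linear_dropout_moments(m, v, W2, b2)
    print(gaussian_nll(np.array([0.5]), mu_out, var_out))

Note that no unit of the network outputs a variance directly: var_out arises entirely from the injected noise propagated through the layers, so minimizing the Gaussian negative log-likelihood trains the predictive variance implicitly, which is the sense of "implicit variance" in the title.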

Citation (APA)

Bayer, J., Osendorfer, C., Urban, S., & van der Smagt, P. (2013). Training neural networks with implicit variance. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8227 LNCS, pp. 132–139). https://doi.org/10.1007/978-3-642-42042-9_17
