The comparison of autoencoder architectures in improving of prediction models

Citations: 4
Readers (Mendeley): 6

This article is free to access.

Abstract

Many prediction models require the series of events to be encoded in a way that allows the model to be trained and yields the highest prediction quality. The encoding of events depends on the data domain and the applied methods; however, a neural network can be used to encode a series of actions and obtain informative features for predictive models. We compared several neural network architectures on the task of feature extraction for predictive models. The comparison was carried out in the field of sequence modeling, where LSTM networks are dominant. We found that an appropriate event encoding improves the quality of CNN-based networks without modifying their architectures.
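To illustrate the general idea described in the abstract, below is a minimal sketch of using a sequence autoencoder to turn event series into fixed-size features for a downstream prediction model. This is not the paper's exact model: the use of PyTorch, the LSTM encoder/decoder choice, and all layer sizes are illustrative assumptions.

```python
# Minimal sketch (not the paper's model): an LSTM sequence autoencoder
# whose encoder output serves as features for a downstream predictor.
import torch
import torch.nn as nn


class SeqAutoencoder(nn.Module):
    def __init__(self, n_events: int, emb_dim: int = 32, hidden: int = 64):
        super().__init__()
        self.embed = nn.Embedding(n_events, emb_dim)
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_events)  # reconstruct event ids

    def encode(self, seq: torch.Tensor) -> torch.Tensor:
        # seq: (batch, time) tensor of integer event ids
        _, (h, _) = self.encoder(self.embed(seq))
        return h[-1]  # (batch, hidden) fixed-size feature vector

    def forward(self, seq: torch.Tensor) -> torch.Tensor:
        code = self.encode(seq)
        emb = self.embed(seq)
        # condition the decoder on the code via its initial hidden state
        h0 = code.unsqueeze(0)
        c0 = torch.zeros_like(h0)
        dec, _ = self.decoder(emb, (h0, c0))
        return self.out(dec)  # (batch, time, n_events) logits


# Train with a reconstruction loss, then reuse encode() to produce
# features for any predictive model (e.g. a CNN-based classifier).
if __name__ == "__main__":
    model = SeqAutoencoder(n_events=100)
    seqs = torch.randint(0, 100, (8, 20))  # toy batch of event series
    loss = nn.CrossEntropyLoss()(model(seqs).transpose(1, 2), seqs)
    loss.backward()
    features = model.encode(seqs).detach()  # (8, 64) feature matrix
```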

Cite

CITATION STYLE: APA

Prosvetov, A. V. (2018). The comparison of autoencoder architectures in improving of prediction models. In Journal of Physics: Conference Series (Vol. 1117). Institute of Physics Publishing. https://doi.org/10.1088/1742-6596/1117/1/012006
