Pay attention and you won't lose it: A deep learning approach to sequence imputation

Citations: 3 | Mendeley readers: 38

Abstract

In most areas of machine learning, it is assumed that data quality is fairly consistent between training and inference. Unfortunately, in real systems, data are plagued by noise, loss, and various other quality-reducing factors. While a number of deep learning algorithms solve end-stage problems of prediction and classification, very few aim to solve the intermediate problems of data pre-processing, cleaning, and restoration. Long Short-Term Memory (LSTM) networks have previously been proposed as a solution for data restoration, but they suffer from a major bottleneck: a large number of sequential operations. We propose using attention mechanisms to entirely replace the recurrent components of these data-restoration networks. We demonstrate that such an approach leads to reduced model sizes by as much as two orders of magnitude, a 2-fold to 4-fold reduction in training times, and 95% accuracy for automotive data restoration. We also show in a case study that this approach improves the performance of downstream algorithms reliant on clean data.
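The paper itself contains the full architecture; as a rough illustration of the core idea (a self-attention encoder reconstructing lost timesteps in parallel, in place of an LSTM's sequential recurrence), a minimal PyTorch sketch follows. This is not the authors' implementation: the class name AttentionImputer and all hyperparameters (d_model=64, 4 heads, 2 layers, the 20% drop rate) are illustrative assumptions.

# Minimal sketch (not the authors' code): missing timesteps are masked out
# and a Transformer encoder reconstructs them from the surrounding context.
import torch
import torch.nn as nn

class AttentionImputer(nn.Module):
    def __init__(self, n_features: int, d_model: int = 64,
                 n_heads: int = 4, n_layers: int = 2, max_len: int = 512):
        super().__init__()
        self.input_proj = nn.Linear(n_features, d_model)
        # Learned positional embeddings preserve timestep order,
        # standing in for the ordering that recurrence provides.
        self.pos_emb = nn.Embedding(max_len, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads,
            dim_feedforward=4 * d_model, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.output_proj = nn.Linear(d_model, n_features)

    def forward(self, x: torch.Tensor, missing: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features); missing: (batch, seq_len) bool,
        # True where the timestep was lost and must be reconstructed.
        x = x.masked_fill(missing.unsqueeze(-1), 0.0)  # zero out lost values
        pos = torch.arange(x.size(1), device=x.device)
        h = self.input_proj(x) + self.pos_emb(pos)
        h = self.encoder(h)  # all positions attend to each other in parallel
        return self.output_proj(h)

# Toy usage: impute 20% randomly dropped timesteps of a 3-channel signal.
model = AttentionImputer(n_features=3)
x = torch.randn(8, 128, 3)
missing = torch.rand(8, 128) < 0.2
x_hat = model(x, missing)
loss = (x_hat - x)[missing].pow(2).mean()  # train only on the lost positions

Because the encoder processes the whole sequence in one pass rather than timestep by timestep, the sequential-operation bottleneck the abstract attributes to LSTMs disappears, which is consistent with the reported training-time reductions.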

Citation (APA)

Sucholutsky, I., Narayan, A., Schonlau, M., & Fischmeister, S. (2019). Pay attention and you won't lose it: A deep learning approach to sequence imputation. PeerJ Computer Science, 5, e210. https://doi.org/10.7717/peerj-cs.210
