We introduce a regularization technique to improve system identification for dual-task learning with recurrent neural networks. In particular, the method is introduced using the Factored Tensor Recurrent Neural Networks first presented in [1]. Our goal is to identify a dynamical system from few available observations by augmenting them with data from a sufficiently observed, similar system. In our previous work, we found that model accuracy degrades when only few observations of the system of interest are available. The regularization term presented in this work significantly reduces the model error, thereby improving the exploitation of knowledge from the well-observed system. This scenario is crucial in many real-world applications, where data efficiency plays an important role. We motivate the problem setting and our regularized dual-task learning approach with industrial use cases, e.g. gas or wind turbine modeling for optimization and monitoring. We then formalize the problem and describe the regularization term that extends the learning objective of the Factored Tensor Recurrent Neural Network. Finally, we demonstrate its effectiveness on the cart-pole and mountain car benchmarks. © 2014 Springer International Publishing Switzerland.
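The abstract does not give the exact form of the regularization term. As a minimal illustrative sketch (not the paper's actual objective), a common way to realize regularized dual-task learning is to sum the per-task prediction errors and add a penalty that ties the task-specific parameters of the two systems together, so the data-poor task is pulled toward the data-rich one; the function name, parameter names, and the L2 tying penalty below are all assumptions:

```python
import numpy as np

def dual_task_loss(pred_a, y_a, pred_b, y_b, theta_a, theta_b, lam=0.1):
    """Hypothetical dual-task objective: squared error on each system
    plus a penalty on the divergence of the task-specific parameters.
    The L2 tying term is an illustrative choice, not the paper's term."""
    mse_a = np.mean((pred_a - y_a) ** 2)          # error on the well-observed system
    mse_b = np.mean((pred_b - y_b) ** 2)          # error on the sparsely observed system
    reg = lam * np.sum((theta_a - theta_b) ** 2)  # ties the two tasks together
    return mse_a + mse_b + reg
```

With `lam = 0` the two systems are modeled independently; a larger `lam` forces the sparsely observed task to reuse more of the structure learned on the data-rich task.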
Citation
Spieckermann, S., Düll, S., Udluft, S., & Runkler, T. (2014). Regularized recurrent neural networks for data efficient dual-task learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8681 LNCS, pp. 17–24). Springer Verlag. https://doi.org/10.1007/978-3-319-11179-7_3