LDS-Inspired Residual Networks

Abstract

Residual networks (ResNets) marked a milestone for the deep learning community due to their outstanding performance in diverse applications. They enable efficient training of increasingly deep networks, reducing both training difficulty and error. The main intuition behind them is that, instead of mapping the input information directly, they map a residual part of it. Since the original work, many extensions have been proposed to improve this information mapping. In this paper, a novel extension of the residual block, called LDS-ResNet, is proposed, inspired by linear dynamical systems (LDSs). Specifically, a new module is presented that improves the mapping of residual information by transforming it into a hidden state and then mapping it back to the desired feature space using convolutional layers. The proposed module is used to construct multi-branch residual blocks for convolutional neural networks. An exploration of possible architectural choices is presented and evaluated. Experimental results show that LDS-ResNet outperforms the original ResNet on image classification and object detection tasks on public datasets such as CIFAR-10/100, ImageNet, VOC, and MOT2017. Moreover, its performance boost is complementary to other extensions of the original network, such as pre-activation and bottleneck blocks, as well as stochastic training and Squeeze-and-Excitation.
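
To make the described module concrete, the following is a minimal Python (PyTorch) sketch of a residual block whose branch maps the input into a hidden state and back to the feature space, as the abstract outlines. The class name, channel counts, and the exact hidden-state transformation are assumptions for illustration; the paper's actual LDS formulation and multi-branch design may differ.

import torch
import torch.nn as nn


class LDSResidualBlock(nn.Module):
    """Residual block whose branch transforms the residual into a hidden
    state and maps it back, loosely mirroring an LDS state update."""

    def __init__(self, channels: int, hidden_channels: int):
        super().__init__()
        # Map the input features into a hidden state (the LDS-state analogue).
        self.to_hidden = nn.Sequential(
            nn.Conv2d(channels, hidden_channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(hidden_channels),
            nn.ReLU(inplace=True),
        )
        # Map the hidden state back to the original feature space.
        self.from_hidden = nn.Sequential(
            nn.Conv2d(hidden_channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Identity shortcut plus the hidden-state residual branch.
        residual = self.from_hidden(self.to_hidden(x))
        return self.relu(x + residual)


if __name__ == "__main__":
    block = LDSResidualBlock(channels=64, hidden_channels=32)
    out = block(torch.randn(2, 64, 32, 32))
    print(out.shape)  # torch.Size([2, 64, 32, 32])

A multi-branch variant, as proposed in the paper, would sum several such hidden-state branches with the identity shortcut before the final activation.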

Citation (APA)

Dimou, A., Ataloglou, D., Dimitropoulos, K., Alvarez, F., & Daras, P. (2019). LDS-Inspired Residual Networks. IEEE Transactions on Circuits and Systems for Video Technology, 29(8), 2363–2375. https://doi.org/10.1109/TCSVT.2018.2869680
