Residual MobileNets


Abstract

As modern convolutional neural networks become increasingly deep, they also become slower and demand computational resources beyond the capabilities of many mobile and embedded platforms. To address this challenge, much recent research has focused on reducing model size and computational complexity. In this paper, we propose a novel residual depthwise separable convolution block, an improvement on the basic building block of MobileNets. We modified the original block by adding an identity shortcut connection (with zero-padding where the channel dimension increases) from the input to the output. We demonstrated that the modified architecture with the width multiplier α set to 0.92 slightly outperforms the baseline MobileNet (α = 1) in both accuracy and inference time on the challenging Places365 dataset, while reducing the number of parameters by 14%.
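As a rough sanity check on the two ideas in the abstract: a depthwise separable block (a k×k depthwise convolution over M channels followed by a 1×1 pointwise convolution to N channels) has a parameter count that scales approximately with α² when the width multiplier α scales both M and N, and a zero-padded identity shortcut adds no parameters at all. The sketch below illustrates both; the layer shapes (k = 3, M = N = 256) are illustrative assumptions, not the paper's exact architecture.

```python
def dw_separable_params(k: int, m: int, n: int, alpha: float = 1.0) -> int:
    """Parameter count of one depthwise separable convolution block
    under a MobileNet-style width multiplier alpha (biases omitted)."""
    m_a, n_a = int(alpha * m), int(alpha * n)
    depthwise = k * k * m_a   # one k x k filter per input channel
    pointwise = m_a * n_a     # 1x1 convolution mixing channels
    return depthwise + pointwise

def zero_pad_shortcut(x: list, out_channels: int) -> list:
    """Parameter-free identity shortcut for one spatial position:
    zero-pad the channel vector when the block increases dimensions."""
    return x + [0.0] * (out_channels - len(x))

baseline = dw_separable_params(3, 256, 256, alpha=1.0)
reduced = dw_separable_params(3, 256, 256, alpha=0.92)
print(f"parameter reduction: {1 - reduced / baseline:.1%}")
print(zero_pad_shortcut([1.0, 2.0], 4))
```

Because the pointwise convolution dominates, the reduction for a single block comes out near 1 − 0.92² ≈ 15%; the whole network also contains layers that do not shrink quadratically, which is consistent with the 14% figure reported in the abstract.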

Citation (APA)

Brzeski, A., Grinholc, K., Nowodworski, K., & Przybylek, A. (2019). Residual MobileNets. In Communications in Computer and Information Science (Vol. 1064, pp. 315–324). Springer Verlag. https://doi.org/10.1007/978-3-030-30278-8_33
