A dropout distribution model on deep transfer learning networks

Abstract

Deep transfer learning usually assumes that feature distributions are similar across the training datasets, which amounts to a constant-distance assumption. However, this standpoint obscures the relation between features and accuracy, and it also weakens the network's ability to prevent overfitting. To achieve better results in transfer learning, we propose a dynamic model of deep transfer learning networks that accounts for the influence of features on the learning task. First, we formally define the distance and dropout-rate functions. Second, we present our model and its algorithm for deep transfer learning networks. Using preprocessed MNIST and CIFAR-10 data, we conduct experiments comparing the performance of transfer learning networks against conventional ones. The results show that the proposed model performs better in both accuracy and efficiency.

APA

Li, F., Yang, H., & Wang, J. (2016). A dropout distribution model on deep transfer learning networks. In Lecture Notes in Electrical Engineering (Vol. 376, pp. 431–439). Springer Verlag. https://doi.org/10.1007/978-981-10-0557-2_43
