Transfer Learning Based Image Visualization Using CNN

  • Giri S
  • Joshi B
Citations: N/A
Readers: 15 Mendeley users have this article in their library.

Abstract

Recent papers have shown that neural networks obtain state-of-the-art performance on several different sequence tagging tasks. One appealing property of such systems is their generality, as excellent performance can be achieved with a unified architecture and without task-specific feature engineering. However, it is unclear whether such systems can be used for tasks without large amounts of training data. In this paper we explore the problem of transfer learning for neural sequence taggers, where a source task with plentiful annotations (e.g., POS tagging on the Penn Treebank) is used to improve performance on a target task with fewer available annotations (e.g., POS tagging for microblogs). We examine the effects of transfer learning for deep hierarchical recurrent networks across domains, applications, and languages, and show that significant improvement can often be obtained. These gains lead to improvements over the current state-of-the-art on several well-studied tasks.
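To make the transfer setup concrete, below is a minimal PyTorch-style sketch of the general idea described in the abstract: a shared encoder is pretrained on an annotation-rich source tagging task and then fine-tuned, together with a new task-specific output layer, on a low-resource target task. The class names, layer sizes, tag-set sizes, and toy batches are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of cross-task transfer for neural sequence taggers.
# A shared embedding + BiLSTM encoder is trained on the source task,
# then reused (and fine-tuned) for the target task; only the softmax
# output layer is task-specific. All names and sizes are placeholders.

import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """Word embeddings + BiLSTM shared across source and target tasks."""
    def __init__(self, vocab_size=5000, emb_dim=100, hidden_dim=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden_dim, batch_first=True,
                            bidirectional=True)

    def forward(self, token_ids):
        # token_ids: (batch, seq_len) -> (batch, seq_len, 2 * hidden_dim)
        out, _ = self.lstm(self.embed(token_ids))
        return out

class Tagger(nn.Module):
    """Task-specific linear/softmax layer on top of the shared encoder."""
    def __init__(self, encoder, num_tags, hidden_dim=128):
        super().__init__()
        self.encoder = encoder
        self.out = nn.Linear(2 * hidden_dim, num_tags)

    def forward(self, token_ids):
        return self.out(self.encoder(token_ids))

def train_step(model, tokens, tags, optimizer):
    logits = model(tokens)
    loss = nn.functional.cross_entropy(
        logits.view(-1, logits.size(-1)), tags.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

encoder = SharedEncoder()
source_tagger = Tagger(encoder, num_tags=45)  # e.g. a PTB-sized POS tag set
target_tagger = Tagger(encoder, num_tags=25)  # e.g. a smaller microblog tag set

# Placeholder batches standing in for real annotated corpora.
src_tokens = torch.randint(0, 5000, (32, 20))
src_tags = torch.randint(0, 45, (32, 20))
tgt_tokens = torch.randint(0, 5000, (8, 20))
tgt_tags = torch.randint(0, 25, (8, 20))

# 1) Pretrain on the annotation-rich source task.
opt_src = torch.optim.Adam(source_tagger.parameters(), lr=1e-3)
for _ in range(100):
    train_step(source_tagger, src_tokens, src_tags, opt_src)

# 2) Fine-tune on the low-resource target task; because the encoder is
#    shared, its pretrained weights carry over and are updated further.
opt_tgt = torch.optim.Adam(target_tagger.parameters(), lr=1e-4)
for _ in range(100):
    train_step(target_tagger, tgt_tokens, tgt_tags, opt_tgt)
```

The key design choice illustrated here is parameter sharing: the lower, representation-learning layers are common to both tasks, while each task keeps its own output layer, so the target task benefits from the source task's plentiful annotations.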

Citation (APA)

Giri, S., & Joshi, B. (2019). Transfer Learning Based Image Visualization Using CNN. International Journal of Artificial Intelligence & Applications, 10(4), 47–55. https://doi.org/10.5121/ijaia.2019.10404
