Cached Embedding with Random Selection: Optimization Technique to Improve Training Speed of Character-Aware Embedding

Abstract

Embeddings are widely used in most natural language processing tasks, e.g., neural machine translation, text classification, text abstraction, and sentiment analysis. Word-based embedding is faster to train, while character-based embedding performs better. In this paper, we explore a way to combine these two embeddings to bridge the gap between word-based and character-based embedding in both speed and performance. In our experiments with and analysis of Hybrid Embedding, we found it difficult to make the two different embeddings generate the same embedding vector, but we still obtained comparable results. Guided by this analysis, we propose a form of character-based embedding called Cached Embedding that achieves almost the same performance while cutting the extra training time of character-based embedding almost in half.
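The abstract only sketches the idea, so here is a minimal, hypothetical PyTorch illustration of a cached character-aware embedding with random selection: word vectors produced by a character-level encoder are stored in a cache, and at each step a word's vector is either recomputed from its characters or reused from the cache. The module name CachedCharEmbedding, the recompute_prob parameter, and the GRU encoder are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CachedCharEmbedding(nn.Module):
    """Hypothetical sketch: character-aware word embedding with a cache.

    With probability `recompute_prob` a word's vector is recomputed from
    its characters and the cache is refreshed; otherwise the cached
    vector is reused, skipping the character encoder entirely.
    """

    def __init__(self, vocab_size, char_vocab_size, char_dim=32,
                 word_dim=128, recompute_prob=0.5):
        super().__init__()
        self.char_embed = nn.Embedding(char_vocab_size, char_dim)
        # A GRU stands in for the character encoder; the paper's
        # actual architecture may differ.
        self.encoder = nn.GRU(char_dim, word_dim, batch_first=True)
        self.recompute_prob = recompute_prob
        # One cached vector per word type, stored outside the graph.
        self.register_buffer("cache", torch.zeros(vocab_size, word_dim))
        self.register_buffer("cached", torch.zeros(vocab_size, dtype=torch.bool))

    def forward(self, word_ids, char_ids):
        # word_ids: (batch,) word-type indices
        # char_ids: (batch, max_word_len) character indices per word
        recompute = torch.rand(word_ids.shape, device=word_ids.device) < self.recompute_prob
        recompute |= ~self.cached[word_ids]  # never-seen words must be computed
        out = self.cache[word_ids].clone()   # cached vectors carry no gradient
        if recompute.any():
            chars = self.char_embed(char_ids[recompute])
            _, h = self.encoder(chars)       # h: (1, n_recomputed, word_dim)
            fresh = h.squeeze(0)
            out[recompute] = fresh           # recomputed vectors keep their gradient
            with torch.no_grad():            # refresh the cache
                self.cache[word_ids[recompute]] = fresh.detach()
                self.cached[word_ids[recompute]] = True
        return out
```

Under these assumptions, recompute_prob trades training speed against the staleness of cached vectors: with recompute_prob around 0.5, the character encoder runs on only about half of the word occurrences, which matches the abstract's claim of reducing the extra training time of character-based embedding by almost half.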

Citation (APA)

Yang, Y., Zhang, H. P., Wu, L., Liu, X., & Zhang, Y. (2020). Cached Embedding with Random Selection: Optimization Technique to Improve Training Speed of Character-Aware Embedding. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12033 LNAI, pp. 51–62). Springer. https://doi.org/10.1007/978-3-030-41964-6_5
