Character-based models are becoming increasingly popular for various natural language processing tasks, especially due to the success of neural networks. They make it possible to model text sequences directly, without tokenization, and can therefore simplify the traditional preprocessing pipeline. This paper provides an overview of character-based models for a variety of natural language processing tasks. We group existing work into three categories: tokenization-based approaches, bag-of-n-gram models, and end-to-end models. For each category, we present prominent examples of studies, with a particular focus on recent character-based deep learning work.
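As an illustration of the second category, a bag-of-character-n-gram representation can be sketched in a few lines. This is a minimal sketch, not the paper's method: the `#` boundary padding and the choice of n = 3 are illustrative assumptions.

```python
from collections import Counter

def char_ngrams(text, n=3):
    """Extract overlapping character n-grams.

    Word boundaries are padded with '#' so that prefixes and
    suffixes yield distinct n-grams (an illustrative convention).
    """
    padded = f"#{text}#"
    return [padded[i:i + n] for i in range(len(padded) - n + 1)]

def bag_of_ngrams(text, n=3):
    """Represent text as a multiset (bag) of its character n-grams."""
    return Counter(char_ngrams(text, n))

print(bag_of_ngrams("cat"))  # → Counter({'#ca': 1, 'cat': 1, 'at#': 1})
```

Such bags can then feed a standard classifier, trading the sequential order of characters for robustness to tokenization errors and out-of-vocabulary words.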
Adel, H., Asgari, E., & Schütze, H. (2018). Overview of character-based models for natural language processing. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10761 LNCS, pp. 3–16). Springer Verlag. https://doi.org/10.1007/978-3-319-77113-7_1