Word Embedding by Unlinking Head and Tail Entities in Crime Classification Model

Abstract

Word embedding is a core technique in natural language processing: it represents entities and relations as vectors or matrices to build knowledge graph models. Many related models and methods have been proposed recently, such as translational methods, deep-learning-based methods, and multiplicative approaches. We propose an embedding method that unlinks the representations of the head and tail entities when the two are the same entity. Doing so frees the relation space and allows more expressive representations. By comparing typical word embedding algorithms and methods, we found a tradeoff between an algorithm's simplicity and its expressiveness. After optimizing the parameters of the proposed embedding method, we tested it in the HMN model, a model used to build an auto-judge system in the legal domain. We replaced the encoder of the HMN model with our embedding strategy and evaluated the modified model on a real legal data set. The results show that our embedding method offers some performance advantages.
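The abstract's core idea can be illustrated with a minimal sketch. The example below is an assumption about what "unlinking" means, modeled on a TransE-style translational score ||h + r - t||: with a single shared entity table, a self-loop triple (e, r, e) forces the relation vector toward zero, whereas keeping separate head-role and tail-role tables leaves the relation space free. The table names and score functions here are illustrative, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 4

# Shared entity table, as in standard TransE: a self-loop triple
# (e, r, e) has score ||E[e] + R[r] - E[e]|| = ||R[r]||, so training
# drives the relation vector R[r] toward zero.
E = rng.normal(size=(n_entities, dim))

# "Unlinked" variant (our reading of the paper's idea): separate
# head-role and tail-role tables, so the self-loop score no longer
# collapses to ||R[r]|| and the relation keeps a useful embedding.
E_head = rng.normal(size=(n_entities, dim))
E_tail = rng.normal(size=(n_entities, dim))
R = rng.normal(size=(n_relations, dim))

def score_linked(h, r, t):
    """Translational score with one shared entity table."""
    return np.linalg.norm(E[h] + R[r] - E[t])

def score_unlinked(h, r, t):
    """Translational score with head/tail roles unlinked."""
    return np.linalg.norm(E_head[h] + R[r] - E_tail[t])

# For a self-loop (h == t), the linked score equals ||R[r]||,
# while the unlinked score does not degenerate this way.
print(score_linked(3, 1, 3), np.linalg.norm(R[1]))
print(score_unlinked(3, 1, 3))
```

Minimizing the linked score on self-loop triples pushes R[r] to the zero vector; the unlinked variant avoids this, which is one way the freed relation space can admit more representations.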

Citation (APA)

Huang, Q., & Ouyang, W. (2020). Word Embedding by Unlinking Head and Tail Entities in Crime Classification Model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12465 LNAI, pp. 555–564). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60796-8_48
