Word embedding is a core technique in natural language processing: it represents entities and relations as vectors or matrices to build knowledge graph models. Many related models and methods have been proposed in recent years, including translational methods, deep-learning-based methods, and multiplicative approaches. We propose an embedding method that unlinks the head and tail representations of an entity when the head and tail of a triple are the same entity. Unlinking frees the relation space and therefore allows more expressive representations. Comparing several typical embedding algorithms, we found a trade-off between an algorithm's simplicity and its expressiveness. After tuning the parameters of the proposed method, we evaluated the embedding in the HMN model, a model used to build an automatic judgment system in the legal domain. We carefully replaced the encoder of HMN with our embedding strategy and tested the modified model on a real legal data set. The results show that our embedding method offers performance advantages.
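The idea of unlinking head and tail representations can be illustrated with a minimal translational (TransE-style) scoring sketch. This is an assumption-laden illustration, not the authors' implementation: the separate `head_emb` and `tail_emb` tables, the dimensions, and the function names are all hypothetical.

```python
import numpy as np

# Hypothetical sketch: a translational score ||h + r - t|| with two
# separate ("unlinked") embedding tables, so an entity's vector as a
# head need not equal its vector as a tail. All names/sizes are
# illustrative, not taken from the paper.
rng = np.random.default_rng(0)
n_entities, n_relations, dim = 5, 2, 4

head_emb = rng.normal(size=(n_entities, dim))  # entity-as-head vectors
tail_emb = rng.normal(size=(n_entities, dim))  # entity-as-tail vectors
rel_emb = rng.normal(size=(n_relations, dim))  # relation vectors

def score(h: int, r: int, t: int) -> float:
    """Translational plausibility score; lower is more plausible."""
    return float(np.linalg.norm(head_emb[h] + rel_emb[r] - tail_emb[t]))

# With one shared entity table, a self-loop triple (e, r, e) would force
# the relation vector r toward zero; with unlinked tables the relation
# space stays free even when head and tail are the same entity.
s = score(0, 1, 0)
```

The design point is the motivation stated in the abstract: sharing one table collapses the relation vector for self-loop triples, while unlinked tables avoid that constraint.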
CITATION STYLE
Huang, Q., & Ouyang, W. (2020). Word Embedding by Unlinking Head and Tail Entities in Crime Classification Model. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12465 LNAI, pp. 555–564). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-60796-8_48