Abstract
This paper presents a method of Japanese dependency structure analysis based on Support Vector Machines (SVMs). Conventional parsing techniques based on machine learning frameworks, such as Decision Trees and Maximum Entropy Models, have difficulty in selecting useful features as well as in finding appropriate combinations of selected features. On the other hand, it is well known that SVMs achieve high generalization performance even with input data in a very high dimensional feature space. Furthermore, by introducing the kernel principle, SVMs can carry out training in high-dimensional spaces at a computational cost that is independent of their dimensionality. We apply SVMs to the Japanese dependency structure identification problem. Experimental results on the Kyoto University corpus show that our system achieves an accuracy of 89.09% even with a small amount of training data (7,958 sentences).
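The abstract casts dependency analysis as a classification problem solved with a kernel SVM. The sketch below is a minimal illustration of one such framing: deciding whether a modifier chunk depends on a given head candidate using a polynomial-kernel SVM. It assumes scikit-learn's SVC and DictVectorizer; the feature names and toy data are hypothetical and are not the features or tooling used in the paper.

```python
# Minimal sketch: pairwise dependency classification with a kernel SVM.
# Assumes scikit-learn; the features below are illustrative only, not the
# feature set used by Kudo & Matsumoto (2000).
from sklearn.feature_extraction import DictVectorizer
from sklearn.svm import SVC

# Toy training examples: features of a (modifier, head-candidate) chunk pair
# and a binary label (1 = the modifier depends on this candidate).
pairs = [
    {"mod_pos": "noun", "mod_particle": "wa", "head_pos": "verb", "dist": "1"},
    {"mod_pos": "noun", "mod_particle": "wo", "head_pos": "verb", "dist": "2-5"},
    {"mod_pos": "adj",  "mod_particle": "",   "head_pos": "noun", "dist": "1"},
    {"mod_pos": "noun", "mod_particle": "no", "head_pos": "verb", "dist": "6+"},
]
labels = [1, 1, 1, 0]

# Sparse binary indicator features in a high-dimensional space; a polynomial
# kernel lets the SVM exploit feature combinations without enumerating them.
vec = DictVectorizer()
X = vec.fit_transform(pairs)
clf = SVC(kernel="poly", degree=3, coef0=1.0)
clf.fit(X, labels)

# Score a new chunk pair: does this modifier attach to this head candidate?
test = vec.transform([{"mod_pos": "noun", "mod_particle": "ga",
                       "head_pos": "verb", "dist": "1"}])
print(clf.predict(test))
```

A degree-3 polynomial kernel implicitly covers up to three-way conjunctions of the binary features, which is exactly the kind of feature combination that is hard to select by hand in Decision Tree or Maximum Entropy approaches.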
Citation
Kudo, T., & Matsumoto, Y. (2000). Japanese Dependency Structure Analysis Based on Support Vector Machines. In Proceedings of the 2000 Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora, SIGDAT-EMNLP 2000 - Held in conjunction with the 38th Annual Meeting of the Association for Computational Linguistics, ACL 2000 (pp. 18–25). Association for Computational Linguistics (ACL). https://doi.org/10.3115/1117794.1117797