Abstract
The averaged-perceptron learning algorithm is simple, versatile, and effective. However, when used in NLP settings, it tends to produce very dense solutions, even though much sparser ones are also possible. We present a simple modification to the perceptron algorithm that allows it to produce sparser solutions while remaining accurate and computationally efficient. We test the method on a multiclass classification task, a structured prediction task, and a guided learning task. In all of the experiments, the method produced models about four to five times smaller than those of the averaged perceptron, while remaining just as accurate.
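For context, the following is a minimal Python sketch of the standard multiclass averaged perceptron that the abstract refers to. The abstract does not spell out the paper's sparsification modification, so the prune step below is a simple magnitude-threshold placeholder: the class name, the prune function, and the threshold value are illustrative assumptions, not the authors' method.

import numpy as np

class AveragedPerceptron:
    # Standard multiclass averaged perceptron. The prune() helper
    # below is a magnitude threshold shown only for illustration;
    # it is NOT the modification proposed in the paper.

    def __init__(self, n_features, n_classes):
        self.w = np.zeros((n_classes, n_features))      # current weight vectors
        self.w_sum = np.zeros((n_classes, n_features))  # running sum for averaging
        self.t = 0                                      # number of updates seen

    def predict(self, x, weights=None):
        w = self.w if weights is None else weights
        return int(np.argmax(w @ x))  # highest-scoring class

    def update(self, x, y):
        # One perceptron step: on a mistake, move toward the gold
        # class and away from the predicted one.
        self.t += 1
        y_hat = self.predict(x)
        if y_hat != y:
            self.w[y] += x
            self.w[y_hat] -= x
        self.w_sum += self.w  # accumulate for the final average

    def averaged_weights(self):
        return self.w_sum / max(self.t, 1)

def prune(weights, threshold=1e-3):
    # Hypothetical sparsification: zero out weights whose magnitude
    # falls below a threshold. A placeholder, not the paper's method.
    sparse = weights.copy()
    sparse[np.abs(sparse) < threshold] = 0.0
    return sparse

# Toy usage on synthetic data (hypothetical):
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 20))
y = rng.integers(0, 3, size=200)
model = AveragedPerceptron(n_features=20, n_classes=3)
for epoch in range(5):
    for xi, yi in zip(X, y):
        model.update(xi, yi)
w_avg = model.averaged_weights()
w_sparse = prune(w_avg)
print("nonzero before:", np.count_nonzero(w_avg), "after:", np.count_nonzero(w_sparse))

The averaging (dividing the running sum of weights by the number of updates) is what makes nearly every feature weight nonzero in practice, which is why the averaged model is dense and why a sparsity-aware variant is attractive.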
Citation
Goldberg, Y., & Elhadad, M. (2011). Learning Sparser Perceptron Models. In Proceedings of ACL. Retrieved from http://www.cs.bgu.ac.il/~yoavg/publications/acl2011sparse.pdf