Improving Accuracy in Word Class Tagging through the Combination of Machine Learning Systems

  • Hans van Halteren
  • Jakub Zavrel
  • Walter Daelemans

Abstract

We examine how differences in language models, learned by different data-driven systems performing the same NLP task, can be exploited to yield a higher accuracy than the best individual system. We do this by means of experiments involving the task of morphosyntactic word class tagging, on the basis of three different tagged corpora. Four well-known tagger generators (hidden Markov model, memory-based, transformation rules, and maximum entropy) are trained on the same corpus data. After comparison, their outputs are combined using several voting strategies and second-stage classifiers. All combination taggers outperform their best component. The reduction in error rate varies with the material in question, but can be as high as 24.3% with the LOB corpus.
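The combination step itself is easy to illustrate. Below is a minimal sketch of the simplest strategy the abstract mentions, (weighted) voting over the per-token outputs of the four taggers, written in Python. The tagger names, example tags, weights, and tie-breaking behaviour are illustrative assumptions, not the paper's exact configuration; the paper also examines more refined voting schemes and second-stage classifiers.

from collections import Counter

# Hypothetical per-sentence outputs of the four component taggers
# (HMM, memory-based, transformation rules, maximum entropy).
# The token tags shown here are invented for illustration.
outputs = {
    "hmm":    ["DT", "NN", "VBZ", "JJ"],
    "mbl":    ["DT", "NN", "VBZ", "RB"],
    "rules":  ["DT", "JJ", "VBZ", "JJ"],
    "maxent": ["DT", "NN", "NNS", "JJ"],
}

# Optional vote weights, e.g. each tagger's accuracy on held-out tuning data.
weights = {"hmm": 0.96, "mbl": 0.97, "rules": 0.96, "maxent": 0.97}

def combine_by_voting(tagger_outputs, tagger_weights=None):
    """For every token, pick the tag with the highest (weighted) vote count."""
    names = list(tagger_outputs)
    n_tokens = len(tagger_outputs[names[0]])
    combined = []
    for i in range(n_tokens):
        votes = Counter()
        for name in names:
            w = tagger_weights[name] if tagger_weights else 1.0
            votes[tagger_outputs[name][i]] += w
        # most_common(1) breaks ties arbitrarily; this is an illustrative
        # choice, not the tie-breaking rule used in the paper.
        combined.append(votes.most_common(1)[0][0])
    return combined

print(combine_by_voting(outputs, weights))
# -> ['DT', 'NN', 'VBZ', 'JJ']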


