An unsupervised machine learning approach to segmentation of clinician-entered free text.

ISSN: 1559-4076

Abstract

Natural language processing, an important tool in biomedicine, fails without successful segmentation of words and sentences. Tokenization is a form of segmentation that identifies the boundaries separating semantic units, for example, words, dates, numbers, and symbols, within a text. We sought to construct a highly generalizable tokenization algorithm with no prior knowledge of characters or their function, based solely on the inherent statistical properties of token and sentence boundaries. Tokenizing clinician-entered free text, we achieved precision and recall of 92% and 93%, respectively, compared to a whitespace token boundary detection algorithm. We classified over 80% of punctuation characters correctly, based on manual disambiguation with high inter-rater agreement (kappa = 0.916). Our algorithm effectively discovered properties of whitespace and punctuation in the corpus without prior knowledge of either. Given the dynamic nature of biomedical language and the variety of distinct sublanguages in use, the effectiveness and generalizability of our novel tokenization algorithm make it a valuable tool.
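The abstract does not spell out the statistical method, but one classic unsupervised cue for token boundaries is character-level branching entropy, in the spirit of Harris-style segmentation: positions where the next character is hard to predict from its local context tend to coincide with boundaries. The sketch below is a hypothetical illustration of that idea and of boundary-level precision/recall scoring against a whitespace baseline; it is not the authors' algorithm, and the corpus, threshold, and all function names are assumptions.

```python
from collections import defaultdict
import math

def build_model(corpus, order=2):
    """For each character n-gram context, count which characters follow it.
    High uncertainty about the next character is a statistical cue that a
    token boundary falls at that position."""
    follow = defaultdict(lambda: defaultdict(int))
    for text in corpus:
        for i in range(len(text) - order):
            follow[text[i:i + order]][text[i + order]] += 1
    return follow

def branching_entropy(follow, context):
    """Shannon entropy of the next-character distribution after `context`."""
    counts = follow.get(context)
    if not counts:
        return 0.0
    total = sum(counts.values())
    return -sum(c / total * math.log2(c / total) for c in counts.values())

def segment(text, follow, order=2, threshold=1.5):
    """Place a token boundary wherever next-character entropy spikes."""
    tokens, start = [], 0
    for i in range(order, len(text)):
        if branching_entropy(follow, text[i - order:i]) > threshold:
            if i > start:
                tokens.append(text[start:i])
            start = i
    tokens.append(text[start:])
    return tokens

def boundary_offsets(tokens):
    """Character offsets of the boundaries a tokenization implies."""
    offsets, pos = set(), 0
    for tok in tokens[:-1]:
        pos += len(tok)
        offsets.add(pos)
    return offsets

def precision_recall(predicted, reference):
    """Boundary-level precision and recall between two offset sets."""
    tp = len(predicted & reference)
    return (tp / len(predicted) if predicted else 0.0,
            tp / len(reference) if reference else 0.0)

# Hypothetical mini-corpus of clinician-style shorthand.
corpus = ["pt c/o chest pain x2 days.", "bp 120/80, hr 72, rr 18."]
model = build_model(corpus)

text = "bp 120/80, hr 72, rr 18."
predicted = boundary_offsets(segment(text, model))
reference = {i for i, ch in enumerate(text) if ch == " "}  # whitespace baseline
print(precision_recall(predicted, reference))
```

With a realistically sized corpus, the entropy model learns where whitespace and punctuation tend to occur without being told what those characters mean, which is the general flavor of the knowledge-free approach the abstract describes.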

Citation (APA)

Wrenn, J. O., Stetson, P. D., & Johnson, S. B. (2007). An unsupervised machine learning approach to segmentation of clinician-entered free text. AMIA Annual Symposium Proceedings, 811–815.
