Markov Models Applications in Natural Language Processing: A Survey


Abstract

Markov models are among the most widely used machine-learning techniques for processing natural language. Markov chains and hidden Markov models are stochastic techniques for modeling dynamic systems in which the future state depends only on the current state. The Markov chain, which generates a sequence of words to form a complete sentence, is frequently used in natural language generation. The hidden Markov model is employed in named-entity recognition and part-of-speech tagging, where it predicts hidden tags from observed words. This paper reviews the use of Markov models in three applications of natural language processing (NLP): natural language generation, named-entity recognition, and part-of-speech tagging. Nowadays, researchers try to reduce dependence on lexicons and annotation tasks in NLP, and this paper focuses on Markov models as a stochastic approach to NLP. A literature review was conducted to summarize research attempts, focusing on the methods and techniques that apply Markov models to NLP, along with their advantages and disadvantages. Most NLP research studies apply supervised models, improved by Markov models to decrease the dependency on annotation tasks; others employ unsupervised solutions to reduce dependence on a lexicon or labeled datasets.
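To make the generation idea concrete, here is a minimal sketch of the Markov-chain approach the abstract describes: a first-order chain learned from a toy corpus, where each next word is sampled based only on the current word (the Markov property). The corpus, function names, and parameters are illustrative assumptions, not from the surveyed paper.

```python
import random
from collections import defaultdict

def build_chain(corpus):
    """Build a first-order Markov chain: map each word to the list of
    words that follow it in the training corpus (repeats preserved, so
    sampling uniformly from the list matches the empirical transition
    probabilities)."""
    chain = defaultdict(list)
    words = corpus.split()
    for current, nxt in zip(words, words[1:]):
        chain[current].append(nxt)
    return chain

def generate(chain, start, length=8, seed=0):
    """Walk the chain: at each step, sample the next word from the
    successors of the current word. The future state depends only on
    the current state, never on earlier history."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    out = [start]
    for _ in range(length - 1):
        successors = chain.get(out[-1])
        if not successors:  # dead end: word never seen with a successor
            break
        out.append(rng.choice(successors))
    return " ".join(out)

# Toy training corpus (illustrative only).
corpus = "the cat sat on the mat the cat ate the fish"
chain = build_chain(corpus)
print(generate(chain, "the", length=5, seed=1))
```

Every adjacent word pair in the generated sentence is a transition observed in the corpus; the survey's point is that such chains produce locally plausible text without any lexicon or annotation.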

CITATION STYLE

APA

Almutiri, T., & Nadeem, F. (2022). Markov Models Applications in Natural Language Processing: A Survey. International Journal of Information Technology and Computer Science, 14(2), 1–16. https://doi.org/10.5815/ijitcs.2022.02.01
