Short poem generation (SPG): A performance evaluation of hidden Markov model based on readability index and Turing test

Abstract

We developed a Hidden Markov Model (HMM) that automatically generates short poems. The HMM was trained with the forward-backward (Baum-Welch) algorithm, and training ran for hundreds of iterations using a recursive implementation. We then used the Viterbi algorithm to decode the most likely hidden states and predict the next word; each predicted word is used to generate the next one, word by word, until the word length set in the program is reached. Afterwards, the model was evaluated using several readability indices, which measure the reading difficulty and comprehensibility of the generated poems. We then performed a Turing Test with 75 college students who are well versed in poetry; they judged whether each generated poem was created by a human or a machine. Based on the evaluation results, the highest readability index score of the generated short poems corresponds to a 16th-grade reading level, and 69.2% of the Turing Test participants agreed that most of the machine-generated poems were likely created by well-known poets and writers.
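
To make the pipeline in the abstract concrete, here is a minimal sketch (not the authors' code) in Python. The vocabulary and model parameters are hypothetical stand-ins: in the paper the parameters come from Baum-Welch training and the next word is predicted via Viterbi decoding, whereas this sketch simply samples from a randomly initialised categorical HMM. The Flesch-Kincaid grade estimate at the end illustrates the kind of readability index used in the evaluation; the syllable counter is deliberately crude.

```python
# Minimal sketch of HMM-based word-by-word generation plus a readability check.
# All parameters and the vocabulary are hypothetical; they stand in for a model
# trained with Baum-Welch on a poem corpus.
import re
import numpy as np

rng = np.random.default_rng(0)

vocab = ["moon", "light", "falls", "softly", "on", "the", "silver", "river"]
V, S = len(vocab), 3                          # vocabulary size, number of hidden states

pi = np.full(S, 1.0 / S)                      # initial state distribution
A = rng.dirichlet(np.ones(S), size=S)         # transition matrix, shape (S, S)
B = rng.dirichlet(np.ones(V), size=S)         # emission matrix, shape (S, V)

def generate_line(length=7):
    """Emit `length` words by walking the HMM one hidden state at a time."""
    state = rng.choice(S, p=pi)
    words = []
    for _ in range(length):
        words.append(vocab[rng.choice(V, p=B[state])])  # emit a word from this state
        state = rng.choice(S, p=A[state])               # move to the next hidden state
    return " ".join(words)

def count_syllables(word):
    """Very rough syllable estimate: count groups of vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text, sentences=1):
    """Flesch-Kincaid grade level, treating the text as `sentences` sentences."""
    words = text.split()
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * (len(words) / sentences) + 11.8 * (syllables / len(words)) - 15.59

poem = generate_line(7)
print(poem)
print(f"Flesch-Kincaid grade: {flesch_kincaid_grade(poem):.1f}")
```

A full reproduction would replace the random matrices with parameters estimated by forward-backward recursion over a poem corpus and would pick the next state with Viterbi's dynamic program rather than by sampling.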

CITATION STYLE

APA

Tarnate, K. J. M., Garcia, M. M., & Sotelo-Bator, P. (2020). Short poem generation (SPG): A performance evaluation of hidden Markov model based on readability index and Turing test. International Journal of Advanced Computer Science and Applications, 11(2), 294–297. https://doi.org/10.14569/ijacsa.2020.0110238
