Laughbot: Detecting humor in spoken language with language and audio cues

Abstract

We propose detecting and responding to humor in spoken dialogue by extracting language and audio cues and feeding these features into a combined recurrent neural network (RNN) and logistic regression model. In this paper, we parse Switchboard phone conversations to build a corpus of punchlines and unfunny lines, where punchlines are the utterances that precede laughter tokens in the Switchboard transcripts. We create a combined RNN and logistic regression model that uses both acoustic and language cues to predict whether a conversational agent should respond to an utterance with laughter. Our model achieves an F1-score of 63.2 and accuracy of 73.9. This model outperforms our logistic regression language model (F1-score 56.6) and RNN acoustic model (F1-score 59.4), as well as the final RNN model of D. Bertero, 2016 (F1-score 52.9). Using our final model, we create a “laughbot” that audibly responds with laughter when a user's utterance is classified as a punchline. A conversational agent outfitted with a humor-recognition system such as the one we present in this paper would be valuable as these agents gain utility in everyday life.
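The combined architecture described in the abstract (an RNN over acoustic features whose encoding is joined with language features in a logistic regression classifier) can be sketched as follows. All dimensions, the vanilla-RNN cell, the random weights, and the feature choices here are illustrative assumptions for exposition, not the paper's actual implementation or trained parameters.

```python
import numpy as np

def rnn_last_hidden(frames, W_xh, W_hh, b_h):
    """Run a vanilla RNN over a sequence of acoustic frames; return the final hidden state."""
    h = np.zeros(W_hh.shape[0])
    for x in frames:
        h = np.tanh(W_xh @ x + W_hh @ h + b_h)
    return h

def predict_punchline(frames, lang_feats, params):
    """Combine the RNN acoustic encoding with language features via logistic regression."""
    h = rnn_last_hidden(frames, params["W_xh"], params["W_hh"], params["b_h"])
    z = np.concatenate([h, lang_feats])          # joint acoustic + language feature vector
    logit = params["w"] @ z + params["b"]
    return 1.0 / (1.0 + np.exp(-logit))          # probability the utterance is a punchline

# Toy dimensions and random weights (hypothetical, for illustration only)
rng = np.random.default_rng(0)
A, H, L = 13, 8, 5    # acoustic frame dim (e.g. MFCCs), RNN hidden dim, language-feature dim
params = {
    "W_xh": rng.normal(scale=0.1, size=(H, A)),
    "W_hh": rng.normal(scale=0.1, size=(H, H)),
    "b_h": np.zeros(H),
    "w": rng.normal(scale=0.1, size=H + L),
    "b": 0.0,
}
frames = rng.normal(size=(20, A))   # 20 acoustic frames for one utterance
lang_feats = rng.normal(size=L)     # e.g. n-gram or lexical features of the transcript
p = predict_punchline(frames, lang_feats, params)
print(f"P(punchline) = {p:.3f}")
```

In a trained system the output probability would be thresholded to decide whether the laughbot responds with laughter; here the weights are random, so the output is only a demonstration of the data flow.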

Citation (APA)

Park, K., Hu, A., & Muenster, N. (2019). Laughbot: Detecting humor in spoken language with language and audio cues. In Advances in Intelligent Systems and Computing (Vol. 886, pp. 644–656). Springer Verlag. https://doi.org/10.1007/978-3-030-03402-3_45
