Resource Constrained Dialog Policy Learning via Differentiable Inductive Logic Programming

3 citations · 64 Mendeley readers

Abstract

Motivated by the needs of resource-constrained dialog policy learning, we introduce dialog policy learning via differentiable inductive logic (DILOG). We explore the tasks of one-shot learning and zero-shot domain transfer with DILOG on the SimDial and MultiWoZ datasets. Using a single representative dialog from the restaurant domain, we train DILOG on SimDial and obtain 99+% in-domain test accuracy. We also show that the trained model transfers zero-shot to all other domains with 99+% accuracy, demonstrating the suitability of DILOG for slot-filling dialogs. We further extend our study to the MultiWoZ dataset, achieving 90+% inform and success metrics. We observe that these metrics do not capture some of DILOG's shortcomings with respect to false positives, prompting us to measure an auxiliary Action F1 score. We show that DILOG is 100x more data efficient than state-of-the-art neural approaches on MultiWoZ while achieving similar performance metrics. We conclude with a discussion of the strengths and weaknesses of DILOG.
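The auxiliary Action F1 mentioned above is meant to penalize false-positive dialog actions that inform and success rates overlook. A minimal sketch of one plausible way to compute such a score, micro-averaged over per-turn action sets (the function name, input format, and averaging choice are assumptions for illustration, not taken from the paper):

```python
def action_f1(predicted, gold):
    """Micro-averaged F1 over dialog actions.

    `predicted` and `gold` are parallel lists, one entry per turn,
    each a set of action labels such as {"inform-area", "request-price"}.
    """
    tp = fp = fn = 0
    for pred, ref in zip(predicted, gold):
        tp += len(pred & ref)   # actions correctly emitted
        fp += len(pred - ref)   # false positives: spurious actions
        fn += len(ref - pred)   # false negatives: missed actions
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike a binary success flag, this score drops whenever a policy over-generates actions, which is exactly the false-positive behavior the metric is introduced to expose.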

Citation (APA)
Zhou, Z., Beirami, A., Crook, P., Shah, P., Subba, R., & Geramifard, A. (2020). Resource Constrained Dialog Policy Learning via Differentiable Inductive Logic Programming. In COLING 2020 - 28th International Conference on Computational Linguistics, Proceedings of the Conference (pp. 6775–6787). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.coling-main.597
