Slot filling is one of the critical tasks in modern conversational systems. Most existing literature employs supervised learning methods, which require labeled training data for each new domain. Zero-shot learning and weak supervision approaches, among others, have shown promise as alternatives to manual labeling. Nonetheless, these learning paradigms remain significantly inferior to supervised approaches in terms of performance. To narrow this performance gap and demonstrate the feasibility of open-domain slot filling, we propose a Self-supervised Co-training framework that requires zero in-domain manually labeled training examples and works in three phases. Phase one automatically acquires two complementary sets of pseudo labels. Phase two leverages the pre-trained language model BERT by adapting it to the slot filling task using these sets of pseudo labels. In phase three, we introduce a self-supervised co-training mechanism in which each model automatically selects high-confidence soft labels to iteratively improve the other. Our thorough evaluations show that the proposed framework outperforms state-of-the-art models by 45.57% and 37.56% on the SGD and MultiWoZ datasets, respectively. Moreover, it achieves performance comparable to state-of-the-art fully supervised models.
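To make the phase-three idea concrete, below is a minimal, hypothetical Python sketch of a confidence-based co-training loop of the kind the abstract describes: two slot-filling models, each adapted on one of the complementary pseudo-label sets, repeatedly hand their high-confidence predictions on unlabeled utterances to each other as extra training data. All names here (SlotTagger, CONFIDENCE_THRESHOLD, the toy examples) are illustrative assumptions, not the authors' actual implementation, which fine-tunes BERT-based models.

```python
"""Illustrative sketch (not the authors' code) of confidence-based co-training:
two models trained on complementary pseudo labels take turns selecting
high-confidence predictions on an unlabeled pool to augment each other."""
from collections import Counter
from typing import Dict, List, Tuple

Example = Tuple[List[str], List[str]]      # (tokens, slot tags)

CONFIDENCE_THRESHOLD = 0.9                 # assumed cutoff for "high confidence"
ROUNDS = 3                                 # assumed number of co-training rounds


class SlotTagger:
    """Toy stand-in for a BERT-based slot tagger: memorizes the most frequent
    tag seen for each token and reports that relative frequency as confidence."""

    def __init__(self) -> None:
        self.tag_counts: Dict[str, Counter] = {}

    def fine_tune(self, examples: List[Example]) -> None:
        for tokens, tags in examples:
            for tok, tag in zip(tokens, tags):
                self.tag_counts.setdefault(tok, Counter())[tag] += 1

    def predict(self, tokens: List[str]) -> Tuple[List[str], float]:
        # Sequence confidence is the minimum per-token confidence: one simple
        # way to be conservative about which pseudo-labeled utterances to keep.
        tags, confidences = [], []
        for tok in tokens:
            counts = self.tag_counts.get(tok)
            if not counts:
                tags.append("O")
                confidences.append(0.0)
                continue
            tag, freq = counts.most_common(1)[0]
            tags.append(tag)
            confidences.append(freq / sum(counts.values()))
        return tags, (min(confidences) if confidences else 0.0)


def co_train(model_a: SlotTagger, model_b: SlotTagger,
             unlabeled: List[List[str]]) -> None:
    """Each round, every model labels the unlabeled pool and passes its
    high-confidence predictions to the other model as extra training data."""
    for _ in range(ROUNDS):
        for teacher, student in ((model_a, model_b), (model_b, model_a)):
            selected: List[Example] = []
            for tokens in unlabeled:
                tags, confidence = teacher.predict(tokens)
                if confidence >= CONFIDENCE_THRESHOLD:
                    selected.append((tokens, tags))
            if selected:
                student.fine_tune(selected)


if __name__ == "__main__":
    # Two complementary pseudo-labeled sets (phase one), toy-sized.
    pseudo_a = [(["book", "a", "table", "in", "rome"],
                 ["O", "O", "O", "O", "B-city"])]
    pseudo_b = [(["fly", "to", "rome", "tomorrow"],
                 ["O", "O", "B-city", "B-date"])]
    unlabeled_pool = [["table", "in", "rome"], ["to", "rome", "tomorrow"]]

    model_a, model_b = SlotTagger(), SlotTagger()
    model_a.fine_tune(pseudo_a)                 # phase two: adapt each model
    model_b.fine_tune(pseudo_b)
    co_train(model_a, model_b, unlabeled_pool)  # phase three: co-training

    print(model_b.predict(["dinner", "in", "rome"]))
```

In this toy run, each model starts out confident only on utterances built from tokens it has seen, so the exchanged examples gradually widen both models' coverage, which mirrors the iterative mutual improvement the abstract describes at a much smaller scale.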
Mosharrof, A., Fereidouni, M., & Siddique, A. B. (2023). Toward Open-domain Slot Filling via Self-supervised Co-training. In Proceedings of the ACM Web Conference 2023 (WWW 2023), pp. 1928–1937. Association for Computing Machinery. https://doi.org/10.1145/3543507.3583541