Enriching Ontology with Temporal Commonsense for Low-Resource Audio Tagging

Abstract

Audio tagging aims to predict the sound events that occur in a recording. Traditional models require large amounts of laborious annotation; otherwise, performance degradation is the norm. We therefore investigate robust audio tagging models for low-resource scenarios, enhanced with knowledge graphs. Beyond existing ontological knowledge, we propose a semi-automatic approach that constructs temporal knowledge graphs over diverse domain-specific label sets. Moreover, we leverage D-GCN, a relation-aware variant of the graph convolutional network, to combine the strengths of the two knowledge types. Experiments on the AudioSet and SONYC urban sound tagging datasets demonstrate the effectiveness of the introduced temporal knowledge and the advantage of combining both knowledge graphs with D-GCN over using a single knowledge source.
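The abstract does not spell out the D-GCN architecture, but the core idea of combining two knowledge types can be illustrated with a relation-aware graph convolution layer that aggregates messages over ontological and temporal edges with separate weights, in the spirit of R-GCN-style message passing. The sketch below is a minimal, hypothetical PyTorch illustration under that assumption; the class name, shapes, and normalization are illustrative choices, not the authors' implementation.

```python
import torch
import torch.nn as nn


class RelationAwareGCNLayer(nn.Module):
    """Illustrative sketch (not the paper's D-GCN): one weight matrix per
    relation type, so ontological and temporal edges are aggregated
    independently and then summed, plus a self-loop transform."""

    def __init__(self, in_dim: int, out_dim: int, num_relations: int = 2):
        super().__init__()
        # One linear transform per relation type (ontological, temporal).
        self.rel_weights = nn.ModuleList(
            nn.Linear(in_dim, out_dim, bias=False) for _ in range(num_relations)
        )
        self.self_loop = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, x: torch.Tensor, adjs: list) -> torch.Tensor:
        # x:    (num_labels, in_dim) label embeddings
        # adjs: one row-normalized (num_labels, num_labels) adjacency
        #       matrix per relation type, in the same order as rel_weights
        out = self.self_loop(x)
        for adj, lin in zip(adjs, self.rel_weights):
            out = out + adj @ lin(x)
        return torch.relu(out)


# Toy usage: 10 sound-event labels with random ontology/temporal graphs.
num_labels, dim = 10, 64
x = torch.randn(num_labels, dim)
adj_onto = torch.softmax(torch.randn(num_labels, num_labels), dim=-1)
adj_temp = torch.softmax(torch.randn(num_labels, num_labels), dim=-1)
layer = RelationAwareGCNLayer(dim, dim)
label_emb = layer(x, [adj_onto, adj_temp])  # (10, 64) enriched embeddings
```

Keeping a separate transform per edge type lets the layer weight ontological (is-a) and temporal (co-occurrence-in-time) evidence differently, which is the stated motivation for combining the two knowledge graphs rather than merging them into a single adjacency matrix.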

Citation (APA)

Zhang, Z., Zhou, Z., Tang, H., Li, G., Wu, M., & Zhu, K. Q. (2021). Enriching Ontology with Temporal Commonsense for Low-Resource Audio Tagging. In International Conference on Information and Knowledge Management, Proceedings (pp. 3652–3656). Association for Computing Machinery. https://doi.org/10.1145/3459637.3482097
