Abstract
In this paper, the problem of retrieving unlabeled videos using textual queries is addressed. We present an extended dual encoding network that exploits multiple encodings of the visual and textual content, together with two different attention mechanisms. The attention mechanisms highlight the temporal locations in each modality that contribute most to effective retrieval. The different encodings of the visual and textual inputs, along with early/late fusion strategies, are examined for further improving performance. Experimental evaluations and comparisons with state-of-the-art methods document the merit of the proposed network.
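The attention mechanisms mentioned in the abstract weight temporal locations (e.g., video frames or query words) by their estimated contribution to retrieval. The paper does not specify its exact formulation here; the following is a minimal sketch of dot-product temporal attention pooling, with the scoring vector `w` and all names being illustrative assumptions:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention_pool(features, w):
    # features: (T, D) per-frame (or per-word) encodings
    # w: (D,) learned scoring vector (hypothetical parameterization)
    scores = features @ w          # (T,) relevance score per time step
    weights = softmax(scores)      # attention distribution over time
    return weights @ features      # (D,) attention-weighted summary

rng = np.random.default_rng(0)
feats = rng.standard_normal((8, 4))  # 8 time steps, 4-dim features
w = rng.standard_normal(4)
pooled = attention_pool(feats, w)
print(pooled.shape)  # (4,)
```

The pooled vector replaces uniform mean-pooling, letting informative time steps dominate the clip- or sentence-level representation.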
CITATION STYLE
Galanopoulos, D., & Mezaris, V. (2020). Attention mechanisms, signal encodings and fusion strategies for improved Ad-Hoc video search with dual encoding networks. In ICMR 2020 - Proceedings of the 2020 International Conference on Multimedia Retrieval (pp. 336–340). Association for Computing Machinery. https://doi.org/10.1145/3372278.3390737