MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid

37 citations · 14 Mendeley readers

Abstract

Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignore the varying modality preferences of different entities and thus compromise robustness against noise in modalities such as blurry images and relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for more fine-grained, entity-level modality fusion and alignment. Experimental results demonstrate that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also offers a limited parameter count, efficient runtime, and interpretability. Our code is available at https://github.com/zjukg/MEAformer.
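The abstract's core idea — predicting per-entity correlation weights over modalities rather than using one KG-level fusion scheme — can be sketched as follows. This is a minimal illustration, not the authors' implementation: the scoring vector `w_score` is a hypothetical stand-in for MEAformer's learned cross-modal attention, and all names and shapes here are assumptions.

```python
import numpy as np

def fuse_modalities(modal_embs: np.ndarray, w_score: np.ndarray):
    """Entity-level modality fusion sketch.

    modal_embs: (num_modalities, dim) — e.g. graph, relation, attribute,
                and visual embeddings of one entity (order assumed).
    w_score:    (dim,) scoring vector; a hypothetical stand-in for the
                transformer attention that the paper actually learns.
    """
    logits = modal_embs @ w_score                 # one confidence logit per modality
    logits = logits - logits.max()                # numerical stability for softmax
    weights = np.exp(logits) / np.exp(logits).sum()
    fused = weights @ modal_embs                  # entity-specific weighted sum
    return fused, weights

# Two entities with the same set of modalities but different embeddings
# receive different fusion weights — the "entity-level" part of the idea.
rng = np.random.default_rng(0)
e1, e2 = rng.normal(size=(4, 64)), rng.normal(size=(4, 64))
w = rng.normal(size=64)
fused1, w1 = fuse_modalities(e1, w)
fused2, w2 = fuse_modalities(e2, w)
```

In the paper's setting, an entity with a blurry or missing image would receive a low weight on its visual modality, which is what makes the fusion robust to modality noise.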




CITATION STYLE

APA

Chen, Z., Chen, J., Zhang, W., Guo, L., Fang, Y., Huang, Y., … Chen, H. (2023). MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid. In MM 2023 - Proceedings of the 31st ACM International Conference on Multimedia (pp. 3317–3327). Association for Computing Machinery, Inc. https://doi.org/10.1145/3581783.3611786

Readers' Seniority

PhD / Postgrad / Masters / Doc: 2 (50%)
Professor / Associate Prof.: 1 (25%)
Researcher: 1 (25%)

Readers' Discipline

Computer Science: 4 (100%)
