Semantic Similarity as a Window into Vector- and Graph-Based Metrics

3 citations · 19 Mendeley readers

Abstract

In this work, we use sentence similarity as a lens through which to investigate the representation of meaning in graphs vs. vectors. On semantic textual similarity data, we examine how similarity metrics based on vectors alone (Sentence-BERT and BERTScore) fare compared to metrics based on AMR graphs (Smatch and S2match). Quantitative and qualitative analyses show that the AMR-based metrics can better capture meanings dependent on sentence structure, but can also be distracted by structural differences, whereas the BERT-based metrics represent finer-grained meanings of individual words, but often fail to capture the effect of word order within sentences and suffer from interpretability problems. These findings contribute to our understanding of each approach to semantic representation and motivate distinct use cases for graph- and vector-based representations.
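For illustration, the sketch below shows how the vector-based metrics discussed in the abstract are typically computed in practice. It assumes the third-party sentence-transformers and bert-score packages and an arbitrary SBERT checkpoint; it is not the authors' evaluation code, and the graph-based metrics (Smatch/S2match) are only indicated in a comment, since they operate on AMR parses rather than raw sentences.

# Minimal sketch: vector-based sentence similarity (assumed packages:
# sentence-transformers and bert-score; example sentences are hypothetical).
from sentence_transformers import SentenceTransformer, util
from bert_score import score as bertscore

sent_a = "The dog chased the cat."
sent_b = "The cat chased the dog."  # same words, different structure

# Sentence-BERT: cosine similarity of pooled sentence embeddings.
model = SentenceTransformer("all-MiniLM-L6-v2")  # any SBERT checkpoint works
emb_a, emb_b = model.encode([sent_a, sent_b], convert_to_tensor=True)
sbert_sim = util.cos_sim(emb_a, emb_b).item()

# BERTScore: greedy matching of contextual token embeddings.
precision, recall, f1 = bertscore([sent_a], [sent_b], lang="en")

print(f"Sentence-BERT cosine: {sbert_sim:.3f}")
print(f"BERTScore F1: {f1.item():.3f}")

# Graph-based metrics (Smatch / S2match) instead align AMR graphs; Smatch is
# typically run from the command line on files of AMR parses, e.g.:
#   smatch.py -f parses_a.amr parses_b.amr

The word-order pair above mirrors the abstract's point: token-level vector metrics tend to score such sentences as highly similar, while AMR-based metrics distinguish them because the role structure of the graphs differs.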

Citation (APA)

Leung, W. C., Wein, S., & Schneider, N. (2022). Semantic Similarity as a Window into Vector- and Graph-Based Metrics. In GEM 2022 - 2nd Workshop on Natural Language Generation, Evaluation, and Metrics, Proceedings of the Workshop (pp. 106–115). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.gem-1.8
