Abstract
Proximities are at the heart of almost all machine learning methods. In a more generic view, objects are compared by a (symmetric) similarity or dissimilarity measure, which need not obey mathematical properties such as metricity or positive semi-definiteness. This renders many machine learning methods invalid, leading to convergence problems and a loss of generalization ability. In many cases, the preferred dissimilarity measure is not metric. If the input data are non-vectorial, such as text sequences, proximity-based learning or embedding techniques can be applied. Standard embeddings yield the desired fixed-length vector encoding, but are costly and do not, in general, preserve the full information. As an information-preserving alternative, we propose a complex-valued vector embedding of proximity data, to be used in suitably adapted learning approaches. In particular, we address supervised learning and use extensions of prototype-based learning. The proposed approach is evaluated on a variety of standard benchmarks and shows good performance compared to traditional techniques for processing non-metric or non-psd (non-positive-semi-definite) proximity data.
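One way to realize such an information-preserving complex-valued embedding is via an eigendecomposition of the symmetric proximity matrix, where negative eigenvalues contribute imaginary coordinates instead of being clipped or flipped. The sketch below illustrates this idea under assumed details (function name, API, and the magnitude-based truncation are illustrative choices, not necessarily the authors' exact construction):

```python
import numpy as np

def complex_embedding(S, n_components=None):
    """Embed a symmetric, possibly indefinite (non-psd) similarity matrix S
    into complex-valued vectors X such that X @ X.T reproduces S."""
    S = 0.5 * (S + S.T)                        # enforce symmetry
    eigvals, eigvecs = np.linalg.eigh(S)       # real spectrum of a symmetric matrix
    order = np.argsort(-np.abs(eigvals))       # dominant eigenvalues first
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    if n_components is not None:               # optional fixed-length truncation
        eigvals, eigvecs = eigvals[:n_components], eigvecs[:, :n_components]
    # the square root of a negative eigenvalue is imaginary -> complex embedding
    return eigvecs * np.sqrt(eigvals.astype(complex))

# Usage: a small indefinite similarity matrix is fully reconstructed
S = np.array([[1.0, 0.9, -0.4],
              [0.9, 1.0,  0.3],
              [-0.4, 0.3, 1.0]])
X = complex_embedding(S)
print(np.allclose(X @ X.T, S))                 # True (note: plain transpose, not conjugate)
```

Without truncation the embedding loses no information, which is the contrast drawn above with standard (real-valued, psd-corrected) embeddings.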
Citation
Münch, M., Straat, M., Biehl, M., & Schleif, F. M. (2021). Complex-Valued Embeddings of Generic Proximity Data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12644 LNCS, pp. 14–23). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-73973-7_2