Categorical representation learning: morphism is all you need

Abstract

We provide a construction for categorical representation learning and introduce the foundations of the 'categorifier'. The central theme in representation learning is the idea of 'everything to vector'. Every object in a dataset $\mathcal{S}$ can be represented as a vector in $\mathbb{R}^n$ by an encoding map $E: \mathrm{Obj}(\mathcal{S}) \to \mathbb{R}^n$. More importantly, every morphism can be represented as a matrix $E: \mathrm{Hom}(\mathcal{S}) \to \mathbb{R}^{n \times n}$. The encoding map $E$ is generally modeled by a deep neural network. The goal of representation learning is to design appropriate tasks on the dataset to train the encoding map (assuming that an encoding is optimal if it universally optimizes the performance across various tasks). This, however, is still a set-theoretic approach. The goal of the current article is to promote representation learning to a new level via a category-theoretic approach. As a proof of concept, we provide an example of a text translator equipped with our technology, showing that our categorical learning model outperforms current deep learning models by a factor of 17. The content of the current article is part of a US provisional patent application filed by QGNai, Inc.
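To make the 'everything to vector' picture concrete, below is a minimal PyTorch sketch of the scheme the abstract describes: objects encoded as vectors in $\mathbb{R}^n$ and morphisms encoded as $n \times n$ matrices that act on them. The class name `CategoricalEncoder`, the cosine-similarity scoring, and all dimensions are illustrative assumptions for this sketch, not details taken from the paper.

```python
# Illustrative sketch (not the paper's implementation): objects map to
# vectors in R^n, morphisms map to n x n matrices acting on those vectors.
import torch
import torch.nn as nn

class CategoricalEncoder(nn.Module):
    """Hypothetical encoder: E : Obj(S) -> R^n and E : Hom(S) -> R^{n x n}."""

    def __init__(self, num_objects: int, num_morphisms: int, n: int):
        super().__init__()
        # One n-dimensional vector per object.
        self.obj_embedding = nn.Embedding(num_objects, n)
        # One n x n matrix per morphism (scaled random init).
        self.mor_embedding = nn.Parameter(
            torch.randn(num_morphisms, n, n) / n ** 0.5
        )

    def apply_morphism(self, mor_idx: torch.Tensor, obj_idx: torch.Tensor):
        # A morphism acts on an object representation by
        # matrix-vector multiplication.
        v = self.obj_embedding(obj_idx)        # (batch, n)
        M = self.mor_embedding[mor_idx]        # (batch, n, n)
        return torch.einsum("bij,bj->bi", M, v)

# Toy usage: score whether morphism f carries object a near object b --
# one illustrative training signal an encoding map could be fit against.
enc = CategoricalEncoder(num_objects=100, num_morphisms=10, n=16)
a, b, f = torch.tensor([3]), torch.tensor([7]), torch.tensor([2])
score = torch.cosine_similarity(enc.apply_morphism(f, a), enc.obj_embedding(b))
print(score.item())
```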

Citation (APA)

Sheshmani, A., & You, Y.-Z. (2022). Categorical representation learning: Morphism is all you need. Machine Learning: Science and Technology, 3(1). https://doi.org/10.1088/2632-2153/ac2c5d
