Representation learning: Propositionalization and embeddings

6 citations · 9 readers (Mendeley)

Abstract

This monograph addresses advances in representation learning, a cutting-edge research area of machine learning. Representation learning refers to modern data transformation techniques that convert data of different modalities and complexity, including texts, graphs, and relations, into compact tabular representations, which effectively capture their semantic properties and relations. The monograph focuses on (i) propositionalization approaches, established in relational learning and inductive logic programming, and (ii) embedding approaches, which have gained popularity with recent advances in deep learning. The authors establish a unifying perspective on representation learning techniques developed in these various areas of modern data science, enabling the reader to understand the common underlying principles and to gain insight using selected examples and sample Python code. The monograph should be of interest to a wide audience, ranging from data scientists, machine learning researchers and students to developers, software engineers and industrial researchers interested in hands-on AI solutions.
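To make the core idea concrete: propositionalization turns variable-length, structured inputs (here, short text documents) into a fixed-width tabular representation. The following is a minimal illustrative sketch, not code from the monograph; the documents and vocabulary are invented for the example.

```python
def bag_of_words(docs):
    """Convert a list of text documents into a term-count table.

    Returns the sorted vocabulary (the column headers) and one
    fixed-length count row per document, i.e. a tabular
    representation of variable-length inputs.
    """
    # Build the vocabulary from all tokens across all documents.
    vocab = sorted({tok for doc in docs for tok in doc.lower().split()})
    rows = []
    for doc in docs:
        counts = {tok: 0 for tok in vocab}
        for tok in doc.lower().split():
            counts[tok] += 1
        # Emit counts in fixed vocabulary order, so every row
        # has the same width regardless of document length.
        rows.append([counts[t] for t in vocab])
    return vocab, rows

# Illustrative documents (not from the monograph).
docs = ["graphs and relations", "texts and graphs and texts"]
vocab, table = bag_of_words(docs)
print(vocab)  # → ['and', 'graphs', 'relations', 'texts']
print(table)  # → [[1, 1, 1, 0], [2, 1, 0, 2]]
```

Embedding approaches go a step further, mapping such sparse count vectors into dense low-dimensional vectors, but the shared principle is the same: a compact tabular representation of complex input data.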

Citation (APA)

Lavrač, N., Podpečan, V., & Robnik-Šikonja, M. (2021). Representation learning: Propositionalization and embeddings. Representation Learning: Propositionalization and Embeddings (pp. 1–163). Springer International Publishing. https://doi.org/10.1007/978-3-030-68817-2
