Appropriate kernel functions for support vector machine learning with sequences of symbolic data

Abstract

In classification problems, machine learning algorithms often make use of the assumption that (dis)similar inputs lead to (dis)similar outputs. In this case, two questions naturally arise: what does it mean for two inputs to be similar, and how can this be used in a learning algorithm? In support vector machines, similarity between input examples is implicitly expressed by a kernel function that calculates inner products in the feature space. For numerical input examples the concept of an inner product is easy to define; for discrete structures such as sequences of symbolic data, however, it is less obvious. This article describes an approach to SVM learning for symbolic data that can serve as an alternative to the bag-of-words approach under certain circumstances. The latter approach first transforms symbolic data into vectors of numerical data, which are then used as arguments to one of the standard kernel functions. In contrast, we propose kernels that operate on the symbolic data directly.
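To make the contrast concrete (this is a minimal illustrative sketch, not the specific kernels defined in the paper), one simple kernel that operates on symbolic sequences directly is the overlap kernel: it counts the positions at which two equal-length symbol windows agree. Since this count equals the inner product of the windows' one-hot encodings, it is a valid positive semi-definite kernel and can be supplied to an SVM as a precomputed Gram matrix. The toy data and the scikit-learn usage below are assumptions for illustration only.

```python
import numpy as np
from sklearn.svm import SVC

# Toy data: fixed-length windows of symbols (hypothetical example,
# not taken from the paper).
X = [("the", "cat", "sat"),
     ("a", "dog", "ran"),
     ("the", "dog", "sat"),
     ("a", "cat", "ran")]
y = [0, 1, 0, 1]

def overlap_kernel(s, t):
    # Count positions where the two symbol sequences agree. This equals
    # the inner product of their one-hot encodings, so it is a valid
    # (positive semi-definite) kernel on symbolic data.
    return float(sum(a == b for a, b in zip(s, t)))

# Gram matrix for scikit-learn's precomputed-kernel interface.
G = np.array([[overlap_kernel(s, t) for t in X] for s in X])

clf = SVC(kernel="precomputed").fit(G, y)
print(clf.predict(G))  # classifies the training windows via the symbolic kernel
```

By comparison, the bag-of-words route would first map each sequence to a numeric count vector over a fixed vocabulary and then apply a standard kernel (linear, RBF, ...) to those vectors; the kernel above skips that intermediate numeric representation entirely.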

Citation (APA)

Vanschoenwinkel, B., & Manderick, B. (2005). Appropriate kernel functions for support vector machine learning with sequences of symbolic data. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 3635 LNAI, pp. 256–280). https://doi.org/10.1007/11559887_16
