Distance induction in first order logic


Abstract

A distance on the problem domain allows one to tackle some typical goals of machine learning, e.g. classification or conceptual clustering, via robust data analysis algorithms (e.g. k-nearest neighbors or k-means). A method for building a distance on first-order logic domains is presented in this paper. The distance is constructed from examples expressed as definite or constrained clauses, via a two-step process: a set of d hypotheses is first learnt from the training examples. These hypotheses serve as new descriptors of the problem domain L_h: they induce a mapping π from L_h onto the space of integers IN^d. The distance between any two examples E and F is finally defined as the Euclidean distance between π(E) and π(F). The granularity of this hypothesis-driven distance (HDD) is controlled via the user-supplied parameter d. The relevance of a HDD is evaluated from the predictive accuracy of the k-NN classifier based on this distance. Preliminary experiments demonstrate the potentialities of distance induction, in terms of predictive accuracy, computational cost, and tolerance to noise.
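The two-step construction in the abstract can be sketched in a few lines. This is a hedged illustration, not the paper's implementation: the learnt first-order hypotheses are stood in for by plain Python predicates returning an integer degree of coverage, and the function names (`hdd_mapping`, `hdd_distance`, `knn_classify`) are assumptions for the sketch.

```python
import math

def hdd_mapping(example, hypotheses):
    """Map an example onto IN^d via the d learnt hypotheses (the mapping pi)."""
    return [h(example) for h in hypotheses]

def hdd_distance(e, f, hypotheses):
    """Hypothesis-driven distance: Euclidean distance between pi(E) and pi(F)."""
    pe, pf = hdd_mapping(e, hypotheses), hdd_mapping(f, hypotheses)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(pe, pf)))

def knn_classify(query, training, hypotheses, k=3):
    """k-NN classifier based on the induced distance.

    `training` is a list of (example, label) pairs; the predicted label is the
    majority label among the k nearest training examples under the HDD.
    """
    neighbours = sorted(training,
                        key=lambda pair: hdd_distance(query, pair[0], hypotheses))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)
```

Here the granularity of the distance is controlled exactly as described: adding or removing hypotheses changes d, the dimension of the integer space the examples are projected into.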

Citation (APA)

Sebag, M. (1997). Distance induction in first order logic. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 1297, pp. 264–272). Springer-Verlag. https://doi.org/10.1007/3540635149_55
