Dominance and optimisation based on scale-invariant maximum margin preference learning


Abstract

In the task of preference learning, there are natural invariance properties that one would often expect a method to satisfy. These include (i) invariance to scaling of a pair of alternatives, e.g., replacing a pair (a, b) by (2a, 2b); and (ii) invariance to rescaling of features across all alternatives. Maximum margin learning approaches satisfy such invariance properties for pairs of test vectors, but not for the preference input pairs: scaling the inputs in different ways can result in different preference relations. In this paper we define and analyse more cautious preference relations that are invariant to the scaling of features, or inputs, or both simultaneously; this leads to computational methods for testing dominance with respect to the induced relations, and for generating optimal solutions among a set of alternatives. In our experiments, we compare the relations and their associated optimality sets in terms of decisiveness, computation time and the cardinality of the optimal set. We also discuss connections with imprecise probability.
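To make the non-invariance of the input pairs concrete, the following is a minimal sketch (hypothetical code, not the authors' implementation) of the standard linear maximum margin formulation: learn w by minimising ||w||^2 subject to w . (a - b) >= 1 for each training preference of a over b, and rank a test pair (u, v) by the sign of w . (u - v). The data and the helper max_margin_weights are invented for illustration; the example shows how replacing one input pair (a, b) by (2a, 2b) can flip the induced relation on a test pair.

```python
import numpy as np
from scipy.optimize import minimize

def max_margin_weights(pref_pairs):
    """Hard-margin preference learning: minimise ||w||^2 subject to
    w . (a - b) >= 1 for every training preference (a preferred to b)."""
    diffs = [np.asarray(a, float) - np.asarray(b, float) for a, b in pref_pairs]
    dim = diffs[0].size
    # One linear inequality constraint per preference pair (fun >= 0 means feasible).
    cons = [{"type": "ineq", "fun": (lambda w, d=d: w @ d - 1.0)} for d in diffs]
    res = minimize(lambda w: w @ w, x0=np.ones(dim), constraints=cons)
    return res.x

# Two training preferences, given as (preferred, rejected) feature vectors.
pairs = [((1.0, 0.0), (0.0, 0.0)),   # difference vector d1 = (1, 0)
         ((0.0, 1.0), (0.0, 0.0))]   # difference vector d2 = (0, 1)
w = max_margin_weights(pairs)        # approximately (1, 1)

# Rescale only the first input pair: (a, b) -> (2a, 2b), so d1 becomes (2, 0).
pairs_scaled = [((2.0, 0.0), (0.0, 0.0)),
                ((0.0, 1.0), (0.0, 0.0))]
w_scaled = max_margin_weights(pairs_scaled)  # approximately (0.5, 1)

# The induced preference on a fresh test pair flips under the rescaled inputs:
u_minus_v = np.array([1.0, -0.6])
print(np.sign(w @ u_minus_v))         # +1.0: u preferred to v
print(np.sign(w_scaled @ u_minus_v))  # -1.0: v preferred to u
```

Note that scaling a test pair (u, v) to (2u, 2v) leaves sign(w . (u - v)) unchanged under this formulation, which is why, as the abstract states, the invariance fails only on the input side.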

Citation (APA)

Montazery, M., & Wilson, N. (2017). Dominance and optimisation based on scale-invariant maximum margin preference learning. In IJCAI International Joint Conference on Artificial Intelligence (pp. 1209–1215). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2017/168
