Robust hyperplane fitting based on k-th power deviation and α-quantile


Abstract

In this paper, two methods for one-dimensional reduction of data by hyperplane fitting are proposed. One is least α-percentile of squares, an extension of least median of squares estimation, which minimizes the α-percentile of the squared Euclidean distances. The other is least k-th power deviation, an extension of least squares estimation, which minimizes the k-th power deviation of the squared Euclidean distances. In particular, for least k-th power deviation with 0 < k ≤ 1, it is proved that a useful property, called the optimal sampling property, holds for one-dimensional reduction of data by hyperplane fitting: the global optimum for affine hyperplane fitting passes through N data points when an (N − 1)-dimensional hyperplane is fitted to N-dimensional data. The performance of the proposed methods is evaluated by line fitting to artificial data and a real image. © 2011 Springer-Verlag.
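To make the least α-percentile of squares idea concrete, the following is a minimal sketch of 2-D line fitting (not the authors' implementation). It exploits the optimal sampling property described above: in 2-D the optimal line passes through N = 2 data points, so candidate lines through every pair of points are evaluated and the one with the smallest α-quantile of squared point-to-line distances is kept. The function name `lqs_line_fit` and the demo data are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

def lqs_line_fit(points, alpha=0.5):
    """Least alpha-quantile of squares line fit in 2-D (illustrative sketch).

    Candidate lines pass through every pair of data points, motivated by
    the optimal sampling property; the line minimizing the alpha-quantile
    of squared point-to-line distances is returned. O(N^2) candidates.
    """
    best_q, best_line = np.inf, None
    for i, j in combinations(range(len(points)), 2):
        d = points[j] - points[i]
        n = np.array([-d[1], d[0]])            # normal to the line through the pair
        norm = np.linalg.norm(n)
        if norm == 0:                          # coincident points, no line defined
            continue
        n = n / norm
        c = -n @ points[i]                     # line equation: n . x + c = 0
        sq_dist = (points @ n + c) ** 2        # squared distances of all points
        q = np.quantile(sq_dist, alpha)        # alpha-quantile of squared residuals
        if q < best_q:
            best_q, best_line = q, (n, c)
    return best_line, best_q

# Demo: 40 inliers near y = 2x + 1 plus 10 gross outliers.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 40)
inliers = np.column_stack([x, 2 * x + 1 + rng.normal(0, 0.05, 40)])
outliers = rng.uniform(0, 10, (10, 2))
pts = np.vstack([inliers, outliers])
(n, c), q = lqs_line_fit(pts, alpha=0.5)       # alpha = 0.5 recovers least median of squares
```

With α = 0.5 this reduces to least median of squares, so the fit ignores the outliers as long as they are fewer than half of the data; smaller k or different α trade robustness against efficiency, as discussed in the paper.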


APA

Fujiki, J., Akaho, S., Hino, H., & Murata, N. (2011). Robust hyperplane fitting based on k-th power deviation and α-quantile. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6854 LNCS, pp. 278–285). https://doi.org/10.1007/978-3-642-23672-3_34
