Depth statistics

Abstract

In 1975 John Tukey proposed a multivariate median, the ‘deepest’ point in a given data cloud in ℝd. Later, in measuring the depth of an arbitrary point z with respect to the data, David Donoho and Miriam Gasko considered hyperplanes through z and defined its ‘depth’ as the smallest portion of data separated off by such a hyperplane. Since then, these ideas have proved extremely fruitful. A rich statistical methodology has developed that is based on data depth and, more generally, on nonparametric depth statistics. General notions of data depth have been introduced as well as many special ones. These notions differ in their computability, their robustness, and their sensitivity to asymmetric shapes of the data; depending on these properties, each suits particular applications. The upper level sets of a depth statistic provide a family of set-valued statistics, named depth-trimmed or central regions. They describe the distribution regarding its location, scale, and shape, and the most central region serves as a median. The notion of depth has been extended from data clouds, that is, empirical distributions, to general probability distributions on ℝd, thus allowing for laws of large numbers and consistency results. It has also been extended from d-variate data to data in functional spaces.
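The Donoho–Gasko construction can be sketched for bivariate data. The following minimal illustration (not from the chapter; the function name and the grid of 360 scan directions are arbitrary choices) approximates the halfspace depth of a point z by scanning unit directions and recording, for each, the fraction of data points lying in the closed halfplane on one side of the line through z; the depth is the smallest such fraction. Exact algorithms exist for the empirical case, but a direction scan conveys the definition directly.

```python
import math
import random

def halfspace_depth_2d(z, data, n_dirs=360):
    """Approximate Tukey (halfspace) depth of point z in a 2-D data cloud.

    For each of n_dirs unit directions u, count the fraction of points x
    with u . (x - z) >= 0; the depth is the smallest such fraction, i.e.
    the least portion of data separated off by a line through z.
    """
    zx, zy = z
    n = len(data)
    depth = 1.0
    for k in range(n_dirs):
        theta = 2.0 * math.pi * k / n_dirs
        ux, uy = math.cos(theta), math.sin(theta)
        # points in the closed halfplane {x : u . (x - z) >= 0}
        inside = sum(1 for (x, y) in data if ux * (x - zx) + uy * (y - zy) >= 0)
        depth = min(depth, inside / n)
    return depth

# A central point has depth near 1/2; a far-away point has depth near 0.
random.seed(0)
cloud = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(200)]
print(halfspace_depth_2d((0, 0), cloud))  # roughly 1/2
print(halfspace_depth_2d((5, 5), cloud))  # roughly 0
```

The upper level sets of this function, i.e. all points whose depth is at least some threshold, are exactly the depth-trimmed (central) regions mentioned above, and a deepest point is a Tukey median.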

Citation
Mosler, K. (2013). Depth statistics. In Robustness and Complex Data Structures: Festschrift in Honour of Ursula Gather (pp. 17–34). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-642-35494-6_2