Selecting the optimal sample fraction in univariate extreme value estimation

In general, estimators of the extreme value index of i.i.d. random variables depend crucially on the sample fraction used for estimation. In the case of the well-known Hill estimator, the optimal number k_n^opt of largest order statistics was given as a function of certain parameters of the unknown distribution function F, which was assumed to admit a certain expansion. Moreover, an estimator of k_n^opt was proposed that is consistent if a second-order parameter ρ of F belongs to a bounded interval. In contrast, we introduce a sequential procedure that yields a consistent estimator of k_n^opt in the full model, without requiring prior information about ρ. It is then demonstrated that, even in a more general setup, the resulting adaptive Hill estimator is asymptotically as efficient as the Hill estimator based on the optimal number of order statistics. Finally, Monte Carlo simulations show that the procedure performs reasonably well even for moderate sample sizes, and that its performance can be improved further if ρ is restricted to a bounded interval.
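To fix ideas, the Hill estimator referred to above averages the log-excesses of the k largest order statistics over the (k+1)-th largest. The following Python sketch (the function name `hill_estimator` and the Pareto example are illustrative choices, not from the paper) shows the dependence on the sample fraction k that the article's selection procedure addresses:

```python
import numpy as np

def hill_estimator(x, k):
    """Hill estimator of the extreme value index gamma, based on the k
    largest order statistics of the sample x (assumes gamma > 0)."""
    x = np.sort(np.asarray(x, dtype=float))
    n = x.size
    if not 1 <= k < n:
        raise ValueError("k must satisfy 1 <= k < n")
    top = x[n - k:]           # the k largest order statistics
    threshold = x[n - k - 1]  # the (k+1)-th largest order statistic
    return np.mean(np.log(top)) - np.log(threshold)

# Example: Pareto data with tail index alpha = 2 have extreme value
# index gamma = 1/alpha = 0.5; the Hill estimate should be near 0.5.
rng = np.random.default_rng(0)
sample = (1.0 - rng.random(10_000)) ** (-1.0 / 2.0)
print(hill_estimator(sample, k=200))
```

Varying k trades bias against variance: small k gives high variance, large k pulls in observations from the center of the distribution and biases the estimate, which is why a data-driven choice of k_n^opt such as the one proposed in the article matters in practice.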




Drees, H., & Kaufmann, E. (1998). Selecting the optimal sample fraction in univariate extreme value estimation. Stochastic Processes and Their Applications, 75(2), 149–172.
