An efficient approach to deal with the curse of dimensionality in sensitivity analysis computations

Abstract

This paper deals with the computation of sensitivity indices in global sensitivity analysis. Given a model y = f(x1, ..., xk), where the k input factors xi are uncorrelated with one another, one can see y as the realisation of a stochastic process obtained by sampling each of the xi from its marginal distribution. The sensitivity indices are related to the decomposition of the variance of y into terms due to each xi taken singly, as well as terms due to the cooperative effects of more than one factor. When the complete decomposition is considered, the number of sensitivity indices to compute is (2^k − 1), so the computational cost grows exponentially with k. This has been referred to as the curse of dimensionality, and it makes the complete decomposition unfeasible in most practical applications. In this paper we show that the information contained in the samples used to compute suitably defined subsets A of the (2^k − 1) indices can be reused to compute the complementary subsets A* of indices, at no additional cost. This property significantly reduces the growth of the computational cost as k increases. © 2002 Springer-Verlag Berlin Heidelberg.
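To make the quantities in the abstract concrete, here is a minimal sketch of the standard two-matrix Monte Carlo estimator for the first-order indices S_i = V(E[y|x_i]) / V(y). This illustrates the brute-force baseline the paper improves on, not the sample-recycling scheme of Ratto and Saltelli; the model, sample size, and uniform input distributions are illustrative assumptions.

```python
import random

def sobol_first_order(f, k, n=50_000, seed=0):
    """Monte Carlo estimate of the first-order Sobol' indices
    S_i = V(E[y|x_i]) / V(y), using the classic two-matrix (A, B)
    sampling scheme.  Each input x_i is sampled from U(0, 1) as an
    illustrative marginal distribution."""
    rng = random.Random(seed)
    A = [[rng.random() for _ in range(k)] for _ in range(n)]
    B = [[rng.random() for _ in range(k)] for _ in range(n)]
    yA = [f(row) for row in A]
    f0 = sum(yA) / n                            # estimate of E[y]
    V = sum(y * y for y in yA) / n - f0 * f0    # estimate of V(y)
    S = []
    for i in range(k):
        # C_i: rows of B with column i replaced by column i of A
        yC = [f(B[j][:i] + [A[j][i]] + B[j][i + 1:]) for j in range(n)]
        # Estimator of the partial variance V_i = V(E[y|x_i])
        Vi = sum(yA[j] * yC[j] for j in range(n)) / n - f0 * f0
        S.append(Vi / V)
    return S

# Toy additive model y = x1 + 2*x2 with uniform inputs:
# analytically V1 = 1/12, V2 = 4/12, so S1 = 0.2 and S2 = 0.8.
S1, S2 = sobol_first_order(lambda x: x[0] + 2 * x[1], k=2)
```

Note that each index requires its own set of n model evaluations, so estimating all (2^k − 1) terms of the complete decomposition this way scales exponentially in k, which is exactly the cost the paper's reuse of complementary subsets is designed to reduce.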

Citation (APA)

Ratto, M., & Saltelli, A. (2002). An efficient approach to deal with the curse of dimensionality in sensitivity analysis computations. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 2329 LNCS, pp. 196–205). Springer Verlag. https://doi.org/10.1007/3-540-46043-8_19
