Evaluation of Performance (Clinical Prediction Models)

  • Steyerberg, E. W.

When we develop or validate a prediction model, we want to quantify how good the predictions from the model are (model performance). Predictions are absolute risks, which go beyond assessments of relative risks, such as regression coefficients, odds ratios, or hazard ratios. We can distinguish apparent, internally validated, and externally validated model performance (Chap. 5). For all types of validation, we need performance criteria in line with the research questions, and different perspectives can be chosen. We first take the perspective that we want to quantify how close our predictions are to the actual outcome. Next, more specific questions can be asked about calibration and discrimination properties of the model, which are especially relevant for prediction of binary outcomes in individual patients. We will illustrate the use of performance measures in the testicular cancer case study, with model development in 544 patients, internal validation with bootstrapping, and external validation with 273 patients from another centre.
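The abstract distinguishes overall performance from the calibration and discrimination properties of a model for binary outcomes. A minimal sketch of three common measures for these perspectives — the Brier score (overall accuracy of absolute risks), the concordance (c) statistic (discrimination), and calibration-in-the-large — using made-up outcomes and predicted risks, might look like:

```python
def brier_score(y, p):
    """Overall performance: mean squared difference between
    binary outcomes (0/1) and predicted absolute risks."""
    return sum((yi - pi) ** 2 for yi, pi in zip(y, p)) / len(y)

def c_statistic(y, p):
    """Discrimination: probability that a randomly chosen event has a
    higher predicted risk than a randomly chosen non-event (ties count
    as 0.5); equals the ROC area for binary outcomes."""
    events = [pi for yi, pi in zip(y, p) if yi == 1]
    nonevents = [pi for yi, pi in zip(y, p) if yi == 0]
    concordant = 0.0
    for pe in events:
        for pn in nonevents:
            if pe > pn:
                concordant += 1.0
            elif pe == pn:
                concordant += 0.5
    return concordant / (len(events) * len(nonevents))

def calibration_in_the_large(y, p):
    """Calibration-in-the-large: mean observed outcome minus mean
    predicted risk; near zero means predictions are right on average."""
    return sum(y) / len(y) - sum(p) / len(p)

# Hypothetical outcomes and predicted risks (not from the case study):
y = [1, 0, 1, 0, 1, 0, 0, 1]
p = [0.9, 0.2, 0.3, 0.4, 0.6, 0.1, 0.5, 0.8]
```

The data here are illustrative only; in the chapter these measures are applied to the testicular cancer model at apparent, internal (bootstrap), and external validation.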




Steyerberg, E. W. (2009). Evaluation of performance. In Clinical Prediction Models (pp. 255–280). Springer US. https://doi.org/10.1007/978-0-387-77244-8_15
