Is a Model’s Scatter Really “Very Small” or Is Model A Really “Performing Better” Than Model B?

Abstract

Many papers compare a dispersion model’s predictions with field observations and/or with other models’ predictions, using standard model performance measures such as Fractional Bias (FB). Subjective statements are often made, such as “The model has very small scatter” or “Model A is performing better than Model B”. About 30 years ago, we developed the BOOT model evaluation software, which has two main components: 1. Calculation of model performance measures such as FB; and 2. Calculation of confidence limits (e.g., 95%) on performance measures and on the difference in a performance measure between two models, using bootstrap or jackknife resampling. We briefly review the methodology of BOOT’s Component 2, which is seldom used by researchers. We present an example from a project in which several urban puff models’ predictions are compared with JU2003 field data, and in which it is assessed whether, for example, the difference in FB between two models is significantly different from zero at the 95% confidence level.
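For context, the Fractional Bias referred to above is conventionally defined in the Hanna and Chang model evaluation framework as

$$
\mathrm{FB} = \frac{\overline{C_o} - \overline{C_p}}{0.5\left(\overline{C_o} + \overline{C_p}\right)},
$$

where $\overline{C_o}$ and $\overline{C_p}$ are the means of the observed and predicted concentrations. A perfect model gives FB = 0, and |FB| ≈ 0.67 corresponds to the mean being over- or under-predicted by a factor of two.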
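To make the resampling idea behind Component 2 concrete, here is a minimal sketch in Python (assuming NumPy). The function names and the paired percentile-bootstrap design are illustrative assumptions for this page, not BOOT’s actual code or interface:

```python
import numpy as np

def fractional_bias(obs, pred):
    """FB = (mean(obs) - mean(pred)) / (0.5 * (mean(obs) + mean(pred)))."""
    return (obs.mean() - pred.mean()) / (0.5 * (obs.mean() + pred.mean()))

def bootstrap_fb_difference(obs, pred_a, pred_b, n_boot=10_000, alpha=0.05, seed=0):
    """Resample paired (obs, model A, model B) cases with replacement and
    return the (1 - alpha) percentile confidence interval on FB(A) - FB(B).
    Hypothetical helper for illustration, not part of the BOOT software."""
    rng = np.random.default_rng(seed)
    n = len(obs)
    diffs = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample cases, keeping the pairing intact
        diffs[i] = (fractional_bias(obs[idx], pred_a[idx])
                    - fractional_bias(obs[idx], pred_b[idx]))
    lo, hi = np.percentile(diffs, [100 * alpha / 2, 100 * (1 - alpha / 2)])
    return lo, hi

# Illustrative synthetic data: two hypothetical models with different biases.
rng = np.random.default_rng(1)
obs = rng.lognormal(0.0, 1.0, size=200)
pred_a = obs * rng.lognormal(0.05, 0.5, size=200)
pred_b = obs * rng.lognormal(0.15, 0.5, size=200)
print(bootstrap_fb_difference(obs, pred_a, pred_b))
```

If the 95% interval on FB(A) − FB(B) contains zero, the data do not support a claim, at that confidence level, that one model outperforms the other on FB; this is the kind of check the abstract argues should replace subjective “Model A is better” statements.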

Citation (APA)

Hanna, S., & Chang, J. (2020). Is a Model’s Scatter Really “Very Small” or Is Model A Really “Performing Better” Than Model B? In Springer Proceedings in Complexity (pp. 329–334). Springer. https://doi.org/10.1007/978-3-030-22055-6_52
