Objective verification of spatial precipitation forecasts

Abstract

Precipitation is surely one of the most important meteorological variables because of the practical interest it holds for the general public, hydrologists, power plant managers and other economic actors. Nevertheless, whoever works with numerical weather prediction (NWP) models knows how imperfect rainfall forecasts can be, especially at small scales. Forecasters and NWP modelers who care about the quality of their products continuously strive to improve them. Moreover, considering the importance of NWP models for a wide range of end-users, their accuracy in forecasting precipitation must be verified in order to determine their quality and value. As recalled by Doswell (1996), although it should be obvious, a forecast that is not verified is a worthless forecast.

Brier and Allen (1951) proposed three main reasons for forecast verification: administrative, economic and scientific. Often these three reasons go together; indeed, it is necessary to communicate verification results in an effective way to the end-users. The first reason is the need for monitoring an operational forecasting system (e.g., see the ECMWF operational monitoring available online at http://www.ecmwf.int/products/forecasts/guide/Monitoring-the-ECMWF-forecast-system.html) in order to determine how well the system is performing (also considering changes in parameterization schemes, assimilation methods, configuration, etc.) and to guide possible future investments in the updating of weather forecast systems. The second reason is linked to the assessment of the benefits of a correct forecast, from an economic point of view, to decision-making activity or to particular end-user needs. A good quality weather forecast is useful for civil protection, flood risk management and agriculture. The last reason (but not the least!) to verify forecasts involves the examination of the forecast and the corresponding observations. Murphy et al. (1989) and Murphy and Winkler (1992) called this verification activity diagnostic: it allows for the evaluation of model outputs with respect to observations. In this way, verification activities provide valuable feedback to operational weather forecasters, giving indications on how to improve NWP models. Indeed, quantitative precipitation forecast (QPF) skill is considered an indicator of the general capability of an NWP model to produce a good forecast (Mesinger 1996).

The standard verification techniques are based on the comparison of model outputs with observations (typically from rain gauges) valid at the same time and location. Detailed descriptions of such methods can be found in many books, such as Wilks (1995) and Jolliffe and Stephenson (2003).
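To make the point-wise approach concrete, the following is a minimal sketch (our illustration, not code from the chapter) of the usual categorical comparison: forecast-observation pairs are thresholded into a 2x2 contingency table, from which common scores such as the probability of detection (POD), false alarm ratio (FAR) and equitable threat score (ETS) are derived. The function name and the 1 mm rain/no-rain threshold are illustrative assumptions.

```python
import numpy as np

def categorical_scores(fcst, obs, threshold=1.0):
    """Point-wise categorical verification of precipitation amounts.

    fcst, obs : arrays of forecast and observed rainfall (mm) valid at
                the same times and locations (e.g., rain-gauge sites).
    threshold : rain/no-rain threshold in mm (illustrative choice).
    """
    f = np.asarray(fcst) >= threshold
    o = np.asarray(obs) >= threshold

    hits = np.sum(f & o)           # forecast yes, observed yes
    false_alarms = np.sum(f & ~o)  # forecast yes, observed no
    misses = np.sum(~f & o)        # forecast no, observed yes
    n = f.size

    # Hits expected by chance, used by the equitable threat score.
    hits_random = (hits + false_alarms) * (hits + misses) / n

    pod = hits / (hits + misses)                # probability of detection
    far = false_alarms / (hits + false_alarms)  # false alarm ratio
    ets = (hits - hits_random) / (hits + misses + false_alarms - hits_random)
    return pod, far, ets
```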
However, due to the difficulty of modeling the atmospheric processes related to rainfall (which sometimes has short decorrelation lengths of about 5–20 km and high variability in space and time), it is not surprising that the space-time distribution of a modeled precipitation field shows some differences from the real one. The resulting statistics can unjustly penalize high resolution models that make realistic forecasts of rainfall patterns that are nevertheless shifted with respect to observations (Mass et al. 2002; Weygandt et al. 2004). In fact, high resolution models can reproduce precipitation patterns more accurately than coarse resolution ones, but they are often prone to displacement errors for a variety of reasons (e.g., the stochastic behavior of the atmosphere, the lack of adequate initialization, the difficulty of modeling microphysical processes), especially when convective precipitation is involved.

Another important aspect of verification activity is that verification results can depend upon the reliability of the observations. For instance, rain gauges give only point measurements, whereas areal estimates are needed to verify forecasts. Other ground-based or space-based sensors can give estimates of the actual precipitation field at different spatial scales, but they may also be affected by large errors. Consequently, both rain gauges and sensors suffer from some limitations; we shall briefly illustrate some of them in the next section. Hence, the verification of precipitation fields must be treated with much more care than the verification of other well-behaved meteorological variables, such as pressure and temperature.

Visual verification could provide a valid representation of model performance, but it is time-consuming, and personal biases may affect the model evaluation. An objective technique verifying precipitation events, much in the way a human would in a subjective evaluation, would likely produce a more reliable assessment of model performance. Unlike subjective verification, which is insufficient to verify many events, objective verification allows evaluating weather forecast systems and assessing variability on many time and space scales. The aim is to judge model performance taking into account the complexity of the problem, which is possible only with a large number of events.

Objective verification is an on-going field of research and only some aspects can be treated in a single chapter. Several new verification techniques have recently been developed by the meteorological community. These new methods involve, for instance, the use of Fourier spectral analysis (e.g., Harris et al. 2001; Zepeda-Arce et al. 2000) or an object-oriented approach (e.g., Ebert and McBride 2000; Casati et al. 2004). We shall show a couple of applications of an object-oriented method, in particular the contiguous rain area (CRA) analysis (Ebert and McBride 2000). The CRA technique searches for the disagreement between forecast and observed patterns. The displacement disagreement is obtained by shifting the forecast rainfall pattern over the observed pattern until a best-fit criterion is satisfied. This criterion was originally the mean square error (MSE), especially when verification concerns the forecast ability to match the field maxima (e.g., Ebert and McBride 2000; Mariani et al. 2005), even though some authors (e.g., Tartaglione et al. 2005; Grams et al. 2006) have also suggested the correlation as a best-fit criterion. Hereafter, by displacements we shall mean all those satisfying the best-fit criterion.
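The core of the displacement search just described can be illustrated with a short sketch (an illustration in the spirit of the CRA technique, not code from the chapter): the forecast field is shifted over the observed field within a maximum search radius, and the shift minimizing the MSE (or, alternatively, maximizing the correlation) is retained. The assumption that both fields lie on the same regular grid, and the simple shrinking-overlap treatment of points shifted outside the domain, are simplifications made here.

```python
import numpy as np

def cra_best_shift(fcst, obs, max_shift=10, criterion="mse"):
    """Search for the displacement of a forecast rain field that best
    matches the observed field, in the spirit of the CRA technique
    (Ebert and McBride 2000). Simplified sketch: both 2-D fields share
    one regular grid, and only the overlap region is compared as the
    forecast is shifted by (dy, dx) grid points.
    """
    best, best_score = None, None
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            # Overlapping sub-arrays of the shifted forecast and the obs.
            f = fcst[max(0, -dy):fcst.shape[0] - max(0, dy),
                     max(0, -dx):fcst.shape[1] - max(0, dx)]
            o = obs[max(0, dy):obs.shape[0] - max(0, -dy),
                    max(0, dx):obs.shape[1] - max(0, -dx)]
            if criterion == "mse":
                score = np.mean((f - o) ** 2)  # minimize mean square error
                better = best_score is None or score < best_score
            else:
                score = np.corrcoef(f.ravel(), o.ravel())[0, 1]  # maximize r
                better = best_score is None or score > best_score
            if better:
                best_score, best = score, (dy, dx)
    return best, best_score
```

The returned (dy, dx) pair is the location error estimate; once the forecast is shifted by it, the residual error can be decomposed into displacement, volume and pattern components, as done in CRA analysis.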
The chapter is organized as follows. Section 17.2 is dedicated to discussing the issues of accuracy and representativeness of rainfall observations; the authors feel that there is a need to care for observations in addition to the forecasts, and this is particularly true for precipitation. In Sect. 17.3 an example is given of how observations can affect verification outcomes. Section 17.4 discusses the application of the CRA technique to a large number of precipitation events in order to obtain a statistically robust and objective evaluation of location errors. Evaluation is performed from two points of view: absolute verification (evaluation against observations) and comparative verification (model evaluation against observations and inter-model comparison). An example of such an analysis is shown in Sect. 17.5. Finally, conclusions are drawn in Sect. 17.6.

Citation (APA)

Tartaglione, N., Mariani, S., Accadia, C., Michaelides, S., & Casaioli, M. (2008). Objective verification of spatial precipitation forecasts. In Precipitation: Advances in Measurement, Estimation and Prediction (pp. 453–472). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-77655-0_17
