Probabilistic evaluation of ensemble precipitation forecasts

Abstract

Weather forecast systems have to be evaluated, and the evaluation errors have to be quantified. Nowadays, limited-area numerical weather prediction systems provide meteorological forecasts with kilometer-scale horizontal grid spacing. High-resolution precipitation forecasts are of primary interest; in flood forecasting systems, for example, the precipitation details are a crucial input parameter. Here, as an illustrative example, daily area-mean precipitation forecasts are evaluated for Switzerland, with a total area of 41,300 km², and for Swiss mountainous catchments with typical areas as small as about 1,500 km² (cf. Fig. 1).

Recently, ensemble prediction systems (EPS) have become operational. They predict forecast probabilities by integrating an ensemble of numerical weather prediction model runs started from slightly different initial states and model parameters (Ehrendorfer 1997; Palmer 2000). The motivation for the EPS is that the spread of the ensemble forecasts indicates forecast uncertainty, and that interpreting the forecast probabilities yields better results than interpreting a single deterministic forecast initialized with the best-known, but nonetheless uncertain, atmospheric state. Zhu et al. (2002) showed with a simple cost-loss model that for most users ensemble forecasts offer a higher economic value than a deterministic forecast. Here, EPS precipitation forecasts of the limited-area EPS COSMO-LEPS (Montani et al. 2003) with a grid spacing of 10 km are evaluated. The evaluation period covers the years 2005 and 2006, and the evaluation areas are Switzerland and three Swiss catchments (cf. Fig. 1): one pre-alpine catchment, the Thur, and two alpine catchments, the Aare (part of an elongated wet anomaly extending along the northern rim of the Alps) and the Hinterrhein (a relatively dry inner-alpine area).
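As a minimal sketch of how an EPS yields forecast probabilities, the fraction of ensemble members predicting an event can serve as the probability of that event. The member values below are invented for illustration, not actual COSMO-LEPS output:

```python
import numpy as np

# Hypothetical 16-member ensemble of daily catchment-mean precipitation
# forecasts in mm (illustrative values only, not COSMO-LEPS output).
members = np.array([2.1, 0.0, 5.3, 12.7, 8.4, 0.5, 3.3, 15.0,
                    7.2, 1.1, 9.8, 4.4, 0.0, 6.6, 11.2, 2.9])

# Dichotomous event: more than 10 mm precipitation in the area and period.
threshold = 10.0

# Forecast probability = fraction of members predicting the event.
p_event = float(np.mean(members > threshold))
```

With these invented values, three of the sixteen members exceed the threshold, so the forecast probability is 3/16.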
The most important ingredient of the evaluation of meteorological forecasts is the comparison with meteorological observations. But what is the best observational reference? Rain-station data are commonly preferred to remote-sensing data, in particular radar data, because of the latter's relatively large measurement uncertainties, especially in mountainous areas (e.g., Young et al. 1999; Ciach et al. 2000; Adler et al. 2001). A typical distance between precipitation observation sites with daily observation frequency in the European Alps is 10 km, and substantially larger if only near-real-time data are considered (cf. Fig. 1 for the distribution of precipitation stations in Switzerland). This is a comparatively dense observation network, but precipitation is a quantity with high spatial variability. It is therefore a valid question whether such a density of observations allows for the evaluation of daily catchment precipitation forecasts. What is the uncertainty in observational estimates of catchment-mean precipitation, and is the resulting evaluation uncertainty small enough to compare different versions of the EPS over reasonably short (e.g., three-month) evaluation periods?

Observational estimates of catchment-mean precipitation can be determined by a variety of methods. The simplest method approximates the catchment-mean precipitation by the arithmetic mean of the in-catchment rain-station observations. More elaborate methods regionalize the observations and average the resulting precipitation field over the catchment. Regionalization can be done by some fitting approach yielding a precipitation analysis. For example, a recent analysis of precipitation for the European Alps by Frei and Schär (1998) has a time resolution of 24 h and a spatial grid of about 25 km, with regionally even lower effective resolution depending on the available surface station network. This type of analysis is useful for model validation at the 100 km scale (see, e.g., Ahrens et al. 1998; Ferretti et al. 2000; Frei et al. 2003), but probably yields substantial evaluation uncertainties at smaller scales. A fitting analysis is a smoothing regionalization, which degrades its usefulness in higher-moment evaluation statistics if the network is not dense enough. The statement "dense enough" critically depends on the pixel support of the observations (for what area is an observation representative?) and on the analysis scheme. Another regionalization approach is stochastic simulation of precipitation fields conditioned on the available station data. The idea is that the station data are honored while the spatial variability is represented more realistically than in a smoothing analysis. Additionally, an ensemble of observation-based fields (i.e., observational references) can be simulated. The forecast can then be compared with an ensemble of references that are equally valid realizations of precipitation fields given the available measurements. This allows for easy quantification of the evaluation uncertainty caused by the averaging uncertainty, as will be shown below.

A set of useful evaluation statistics has to be chosen. Many are discussed in the literature; the interested reader is referred to, for example, Murphy and Winkler (1987), Wilks (2006), and Wilson (2001). For illustration, we apply only a small set of skill scores: the commonly used Brier Skill Score (BSS) and the recently developed Mutual Information Skill Scores (MISs) (Ahrens and Walser 2007). Both skill scores assess probability forecasts of dichotomous events (e.g., the probability of more than 10 mm precipitation in the period and area of interest). The observational reference is typically assumed to be certain: the observed event probability is either zero or one, and the uncertainty in the observed catchment precipitation is often neglected. Here, the uncertainty of averaging rain-station data to the catchment scale is considered explicitly.
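The simplest observational reference described above, the arithmetic mean of the in-catchment stations, can be sketched as follows. The station values are hypothetical, and a plain bootstrap over stations stands in for the conditional stochastic simulation of precipitation fields, as a rough indicator of the averaging uncertainty only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily totals (mm) from six rain stations inside a catchment.
stations = np.array([12.0, 8.5, 15.2, 9.8, 11.1, 20.4])

# Simplest observational reference: arithmetic mean of in-catchment stations.
catchment_mean = stations.mean()

# Crude stand-in for an ensemble of observational references: bootstrap
# resampling of the stations. (The text instead simulates precipitation
# fields conditioned on the station data, which also restores realistic
# spatial variability; the bootstrap only illustrates sampling spread.)
boot_means = np.array([
    rng.choice(stations, size=stations.size, replace=True).mean()
    for _ in range(1000)
])
averaging_spread = boot_means.std()
```

The spread of the resampled means gives a first impression of how uncertain the station-based catchment mean is for a sparse network.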
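For the Brier Skill Score mentioned above, a minimal sketch follows from the standard definitions BS = mean((p − o)²) and BSS = 1 − BS/BS_ref, with a climatological probability as the reference forecast. The forecast probabilities, outcomes, and climatology below are invented for illustration:

```python
import numpy as np

def brier_skill_score(p_fc, obs_event, p_clim):
    """Brier Skill Score for probability forecasts of a dichotomous event.

    p_fc      : forecast event probabilities, one per forecast case
    obs_event : observed occurrence of the event (0 or 1 per case)
    p_clim    : climatological event probability (reference forecast)
    """
    bs = np.mean((p_fc - obs_event) ** 2)        # Brier score of the forecast
    bs_ref = np.mean((p_clim - obs_event) ** 2)  # Brier score of climatology
    return 1.0 - bs / bs_ref                     # 1 = perfect, 0 = no skill

# Illustrative values only (five forecast cases, climatology of 0.5).
p_fc = np.array([0.9, 0.1, 0.7, 0.2, 0.8])
obs = np.array([1, 0, 1, 0, 1])
bss = brier_skill_score(p_fc, obs, p_clim=0.5)
```

With an ensemble of observational references, as advocated in the text, `obs_event` need not be strictly 0 or 1: the observed event probability can itself be estimated from the reference ensemble, which is one way the averaging uncertainty enters the score.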

Ahrens, B., & Jaun, S. (2008). Probabilistic evaluation of ensemble precipitation forecasts. In Precipitation: Advances in Measurement, Estimation and Prediction (pp. 367–388). Springer Berlin Heidelberg. https://doi.org/10.1007/978-3-540-77655-0_14
