Seismic activity is routinely quantified using mean values of event rate or interevent time. Standard estimates of the error on such means implicitly assume that the events used to calculate them are independent. However, earthquakes can be triggered by other events and are thus not necessarily independent. As a result, the errors on mean earthquake interevent times do not exhibit the Gaussian convergence with increasing sample size predicted by the central limit theorem. In this paper we investigate how the errors decay with sample size in real earthquake catalogues and how the nature of this convergence varies with the spatial extent of the region under investigation. We demonstrate that the errors in mean interevent times, as a function of sample size, are well estimated by defining an effective sample size, using the autocorrelation function to estimate the number of independent data points in samples of different length. This allows us to accurately project error estimates from finite natural earthquake catalogues into the future and motivates a definition of stability wherein the autocorrelation function does not vary in time. The technique is easy to apply, and we suggest that it be routinely used to define errors on mean interevent times as part of seismic hazard assessment studies. This is particularly important for studies that use small catalogue subsets (fewer than ∼1000 events) in time-dependent or high spatial resolution (e.g., for catastrophe modeling) hazard assessment. Copyright 2009 by the American Geophysical Union.
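As a minimal sketch of the idea described above (not the authors' exact procedure), the effective sample size of a correlated series can be estimated from its sample autocorrelation function using the standard relation N_eff = N / (1 + 2 Σ ρ(k)), and the error on the mean interevent time then follows as σ / √N_eff. The truncation rule at the first non-positive autocorrelation value is an illustrative assumption, not taken from the paper.

```python
import numpy as np

def effective_sample_size(x, max_lag=None):
    """Estimate the effective number of independent data points in a
    correlated series x, using N_eff = N / (1 + 2 * sum_k rho(k)).
    The ACF sum is truncated at the first non-positive value to
    limit noise at large lags (an assumed, illustrative rule)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    if max_lag is None:
        max_lag = n // 2
    x = x - x.mean()
    var = np.dot(x, x) / n
    acf_sum = 0.0
    for k in range(1, max_lag):
        rho = np.dot(x[:-k], x[k:]) / (n * var)  # biased ACF estimator at lag k
        if rho <= 0:
            break
        acf_sum += rho
    return n / (1.0 + 2.0 * acf_sum)

def mean_interevent_time_error(interevent_times):
    """Standard error of the mean interevent time, corrected for
    correlation between events via the effective sample size."""
    t = np.asarray(interevent_times, dtype=float)
    n_eff = effective_sample_size(t)
    return t.std(ddof=1) / np.sqrt(n_eff)

# Usage with hypothetical event times (in days) from a catalogue:
# interevent = np.diff(event_times_in_days)
# print(np.mean(interevent), "+/-", mean_interevent_time_error(interevent))
```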
Citation:
Naylor, M., Main, I. G., & Touati, S. (2009). Quantifying uncertainty in mean earthquake interevent times for a finite sample. Journal of Geophysical Research: Solid Earth, 114(1). https://doi.org/10.1029/2008JB005870