Because paleoseismology can extend the record of earthquakes back in time by several millennia, it offers an opportunity to study how earthquakes recur through time and thus to provide innovative contributions to seismic hazard assessment. From a database of paleoseismological recurrence we collected 19 sequences with 5 to 14 dated events on a single fault. Using the ages of the paleoearthquakes, with their associated uncertainties, together with the historical earthquakes, we tested the null hypothesis that the observed inter-event times come from a uniform random distribution (Poisson model). We used the concept of the likelihood of a specific sequence of events under a given occurrence model: the difference dlnL between the likelihoods estimated under two hypotheses indicates which of the two better fits the observations. To account for the uncertainties, we used a Monte Carlo procedure, computing the average and standard deviation of dlnL over 1000 inter-event sets obtained by drawing the occurrence time of each event within its uncertainty limits, and estimating the probability that a value equal to or larger than an observed dlnL arises by chance from a Poisson distribution of inter-event times. These tests were carried out for the Log-normal, Gamma, Weibull, Double-exponential and Brownian Passage Time (BPT) distributions. Our results show that a renewal model, which implies a time-dependent hazard and some degree of predictability of the next large earthquake on a fault, is significantly better than a plain time-independent Poisson model for only four of the 19 sites examined in this study. The lack of regularity in earthquake occurrence for more than 30% of the examined faults can be explained either by the large uncertainties in the estimated paleoseismological occurrence times or by physical interaction between neighboring faults. © 2012 Elsevier B.V.
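The Monte Carlo procedure described above can be sketched as follows. This is a minimal illustration, not the authors' code: it compares a Poisson model (exponential inter-event times) against one example renewal model (Log-normal), samples each event age uniformly within hypothetical uncertainty bounds, and returns the mean and standard deviation of dlnL over the trials. The age arrays, bounds, and uniform sampling are all assumptions made for demonstration.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def log_likelihood(intervals, model):
    """Sum of log-densities of the inter-event times under a fitted model."""
    if model == "poisson":
        # A Poisson process implies exponentially distributed inter-event times
        return np.sum(stats.expon.logpdf(intervals, scale=intervals.mean()))
    if model == "lognormal":
        # Fit a Log-normal renewal model (location fixed at zero)
        shape, loc, scale = stats.lognorm.fit(intervals, floc=0)
        return np.sum(stats.lognorm.logpdf(intervals, shape, loc, scale))
    raise ValueError(model)

def monte_carlo_dlnl(age_lo, age_hi, n_trials=1000):
    """Draw event ages uniformly within their uncertainty bounds and return
    the mean and std of dlnL = lnL(renewal) - lnL(Poisson) over the trials."""
    dlnl = np.empty(n_trials)
    for i in range(n_trials):
        ages = np.sort(rng.uniform(age_lo, age_hi))  # one plausible chronology
        intervals = np.diff(ages)                    # inter-event times
        dlnl[i] = (log_likelihood(intervals, "lognormal")
                   - log_likelihood(intervals, "poisson"))
    return dlnl.mean(), dlnl.std()

# Hypothetical dating bounds (years BP) for six paleoearthquakes on one fault
age_lo = np.array([200.0, 900.0, 1700.0, 2600.0, 3600.0, 4400.0])
age_hi = np.array([400.0, 1200.0, 2000.0, 3000.0, 3900.0, 4800.0])
mean_dlnl, std_dlnl = monte_carlo_dlnl(age_lo, age_hi)
```

A positive mean dlnL that is large relative to its standard deviation would favor the renewal model over the Poisson null; the significance of an observed dlnL would then be assessed against values generated under Poisson-simulated inter-event sets, as the abstract describes.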