Enabling very-large scale earthquake simulations on parallel machines

22 citations · 16 Mendeley readers

This article is free to access.

Abstract

The Southern California Earthquake Center (SCEC) initiated a major large-scale earthquake simulation called TeraShake. The simulations propagated seismic waves across a 600 × 300 × 80 km domain at 200-meter resolution, making them some of the largest and most detailed earthquake simulations of the southern San Andreas fault. The output from a single simulation may be as large as 47 terabytes of data spread across 400,000 files. Executing these large simulations requires a high level of expertise and resource coordination. We describe how we performed single-processor optimization of the application, optimization of the I/O handling, and optimization of execution initialization. We also examine the challenges presented by run-time data archive management and visualization. The improvements made to the application as it was recently scaled up to 40k BlueGene processors have created a community code that the wider SCEC community can use to perform large-scale earthquake simulations. © Springer-Verlag Berlin Heidelberg 2007.

Citation (APA)

Cui, Y., Moore, R., Olsen, K., Chourasia, A., Maechling, P., Minster, B., … Jordan, T. (2007). Enabling very-large scale earthquake simulations on parallel machines. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4487 LNCS, pp. 46–53). Springer Verlag. https://doi.org/10.1007/978-3-540-72584-8_7
