Petascale Computing
SCEC/CME seeks to simulate ground motions at frequencies of 3 Hz and higher. We are working to identify the critical physical scales that characterize this 3-Hz objective. Our current belief is that 3-Hz deterministic simulations at regional scale are petascale calculations.
The highest frequency (fmax) at which deterministic 3D ground motion simulations are reliable is usually considered to be about 0.5 Hz. However, a rigorous analysis of the accuracy of deterministic ground motion prediction at higher frequencies has not been carried out to date, and it is an important goal of this proposal to test whether fmax can be pushed as high as 3 Hz. For simplicity, consider a 4th-order staggered-grid FD method using a constant grid size (dh) with at least 5 points per wavelength. We define the scale ratio as the outer scale (the domain dimension) divided by the inner scale (dh), i.e., the number of grid points per dimension. A minimum shear-wave velocity (Vsmin) of about 150 m/s dictates dh = 10 m at 3 Hz. A relatively small earthquake scenario with a scale ratio of ~10^3 (i.e., the immediate vicinity of the 1994 M6.7 Northridge event) then requires 48 billion grid points. The M7.7 San Andreas fault TeraShake simulations (scale ratio of ~10^4) computed to 3 Hz would require 14 trillion grid points. It is possible, however, that the use of a variable-grid mesh (for example, with a FE method) may reduce these numbers somewhat.
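To make the arithmetic concrete, the sketch below reproduces these grid-point counts from the stated parameters. The TeraShake box (600 x 300 x 80 km) is the published domain; the Northridge-scale box dimensions are illustrative assumptions chosen to match the quoted ~10^3 scale ratio, not values taken from an actual simulation setup.

```python
# Back-of-envelope grid sizing for a 4th-order staggered-grid FD scheme.
POINTS_PER_WAVELENGTH = 5      # minimum sampling for the FD stencil
VS_MIN = 150.0                 # minimum shear-wave velocity (m/s)
F_MAX = 3.0                    # target maximum frequency (Hz)

lambda_min = VS_MIN / F_MAX              # shortest wavelength = 50 m
dh = lambda_min / POINTS_PER_WAVELENGTH  # grid spacing = 10 m

def grid_points(x_km, y_km, z_km, dh_m):
    """Total grid points for a box with the given dimensions."""
    to_pts = lambda km: int(km * 1000.0 / dh_m)
    return to_pts(x_km) * to_pts(y_km) * to_pts(z_km)

print(f"dh = {dh:.0f} m at {F_MAX:.0f} Hz")
# Northridge-scale box (~10^3 scale ratio); dimensions are assumed
print(f"Northridge-scale (54 x 42 x 21 km): {grid_points(54, 42, 21, dh):.1e} points")
# TeraShake domain (600 x 300 x 80 km, ~10^4 scale ratio)
print(f"TeraShake (600 x 300 x 80 km): {grid_points(600, 300, 80, dh):.1e} points")
```

At dh = 10 m these boxes yield ~4.8e10 and ~1.4e13 points, consistent with the 48 billion and 14 trillion figures above.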
Resolving the critical physical scales necessary for 3-Hz simulations is a petascale problem. To illustrate this, assume a maximum turn-around time of a week for a ground motion simulation (the approximate attention span of a seismologist), a processor speed 4 times the present, and the availability of ~10^4 processors in 3-5 years, providing nearly 1 Pflops. With these assumptions, the smaller (Northridge) event just makes the cut. While the state-of-the-art 0.5-Hz TeraShake simulations, which took 4-5 days on 240 processors in 2005, would need less than an hour to complete, a 3-Hz TeraShake simulation would still be beyond reach 3-5 years from now. However, a 1-2 Hz TeraShake simulation (scale ratio ~2x10^3) is feasible.
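A rough scaling argument, calibrated against the quoted 0.5-Hz TeraShake run, illustrates these turn-around times. The TeraShake mesh size (~1.8 billion points at dh = 200 m), the timestep counts, and the simulated shaking durations are assumed ballpark values, not figures given in this proposal.

```python
# Scaling estimate calibrated against the 0.5-Hz TeraShake run
# (4-5 days on 240 processors in 2005, per the text).
DAY = 86400.0

# Calibration point: 0.5-Hz TeraShake, 2005
ts_points = 1.8e9        # assumed: 600 x 300 x 80 km at dh = 200 m
ts_steps = 2.3e4         # assumed: ~250 s of shaking
ts_cpu_seconds = 4.5 * DAY * 240
rate_2005 = ts_points * ts_steps / ts_cpu_seconds  # point-steps per CPU-second

# Projected machine: 4x faster processors, ~1e4 of them (per the text)
rate_future = rate_2005 * 4 * 1e4

def days_to_run(n_points, n_steps):
    """Wall-clock days on the projected machine."""
    return n_points * n_steps / rate_future / DAY

# 3-Hz runs need dh = 10 m, hence ~1e5-4e5 CFL-limited steps (assumed)
print(f"Northridge @ 3 Hz:  {days_to_run(48e9, 1e5):6.1f} days")    # ~ days
print(f"TeraShake  @ 3 Hz:  {days_to_run(14e12, 4e5):6.0f} days")   # ~ decade
print(f"TeraShake @ 0.5 Hz: {days_to_run(ts_points, ts_steps)*24:4.1f} hours")
```

Under these assumptions the Northridge-scale run finishes in a few days, the 0.5-Hz TeraShake run in under an hour, and the 3-Hz TeraShake run in roughly a decade, matching the feasibility claims above.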
A number of IT issues will have to be addressed before these goals can be reached. A high degree of parallelization and the ability to effectively manage petabyte file systems will be fundamental to solving these problems with petascale computing. Despite impressive performance, the current codes will need to be significantly improved and optimized. The simulations will require not only a large processor count but also a well-balanced system capable of efficiently managing file systems across ~10^4 nodes. Processor clock speed, interconnect performance, memory bandwidth, and I/O performance will be key elements, and memory bandwidth and global network performance must be balanced against processor performance.
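As a rough illustration of why petabyte file systems are needed, the following sketch estimates the output volume of a 3-Hz TeraShake-scale run. The output cadence, precision, and surface-only decimation are hypothetical choices, not parameters of the actual codes.

```python
# Illustrative output-volume estimate for a 3-Hz TeraShake-scale run.
GB = 1e9

surface_points = 60000 * 30000   # 600 x 300 km free surface at dh = 10 m
components = 3                   # 3-component velocity wavefield
bytes_per_value = 4              # single precision (assumed)
n_steps = 4e5                    # ~250 s at a CFL-limited dt (assumed)
output_every = 20                # assumed decimation of the time axis

per_frame = surface_points * components * bytes_per_value
total = per_frame * n_steps / output_every

print(f"surface frame:        {per_frame / GB:6.1f} GB")
print(f"surface time history: {total / GB / 1000:6.0f} TB")
# Even surface-only output approaches half a petabyte; a single
# full-volume snapshot (1.4e13 points) is ~170 TB on its own.
```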