CyberShake Study 17.3
CyberShake Study 17.3 is a computational study to calculate two CyberShake hazard models - one with a 1D velocity model and one with a 3D velocity model - at 1 Hz in a new region, CyberShake Central California. We will use the GPU implementation of AWP-ODC-SGT, the Graves & Pitarka (2014) rupture variations with 200 m spacing and uniform hypocenters, and the UCERF2 ERF. The SGT and post-processing calculations will each be run on both NCSA Blue Waters and OLCF Titan.
Contents
- 1 Status
- 2 Data Products
- 3 Science Goals
- 4 Technical Goals
- 5 Sites
- 6 Velocity Models
- 7 Verification
- 8 Performance Enhancements (over Study 15.4)
- 9 Lessons Learned
- 10 Codes
- 11 Output Data Products
- 12 Computational and Data Estimates
- 13 Performance Metrics
- 14 Production Checklist
- 15 Presentations, Posters, and Papers
- 16 Related Entries
Status
Study 17.3 began on March 6, 2017 at 12:14:10 PST.
Study 17.3 completed on April 6, 2017 at 7:37:16 PDT.
Data Products
Hazard maps from Study 17.3 are available here: Study 17.3 Data Products
Hazard curves produced from Study 17.3 using the CCA 1D model have dataset ID=80; those produced using the CCA-06 3D model have dataset ID=81.
Individual runs can be identified in the CyberShake database by searching for runs with Study_ID=8.
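For illustration, a minimal sketch of such a query is below. The table and column names (CyberShake_Runs, Run_ID, Site_ID, Status, Study_ID) follow the public CyberShake schema, but they and the connection details should be treated as assumptions, not the production tooling.

```python
# Minimal sketch of listing Study 17.3 runs from the CyberShake database.
# Table/column names and connection details are assumptions based on the
# public CyberShake schema, not the production tooling.
import pymysql

STUDY_ID = 8  # Study 17.3 runs, per the text above

conn = pymysql.connect(host="moment.usc.edu", user="cybershk_ro",
                       password="...", database="CyberShake")
try:
    with conn.cursor() as cur:
        cur.execute("SELECT Run_ID, Site_ID, Status FROM CyberShake_Runs "
                    "WHERE Study_ID = %s", (STUDY_ID,))
        for run_id, site_id, status in cur.fetchall():
            print(run_id, site_id, status)
finally:
    conn.close()
```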
Science Goals
The science goals for Study 17.3 are:
- Expand CyberShake to include Central California sites.
- Create CyberShake models using both a Central California 1D velocity model and a 3D model (CCA-06).
- Calculate hazard curves for population centers, seismic network sites, and electrical and water infrastructure sites.
Technical Goals
The technical goals for Study 17.3 are:
- Run end-to-end CyberShake workflows on Titan, including post-processing.
- Show that the database migration improved database performance.
Sites
We will run a total of 438 sites as part of Study 17.3. A KML file of these sites, along with the Central and Southern California boxes, is available here (with names) or here (without names).
We created a Central California CyberShake box, defined here.
We have identified a list of 408 sites which fall within the box and outside of the CyberShake Southern California box. These include:
- 310 sites on a 10 km grid
- 54 CISN broadband or PG&E stations, decimated so they are at least 5 km apart and no closer than 2 km to another station (a rough sketch of this kind of decimation appears at the end of this section).
- 30 cities used by the USGS in locating earthquakes
- 4 PG&E pumping stations
- 6 historic Spanish missions
- 4 OBS stations
In addition, we will include 30 sites which overlap with the Southern California box (24 on the 10 km grid, 5 on the 5 km grid, 1 SCSN station), enabling direct comparison of results.
We will prioritize the pumping stations and the overlapping sites.
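The station decimation mentioned above amounts to a greedy distance filter. Below is a rough sketch of such a filter, assuming a haversine distance and (name, lat, lon) tuples as input; it is illustrative only, not the actual Study 17.3 site-selection script.

```python
# A rough sketch of greedy station decimation: keep a station only if it is
# at least min_sep_km from every station already accepted. Illustrative only,
# not the actual Study 17.3 site-selection script.
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points (degrees)."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def decimate(stations, min_sep_km=5.0, existing=()):
    """stations/existing: iterables of (name, lat, lon); existing seeds the kept set."""
    kept = list(existing)
    out = []
    for name, lat, lon in stations:
        if all(haversine_km(lat, lon, klat, klon) >= min_sep_km
               for _, klat, klon in kept):
            kept.append((name, lat, lon))
            out.append((name, lat, lon))
    return out
```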
Velocity Models
We are planning to use 2 velocity models in Study 17.3. We will enforce a Vs minimum of 900 m/s, a minimum Vp of 1600 m/s, and a minimum rho of 1600 kg/m^3.
- CCA-06, a 3D model created via tomographic inversion by En-Jui Lee. This model has no GTL. Our order of preference will be:
  - CCA-06
  - CVM-S4.26
  - SCEC background 1D model
- CCA-1D, a 1D model created by averaging CCA-06 throughout the Central California region.
We will run the 1D and 3D model concurrently.
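A minimal sketch of enforcing these material minimums on a queried velocity profile is below; the function name and array layout are assumptions for illustration (the production meshes are generated with UCVM-based tools).

```python
# Minimal sketch of enforcing the Study 17.3 material minimums
# (Vs >= 900 m/s, Vp >= 1600 m/s, rho >= 1600 kg/m^3) on a queried profile.
# The array layout is an assumption; production meshes are built with UCVM.
import numpy as np

VS_MIN, VP_MIN, RHO_MIN = 900.0, 1600.0, 1600.0

def apply_minimums(vp, vs, rho):
    """Clamp Vp (m/s), Vs (m/s), and density (kg/m^3) to the study floors."""
    vs = np.maximum(vs, VS_MIN)
    vp = np.maximum(vp, VP_MIN)
    rho = np.maximum(rho, RHO_MIN)
    return vp, vs, rho
```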
Verification
Since we are moving to a new region, we calculated GMPE maps for this region, available here: Central California GMPE Maps
As part of our verification work, we plan to do runs using both the 1D and 3D model for the following 3 sites in the overlapping region:
- s001
- OSI
- s169
Blue Waters/Titan Verification
We ran s001 on both Blue Waters and Titan. Hazard curves are below, and very closely match:
Below are two seismograms, which also closely match:
NT check
To verify that NT for the study (5000 timesteps = 437.5 sec) is long enough, I extracted a northern SAF seismogram for s1252, one of the southernmost sites to include northern SAF events. The seismogram is below; it tapers off around 350 seconds.
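A quick arithmetic check of the seismogram length, using the study parameters:

```python
# Check that the planned seismogram length covers the s1252 waveform:
# 5000 timesteps at the synthesis dt of 0.0875 s gives 437.5 s, comfortably
# past the ~350 s point where the northern SAF seismogram tapers off.
nt = 5000
dt = 0.0875            # s, seismogram-synthesis timestep (SGT dt is 0.00875 s)
duration = nt * dt     # 437.5 s
assert duration > 350.0
print(f"seismogram duration: {duration} s")
```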
Study 15.4 Verification
We ran the USC site through the Study 16.9 code base on both Blue Waters and Titan with the Study 15.4 parameters. Hazard curves are below, and very closely match:
Plots of the seismograms show excellent agreement:
We accidentally ran with a depth of 50.4 km first. Here are seismogram plots illustrating the difference between running SGTs with depth of 40 km vs 50.4 km.
Velocity Model Verification
Cross-section plots of the velocity models are available here.
200 km cutoff effects
We are investigating the impact of the 200 km cutoff as it pertains to including/excluding northern SAF events. This is documented here: CCA N SAF Tests.
Impulse difference
Here's a curve comparison showing the impact of fixing the impulse for s001 at 3, 5, and 10 sec.
Curve Results
site | velocity model | 3 sec SA | 5 sec SA | 10 sec SA |
---|---|---|---|---|
s001 | 1D | | | |
s001 | 3D | | | |
OSI | 1D | | | |
OSI | 3D | | | |
s169 | 1D | | | |
s169 | 3D | | | |
These results were calculated with the incorrect impulse.
site | velocity model | 3 sec SA | 5 sec SA | 10 sec SA |
---|---|---|---|---|
BKRS | 1D | | | |
BKRS | 3D | | | |
SBR | 1D | | | |
SBR | 3D | | | |
PARK | 1D | | | |
PARK | 3D | | | |
Velocity Profiles
site | CCA profile (min Vs=900 m/s) | CVM-S4.26 profile (min Vs=500 m/s) |
---|---|---|
s001 | | |
OSI | | |
s169 | | |
Seismogram plots
Below are plots comparing results from Study 15.4 to test results, which differ in velocity model, Vs min cutoff, dt, and grid spacing. We've selected two events: source 43, rupture 4, rupture variation 48, a M7.05 southern SAF event, and source 121, rupture 2, rupture variation 78, a M7.65 San Jacinto event.
site | San Andreas event | San Jacinto event |
---|---|---|
s001 | | |
OSI | | |
s169 | | |
Rupture Variation Generator v5.2.3 (Graves & Pitarka 2015)
Plots related to verifying the rupture variation generator used in this study are available here: Rupture Variation Generator v5.2.3 Verification
Performance Enhancements (over Study 15.4)
Responses to Study 15.4 Lessons Learned
* Some of the DirectSynth jobs couldn't fit their SGTs into the number of SGT handlers, nor finish in the wallclock time. In the future, test against a larger range of volumes and sites.
We aren't quite sure which site will produce the largest volume, so we will take the largest volume produced among the test sites and add 10% when choosing DirectSynth job sizes.
* Some of the cleanup jobs aren't fully cleaning up.
We have had difficulty reproducing this in a non-production environment. We will add a cron job to send a daily email with quota usage, so we'll know if we're nearing quota.
* On Titan, when a pilot job doesn't complete successfully, the dependent pilot jobs remain in a held state. This isn't reflected in qstat, so a quick look doesn't reveal that some of these jobs are held and will never run. Additionally, I suspect that pilot jobs exit with a non-zero exit code when workflow jobs pile up: some workflow jobs sneak in after the first set has run on a pilot job, and the pilot job is then kicked out for exceeding its wallclock time. We should address this next time.
We're not going to use pilot jobs this time, so it won't be an issue.
* On Titan, a few of the PostSGT and MD5 jobs didn't finish within the 2-hour wallclock limit, so they had to be run by hand on Rhea, which has a longer permitted wallclock time. We should think about moving these kinds of processing jobs to Rhea in the future.
The SGTs for Study 17.3 will be smaller, so these jobs should finish faster. PostSGT has two components: reformatting the SGTs and generating the headers. By increasing from 2 nodes to 4, we can decrease the SGT reformatting time to about 15 minutes, and the header generation also takes about 15 minutes. We investigated setting up the workflow to run the PostSGT and MD5 jobs only on Rhea, but had difficulty getting the rvgahp server working there. Reducing the runtime of the SGT reformatting and separating out the MD5 sum should fix this issue for this study.
* When we went back to do CyberShake Study 15.12, we discovered that it was common for a small number of seismogram files in many of the runs to have an issue wherein some rupture variation records were repeated. We should more carefully check the code in DirectSynth responsible for writing and verifying output files, and possibly add a way to delete and recreate rupture variations with issues.
We've fixed the bug that caused this in DirectSynth. We have changed the insertion code to abort if duplicates are detected.
Workflows
- We will run CyberShake workflows end-to-end on Titan, using the RVGAHP approach with Condor rather than pilot jobs.
- We will bypass the MD5sum check at the start of post-processing if the SGT and post-processing are being run back-to-back on the same machine.
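A sketch of that bypass condition is below; the function and site names are illustrative assumptions, not the actual workflow-generator code.

```python
# Illustrative sketch of when the post-processing MD5 check can be skipped:
# if the SGTs were produced on the same machine that will synthesize the
# seismograms (no transfer occurred), the checksum verification adds no value.
def needs_md5_check(sgt_site: str, post_processing_site: str) -> bool:
    """Return True only when SGTs were transferred between machines."""
    return sgt_site != post_processing_site

assert needs_md5_check("titan", "bluewaters") is True
assert needs_md5_check("titan", "titan") is False
```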
Database
- We have migrated data from past studies off the production database, which will hopefully improve database performance from Study 15.12.
Lessons Learned
- Include plots of velocity models as part of readiness review when moving to new regions.
- Formalize the process of creating the impulse. Consider creating it as part of the workflow based on nt and dt (a generic sketch appears after this list).
- Many jobs were not picked up by the reservation, and as a result reservation nodes were idle. Work harder to make sure reservation is kept busy.
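A minimal, generic sketch of deriving an impulse time series from the run's nt and dt, as suggested above. The actual CyberShake impulse file format and any filtering applied to it are not specified here; the unit-area normalization is just one common convention.

```python
# Generic sketch of generating an impulse time series from the run's nt and dt.
# The actual CyberShake impulse file format and any filtering applied to it are
# not specified here; this only illustrates deriving the series from the
# workflow parameters.
import numpy as np

def make_impulse(nt: int, dt: float) -> np.ndarray:
    """Unit-area impulse of nt samples with spacing dt (one common convention)."""
    impulse = np.zeros(nt, dtype=np.float32)
    impulse[0] = 1.0 / dt
    return impulse

imp = make_impulse(nt=23000, dt=0.00875)
```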
Codes
Output Data Products
Below is a table of planned output products, both what we plan to compute and what we plan to put in the database.
Type of product | Periods/subtypes computed and saved in files | Periods/subtypes inserted in database |
---|---|---|
PSA | 88 values: X and Y components at 44 periods: 10, 9.5, 9, 8.5, 8, 7.5, 7, 6.5, 6, 5.5, 5, 4.8, 4.6, 4.4, 4.2, 4, 3.8, 3.6, 3.4, 3.2, 3, 2.8, 2.6, 2.4, 2.2, 2, 1.66667, 1.42857, 1.25, 1.11111, 1, .66667, .5, .4, .33333, .285714, .25, .22222, .2, .16667, .142857, .125, .11111, .1 sec | 4 values: Geometric mean for 4 periods: 10, 5, 3, 2 sec |
RotD | 44 values: RotD100, RotD50, and RotD50 angle for 22 periods: 1.0, 1.2, 1.4, 1.5, 1.6, 1.8, 2.0, 2.2, 2.4, 2.6, 2.8, 3.0, 3.5, 4.0, 4.4, 5.0, 5.5, 6.0, 6.5, 7.5, 8.5, 10.0 sec | 12 values: RotD100, RotD50, and RotD50 angle for 6 periods: 10, 7.5, 5, 4, 3, 2 sec |
Durations | 18 values: for the X and Y components, energy integral, Arias intensity, cumulative absolute velocity (CAV), and, for both velocity and acceleration, the 5-75%, 5-95%, and 20-80% durations | None |
Hazard Curves | N/A | 16 curves: Geometric mean at 10, 5, 3, 2 sec; RotD100 at 10, 7.5, 5, 4, 3, 2 sec; RotD50 at 10, 7.5, 5, 4, 3, 2 sec |
Computational and Data Estimates
Computational Time
Since we are using a minimum Vs of 900 m/s, we will use a grid spacing of 175 m, with dt=0.00875 s and nt=23000 in the SGT simulation (and dt=0.0875 s in the seismogram synthesis).
For computing these estimates, we are using a volume of 420 km x 1160 km x 50 km, or 2400 x 6630 x 286 grid points. This is about 4.5 billion grid points, approximately half the size of the Study 15.4 typical volume. We will run the SGTs on 160-200 GPUs.
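A worked check of the grid dimensions and point count above:

```python
# Worked check of the Study 17.3 volume: 420 km x 1160 km x 50 km at 175 m
# spacing gives roughly 2400 x 6630 x 286 grid points, ~4.55 billion total,
# about half of the ~10 billion points in a typical Study 15.4 volume.
spacing_km = 0.175
dims_km = (420.0, 1160.0, 50.0)
nx, ny, nz = (round(d / spacing_km) for d in dims_km)
print(nx, ny, nz)              # 2400, 6629, 286 (planning rounds to 6630)
print(nx * ny * nz / 1e9)      # ~4.55 billion grid points
```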
We estimate that we will run the post-processing for 75% of the sites from each model on Blue Waters, and 25% on Titan. This is because we are charged less for Blue Waters sites (we are charged for the Titan GPUs even if we don't use them), and we have more time available on Blue Waters. However, we will use a dynamic approach during runtime, so the resulting numbers may differ.
Study 15.4 SGTs took 740 node-hours per component. From this, we assume:
740 node-hours x (4.5 billion grid points in 17.3 / 10 billion grid points in 15.4) x (23k timesteps in 17.3 / 40k timesteps in 15.4) ~ 200 node-hours per component for Study 17.3.
Study 15.4 post-processing took 40k core-hrs. From this, we assume:
40k core-hrs x (2.3k timesteps in 17.3 / 4k timesteps in 15.4) = 23k core-hrs, or about 720 node-hrs on Blue Waters (32 cores/node) and 1440 node-hrs on Titan (16 cores/node).
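A worked version of the scaling above (assuming cost is linear in grid points and timesteps relative to Study 15.4):

```python
# Worked version of the cost scaling from Study 15.4, assuming cost is linear
# in grid points and in timesteps.
# SGTs: ~740 node-hrs/component in 15.4, scaled by volume and timestep ratios.
sgt_15_4 = 740.0                     # node-hrs per component, Study 15.4
sgt_est = sgt_15_4 * (4.5 / 10.0) * (23_000 / 40_000)
print(round(sgt_est))                # ~190 node-hrs/component; planned as ~200

# Post-processing: 40k core-hrs in 15.4, scaled by the timestep ratio above.
pp_est = 40_000 * (2_300 / 4_000)    # ~23k core-hrs
print(pp_est / 32)                   # ~720 node-hrs on Blue Waters (32 cores/node)
print(pp_est / 16)                   # ~1440 node-hrs on Titan (16 cores/node)
```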
Because we're limited on Blue Waters node-hours, we will burn no more than 600K node-hrs. This should be enough to do 75% of the post-processing on Blue Waters. All the SGTs and the other post-processing will be done on Titan.
Titan
Pre-processing (CPU): 100 node-hrs/site x 876 sites = 87,600 node-hours.
SGTs (GPU): 400 node-hrs per site x 876 sites = 350,400 node-hours.
Post-processing (CPU): 1440 node-hrs per site x 219 sites = 315,360 node-hours.
Total: 28.3M SUs ((87,600 + 350,400 + 315,360) x 30 SUs/node-hr + 25% margin)
We have 91.7M SUs available on Titan.
Blue Waters
Post-processing (CPU): 720 node-hrs per site x 657 sites = 473,040 node-hours.
Total: 591.3K node-hrs ((473,040) + 25% margin)
We have 989K node-hrs available on Blue Waters.
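A worked version of the two allocation totals above, using the planned 25%/75% post-processing split:

```python
# Worked totals for the planned split: 876 runs (438 sites x 2 velocity models),
# 25% of post-processing on Titan (219 runs) and 75% on Blue Waters (657 runs),
# all SGTs on Titan, and a 25% margin on both estimates.
runs = 438 * 2                                               # 876
titan_pp, bw_pp = round(runs * 0.25), round(runs * 0.75)     # 219, 657

titan_node_hrs = 100 * runs + 400 * runs + 1440 * titan_pp   # pre-proc + SGTs + post-proc
titan_sus = titan_node_hrs * 30 * 1.25
print(f"Titan: {titan_sus / 1e6:.1f}M SUs of 91.7M available")            # ~28.3M

bw_node_hrs = 720 * bw_pp * 1.25
print(f"Blue Waters: {bw_node_hrs / 1e3:.1f}K node-hrs of 989K available")  # ~591.3K
```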
Storage Requirements
We plan to calculate geometric mean, RotD values, and duration metrics for all seismograms. We will use Pegasus's cleanup capabilities to avoid exceeding quotas.
Titan
Purged space to store intermediate data products: (900 GB SGTs + 60 GB mesh + 900 GB reformatted SGTs)/site x 876 sites = 1591 TB
Purged space to store output data: (15 GB seismograms + 0.2 GB PSA + 0.2 GB RotD + 0.2 GB duration) x 219 sites = 3.3 TB
Blue Waters
Purged space to store intermediate data products: (900 GB SGTs)/site x 657 sites = 577 TB
Purged space to store output data: (15 GB seismograms + 0.2 GB PSA + 0.2 GB RotD + 0.2 GB duration) x 657 sites = 10.0 TB
SCEC
Archival disk usage: 13.3 TB seismograms + 0.1 TB PSA files + 0.1 TB RotD files + 0.1 TB duration files on scec-02 (has 109 TB free) & 24 GB curves, disaggregations, reports, etc. on scec-00 (109 TB free)
Database usage: (4 rows PSA [@ 2, 3, 5, 10 sec] + 12 rows RotD [RotD100 and RotD50 @ 2, 3, 4, 5, 7.5, 10 sec])/rupture variation x 500K rupture variations/site x 876 sites = 7 billion rows x 125 bytes/row = 816 GB (3.2 TB free on moment.usc.edu disk)
Temporary disk usage: 1 TB workflow logs. scec-02 has 94 TB free.
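A worked version of the database-size and Titan intermediate-storage estimates above:

```python
# Database estimate: 16 rows per rupture variation (4 geometric-mean PSA
# periods + 12 RotD values), ~500K rupture variations per run, 876 runs,
# ~125 bytes per row.
rows = (4 + 12) * 500_000 * 876
print(f"{rows / 1e9:.1f} billion rows")          # ~7.0 billion
print(f"{rows * 125 / 2**30:.0f} GiB")           # ~816 GiB

# Purged intermediate storage on Titan: SGTs, mesh, and reformatted SGTs for
# every run (all SGTs are computed there).
titan_tb = (900 + 60 + 900) * 876 / 1024
print(f"{titan_tb:.0f} TB")                      # ~1591 TB
```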
Performance Metrics
Starting Usage
Just before starting, we grabbed the current project usage for Blue Waters and Titan to get accurate measures of the SUs burned.
Titan
Project usage = 4,313,662 SUs in 2017 (153,289 in March)
User callag used 1,089,050 SUs in 2017 (98,606 in March)
Blue Waters
'usage' command reports project PRAC_bahm used 11,571.4 node-hrs and ran 321 jobs.
'usage' command reports scottcal used 11,320.49 node-hrs on PRAC_bahm.
Ending Usage
Titan
Project usage = 22,178,513 SUs in 2017 (12,586,176 in March, 5,431,964 in April)
User callag used 14,579,559 SUs in 2017 (10,128,109 in March, 3,461,006 in April)
Blue Waters
usage command reports 366773.73 node-hrs used by PRAC_bahm.
usage command reports 366504.84 node-hrs used by scottcal.
Reservations
4 reservations on Blue Waters (3 for 128 nodes, 1 for 124) began on 3/9/17 at 11:00:00 CST. These reservations expired after a week.
A 2nd set of reservations (same configuration) for 1 week started again on 3/16/17 at 22:00:00 CDT. We gave these reservations back on 3/20/17 at 11:14 CDT because shock went down and we couldn't keep them busy.
Another set of reservations started at 2:27 CDT on 3/29/17, but were revoked around 16:30 CDT on 3/30/17.
Events during Study
We requested and received a 5-day priority boost, 5 jobs running in bin 5, and 8 jobs eligible to run on Titan starting sometime in the morning of Sunday, March 12. This greatly increased our throughput.
In the evening of Wednesday, March 15, we increased GRIDMANAGER_MAX_JOBMANAGERS_PER_RESOURCE from 10 to 20 to increase the number of jobs in the Blue Waters queues.
Condor on shock was killed on 3/20/17 at 3:45 CDT. We started resubmitting workflows at 13:44 CDT.
In preparation for the USC downtime, we ran condor_off at 22:00 CDT on 3/27/17. We turned condor back on (condor_on) at 21:01 CDT on 3/29/17.
Blue Waters deleted the cron job, so we have no job statistics on Blue Waters from 3/25/17 at 17:00 to 3/28/17 at 00:00 (when Blue Waters went down), then again from 3/29/17 at 2:27 (when Blue Waters came back) until 3/29/17 at 21:19.
Application-level Metrics
- Makespan: 738.4 hrs (DST started during the run)
- Uptime: 691.4 hrs (downtime during BW and HPC system maintenance)
Note: uptime is what's used for calculating the averages for jobs, nodes, etc.
- 876 runs
- 876 pairs of SGTs generated on Titan
- 876 post-processing runs
- 145 performed on Titan (16.6%), 731 performed on Blue Waters (83.4%)
- 284,839,014 seismogram pairs generated
- 42,725,852,100 intensity measures generated (150 per seismogram pair: 88 PSA + 44 RotD + 18 duration values)
- 15,581 jobs submitted
- 898,805.3 node-hrs used (21,566,832 core-hrs)
- 1026 node-hrs per site (24,620 core-hrs)
- On average, 12.1 jobs running, with a max of 41
- Average of 1295 nodes used, with a maximum of 5374.
- Total data generated: ?
- 370 TB SGTs generated (216 GB per single SGT)
- 777 TB intermediate data generated (44 GB per velocity file, duplicate SGTs)
- 10.7 TB output data
Delay per job (using a 7-day, no-restarts cutoff: 1618 workflows, 19143 jobs) was mean: 1836 sec, median: 467, min: 0, max: 91350, sd: 38209
Delay bin (sec) | 0-60 | 60-120 | 120-180 | 180-240 | 240-300 | 300-600 | 600-900 | 900-1800 | 1800-3600 | 3600-7200 | 7200-14400 | 14400-43200 | 43200-86400 | 86400-172800 | 172800-259200 | 259200-604800 |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
Jobs per bin | 5511 | 1435 | 401 | 469 | 503 | 2082 | 1428 | 2612 | 2373 | 1390 | 567 | 302 | 68 | 2 | 0 | 0 |
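One way to reproduce this binning from the per-job queue delays (the delay values shown here are placeholders, not the real workflow-log data):

```python
# Bucket each job's queue delay (seconds) into the bin edges used above.
import numpy as np

edges = [0, 60, 120, 180, 240, 300, 600, 900, 1800, 3600,
         7200, 14400, 43200, 86400, 172800, 259200, 604800]
delays = np.array([5, 75, 130, 500, 2000])        # placeholder, not real data

counts, _ = np.histogram(delays, bins=edges)
for lo, hi, n in zip(edges[:-1], edges[1:], counts):
    print(f"{lo:>6}-{hi:<6}: {n}")
```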
- Application parallel node speedup (node-hours divided by makespan) was 1217x. (Divided by uptime: 1300x)
- Application parallel workflow speedup (number of workflows times average workflow makespan divided by application makespan) was 16.0x. (Divided by uptime: 17.1x)
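A worked check of the node-speedup figures quoted above:

```python
# Application parallel node speedup: total node-hours divided by makespan
# (or uptime), using the numbers reported above.
node_hrs = 898_805.3
makespan_hrs = 738.4
uptime_hrs = 691.4

print(f"node speedup vs makespan: {node_hrs / makespan_hrs:.0f}x")   # ~1217x
print(f"node speedup vs uptime:   {node_hrs / uptime_hrs:.0f}x")     # ~1300x
```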
Titan
- Wallclock time: 720.3 hrs
- Uptime: 628.0 hrs (all downtime was SCEC)
- SGTs generated for 876 sites, post-processing for 145 sites
- 13,334 jobs submitted to the Titan queue
- Running jobs: average 5.7, max of 25
- Idle jobs: average 11.1, max of 38
- Nodes: average 669 (10,704 cores), max 4406 (70,496 cores, 23.6% of Titan)
- Titan portal and Titan internal reporting agree: 13,490,509 SUs used (449,683.6 node-hrs)
Delay per job (7-day cutoff, no restarts, 11603 jobs): mean 2109 sec, median: 1106, min: 0, max: 91350, sd: 3818
Blue Waters
The Blue Waters cron job which generated the queue information file was down for about 2 days, so we reconstructed its contents from the job history query functionality on the Blue Waters portal.
Workflow-level Metrics
Production Checklist
Presentations, Posters, and Papers
Related Entries