CyberShake MareNostrum Training

This page provides training for running the CyberShake TEST site on MareNostrum (MN4).

== Videos ==

Recordings from the live training sessions are available below:

[https://g-c662a6.a78b8.36fe.data.globus.org/cybershake/training/Ch1_Full_2020-02-14T08_02.mp4 February 14, 2020 (1:04, 157 MB)]. Covers the overview, database, and jobs involved in SGT creation.

[https://g-c662a6.a78b8.36fe.data.globus.org/cybershake/training/Part2_2020-04-22.mp4 April 22, 2020 (1:12, 405 MB)]. Covers the post-processing jobs, data products jobs, and how to extend CyberShake to include new models.

== Training overview ==

The goal of this training is to get you to run, by hand, all the steps involved in a CyberShake calculation. The basic steps are:

*Set up needed files
*Initialize database with run information
*Create Strain Green Tensors
*Create synthetic seismograms and intensity measures
*Create final data products

<b>Actions you need to take will be in bold.</b>
<pre>
Terminal commands and output will be in this font.
My username is pr1ejg10 and my project is pr1ejg00. Replace <username> or <working dir> with your username or your working directory, respectively.
</pre>

== Set up needed files ==

<ol>
<li><b>Create a directory to work from. I recommend something in scratch.</b></li>

<pre>
pr1ejg10@login2:~> cd /gpfs/scratch/pr1ejg00/<username>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10> mkdir TEST
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10> cd TEST
</pre>

An overview of the code involved in CyberShake is provided [[CyberShake_Code_Base|here]]. For this test, we are using the SGT-related codes, the PP-related codes, and the Data Products codes, but not the Stochastic codes.

Since MN4 does not permit outgoing connections, CyberShake cannot be installed on MN4 directly from the repository. For the purposes of this training, I suggest you use my install directly. My CyberShake installation is located at:
<pre>
/gpfs/projects/pr1ejg00/CyberShake
</pre>

Because of the same restriction, on MN4 we are using a local SQLite database rather than the remote MySQL database that SCEC's CyberShake install uses. Each trainee should work from their own database.

<li><b>Copy in my training database from /gpfs/projects/pr1ejg00/CyberShake/database/training.sqlite.</b></li>

<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST> cp /gpfs/projects/pr1ejg00/CyberShake/database/training.sqlite .
</pre>

<li>To look at this database, we need SQLite. <b>Add the SQLite module to your environment.</b></li>

<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST> module load sqlite
</pre>

<li>Let's examine this database. <b>Use sqlite3 to investigate the tables.</b></li>

<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST> sqlite3 training.sqlite
SQLite version 3.20.0 2017-07-10 19:08:59
Enter ".help" for usage hints.
sqlite> .tables
CyberShake_Runs           IM_Types
CyberShake_Site_Regions   PeakAmplitudes
CyberShake_Site_Ruptures  Rupture_Variation_Scenario_IDs
CyberShake_Site_Types     Rupture_Variation_Scenario_Metadata
CyberShake_Sites          Rupture_Variations
ERF_IDs                   Ruptures
ERF_Metadata              SGT_Variation_IDs
ERF_Probability_Models    Studies
Hazard_Curve_Points       Velocity_Model_Metadata
Hazard_Curves             Velocity_Models
Hazard_Datasets
sqlite> .schema Velocity_Models
CREATE TABLE Velocity_Models
(
Velocity_Model_ID integer primary key AUTOINCREMENT not null,
Velocity_Model_Name varchar(50) not null,
Velocity_Model_Version varchar(50) not null
);
sqlite> select * from Velocity_Models;
1|CVM-S4.26|4.26
</pre>

I have pre-inserted some of the setup you'll need, such as Ruptures, Rupture_Variations, and CyberShake_Site_Ruptures. For queries and insertions, SQLite syntax is practically identical to MySQL. To quit, type .quit.

Note that not all the tables defined in the full CyberShake schema are in this test database. If you would rather inspect the database from a script, see the Python sketch after this list.

</ol>
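
As an aside, the same inspection can be scripted. Below is a minimal sketch using Python's standard sqlite3 module (assuming a Python 3 with that module is available on MN4); it is an illustration, not part of the CyberShake tools:

<pre>
import sqlite3

# Open the training database (adjust the path to your copy).
conn = sqlite3.connect("training.sqlite")
cur = conn.cursor()

# List the tables, equivalent to the .tables command above.
cur.execute("select name from sqlite_master where type='table' order by name")
for (name,) in cur.fetchall():
    print(name)

# Same query as in the sqlite3 session above.
for row in cur.execute("select * from Velocity_Models"):
    print(row)

conn.close()
</pre>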

== Initialize database with run information ==

CyberShake keeps track of what we call 'runs' in the database. A run is a full CyberShake calculation for a single site.

When we run large CyberShake studies, we have scripts which automatically create runs as part of our workflows. For this test, we will populate the database ourselves.

<ol>
<li><b>Create a database entry for our test run.</b></li>

We will only specify the required fields in the CyberShake_Runs table; some of the others only make sense in a workflow context. First, we will determine the site, velocity model, ERF, rupture variation scenario, and SGT variation IDs needed. (A scripted version of this step is sketched after this list.)

<pre>
sqlite> select * from CyberShake_Sites;
1|CyberShake Verification Test - USC|TEST|34.0192|-118.286|1
sqlite> select * from Velocity_Models;
1|CVM-S4.26|4.26
sqlite> select * from ERF_IDs;
35|WGCEP (2007) UCERF2 - Single Branch|Mean UCERF 2 - Single Branch Earthquake Rupture Forecast FINAL|1|1
36|WGCEP (2007) UCERF2 - Single Branch 200m|Mean UCERF 2 - Single Branch Earthquake Rupture Forecast FINAL, 200m|1|1
sqlite> select * from SGT_Variation_IDs;
1|AWP_ODC_SGT|SGTs generated with AWP-ODC-SGT with Qp=Qs=10000
2|AWP_ODC_SGT GPU|SGTs generated with AWP-ODC-SGT GPU
sqlite> select * from Rupture_Variation_Scenario_IDs;
1|36|genslip-v3.3.1b|Graves & Pitarka (2014) with uniform grid down dip hypocenter location, modified rupture variation constant
sqlite> insert into CyberShake_Runs(Site_ID, ERF_ID, SGT_Variation_ID, Velocity_Model_ID, Rup_Var_Scenario_ID, Status, Status_Time, Last_User, Max_Frequency, Low_Frequency_Cutoff, SGT_Source_Filter_Frequency) values (1, 36, 1, 1, 1, 'SGT Started', '2020-02-14 12:00:00', '<username>', 0.5, 0.5, 1.0);
</pre>

<li><b>Determine the run ID, which you'll use for your test.</b></li>
<pre>
sqlite> select Run_ID from CyberShake_Runs;
1
</pre>

<li><b>Exit sqlite.</b> We're done with it for now.</li>
<pre>
sqlite> .quit
</pre>

</ol>
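
If you'd prefer to script this step, here is a minimal sketch of the same insert using Python's sqlite3 module. The values mirror the transcript above, and lastrowid returns the new run ID directly; this is just an illustration, not one of the standard workflow scripts:

<pre>
import sqlite3

conn = sqlite3.connect("training.sqlite")
cur = conn.cursor()

# Same insert as in the sqlite3 session above.
cur.execute(
    "insert into CyberShake_Runs"
    "(Site_ID, ERF_ID, SGT_Variation_ID, Velocity_Model_ID, Rup_Var_Scenario_ID,"
    " Status, Status_Time, Last_User, Max_Frequency, Low_Frequency_Cutoff,"
    " SGT_Source_Filter_Frequency) values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)",
    (1, 36, 1, 1, 1, "SGT Started", "2020-02-14 12:00:00", "<username>",
     0.5, 0.5, 1.0))
conn.commit()

# The run ID you will use for the rest of the training.
print("Run ID:", cur.lastrowid)
conn.close()
</pre>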

== Create Strain Green Tensors ==

As outlined [http://scec.usc.edu/scecwiki/images/2/24/SGT_workflow_stages.png here], there are 7 jobs we need to run to generate SGTs for our test site.

=== PreCVM ===

Details about this stage are available [[CyberShake_Code_Base#PreCVM | here]]. This stage may be modified for Iceland, since each site may end up using the same volume. Note that the volume dimensions must be evenly divisible by the number of cores in that dimension (for example, a volume 1000 grid points long split across 10 cores in that dimension gives each core 100 points).

<ol>
<li><b>Copy over my PreCVM batch script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/precvm.slrm .</li>

<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST> cp /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/precvm.slrm .
</pre>

<li><b>Edit the batch script.</b> Change the sqlite path (--server sqlite:...) to point to your sqlite file instead of mine.</li>

<li><b>Submit the job using sbatch. When complete, make sure there are no errors in precvm.e .</b></li>
</ol>

=== UCVM ===

PreCVM must finish before you can run this stage.

Details about this stage are available [[CyberShake_Code_Base#UCVM | here]]. This stage will not be part of the Icelandic processing - you won't use UCVM to create a velocity model - but some other code will provide one.

<ol>
<li><b>Copy over my UCVM batch script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/ucvm.slrm .</li>

<li><b>Submit the job. When complete, make sure there are no errors in ucvm.e .</b> This is a parallel job, and may wait in the queue for some time before running.</li>
</ol>

=== PreSGT ===

PreCVM must finish before you can run this stage, but it can run concurrently with UCVM.

Details about this stage are available [[CyberShake_Code_Base#PreSGT | here]].

<ol>
<li><b>Copy over my PreSGT batch script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/presgt.slrm .</li>

<li><b>Submit the job. When complete, make sure there are no errors in presgt.e .</b></li>
</ol>

=== PreAWP ===

UCVM and PreSGT must finish before you can run this stage. Note that we're skipping the Smoothing step, since we're only using a single velocity model.

Details about this stage are available [[CyberShake_Code_Base#PreAWP | here]].

<ol>
<li><b>Copy over my PreAWP batch script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/preawp.slrm .</li>

<li><b>Edit the script.</b> Change '--run-id 1' to use your correct run id, which might not be 1.</li>

<li><b>Submit the job. When complete, make sure there are no errors in preawp.e .</b></li>
</ol>

=== SGT ===

PreAWP must finish before you can run this stage.

Details about this stage are available [[CyberShake_Code_Base#AWP-ODC-SGT.2C_CPU_version | here]].

We run two SGT jobs, one for each horizontal component; they can run concurrently. PreAWP set up the input files needed for both horizontal components. If you'd like to run the vertical component also, you'd need to make changes to PreAWP.

<ol>
<li><b>Copy over my SGT x and SGT y batch scripts</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/awp_x.slrm and awp_y.slrm .</li>

<li><b>Submit the jobs. When complete, check awp_x.e and awp_y.e .</b></li>
We usually see lines like:
<pre>
Note: The following floating-point exceptions are signalling: IEEE_INVALID_FLAG IEEE_UNDERFLOW_FLAG IEEE_DENORMAL
</pre>

This is OK and is not an error.

This job runs on 1700 cores, so you may need to wait in the queue for a while before it runs.
</ol>

=== NaNCheck ===

We run two of these as well, one for each horizontal component. The SGT for that component must finish before you can run this stage.

Details about this stage are available [[CyberShake_Code_Base#NanCheck | here]].

<ol>
<li><b>Copy over my check batch scripts</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/check_x.slrm and check_y.slrm .</li>

<li><b>Submit the jobs. When complete, make sure there are no errors in check_x.e or check_y.e .</b></li>
</ol>

=== PostAWP ===

We also run two of these. The SGT for the component must finish before you can run this stage, but it can run concurrently with NaNCheck.

Details about this stage are available [[CyberShake_Code_Base#PostAWP | here]].

<ol>
<li><b>Copy over my post batch scripts</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post_x.slrm and post_y.slrm .</li>

<li><b>Change the RUN_ID variable to be set to your run ID.</b></li>

<li><b>Submit the jobs. When complete, make sure there are no errors in post_x.e or post_y.e .</b></li>
</ol>

Congratulations, you have generated a pair of SGTs!

== Post-processing ==

The post-processing steps are illustrated in [http://scec.usc.edu/scecwiki/images/7/76/PP_workflow_stages.png this diagram]. Since we're not moving the SGTs between systems, we can skip the md5sum check stage, so we only have one stage to run.

Details about DirectSynth, the seismogram synthesis stage, are available [[CyberShake_Code_Base#DirectSynth | here]].

This step requires a bit more setup than the previous ones. We must provide a list of all the ruptures which need their seismograms synthesized, and typically I symlink the rupture geometry files (from which rupture variations are generated) as well as the SGT files and their headers.

<ol>
<li><b>Create a directory for the post-processing inside your working directory.</b></li>

<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST> mkdir post-processing
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST> cd post-processing
</pre>

<li><b>Copy in the rupture file list</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing/rupture_file_list_TEST .</li>

<li><b>Symlink the *.sgt and *.sgthead files you created earlier into the post-processing directory.</b></li>
<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing> ln -s ../TEST_fx_<run id>.sgt
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing> ln -s ../TEST_fx_<run id>.sgthead
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing> ln -s ../TEST_fy_<run id>.sgt
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing> ln -s ../TEST_fy_<run id>.sgthead
</pre>

<li><b>Create symlinks to all the rupture geometry files.</b> You can use my helper script, /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing/make_lns.py (a rough sketch of what such a script does appears after this list).</li>
<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing> /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing/make_lns.py rupture_file_list_TEST /gpfs/projects/pr1ejg00/CyberShake/ruptures/Ruptures_erf36
</pre>

<li><b>Copy in the batch script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing/run_ds.slrm .</li>

<li><b>Edit run_ds.slrm so RUN_ID uses the correct run ID.</b></li>

<li><b>Run the job.</b> This one may also take a while in the queue.</li>

<li><b>Check two places for errors.</b> Check both the end of ds.e and the end of log.84. At the end of log.84 you should see something like:</li>
<pre>
Jan 24 20:55:45.870309> Sending complete message to process 83.
Jan 24 20:55:45.870318> Sending message to processor 83.
Jan 24 20:55:45.870326> Shutting down.
</pre>

When complete, you will have 395 Seismogram, PeakVals, RotD, and Duration files. File formats are described in detail [[CyberShake_Data#CyberShake_output_data | here]].

</ol>
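
For reference, here is a rough sketch of what a make_lns.py-style helper might do: read the rupture file list and symlink each named geometry file from the rupture root into the current directory. The assumed list format (one file path per line, relative to the rupture root) is a guess -- check the real rupture_file_list_TEST and make_lns.py before relying on this:

<pre>
#!/usr/bin/env python3
import os
import sys

def main(list_file, rupture_root):
    with open(list_file) as f:
        for line in f:
            rel_path = line.strip()
            if not rel_path:
                continue
            # Link e.g. <rupture root>/<source id>/<rupture id>/<source>_<rupture>.txt
            # into the current directory under its base name.
            target = os.path.join(rupture_root, rel_path)
            link_name = os.path.basename(rel_path)
            if not os.path.islink(link_name):
                os.symlink(target, link_name)

if __name__ == "__main__":
    main(sys.argv[1], sys.argv[2])
</pre>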

== Data Product generation ==

Once the seismograms and IM files are generated, the IMs can be put into the database and a variety of data products generated. The jobs involved in the typical CyberShake workflow are illustrated in [http://scec.usc.edu/scecwiki/images/4/44/Data_workflow_stages.png this diagram].

For this test, we aren't going to run all the jobs, just the Load Amps, the Check, and the Curve Calc for the RotD files.

=== Load Amps ===

Details about this stage are available [[CyberShake_Code_Base#Load_Amps | here]].

We will point to the SQLite database for database population. I've created a helper script for this purpose.

<ol>
<li><b>Change directories</b> back to your top-level working directory, not the post-processing one.</li>
<pre>
pr1ejg10@login2:/gpfs/scratch/pr1ejg00/pr1ejg10/TEST/post-processing> cd ..
</pre>

<li><b>Copy over my helper script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/db_insert.sh .</li>

<li><b>Edit the script.</b> Change '-server sqlite:...' to point to your training.sqlite file that you used earlier in the training.</li>

<li><b>Run the script.</b> It will print output as it goes.</li>
</ol>

=== Check DB ===

Details about this stage are available [[CyberShake_Code_Base#Check_DB_Site | here]]. Basically, this task verifies that all the IMs were correctly inserted into the database.

<ol>
<li><b>Copy over my helper script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/check_db.sh .</li>

<li><b>Edit the script.</b> You'll need to change both the sqlite file it points to, and also the run id (the -r argument).</li>

<li><b>Run the script.</b> It will print output as it goes. Check for errors.</li>
</ol>

=== Curve Calc ===

Details about this stage are available [[CyberShake_Code_Base#Curve_Calc | here]].

Before we can calculate curves, we need a corresponding Hazard_Dataset in the database. This only needs to be done once for a given combination of velocity model, SGT variation, rupture scenario, and ERF.

<ol>
<li><b>Create an appropriate Hazard Dataset.</b> First, get the IDs you'll need from your CyberShake_Runs table, then use them to populate Hazard_Datasets.</li>
<pre>
sqlite> select ERF_ID, Rup_Var_Scenario_ID, SGT_Variation_ID, Velocity_Model_ID from CyberShake_Runs where Run_ID=<run id>;
36|1|1|1
sqlite> insert into Hazard_Datasets (ERF_ID, Rup_Var_Scenario_ID, SGT_Variation_ID, Velocity_Model_ID, Prob_Model_ID, Time_Span_ID, Max_Frequency, Low_Frequency_Cutoff) values (36, 1, 1, 1, 1, 1, 0.5, 0.5);
</pre>

<li><b>Copy over my helper script</b> from /gpfs/scratch/pr1ejg00/pr1ejg10/TEST/curve_calc.sh .</li>

<li><b>Edit the script.</b> Replace the first argument, the path to the sqlite file (/gpfs/projects/pr1ejg00/CyberShake/database/test.sqlite in mine), with the path to your database. Also, replace the --run-id with the correct one for your test.</li>

<li><b>Run the script.</b> It will print output as it goes.</li>
</ol>

Congratulations! You have calculated CyberShake hazard curves! You can access them either by looking at the *.pdf and *.png files you've created, or by using sqlite to examine the values in the database directly.

== Verification ==

To confirm that you got the right answers, compare one of your hazard curves with the one I generated.

<ol>
<li><b>Use sqlite to load your database.</b></li>

<li><b>Run a query to get your 3 sec RotD50 curve.</b>
<pre>sqlite> select P.X_Value, P.Y_Value from Hazard_Curve_Points P, Hazard_Curves C where C.Run_ID=<run id> and C.Hazard_Curve_ID=P.Hazard_Curve_ID and C.IM_Type_ID=162;</pre></li>

<li><b>Compare your curve to mine.</b> Ideally the values will match exactly, but in general anything with differences of O(0.01%) is fine. A sketch of a script which automates this comparison appears after the table. For the 3 second RotD50 curve, the reference values (the values I got when I ran the code) are:

{| border="1" cellpadding="3"
! X value !! Y value
|-
|0.005||0.00669446347113578
|-
|0.007||0.00669446347113578
|-
|0.0098||0.00668048983780867
|-
|0.0137||0.00665296801989335
|-
|0.0192||0.00651297322733557
|-
|0.0269||0.00630546899968387
|-
|0.0376||0.00576024330858993
|-
|0.0527||0.00509651492213958
|-
|0.0738||0.00416060395812046
|-
|0.103||0.00279691975696716
|-
|0.145||0.00166295938324768
|-
|0.203||0.000834190556129544
|-
|0.284||0.000362477682089857
|-
|0.397||0.0001266222803048
|-
|0.556||2.71102060581674e-05
|-
|0.778||4.75466199389984e-06
|-
|1.09||1.86439212512823e-07
|-
|1.52||0.0
|-
|2.13||0.0
|}
</li></ol>
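
If you'd like to automate the comparison, here is a minimal sketch that reads your curve with Python's sqlite3 module and checks it against the reference values above. The 0.1% relative tolerance is an illustrative choice, looser than the O(0.01%) guidance:

<pre>
import math
import sqlite3

# Reference (X, Y) values from the table above.
REFERENCE = [
    (0.005, 0.00669446347113578), (0.007, 0.00669446347113578),
    (0.0098, 0.00668048983780867), (0.0137, 0.00665296801989335),
    (0.0192, 0.00651297322733557), (0.0269, 0.00630546899968387),
    (0.0376, 0.00576024330858993), (0.0527, 0.00509651492213958),
    (0.0738, 0.00416060395812046), (0.103, 0.00279691975696716),
    (0.145, 0.00166295938324768), (0.203, 0.000834190556129544),
    (0.284, 0.000362477682089857), (0.397, 0.0001266222803048),
    (0.556, 2.71102060581674e-05), (0.778, 4.75466199389984e-06),
    (1.09, 1.86439212512823e-07), (1.52, 0.0), (2.13, 0.0),
]

RUN_ID = 1  # replace with your run ID

conn = sqlite3.connect("training.sqlite")
rows = conn.execute(
    "select P.X_Value, P.Y_Value from Hazard_Curve_Points P, Hazard_Curves C "
    "where C.Run_ID=? and C.Hazard_Curve_ID=P.Hazard_Curve_ID "
    "and C.IM_Type_ID=162 order by P.X_Value", (RUN_ID,)).fetchall()
conn.close()

for (x_ref, y_ref), (x, y) in zip(REFERENCE, rows):
    ok = math.isclose(y, y_ref, rel_tol=1e-3, abs_tol=1e-12)
    status = "OK" if ok else f"MISMATCH (expected {y_ref:.6e})"
    print(f"{x:>7} {y:.6e} {status}")
</pre>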

== Extending CyberShake ==

Once you have the basic CyberShake install configured, you can augment it with additional data for a new model or region. Below are details about what needs to be done to include new data.

=== Earthquake Rupture Forecast ===

Adding a new earthquake rupture forecast requires several different modifications.

==== Ruptures ====

First, you should add information for the new ruptures (the fault surfaces). This consists of both database additions and the creation of rupture geometry files.

<ol>
<li>Add new entries to the database.
<ol>
<li><b>Create a new ERF in the ERF_IDs table.</b> Provide a name, description, and default probability model (cross-referenced with the ERF_Probability_Models table) and default time span (cross-referenced with the Time_Spans table).</li>
<li><b>Optionally, add metadata to the ERF_Metadata table.</b> This can be used to track certain ERF parameters you might want to access individually.</li>
<li><b>Populate the Ruptures table.</b> In California CyberShake, we use the UCERF2 conventions for source and rupture. A source is a named fault section, usually with a consistent fault geometry, and a rupture has an assigned probability and magnitude. For each entry in the table, you will need to specify the following (an illustrative insert is sketched after this list):
<ul>
<li>Identifying information: source id, rupture id, name. The source id and rupture id are how CyberShake tracks individual ruptures, so the (source id, rupture id) tuples must be unique.</li>
<li>Magnitude.</li>
<li>Probability. Probabilities of individual events are calculated by dividing the rupture probability by the number of variations for the rupture.</li>
<li>Information about the rupture geometry. In CyberShake, rupture surfaces are represented by a grid of points. The seismograms are simulated by calculating the contribution from each point as a point source and then combining them. Specifically, the database tracks grid spacing (spacing between rupture surface points), number of rows, number of columns, and total points.</li>
<li>The geographic extent of the rupture, which CyberShake needs to calculate how large the simulation volume must be. The database stores the 3D coordinates of opposite corners of the rupture surface; it doesn't matter which pair of opposite corners is used, or in what order.</li>
</ul>
</li>
</ol>
</li>
<li><b>Create rupture geometry files, one per rupture.</b> These files should follow the format specified [[CyberShake_Rupture_Files | here]]. The values in the headers of these files should match the values found in the database. I recommend using the UCERF2 approach for rectangular rupture surfaces, and the RSQSim one for triangular or irregular meshes.</li>
<li><b>Stage the rupture geometry files.</b> The root directory for rupture files is specified as RUPTURE_ROOT in the cybershake.cfg file, in <CyberShake root dir>/software/cybershake.cfg. CyberShake assumes that the rupture files for ERF X are stored in ${RUPTURE_ROOT}/Ruptures_erf<X>/<source id>/<rupture id>/<source id>_<rupture id>.txt , so you will want to copy your rupture geometry files there.</li>
</ol>
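
As an illustration, the sketch below inserts one hypothetical rupture with Python's sqlite3 module. The column names are assumptions based on the fields described above, and the values are made up; check the actual schema (for example, with .schema Ruptures in sqlite3) before adapting it:

<pre>
import sqlite3

conn = sqlite3.connect("training.sqlite")

# Hypothetical values for a single rupture. Column names are guesses
# matching the fields described above -- verify against '.schema Ruptures'.
rupture = {
    "ERF_ID": 36,
    "Source_ID": 128,
    "Rupture_ID": 1,
    "Rupture_Name": "Example fault section",
    "Mag": 6.5,
    "Prob": 0.002,        # rupture probability; each variation gets Prob/N
    "Grid_Spacing": 1.0,  # spacing between rupture surface points
    "Num_Rows": 10,
    "Num_Columns": 50,
    "Num_Points": 500,    # Num_Rows * Num_Columns
    # Opposite corners of the rupture surface (lat, lon, depth):
    "Start_Lat": 34.10, "Start_Lon": -118.50, "Start_Depth": 0.0,
    "End_Lat": 34.25, "End_Lon": -118.05, "End_Depth": 12.0,
}

cols = ", ".join(rupture)
placeholders = ", ".join("?" * len(rupture))
conn.execute("insert into Ruptures (%s) values (%s)" % (cols, placeholders),
             list(rupture.values()))
conn.commit()
conn.close()
</pre>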

==== Rupture Variations ====

CyberShake supports two modes of synthesizing rupture variations:
#Rupture geometries, but no explicit slip time histories, are provided. The rupture geometries are passed through a Graves & Pitarka rupture generator, which is integrated with the DirectSynth code, to generate individual realizations, which are then used to perform the seismogram synthesis.
#Both rupture geometries and slip time histories are provided. The rupture generator is bypassed, and each individual slip time history is used to produce a seismogram.

From our conversations, I believe the plan is to use approach #1, so I will outline the database setup steps for that approach.

<ol>
<li><b>If using a new GP generator, create a rupture variation scenario entry in the Rupture_Variation_Scenario_IDs table.</b> This is used to track which version and parameters of the GP rupture generator code are used, since the same ERF could be used with multiple rupture variation scenarios. You will need to specify a name and a description.</li>
<li><b>If using a new GP generator, add any necessary metadata to the Rupture_Variation_Scenario_Metadata table.</b> This is useful for verification and reproducibility.</li>
<li><b>Add new entries to the Rupture_Variations table.</b> This table is used by CyberShake to determine which rupture variations go with a given ERF, rupture variation scenario, source, and rupture. Every rupture variation for a given ERF and scenario will have a unique (source id, rupture id, rupture variation id) tuple. You should also include the Rup_Var_LFN, which stands for 'Logical File Name' and should be something unique -- for example, 'e36_rv6_128_1.txt.variation-r000000' for ERF 36, rupture variation scenario id 6, source 128, rupture 1, rupture variation 0 (a small helper for building these names is sketched after this list). Including the hypocenter locations is optional.</li>
</ol>
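
The exact LFN convention just needs to produce unique names; here is a small sketch that reproduces the example above:

<pre>
def rup_var_lfn(erf_id, rvs_id, source_id, rupture_id, rup_var_id):
    """Build a Rup_Var_LFN following the example convention above,
    e.g. e36_rv6_128_1.txt.variation-r000000."""
    return "e%d_rv%d_%d_%d.txt.variation-r%06d" % (
        erf_id, rvs_id, source_id, rupture_id, rup_var_id)

# Reproduces the example from the text:
assert rup_var_lfn(36, 6, 128, 1, 0) == "e36_rv6_128_1.txt.variation-r000000"
</pre>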

=== Sites ===

Adding new sites is straightforward.

<ul>
<li><b>Create a new entry in the CyberShake_Sites table for each site.</b> You'll need to specify the site name, a short name of five characters or fewer, the latitude and longitude, and an optional site type, which cross-references with the CyberShake_Site_Types table (an illustrative insert is sketched after this list).</li>
<li><b>Populate the CyberShake_Site_Ruptures table.</b> This table is used to determine which ruptures should be simulated for a given site. CyberShake internally applies no cutoffs, so any ruptures included in this table will be simulated. For each site, add a row for each rupture that should be included when running CyberShake simulations for that site.</li>
<li>The CyberShake_Site_Regions table is no longer used -- regions are dynamically calculated from the rupture geometries -- so it doesn't need to be populated.</li>
</ul>
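
A hypothetical new-site insert, using Python's sqlite3 module, might look like the sketch below. The column order follows the 'select * from CyberShake_Sites' output seen earlier (ID, name, short name, latitude, longitude, site type ID), but the column names themselves are assumptions -- check .schema CyberShake_Sites first:

<pre>
import sqlite3

conn = sqlite3.connect("training.sqlite")

# Column names are guesses based on the fields described above.
conn.execute(
    "insert into CyberShake_Sites "
    "(CS_Site_Name, CS_Short_Name, CS_Site_Lat, CS_Site_Lon, CS_Site_Type_ID) "
    "values (?, ?, ?, ?, ?)",
    ("Example new site", "EXMPL", 34.05, -118.25, 1))
conn.commit()
conn.close()
</pre>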

=== Velocity Models ===

There are two main approaches to adding new velocity models.
<ul>
<li>Register a new model into UCVM. This requires multiple manual steps, and should be done in coordination with the SCEC software group.</li>
<li>Create a new execution stage which can replace UCVM. For a simple model, this is easier than integrating it into UCVM. Some things to keep in mind:
<ul>
<li>The generated mesh should be in [[CyberShake_Code_Base#AWP_format | AWP format]] to interface with the SGT code.</li>
<li>If only one model is being used, the smoothing stage can be skipped.</li>
<li>Unless the model guarantees it, the mesh should be post-processed to verify that all Vs values are above the minimum (and, where adjustments are made, that Vp and rho are scaled accordingly). A sketch of such a check appears after this list.</li>
<li>All points in the mesh must satisfy Vp/Vs >= 1.45 for numerical convergence.</li>
</ul>
</li>
</ul>
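
Below is a minimal sketch of that post-processing check, assuming the mesh has already been read into numpy arrays of Vp, Vs, and rho; the minimum Vs value and the choice to scale Vp and rho by the same factor as Vs are illustrative assumptions, not CyberShake's actual rules:

<pre>
import numpy as np

VS_MIN = 500.0    # assumed minimum Vs (m/s); use your run's actual floor
VPVS_MIN = 1.45   # required for numerical convergence

def enforce_mesh_constraints(vp, vs, rho):
    # Raise Vs to the floor, scaling Vp and rho by the same factor so
    # the material properties stay self-consistent.
    scale = np.where(vs < VS_MIN, VS_MIN / vs, 1.0)
    vs = vs * scale
    vp = vp * scale
    rho = rho * scale
    # Enforce the Vp/Vs floor by raising Vp where needed.
    vp = np.maximum(vp, VPVS_MIN * vs)
    return vp, vs, rho
</pre>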

With either approach, be sure to <b>add the new velocity model into the Velocity_Models table</b>.