CISM Evaluation Metrics

From SCECpedia
Revision as of 16:55, 13 April 2016 by Kmilner

List of metrics/regression tests for all catalogs, as a way to compare catalogs, to ensure that RSQSim behaves the same on different machines, and to compare/validate results from updated versions of RSQSim:

  • Magnitude-Frequency distribution
    • Whole-system
    • Region- and fault-specific (for more complex models)

  • Inter-event time and distance distributions
  • Scaling relations
    • Magnitude-area
    • Slip-area
    • Stress drop - magnitude
  • Rupture speed
  • Recurrence time distributions for event sets of interest
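The whole-system MFD at the top of the list can be computed directly from a list of event magnitudes. A minimal sketch, assuming the catalog is available as an array of magnitudes (the `cumulative_mfd` name, bin range, and the synthetic Gutenberg-Richter catalog below are illustrative assumptions, not RSQSim I/O):

```python
import numpy as np

def cumulative_mfd(mags, min_mag=4.0, max_mag=8.5, bin_width=0.1):
    """Cumulative MFD: number of events at or above each magnitude edge."""
    edges = np.arange(min_mag, max_mag + bin_width, bin_width)
    mags = np.asarray(mags)
    counts = np.array([(mags >= m).sum() for m in edges])
    return edges, counts

# Toy Gutenberg-Richter catalog (b ~ 1): magnitudes above M4 drawn from
# an exponential with rate b*ln(10). Illustrative only, not RSQSim output.
rng = np.random.default_rng(0)
mags = 4.0 + rng.exponential(scale=1.0 / np.log(10), size=10_000)
edges, counts = cumulative_mfd(mags)
```

The same function applies unchanged to region- or fault-specific subsets: filter the events first, then pass the filtered magnitudes in.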

 Most of these are already implemented in plotEqs(), one of the post-processing functions that Keith distributes with RSQSim, but we probably also want regional tests that focus on particular sites or faults.  We should create a new function, or a package of functions similar to plotEqs(), that runs all the tests and reports/plots the results.
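A test package like the one described could wrap each metric from the list above in a small function. As one example, the inter-event time distribution and its coefficient of variation (CoV, a common clustering diagnostic: near 1 for a Poisson process, above 1 for clustered catalogs) can be sketched as follows; the catalog here is a synthetic Poissonian toy, not RSQSim output:

```python
import numpy as np

def interevent_times(event_times):
    """Inter-event times from event occurrence times (any time units)."""
    return np.diff(np.sort(np.asarray(event_times)))

# Synthetic Poissonian catalog: exponential inter-event times give a
# coefficient of variation near 1; temporal clustering pushes it above 1.
rng = np.random.default_rng(1)
times = np.cumsum(rng.exponential(scale=5.0, size=5_000))
dt = interevent_times(times)
cov = dt.std() / dt.mean()
```

A regression harness could compute such summary statistics for each new catalog and compare them against stored reference values.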


Additional tests and ways of comparing RSQSim catalogs against both simulated and real data:

  • We also want some measure of the divergence of these catalogs when small changes are made to the parameters, or to the number of cores the simulations are run on.
    • Lyapunov exponent
  • Set up a reference rupture set or catalog.
  • Compare regional RSQSim MFDs to UCERF3's on-fault MFDs.
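One simple way to put a number on catalog divergence is a Kolmogorov-Smirnov-style distance between the normalized cumulative MFDs of a reference catalog and a perturbed run. This is an illustrative stand-in for a divergence measure, not the Lyapunov-exponent analysis itself, and the synthetic catalogs below are assumptions for demonstration only:

```python
import numpy as np

def mfd_divergence(mags_a, mags_b, min_mag=4.0, max_mag=8.5, bin_width=0.1):
    """Max absolute difference between two normalized cumulative MFDs
    (a Kolmogorov-Smirnov-style statistic on event magnitudes)."""
    edges = np.arange(min_mag, max_mag + bin_width, bin_width)
    def norm_cum(mags):
        mags = np.asarray(mags)
        return np.array([(mags >= m).sum() for m in edges]) / max(len(mags), 1)
    return float(np.abs(norm_cum(mags_a) - norm_cum(mags_b)).max())

# Three synthetic catalogs: a reference, a statistically identical rerun,
# and one with a different b-value standing in for a perturbed simulation.
rng = np.random.default_rng(2)
ref = 4.0 + rng.exponential(1.0 / np.log(10), 20_000)
rerun = 4.0 + rng.exponential(1.0 / np.log(10), 20_000)
perturbed = 4.0 + rng.exponential(1.3 / np.log(10), 20_000)
d_rerun = mfd_divergence(ref, rerun)
d_perturbed = mfd_divergence(ref, perturbed)
```

For the machine- and core-count comparisons above, the rerun distance gives a baseline for statistical noise: a perturbed run whose divergence clearly exceeds it warrants a closer look.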