CSEP Minutes 06-20-2018
are step and stepjava the same?
- no, the STEP model has a bug in it
- stepjava was the fix for that bug
- stepjava also removed the MATLAB dependency
do we need to reprocess the original STEP model?
- spatial tests would likely fail, but temporal tests could be interesting
- buggy territory
- users that are most interested in the original model would be NZ or USGS
- step and stepjava are not consistent with each other across the two models
- reprocessing original step model not a priority
alarm-based forecasts are not a high priority, but someone went through the trouble of producing them. we should aim to include them, at lower priority. would be nice for curation and to put them out there, but they won't make it into the first paper.
which evaluations are not important?
- R test can be turned off
- max's priorities (listed in order):
- first:
- N, L, CL, M, S (classic RELM tests)
- T, W (cumulative)
- second:
- LW, RP, RD, RT, RTT (residual based tests)
- third:
- TX, WX comparisons applied across forecast groups for bayesian tests
- NSIM compares the observed number of events to the simulated number; can be applied to UCERF3-ETAS etc.
- 16.4 and 16.7 are experimental versions of what we envision for CSEP2, and are lower priority
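As a concrete illustration of the NSIM idea, the sketch below compares an observed event count against the empirical distribution of counts from simulated catalogs. This is a minimal, hypothetical implementation, not the CSEP code; the function name and the quantile conventions are assumptions for illustration.

```python
import numpy as np

def nsim_quantiles(observed_count, simulated_counts):
    """Compare an observed event count to counts from simulated catalogs
    (e.g. UCERF3-ETAS realizations).

    Returns the fractions of simulations with counts >= and <= the
    observation; a small value of either suggests the forecast's rate
    distribution is inconsistent with the data.
    """
    sims = np.asarray(simulated_counts)
    delta_1 = np.mean(sims >= observed_count)  # small => forecast rate too low
    delta_2 = np.mean(sims <= observed_count)  # small => forecast rate too high
    return float(delta_1), float(delta_2)

# toy example: 10,000 simulated catalogs with counts around 12 events
rng = np.random.default_rng(0)
sim_counts = rng.poisson(lam=12, size=10_000)
d1, d2 = nsim_quantiles(10, sim_counts)
```

The same comparison applies to any simulation-based forecast that produces a distribution of event counts, which is why it generalizes beyond grid-based Poisson tests.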
paper ideas:
- comparisons: is ETASV1.1 better than ETAS?
- are new models better than old models?
- should we include smaller earthquakes?
- are hybrid models better than their components?
- does GSF_ANISO outperform GSF_ISO?
- observed sequences
- el mayor
- swarms around salton trough
update from leadership retreat workshop plans:
- a 1-day CSEP workshop on models was funded
- Saturday before the SCEC annual meeting
- Look at tectonic differences in the predictive skill of these models
- requires recomputing information gains across tectonic regions
- a joint paleo-data/CISM workshop is difficult to hold at the annual meeting, so we must find another time: possibly join it with the UCERF workshop being planned for Oct/Nov
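Recomputing information gains across tectonic regions, as noted above, can be sketched as follows. This assumes the usual per-earthquake information gain IG = (LL_A - LL_B) / N; the function names and region-tagging scheme are illustrative assumptions, not the CSEP implementation.

```python
import numpy as np

def information_gain_per_event(loglik_a, loglik_b, n_events):
    """Mean per-earthquake information gain of model A over model B:
    IG = (LL_A - LL_B) / N."""
    return (loglik_a - loglik_b) / n_events

def regional_information_gain(ll_a_per_event, ll_b_per_event, regions):
    """Aggregate per-event log-likelihood differences separately for
    each tectonic-region tag, rather than over the whole test region."""
    ll_a = np.asarray(ll_a_per_event)
    ll_b = np.asarray(ll_b_per_event)
    regions = np.asarray(regions)
    return {str(r): float(np.mean(ll_a[regions == r] - ll_b[regions == r]))
            for r in np.unique(regions)}
```

Keeping per-event log-likelihoods tagged by region makes the regional re-aggregation cheap: only the grouping step changes, not the likelihood computation itself.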
reprocessing comments
- need to cross-verify evaluations with our own codes
- understand where the gaps exist and from when (currently working on this)
more discussion of CSEP 2.0:
- core design principle is independent prospective testing
- USGS-centric and impactful, but a universally useful platform
- how do we view ourselves as a service provider for the USGS?
- is there value in csep to run tests?
- tests can be run on external forecasts, so CSEP's role is to advise which tests should be run
- does this fit within the guidelines?
- earthquake fault association problem useful for a range of models
- computational decisions need to be made regarding the design philosophy
- turn off the daily grind of the csep system
- would require some new developments of the CSEP system
- do we envision running fully automated tests in CSEP2?
- catalog versioning
- implementation should be such that others could be run offline
- the added predictive skill that faults bring to the ETAS model: U3ETASNoFault vs U3ETASWithFault
- need a more thorough process for decision making; a set of more fleshed-out proposals
- co-developed by ourselves and specific modelers
- request to ned and kevin to develop mini-proposals for consideration within CSEP
- simulation-based forecasts of epicenters
- users care about hazard: not epicenters
- faults are important for understanding hazard
- want to avoid focusing on expensive models that users don't care about
- need focused discussion between csep group and usgs
- csep call with andy, nick v., morgan, ned, kevin to discuss specifics during offweek
- send out poll of dates for this call, draft email to get buy-in from modelers
- scientific questions, test statistics, data sets, retrospective tests/experiments
- draft email and send to max
next group call: 06.27