CSEP Workshop 09-08-2018


Workshop goals

  • CSEP needs an overhaul
  • Develop concrete plans to start science/software development on CSEP2
  • Driven by new science needs and new models
  • Plans for coordinating developer resources


Workshop Outcomes

  • Science priorities and how software supports this
  • New experiments:
    • Testing USGS OEF models
    • Forecasts of finite-ruptures
    • Want to develop documents for selected new CSEP2 experiments to drive CSEP2 software development
  • Prototype new experiments!


General Thoughts [Vidale]

  • Progress on Basic Science
  • Concrete demonstration of progress


USGS Perspectives on CSEP2.0

  • Rigorous testing is an essential aspect of developing and deploying all manner of earthquake forecast products
  • High priorities:
    • UCERF3
    • USGS National Seismic Hazard model
    • Operational Aftershock Earthquake Forecasting
    • Improved aftershock forecasting
  • CSEP impacts on USGS Hazard assessments
    • Helmstetter’s adaptive smoothing in UCERF3
    • S-test for smoothing parameters
  • USGS Aftershock forecasting
    • Expanding the suite of models to better represent epistemic uncertainty
    • Developing pathway for new methods
    • Improving reliability and skill
    • New Models must be approved by NEPEC
  • Different models give different outputs
    • Users want spatial forecasts and hazard
  • ETAS vs. R&J
    • ETAS helps capture the variability of the real world, an advantage over R&J
  • Testing to hone models
    • R&J: hone parameters and choices (sequence-specific, Bayesian)
    • FastETAS: applying smoothing to the spatial component
  • Testing for societal acceptance (how can we get society to accept the results)
  • Testing simulated catalog forecasts
    • Projections of event sets onto various PDFs, e.g. inter-event time (see the sketch after this list)
    • Could be mapped to ground-motion space
  • Retrospective testing is very important
  • Internal or External or Independent
  • USGS Models:
    • NSHM
    • 1-year induced seismicity forecasts
    • R&J Operational Earthquake Forecasts
    • FastETAS
    • UCERF3
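
A minimal sketch of the "projection of event sets" idea above, assuming a simulated forecast is delivered as a list of synthetic catalogs, each just an array of origin times in days; the function name and data layout are illustrative assumptions, not part of any existing CSEP code:

  import numpy as np

  def interevent_time_pdf(synthetic_catalogs, bins):
      """Project a set of synthetic catalogs onto an inter-event-time PDF.

      synthetic_catalogs : list of 1-D arrays of event origin times (days),
                           one array per simulated catalog.
      bins               : bin edges (days) for the empirical PDF.
      """
      samples = []
      for times in synthetic_catalogs:
          times = np.sort(np.asarray(times, dtype=float))
          if times.size > 1:
              samples.append(np.diff(times))   # inter-event times for this catalog
      pooled = np.concatenate(samples) if samples else np.array([])
      # density=True normalizes the histogram so it integrates to 1 (empirical PDF)
      pdf, _ = np.histogram(pooled, bins=bins, density=True)
      return pdf

  # Placeholder example: 1000 one-year synthetic catalogs from a Poisson process
  rng = np.random.default_rng(0)
  catalogs = [np.sort(rng.uniform(0.0, 365.0, size=rng.poisson(20))) for _ in range(1000)]
  pdf = interevent_time_pdf(catalogs, bins=np.linspace(0.0, 60.0, 31))

The same projection could be applied to counts per space-magnitude bin, or pushed further into ground-motion space as noted above.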


CSEP2 development strategy needs to be coordinated across the international community

  • Modularize toolbox
  • Unit tests for all CSEP primitives
  • Acceptance tests for all CSEP
  • Thorough documentation
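
To make the unit-testing point concrete, a minimal sketch of a test for one hypothetical primitive; the function bin_magnitudes and its defaults are illustrative assumptions, not an existing CSEP API:

  import unittest
  import numpy as np

  def bin_magnitudes(magnitudes, m_min=4.95, dm=0.1, n_bins=41):
      """Hypothetical CSEP primitive: count events per magnitude bin.

      Events below m_min are dropped; events beyond the last edge fall
      into the final (overflow) bin.
      """
      mags = np.asarray(magnitudes, dtype=float)
      mags = mags[mags >= m_min]
      idx = np.minimum(((mags - m_min) / dm).astype(int), n_bins - 1)
      return np.bincount(idx, minlength=n_bins)

  class TestBinMagnitudes(unittest.TestCase):
      def test_total_count_preserved(self):
          counts = bin_magnitudes([5.0, 5.05, 6.3, 8.4])
          self.assertEqual(counts.sum(), 4)

      def test_events_below_threshold_dropped(self):
          counts = bin_magnitudes([4.0, 4.9, 5.0])
          self.assertEqual(counts.sum(), 1)

  if __name__ == "__main__":
      unittest.main()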


R&J Forecasts at USGS

  • Linked to a USGS website in beta; launch currently planned for October
  • Workflow:

1. Compute R&J using GUI tool
2. Forecast sent from GUI to NEIC
3. NEIC uses forecast to populate pre-written web template
4. Time-to-solution goal is ~2 hours
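
For reference, step 1 fits the Reasenberg & Jones (1989) aftershock rate model; a sketch of its usual form (the parameter symbols follow common usage and are not taken from the GUI tool itself):

  \lambda(t, M) = 10^{\,a + b\,(M_m - M)} \, (t + c)^{-p}

where \lambda is the rate of aftershocks of magnitude \geq M at time t after a mainshock of magnitude M_m, and a, b, c, p are the fitted parameters (generic or sequence-specific).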

  • Working towards automating this process and including more models
  • Why R&J?
    • Decades of use in California
    • Approved by NEPEC
  • Testing Goals
    • Transparency: demonstrate that what we are providing to the public is being evaluated
    • Baseline: new models can be compared to R&J to measure improvement
    • Testing the tests
  • Testing challenges:
    • Sequence based, rather than grid-based
    • EQ probability not necessarily Poissonian
    • Updating in real-time, overlapping windows
    • Spatial forecasts not aligned on grid, and not aligned in time
    • Modeling epistemic and aleatory uncertainty
    • Requires looking at simulation based tests
    • Overlapping, non-independent forecasts
    • External forecasts
      • Test what the public is actually seeing
    • Run within CSEP
      • Difficult to duplicate
    • Value in publicly testing forecasts within CSEP
  • Testing requirements:
    • Non-grid-based forecasts
    • Simulations
    • External forecasts

Experiments for FastETAS

  • FastETAS will be implemented in USGS system
  • GUI based system that takes ComCat data and produces quick summaries
  • As close to ‘vanilla’ ETAS as possible (see the sketch after this list)
  • Tweaks to regular ETAS:
    • Masking applied to prevent super-critical range
    • Independent mainshock productivity
    • Mc-dependent productivity for Generic model
    • Time dependent Mc
    • Spatial forecast: physics-motivated dressed kernel
  • Can transition from spatial-rate to time-dependent spatial hazard
  • Evaluating forecasts
    • Information gain? Not that useful.
    • Things to test:
      • Misfit, over/under-predictions, surprise rate
      • Reliability horizon
      • Spatial performance
      • Shaking forecast
      • Generic vs. Bayesian
      • Value of additional parameters
      • Other tweaks
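
For orientation, a sketch of the ‘vanilla’ space-time ETAS conditional intensity that the tweaks listed above modify (notation follows common usage in Ogata-style formulations and is not specific to the FastETAS implementation):

  \lambda(t, x, y \mid H_t) = \mu(x, y) + \sum_{i:\, t_i < t} k(M_i) \, g(t - t_i) \, f(x - x_i, y - y_i; M_i)

with productivity k(M) \propto 10^{\alpha (M - M_c)}, Omori-Utsu temporal kernel g(t) \propto (t + c)^{-p}, and a normalized spatial kernel f. The masking of the super-critical range, the independent mainshock productivity, and the ‘dressed’ spatial kernel above are modifications to k and f.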

Testing UCERF3

  • Requirements:
    • Non-GR MFDs
    • Bulge of ~Mw 6.7 earthquakes
    • Fault participation
    • MFD near faults has 1st order impact on conditional triggering probabilities
    • Elastic rebound needed
    • U3ETAS is simulation based
  • Questions
    • Are near-fault MFDs non-GR?
    • Are conditional triggering probabilities really larger near faults?
    • Is elastic-rebound needed?
    • Dealing with epistemic uncertainties?
    • Bulge in regional MFDs?
    • Testing simulation-based forecasts?
    • Is U3ETAS really more useful?
  • Looking forward
    • Implement CSEP1-style tests for simulation-based forecasts (a sketch follows this list)
    • Apply retrospectively to U3ETAS, U3NoFaultETAS, FastETAS
    • Apply above tests prospectively
    • Implement “Turing-style” tests
    • Test fault or cell participation
    • Epistemic Uncertainty
    • Test relative model usefulness
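
A minimal sketch of a CSEP1-style N-test adapted to simulation-based forecasts such as U3ETAS: the Poisson assumption is replaced by the empirical distribution of counts over the simulated catalogs (the names and data layout here are assumptions, not existing CSEP code):

  import numpy as np

  def simulation_n_test(observed_count, simulated_counts):
      """N-test analogue for catalog-based forecasts.

      observed_count   : number of target earthquakes in the testing window.
      simulated_counts : 1-D array with the event count of each simulated catalog.
      Returns (delta1, delta2), the empirical analogues of the Poisson N-test
      quantiles: P(N >= observed) and P(N <= observed) under the simulations.
      """
      sims = np.asarray(simulated_counts)
      delta1 = np.mean(sims >= observed_count)
      delta2 = np.mean(sims <= observed_count)
      return delta1, delta2

  # Placeholder example with over-dispersed simulated counts
  rng = np.random.default_rng(1)
  sim_counts = rng.negative_binomial(5, 0.2, size=10000)
  d1, d2 = simulation_n_test(25, sim_counts)   # reject if either tail probability is tiny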

Testing Stochastic Event Sets

  • Advantages: greater flexibility including tests incorporating space-time correlations.
  • Challenges: May still need gridding of the synthetic catalogs for some applications.
  • Consistency tests of simulation-based models
    • A wide range of consistency tests is possible
    • Example: Clustering
      • Compare a statistic computed from the real catalog with the distribution of the same statistic from the synthetic catalogs
      • Each time period results in a p-value; the combined distribution of p-values can be compared to the uniform distribution (see the sketch after this list)
    • Example: For a distribution (rather than a single statistic)
      • Use K-S statistics to compare the synthetic distribution with the catalog distribution
  • Simulation statistics to do similar job to existing CSEP1 consistency tests
    • Mimic N-test, M-test, S-test
    • Inter-event time distribution
    • Inter-event distance distribution
    • Total earthquake rate distribution
    • Information gain is important
  • Synthetic catalog updating issues
    • Length of simulated catalogs
    • Simulate for the longest period
    • Update intervals
  • Key decisions
    • Data management is the most important part of the system
    • Updating periods and magnitudes
    • How much effort should CSEP put into testing models of small-earthquake occurrence?
    • Scientific vs operational forecasting requirements
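
A minimal sketch of the clustering-style consistency test described above: one p-value per testing period from the simulated distribution of a statistic, then a comparison of the collected p-values against the uniform distribution (the function names and data layout are assumptions):

  import numpy as np
  from scipy.stats import kstest

  def empirical_p_value(observed_stat, simulated_stats):
      """P(statistic >= observed) under the forecast's own simulations."""
      sims = np.asarray(simulated_stats)
      return (np.sum(sims >= observed_stat) + 1) / (sims.size + 1)

  def combined_uniformity_test(observed_stats, simulated_stats_per_period):
      """One p-value per period, then a K-S comparison of the p-values
      against the uniform distribution on [0, 1]."""
      p_values = [empirical_p_value(obs, sims)
                  for obs, sims in zip(observed_stats, simulated_stats_per_period)]
      return kstest(p_values, "uniform")

  # Placeholder example: 50 testing periods, 1000 simulations each
  rng = np.random.default_rng(2)
  sim_stats = [rng.gamma(2.0, 1.0, size=1000) for _ in range(50)]
  obs_stats = rng.gamma(2.0, 1.0, size=50)    # the "real catalog" statistic per period
  result = combined_uniformity_test(obs_stats, sim_stats)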

Future of the NZ Testing Center

  • Uncertain future
  • Difficulty maintaining the center over the last 10 years, due to both workload and a lack of appropriate skills
  • Significant GeoNet catalogue data problems
  • How do we best align with other international CSEP work? Don’t want to split efforts within CSEP
  • CSEP Culture
    • Perception of CSEP’s relevance to those outside of CSEP; need to expand the CSEP community
  • Need more tests targeting the end-users of the models
  • Better communication of the results. How can we include others?
  • Take forecasts into hazard and loss space.
  • Hybridisation module
    • Module to create hybrid forecasts from current CSEP models (a sketch follows this list)
  • Aim to broaden involvement of science community in CSEP activities.
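
A minimal sketch of what a hybridisation module might do, assuming hybrids are formed as weighted linear combinations of gridded rate forecasts; the weighting scheme and names here are illustrative assumptions, not the GNS implementation:

  import numpy as np

  def hybrid_forecast(rate_grids, weights):
      """Combine gridded rate forecasts into a single hybrid forecast.

      rate_grids : list of 2-D arrays of expected rates per cell, all models
                   on the same grid and magnitude range.
      weights    : non-negative weights, one per model (normalized internally).
      """
      w = np.asarray(weights, dtype=float)
      w = w / w.sum()
      stacked = np.stack(rate_grids)            # shape: (n_models, ny, nx)
      return np.tensordot(w, stacked, axes=1)   # weighted sum over models

  # Placeholder example: equal-weight hybrid of two constant-rate models
  model_a = np.full((10, 10), 0.02)
  model_b = np.full((10, 10), 0.06)
  hybrid = hybrid_forecast([model_a, model_b], [0.5, 0.5])   # every cell -> 0.04

Multiplicative or likelihood-optimized weightings are also possible; the linear form is only the simplest case.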


Perspectives from CSEP Japan

  • Testing different model classes
  • Looking at finite-fault ETAS model
  • Hypocentral ETAS model (3D volume)
  • Combine 3D+finite-fault model
  • ETAS+Focal mechanism


Perspectives from China

  • CSEP software was installed at the Institute of Geophysics
  • Definitions for CSEP Experiments
  • The software did not work, so they needed to create their own homebrew version.
  • CN-CSEP needs international collaboration
  • New Forecasting region for CSEP2.0


Perspectives from Italy

  • Poisson assumption in space and time does not hold
  • Models need to include epistemic uncertainty
  • Discretization on a grid might need to stay in order to engage with many different models.
  • Should make changes that affect CSEP 1.0 the least
  • Need to move toward synthetic catalogs
  • Synthetic catalogs can be used to generate a test distribution to replace the Poisson distribution