CSEP2 Community Responses

CSEP Working Group Home Page

Questionnaire

  • What are the scientific questions to be answered by experiments involving your model?
  • Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
  • Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
  • Required data inputs to compute forecast?
  • Authoritative data source for testing forecasts?
  • Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
  • Specific ideas for Retrospective/Prospective testing? Timescales?
  • Community involvement? competing models? New models/extant CSEP models?

Responses

These responses will act as living documents that we can refine as time progresses.

UCERF3-ETAS

Submitted by: Ned Field

What are the scientific questions to be answered by experiments involving your model?
Ideally, whether elastic rebound is really needed when adding spatiotemporal clustering to fault-based models, whether ETAS is an adequate statistical proxy for large-event clustering, and whether large triggered events can nucleate well within the rupture area of the triggering event (or only around the edges of the latter).
The practical question is whether including faults adds value.
Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
U3ETAS presently requires high-performance computing. Each simulation (synthetic catalog) takes from 1 to 20 minutes to generate, and we need some number of these to make robust statistical inferences (with the actual number depending on the metric of interest). Results published to date have utilized about 10,000 simulations. Kevin is configuring things so that anyone can run a set of simulations (so this could be done either internally or externally to CSEP). Nothing will be automated anytime soon.
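As a rough, hypothetical illustration of why so many catalogs are needed (not the actual U3ETAS workflow), the Monte Carlo standard error of a probability estimated from n simulated catalogs shrinks only as 1/sqrt(n):

  # Sketch: Monte Carlo standard error of a probability estimated from n
  # simulated catalogs; the example probability is illustrative.
  import math

  def mc_standard_error(p, n):
      """Standard error of a probability p estimated from n independent catalogs."""
      return math.sqrt(p * (1.0 - p) / n)

  for n in (100, 1000, 10000):
      p = 0.05  # e.g., probability of some large-event outcome in the window
      print(f"n={n:>6}: p={p:.3f} +/- {mc_standard_error(p, n):.4f}")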
Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
Results are some number of simulated catalogs, with finite fault surfaces for larger events. The start time and duration are flexible.
Required data inputs to compute forecast?
All potentially influential M≥2.5 events before the start time, including finite surfaces for larger earthquakes (getting the latter in real time is still to be dealt with).
Authoritative data source for testing forecasts?
With respect to the M≥2.5 events and finite rupture surfaces, ComCat? Real-time catalog completeness may still be an issue.
Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
We can generate pretty much anything that can be computed for real earthquakes, including the Page and van der Elst Turing tests (Page and van der Elst, "Turing-Style Tests for UCERF3 Synthetic Catalogs," BSSA 108(2), 729-741, 2018).
Specific ideas for Retrospective/Prospective testing? Timescales?
Perhaps we should start with time periods following large historic events in CA (e.g., Northridge, Landers, etc.). We might want to test quiet periods as well. The big task is being able to deal with simulation-based forecasts.
Community involvement? competing models? New models/extant CSEP models?

Keep in mind that we also have a no-faults version of U3ETAS, and there are all kinds of improvements that could be made to these models (e.g., aleatory variability in productivity parameters).

Here is our present to-do list with respect to U3ETAS:

1) train others to run models on HPC and generate plots

2) fetch recent M≥2.5 seismicity data from ComCat and stitch it together with the U3 catalog (which ends around 2014); see the sketch after this list.

3) associate any large CA events to U3 fault sections (probabilistically because there will never be a perfect fit).

4) deal with catalog incompleteness?
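For item 2, one possible route is sketched below, assuming ObsPy's FDSN client pointed at the USGS/ComCat service; the region box and dates are illustrative, and the actual stitching with the U3 catalog is not shown:

  # Sketch: fetch recent M>=2.5 California seismicity from ComCat via
  # ObsPy's FDSN client. Illustrative only; long time spans would need to
  # be requested in chunks, and deduplication/stitching with the U3
  # catalog is not shown.
  from obspy import UTCDateTime
  from obspy.clients.fdsn import Client

  client = Client("USGS")
  events = client.get_events(
      starttime=UTCDateTime("2017-01-01"),
      endtime=UTCDateTime("2018-01-01"),
      minmagnitude=2.5,
      minlatitude=32.0, maxlatitude=42.0,
      minlongitude=-125.0, maxlongitude=-114.0,
  )
  print(len(events), "events retrieved")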


In terms of testing usefulness, here is what I’d like to do at some point:

We know that the rate of little events varies significantly with time, and that the probability of big events correlates with the rate of little events. But there are uncertainties in what will happen looking forward, so the question is whether our forecasts provide useful information given these uncertainties. Take any metric we are interested in (e.g., rate of M≥2.5 events or statewide financial losses); how do actual fluctuations compare with, say, the 95% confidence bounds from forecasts?

Starting from whenever the M≥2.5 catalog is complete, I'd like to make forecasts at increments in time moving forward (monthly or yearly, or at some trigger points such as a large event or before particularly quiet periods), and then plot the 95% confidence bounds (or whatever) of the forecast against what actually happens. We can also project this analysis into the future by randomly choosing one of the forecasts as what "actually" happens. Again, the point is to test whether our forecasts have value given the aleatory uncertainties looking forward. Of course the answer will depend on the hazard or risk metric one cares about (so we will need to choose some to test), and the answer will also presumably vary depending on what's happened recently (e.g., our forecast will presumably have some value after large events such as HayWired, but what about following particularly quiet times? Could CEA lower reinsurance levels during the latter?). This analysis will require significant HPC resources.
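A minimal sketch of that comparison, assuming the forecast is delivered as a set of simulated catalogs and the metric is the count of M≥2.5 events in a window (the catalog format and names are hypothetical):

  # Sketch: does the observed value of a metric fall within the 95% bounds
  # of a simulation-based forecast? Catalog format is hypothetical: each
  # catalog is a list of events, each event a dict with "time" and "mag".
  import numpy as np

  def count_m25(catalog, t_start, t_end):
      """Count M>=2.5 events with t_start <= time < t_end."""
      return sum(1 for ev in catalog
                 if ev["mag"] >= 2.5 and t_start <= ev["time"] < t_end)

  def forecast_bounds(simulated_catalogs, metric):
      """2.5th and 97.5th percentiles of a metric over simulated catalogs."""
      values = [metric(cat) for cat in simulated_catalogs]
      return np.percentile(values, [2.5, 97.5])

  def within_bounds(observed_catalog, simulated_catalogs, metric):
      low, high = forecast_bounds(simulated_catalogs, metric)
      obs = metric(observed_catalog)
      return low <= obs <= high, (obs, low, high)

  # usage: within_bounds(obs_cat, sim_cats, lambda c: count_m25(c, 0.0, 365.0))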

USGS Aftershock Forecasts (R&J and ETAS)

Submitted by: Jeanne Hardebeck

This response covers the planned USGS routine aftershock forecasting. We will be rolling out Reasenberg & Jones forecasts in late August 2018, with ETAS forecasts to follow. Forecasts will be for aftershocks following M≥5 earthquakes, and smaller events of interest, within the US.

What are the scientific questions to be answered by experiments involving your model?
These forecasts are meant to inform the public and decision-makers, not to address any scientific questions. As we evolve from Reasenberg & Jones to ETAS, we will be able to test these two models against each other.
Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
Forecasts will be computed externally to CSEP. Currently on-demand, but in the process of automating.
Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
The forecast is cast as a PDF of the number of event hypocenters within a spatial region, time period, and magnitude range. There is no expectation that this PDF is any particular kind of distribution.
The ETAS forecasts will include something similar to a set of simulated catalogs, but not exactly. The forecasts are based on temporal simulated event sets, while a static spatial kernel is used to spatially distribute the event rate. So there are temporal simulated catalogs. Spatial-temporal simulated catalogs could be created using the spatial kernel, but wouldn't have the full level of spatial correlations of a true ETAS simulation.
Forecast horizons will range from 1 day to 1 year, and updating will occur frequently. Therefore, many forecasts will overlap, making them non-independent.
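A rough sketch of how spatial-temporal catalogs could be assembled from the temporal simulations plus a static spatial kernel, as described above (the grid cells and kernel weights here are hypothetical placeholders, and the result lacks the full spatial correlations of a true ETAS run):

  # Sketch: attach locations to temporally simulated events by sampling a
  # static spatial kernel defined on a set of cells. Illustrative only.
  import numpy as np

  rng = np.random.default_rng(42)

  def sample_locations(event_times, cell_centers, kernel_weights):
      """Assign each simulated event time a cell drawn from the static kernel."""
      p = np.asarray(kernel_weights, dtype=float)
      p /= p.sum()                      # normalize the spatial kernel
      idx = rng.choice(len(cell_centers), size=len(event_times), p=p)
      return [(t, cell_centers[i]) for t, i in zip(event_times, idx)]

  # toy example: 3 cells, one temporal simulation of 5 event times (days)
  cells = [(34.0, -118.0), (34.1, -118.0), (34.1, -118.1)]
  weights = [0.7, 0.2, 0.1]
  times = [0.3, 1.2, 1.25, 4.8, 6.0]
  print(sample_locations(times, cells, weights))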
Required data inputs to compute forecast?
Forecasts will be computed externally to CSEP.
Authoritative data source for testing forecasts?
ComCat.
Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
 ??
Specific ideas for Retrospective/Prospective testing? Timescales?
 ??
Community involvement? competing models? New models/extant CSEP models?
 ??

New Zealand Forecasts (STEPJAVA)

Submitted by: Matt Gerstenberger

What are the scientific questions to be answered by experiments involving your model?
Should we continue exploring this model?
Does the model provide additional information over any other model?
How much variability is there in performance across magnitude, space, and time?
How does the model perform in low-seismicity vs. high-seismicity regions?
Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
internal; no idea; either; no; operating in two CSEP testing centres, with something similar in a third
Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
hypocenters or epicenters; rates/probs; any forecast horizon or update
Required data inputs to compute forecast?
standard eq catalogue: time, lat/lon/depth, mag
Authoritative data source for testing forecasts?
standard CSEP
Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
standard CSEP
Specific ideas for Retrospective/Prospective testing? Timescales?
understanding spatial variability across models, both within a "sequence" and across regions; weeks to multiple years/decades
exploring retrospective testing to understand variability in performance
Community involvement? competing models? New models/extant CSEP models?
all other short-, medium-, and long-term models.

Global Forecasts (GEAR1)

Submitted by: Dave Jackson

What are the scientific questions to be answered by experiments involving your model?
Are larger earthquakes (M7.5+) occurring in proportion to the rate of smaller ones (M5.8+)? This would be revealed by constructing the model as in the past, but testing only for M7.5+.
Does surface strain rate represent the occurrence of larger earthquakes before the beginning of the catalog used to construct the forecast? If so, we expect that the success of the forecast would improve if the weight given to surface strain rate is increased for longer test periods.
Can the magnitude distributions of earthquakes over magnitude 5.8 be described in terms of uniform values on 5 tectonic region types, as assumed in the model, or do corner magnitude and b-value differ significantly within each tectonic type? For example, do all subduction zones have the same corner magnitude and b-value, or are there significant differences? If the latter, what controls the size distribution?
Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
Each specific implementation of the model is supplied in the form of a matrix, requiring no computation during the test period. Different implementations include different weights given to smoothed seismicity and surface strain rate for specific test periods. The method of model formulation is published.
Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
Epicenters, cast as yearly rates for several magnitude thresholds from 5.8 through 9.0.
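A minimal sketch, with a hypothetical layout rather than the actual GEAR1 format, of how such a gridded yearly-rate forecast might be used to compute the expected number of events above a threshold inside a test region:

  # Sketch: expected yearly count from a gridded rate forecast.
  # Arrays and values are hypothetical, not the actual GEAR1 file format.
  import numpy as np

  def expected_count(lons, lats, rates, region):
      """Sum cell rates (events/yr above a threshold) inside a lon/lat box."""
      lon_min, lon_max, lat_min, lat_max = region
      inside = ((lons >= lon_min) & (lons <= lon_max) &
                (lats >= lat_min) & (lats <= lat_max))
      return rates[inside].sum()

  # toy grid: three 0.1-degree cells with yearly rates of M>=5.8 events
  lons = np.array([120.05, 120.15, 120.25])
  lats = np.array([23.05, 23.05, 23.05])
  rates = np.array([1.2e-3, 8.0e-4, 5.0e-4])
  print(expected_count(lons, lats, rates, region=(120.0, 120.2, 23.0, 23.1)))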
Required data inputs to compute forecast?
None; computation is done by us before the beginning of each test period.
Authoritative data source for testing forecasts?
GSMC (Global Seismic Moment Catalog) for moment magnitude.
ComCat for location and depth; only earthquakes with depth ≤ 70 km are included in the test.
Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
Appropriate tests are the N, M, S, and T tests, plus the conditional S, conditional L, and conditional R tests vs. competing models, as appropriate.
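As one concrete example, a minimal sketch of the Poisson N-test (consistency of the observed event count with the forecast's total expected rate); this is a generic illustration, not the exact CSEP implementation:

  # Sketch: Poisson N-test. Given a forecast total expected count n_fore
  # and an observed count n_obs, compute the two tail probabilities
  # delta1 = P(X >= n_obs) and delta2 = P(X <= n_obs) for X ~ Poisson(n_fore).
  from scipy.stats import poisson

  def n_test(n_fore, n_obs):
      delta1 = 1.0 - poisson.cdf(n_obs - 1, n_fore)  # P(X >= n_obs)
      delta2 = poisson.cdf(n_obs, n_fore)            # P(X <= n_obs)
      return delta1, delta2

  # toy example: forecast expects 10.4 events in the window, 15 were observed
  print(n_test(10.4, 15))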
Specific ideas for Retrospective/Prospective testing? Timescales?
The basic rate model is quite stable, so it can be used for any future testing interval. We envision specific tests on 1-, 5-, 10-, and 30-year intervals, and will prepare forecast models for each choice using specific weights between smoothed seismicity and surface strain rate. The 1-year model can be used prospectively in successive years without adjustment, and the same for 5, 10, and 30 years, unless we request otherwise before the beginning of any given test interval.
Community involvement? competing models? New models/extant CSEP models?
We invite competing models specified on the same global grid (0.1-degree spacing, with grid points at 0.05 deg, etc.). We would love to compare ours with models assuming seismic gaps, stress-dependent or otherwise variable b-values, and corner magnitudes.
We expect that our model would also perform well on regional tests by simply excluding points outside a given region in a test, so we welcome tests of long-term forecasts over regions like North America, China, and Japan.

General Responses

Submitted by: Warner Marzocchi

What are the scientific questions to be answered by experiments involving your model?
I think that the most important issue has to remain the forecast. This is mostly what society asks of us. Of course, we may also implement tests for specific scientific hypotheses, but we have to be aware that society needs forecasts and hazard.
Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
I think it would be good if every model provided simulated catalogs from which we may calculate any statistics we want. This allows us to overcome many shortcomings of the current tests, such as the independence and Poisson assumptions, and it allows epistemic uncertainty to be included; a sketch of one such catalog-based test follows below.
I would be flexible about where the model is stored. I prefer inside CSEP, but I would allow external models to be part of the game if this allows us to increase the number of people involved in CSEP.
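A minimal sketch of the kind of catalog-based test this enables: the Poisson assumption in the number test is replaced by the empirical distribution of counts across the forecast's simulated catalogs (the inputs are illustrative):

  # Sketch: empirical (catalog-based) number test. Instead of assuming a
  # Poisson distribution, use the distribution of event counts across the
  # forecast's simulated catalogs. Illustrative only.
  import numpy as np

  def empirical_number_test(simulated_counts, observed_count):
      """Fraction of simulated catalogs with counts at least / at most the observation."""
      counts = np.asarray(simulated_counts)
      delta1 = np.mean(counts >= observed_count)   # analogue of P(X >= n_obs)
      delta2 = np.mean(counts <= observed_count)   # analogue of P(X <= n_obs)
      return delta1, delta2

  # toy example: overdispersed counts from 10,000 simulated catalogs vs. 15 observed events
  sim_counts = np.random.default_rng(1).negative_binomial(5, 0.33, size=10000)
  print(empirical_number_test(sim_counts, 15))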
Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
Simulated catalogs
Required data inputs to compute forecast?
Seismic catalog
Authoritative data source for testing forecasts?
The same as in CSEP1
Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
I would implement more scoring systems, as the weather forecasting community is doing.
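For instance, a minimal sketch of two scores common in weather forecasting, the Brier score and the log score, applied to per-cell occurrence probabilities (the inputs are hypothetical):

  # Sketch: Brier score and log score for gridded occurrence probabilities.
  # p[i] is the forecast probability of at least one target event in cell i
  # during the test window; o[i] is 1 if one occurred, else 0. Illustrative.
  import numpy as np

  def brier_score(p, o):
      p, o = np.asarray(p, float), np.asarray(o, float)
      return np.mean((p - o) ** 2)

  def log_score(p, o, eps=1e-12):
      p, o = np.clip(np.asarray(p, float), eps, 1 - eps), np.asarray(o, float)
      return np.mean(o * np.log(p) + (1 - o) * np.log(1 - p))

  probs = [0.02, 0.10, 0.50, 0.01]
  outcomes = [0, 0, 1, 0]
  print(brier_score(probs, outcomes), log_score(probs, outcomes))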
Specific ideas for Retrospective/Prospective testing? Timescales?
The short term is important.
Community involvement? competing models? New models/extant CSEP models?
I would definitely include some procedures to create ensemble models using the scores.