CSEP2 Community Responses
From SCECpedia
Revision as of 23:57, 26 July 2018
Questionnaire
- What are the scientific questions to be answered by experiments involving your model?
- Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
- Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
- Required data inputs to compute forecast?
- Authoritative data source for testing forecasts?
- Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
- Specific ideas for Retrospective/Prospective testing? Timescales?
- Community involvement? competing models? New models/extant CSEP models?
Responses
These responses will act as living documents that we can refine as time progresses.
USGS Aftershock Forecasts (R&J and ETAS)
Submitted by: Jeanne Hardebeck
This response covers the planned USGS routine aftershock forecasting. We will be rolling out Reasenberg & Jones forecasts in late August 2018, with ETAS forecasts to follow. Forecasts will be for aftershocks following M>=5 earthquakes, and smaller events of interest, within the US.
- What are the scientific questions to be answered by experiments involving your model?
- These forecasts are meant to inform the public and decision-makers, not to address any scientific questions. As we evolve from Reasenberg & Jones to ETAS, we will be able to test these two models against each other.
- Model software requirements
- Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
- Forecasts will be computed externally to CSEP. Currently on-demand, but in the process of automating.
- Object of forecast
- Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
- The forecast is cast as a PDF of the number of event hypocenters within a spatial region, time period, and magnitude range. There is no expectation that this PDF is any particular kind of distribution.
- The ETAS forecasts will include something similar to a set of simulated catalogs, but not exactly. The forecasts are based on temporal simulated event sets, while a static spatial kernel is used to spatially distribute the event rate. So there are temporal simulated catalogs. Spatial-temporal simulated catalogs could be created using the spatial kernel, but wouldn't have the full level of spatial correlations of a true ETAS simulation.
- Forecast horizons will range from 1 day to 1 year, and updating will occur frequently. Therefore, many forecasts will overlap, making them non-independent.
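As an illustration of how such a nonparametric count forecast could be scored, the sketch below computes the log-likelihood of an observed aftershock count under a tabulated forecast PMF. The numbers and function name are hypothetical, not part of the USGS implementation; the point is that no Poisson (or other parametric) assumption is needed.

```python
import numpy as np

def count_log_likelihood(forecast_pmf, observed_count):
    """Log-likelihood of an observed event count under a tabulated
    forecast PMF, where forecast_pmf[k] = P(N = k) events occur in the
    forecast's space/time/magnitude window. No parametric form assumed."""
    pmf = np.asarray(forecast_pmf, dtype=float)
    pmf = pmf / pmf.sum()               # normalize defensively
    if observed_count >= len(pmf) or pmf[observed_count] == 0.0:
        return -np.inf                  # observation outside forecast support
    return float(np.log(pmf[observed_count]))

# Hypothetical forecast: PMF over 0..5 aftershocks above the target
# magnitude in a one-week window; 2 events were observed.
pmf = [0.40, 0.30, 0.15, 0.09, 0.04, 0.02]
score = count_log_likelihood(pmf, 2)    # log P(N = 2)
```

Scores like this can be summed over non-overlapping forecast windows; the overlap of frequently updated forecasts mentioned above is exactly why window selection matters for any such test.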
- Required data inputs to compute forecast?
- Forecasts will be computed externally to CSEP.
- Authoritative data source for testing forecasts?
- ComCat.
- Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
- ??
- Specific ideas for Retrospective/Prospective testing? Timescales?
- ??
- Community involvement? competing models? New models/extant CSEP models?
- ??
UCERF3
Awaiting response.
New Zealand Forecasts (STEPJAVA)
Submitted by: Matt Gerstenberger
- What are the scientific questions to be answered by experiments involving your model?
- Should we continue exploring this model?
- Does the model provide additional information over any other model?
- How large is the variability in performance across magnitude, space, and time?
- How does the model perform in low-seismicity vs. high-seismicity regions?
- Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
- Internal to CSEP; computing/memory/storage requirements unknown; either automated or on-demand; code is not versioned; currently operating in two CSEP testing centres, with a similar model in a third.
- Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
- Hypocenters or epicenters; cast as rates/probabilities; any forecast horizon or update interval.
- Required data inputs to compute forecast?
- Standard earthquake catalogue: origin time, latitude/longitude/depth, magnitude.
- Authoritative data source for testing forecasts?
- Standard CSEP authoritative data sources.
- Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
- Standard CSEP tests and metrics.
- Specific ideas for Retrospective/Prospective testing? Timescales?
- Understanding spatial variability across models, both within a "sequence" and across regions; timescales from weeks to multiple years/decades.
- Exploring retrospective testing to understand variability in performance.
- Community involvement? competing models? New models/extant CSEP models?
- All other short-, medium-, and long-term models.
General Responses
Submitted by: Warner Marzocchi
- What are the scientific questions to be answered by experiments involving your model?
- I think that the most important issue has to remain the forecast; this is mostly what society asks of us. Of course, we may also implement tests for specific scientific hypotheses, but we have to be aware that society needs forecasts and hazard.
- Model software requirements? Will the model be computed internal/external to CSEP? Required computing/memory/storage? Automated/On-demand? Versioned code? Status of code?
- I think it would be good if every model provided simulated catalogs from which we could calculate any statistics we want. This would allow us to overcome many shortcomings of the current tests, such as the independence and Poisson assumptions, and would let us include epistemic uncertainty.
- I would be flexible about where the model is stored. I prefer inside CSEP, but I would allow external models to be part of the game if that allows us to increase the number of CSEP participants.
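The appeal of simulated catalogs is that consistency checks reduce to counting over simulations. A minimal sketch (hypothetical simulation counts, not an established CSEP test implementation): compare the observed event count to the empirical distribution of counts across simulated catalogs, with no Poisson assumption anywhere.

```python
import numpy as np

def empirical_quantile(simulated_counts, observed_count):
    """Fraction of simulated catalogs with at most the observed number
    of events. Values near 0 or 1 flag inconsistency between the
    forecast and the observation; no parametric form is assumed."""
    sims = np.asarray(simulated_counts)
    return float(np.mean(sims <= observed_count))

# Hypothetical: event counts drawn from 10,000 simulated catalogs
# (an overdispersed, clearly non-Poisson count distribution).
rng = np.random.default_rng(42)
sim_counts = rng.negative_binomial(n=5, p=0.5, size=10_000)
delta = empirical_quantile(sim_counts, observed_count=4)
consistent = 0.025 < delta < 0.975   # two-sided check at the 95% level
```

The same pattern works for any statistic computable from a catalog (largest magnitude, spatial spread, inter-event times), which is what makes catalog-based forecasts so flexible to test.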
- Object of forecast? Epicenters, Hypocenters, Faults? Cast as rates/probabilities, Sets of simulated catalogs? Forecast horizons, updates?
- Simulated catalogs
- Required data inputs to compute forecast?
- Seismic catalog
- Authoritative data source for testing forecasts?
- The same as in CSEP1
- Available/unavailable tests, metrics etc. If unavailable, what scientific/computational developments need to occur to implement these tests?
- I would implement more scoring systems, as the weather forecasting community is doing.
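One widely used score from weather verification is the Brier score for binary event probabilities; a minimal sketch (the probabilities and outcomes below are made up for illustration):

```python
import numpy as np

def brier_score(forecast_probs, outcomes):
    """Mean squared difference between forecast probabilities and
    binary outcomes (1 = event occurred, 0 = it did not).
    Lower is better; 0 is a perfect deterministic forecast."""
    p = np.asarray(forecast_probs, dtype=float)
    o = np.asarray(outcomes, dtype=float)
    return float(np.mean((p - o) ** 2))

# Hypothetical weekly probabilities of a target-magnitude event,
# scored against what actually happened in each week.
score = brier_score([0.1, 0.8, 0.3], [0, 1, 0])
```

Proper scores of this kind reward honest probabilities, which is why the weather community leans on them for routine forecast evaluation.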
- Specific ideas for Retrospective/Prospective testing? Timescales?
- The short term is important.
- Community involvement? competing models? New models/extant CSEP models?
- I would definitely include some procedures to create ensemble models using the scores.
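A sketch of what a score-based ensemble could look like (hypothetical rate grids and scores; weighting models by exp(log-likelihood score), akin to Bayesian model averaging, is one choice among several and is not a procedure prescribed here):

```python
import numpy as np

def score_weighted_ensemble(rate_grids, log_scores):
    """Weighted average of per-model gridded rate forecasts, with
    weights proportional to exp(log-likelihood score), so that
    better-scoring models contribute more to the ensemble."""
    scores = np.asarray(log_scores, dtype=float)
    w = np.exp(scores - scores.max())   # subtract max for numerical stability
    w /= w.sum()                        # weights sum to 1
    grids = np.asarray(rate_grids, dtype=float)
    # Contract the model axis: sum_i w[i] * grids[i]
    return np.tensordot(w, grids, axes=1)

# Hypothetical: two 2x2 rate grids with equal past log-likelihood
# scores, so each model receives weight 0.5.
grids = [[[1.0, 2.0], [3.0, 4.0]],
         [[2.0, 2.0], [2.0, 2.0]]]
ensemble = score_weighted_ensemble(grids, log_scores=[-10.0, -10.0])
```

Because the weights come from prospective scores, the ensemble automatically shifts toward models that have actually performed well, which is the sort of procedure suggested above.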