CSEP Minutes 06-27-2018

CSEP Working Group Home Page

Participants: M. Werner, D. Rhoades, and W. Savran

Minutes

  • extract daily rates for each forecast and plot them to verify integrity (first step; see the extraction sketch after this list)
  • verify cases where the W, T, and R tests are not counted over the entire time period
  • decipher the timestamp on the evaluation file (a decoding guess is sketched after this list)
  • how does the system put together the cumulative tests?
  • are there cumulative analogs for each evaluation?
  • curated CSEP 1.0 data might be needed to compare against CSEP 2.0 forecasts
  • verification exercises:
    • overlapping window: for ETAS, extract the first month of forecasts, observed catalogs, and evaluations
      • RELM tests (N, L, CL, M, S; an N-test sketch follows this list)
    • start at the beginning of the testing period to verify the cumulative tests
    • write scripts for this
  • scripts to extract evaluation results
  • possible journal for publishing the geoscience data sets: Earth System Science Data (https://www.earth-system-science-data.net/)
  • check existence of the catalog data
    • we would need to flag evaluations where the catalog could not be downloaded (a scan for missing catalog days is sketched after the Action Items)
  • competition for fault-based forecasts?
    • some hazard models have the notion of seismic regions
    • New Zealand has the issue of faults
    • UCERF3 is the most advanced
    • WGCEP 1988 report model (D. Jackson paper in the SRL issue; a good example for assessing a fault-based source model)
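
A minimal sketch of the daily-rate extraction mentioned as the first step, assuming one plain-text rate file per forecast day laid out as forecasts/<model>/<YYYY-MM-DD>.dat (the paths and file format are placeholders, not the actual CSEP archive conventions). Summing the per-cell rates gives one number per day that can be eyeballed for gaps, zeros, or spikes:

    import glob
    import os

    import matplotlib.pyplot as plt
    import numpy as np

    def daily_rates(model_dir):
        """Return (dates, total daily rates) for one forecast model."""
        dates, totals = [], []
        for path in sorted(glob.glob(os.path.join(model_dir, "*.dat"))):
            dates.append(os.path.splitext(os.path.basename(path))[0])
            cell_rates = np.loadtxt(path)    # assumed: one expected rate per cell
            totals.append(cell_rates.sum())  # total expected rate for that day
        return dates, np.array(totals)

    for model_dir in sorted(glob.glob("forecasts/*")):
        dates, totals = daily_rates(model_dir)
        plt.plot(range(len(totals)), totals, label=os.path.basename(model_dir))

    plt.xlabel("forecast day")
    plt.ylabel("total expected daily rate")
    plt.legend()
    plt.savefig("daily_rates.png")  # inspect for gaps, zeros, or spikes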
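On the timestamp question: if the numeric stamp on the evaluation files turns out to be seconds since the Unix epoch (an assumption to check against a run with a known date, not a documented convention), decoding it is a one-liner:

    from datetime import datetime, timezone

    def decode_stamp(stamp):
        """Interpret a numeric stamp as a UTC Unix-epoch time (assumed)."""
        return datetime.fromtimestamp(int(stamp), tz=timezone.utc)

    # hypothetical stamp pulled from an evaluation file name
    print(decode_stamp("1530727380"))  # -> 2018-07-04 18:03:00+00:00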
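Of the RELM consistency tests, the N-test is the easiest to verify by hand, so it is a natural first target for the verification scripts. A minimal sketch under the standard Poisson counting assumption (the L, CL, M, and S tests follow the same quantile pattern, but with simulated likelihood distributions):

    from scipy.stats import poisson

    def n_test(n_obs, n_fore, alpha=0.05):
        """Two quantile scores comparing observed count to forecast rate."""
        # delta1: probability of observing at least n_obs events
        delta1 = 1.0 - poisson.cdf(n_obs - 1, n_fore)
        # delta2: probability of observing at most n_obs events
        delta2 = poisson.cdf(n_obs, n_fore)
        # consistent forecast: neither tail is improbably small
        passed = min(delta1, delta2) >= alpha / 2.0
        return delta1, delta2, passed

    # example: forecast expects 10.0 events over the window, 4 observed
    print(n_test(4, 10.0))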

Action Items

Bill:

  • provide a data set to verify forecasts by extracting the daily rates for each forecast.
  • provide a secondary data set to verify evaluations, consisting of the forecasts and observations needed and the evaluations to compare against.
  • double-check run-time estimates for CSEP1 reprocessing.
  • generate a list of days where observations are missing (see the sketch below).
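
A sketch for that missing-days list, assuming one observed-catalog file per day named catalogs/<YYYY-MM-DD>.csv (hypothetical paths; substitute whatever the testing center actually writes). The same scan covers the catalog-existence check from the minutes, treating an absent or zero-byte file as a failed download:

    import os
    from datetime import date, timedelta

    def missing_days(start, end, catalog_dir="catalogs"):
        """Return the dates in [start, end] with no usable catalog file."""
        missing, day = [], start
        while day <= end:
            path = os.path.join(catalog_dir, day.isoformat() + ".csv")
            # absent or zero-byte file: the download presumably failed
            if not os.path.isfile(path) or os.path.getsize(path) == 0:
                missing.append(day)
            day += timedelta(days=1)
        return missing

    for day in missing_days(date(2018, 1, 1), date(2018, 6, 27)):
        print(day)  # evaluations on these days should be flagged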