BBP Pre-release Science Review

== Summary of BBP Science Review ==
# Fabio had looked at various products before we met (you should ask him exactly what he did), and he identified systematic differences (in this case with EXSIM), which he brought to my attention; we then worked together on finding the source of the differences, which I felt confident we captured for the release. I asked Fabio to send me additional information to complete this investigation, but the results are there and are not a show-stopper for the release.
# We looked together at the large table (the “super duper table”) and identified differences relative to the last version. I was specifically looking for systematic trends in validation performance as a function of method, period band, magnitude, and distance (a sketch of this kind of version-to-version comparison appears after this list). We discussed the differences and investigated them by looking at time series and so on. I was satisfied that nothing had been “broken” or changed significantly enough to indicate that errors were introduced relative to the method, period, magnitude, and distance aggregates below.
# In addition, I scrolled through most of the data product plots for all methods to detect any suspicious trends:
#* GOF plots for all methods (Part A)
#* distance bias plots for Part A
#* GMPE plots for all methods (Part B)
#* RZZ plots
#* a subset of the simulated time series (we should flip through most of them in the future)
# One remaining task is to define a subset of simulation sets to look at systematically for each pre-release review. It would be more efficient to have a set of standard problems to verify than to always rerun all the scenarios and realizations; one illustrative form for such a standard set is sketched below.
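
The version-to-version check of the summary table is something we could eventually script rather than do by eye. The sketch below is only an illustration of that idea, assuming a hypothetical CSV export of the table with one row per record and columns for method, period band, magnitude bin, distance bin, and GOF bias; the column names, file names, and threshold are placeholders, not the actual BBP table format.

<pre>
# Illustrative sketch only (assumed CSV layout, not the real "super duper table" format):
# flag method/period/magnitude/distance aggregates whose mean GOF bias shifted
# between two releases by more than a chosen tolerance.
import pandas as pd

THRESHOLD = 0.1  # example tolerance on the change in mean bias; pick to taste


def flag_systematic_shifts(old_csv, new_csv, threshold=THRESHOLD):
    # Assumed columns: method, period_band, mag_bin, dist_bin, bias
    old = pd.read_csv(old_csv)
    new = pd.read_csv(new_csv)

    keys = ["method", "period_band", "mag_bin", "dist_bin"]
    old_mean = old.groupby(keys)["bias"].mean()
    new_mean = new.groupby(keys)["bias"].mean()

    # Align the two releases on the same aggregates and take the difference
    delta = (new_mean - old_mean).dropna()

    # Keep only aggregates whose mean bias moved by more than the tolerance
    return delta[delta.abs() > threshold].sort_values(key=abs, ascending=False)


if __name__ == "__main__":
    # Hypothetical file names for the previous and candidate releases
    shifts = flag_systematic_shifts("gof_table_prev.csv", "gof_table_candidate.csv")
    print(shifts.to_string())
</pre>

Something along these lines would make the “nothing broken” check reproducible from one release to the next.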
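As for the standard pre-release subset, one purely illustrative way to pin it down is a short manifest that review scripts could read; the events, methods, and realization counts below are examples only, not a decided list.

<pre>
# Illustrative sketch only: a candidate "standard problems" manifest for the
# pre-release review.  Entries are examples, not an agreed-upon selection.
STANDARD_REVIEW_SET = [
    {"event": "Northridge",  "methods": ["GP", "SDSU", "EXSIM"], "realizations": 10},
    {"event": "Loma Prieta", "methods": ["GP", "SDSU", "EXSIM"], "realizations": 10},
    {"event": "Landers",     "methods": ["GP", "SDSU", "EXSIM"], "realizations": 10},
]

# Products to regenerate and inspect for every entry above
REVIEW_PRODUCTS = [
    "GOF plots (Part A)",
    "distance bias plots (Part A)",
    "GMPE comparison plots (Part B)",
    "RZZ metrics",
    "sample time series",
]
</pre>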