BBP Pre-release Science Review
Here is a summary of what we did this time (more than I was ever involved in before):
- Fabio had looked at various products before we met (you should ask him exactly what he did), and he identified systematic differences (in this case with EXSIM), which he brought to my attention. We then worked together on finding the source of the differences, which I feel confident we captured for the release. I asked Fabio to send me additional information to complete this investigation, but the results are there and are not a show-stopper for the release.
- We looked together at the large table ("super duper table") and identified differences relative to the last version. I was specifically looking for systematic trends in validation performance on the basis of methods, period bands, magnitudes, and distances. We discussed the differences and investigated them by looking at time series and so on. I was satisfied that nothing had been "broken" or changed significantly enough to indicate that errors were introduced relative to the method, period, magnitude, and distance aggregates.
- In addition, I scrolled through most data-product plots for all methods to detect any suspicious trends:
  - GOF plots for all methods (Part A)
  - distance bias plots for Part A
  - GMPE plots for all methods (Part B)
  - RZZ plots
  - a subset of simulated time series (should flip through most in the future)
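The table comparison above could in principle be partly automated. Here is a minimal sketch of the idea: diff two releases' validation summary rows and flag any method/period/magnitude/distance aggregate whose bias shifted beyond a tolerance. The column names, key structure, and the 0.1 threshold are illustrative assumptions, not the platform's actual schema.

```python
# Hedged sketch: flag aggregates whose model bias shifted between releases.
# Keys and values below are hypothetical, not real BBP validation results.

def flag_shifts(old_rows, new_rows, tol=0.1):
    """Return (key, old_bias, new_bias) for aggregates whose bias changed
    by more than `tol`. Each key is a tuple of
    (method, period_band, magnitude_bin, distance_bin)."""
    old = {r["key"]: r["bias"] for r in old_rows}
    flagged = []
    for r in new_rows:
        prev = old.get(r["key"])
        if prev is not None and abs(r["bias"] - prev) > tol:
            flagged.append((r["key"], prev, r["bias"]))
    return flagged

# Illustrative rows only.
old_rows = [
    {"key": ("EXSIM", "0.1-1s", "M6-7", "0-20km"), "bias": 0.05},
    {"key": ("GP", "0.1-1s", "M6-7", "0-20km"), "bias": -0.02},
]
new_rows = [
    {"key": ("EXSIM", "0.1-1s", "M6-7", "0-20km"), "bias": 0.30},  # shifted
    {"key": ("GP", "0.1-1s", "M6-7", "0-20km"), "bias": -0.03},    # stable
]

for key, old_b, new_b in flag_shifts(old_rows, new_rows):
    print(f"{key}: {old_b:+.2f} -> {new_b:+.2f}")
```

A script like this would only triage candidates for the manual review described above; the judgment calls (looking at time series, deciding whether a shift matters) stay manual.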
One task I mentioned before but never got around to is defining a subset of simulation sets to review systematically for a pre-release review. It would be more efficient to verify a standard set of problems than to always rerun all the scenarios and realizations. On my to-do list…