2016 CyberShake database migration

To clarify terminology:

"Input data": rupture data, ERF-related data, and site data. This data is shared between studies.

"Run data": the parameters used with each run, timestamps, systems, and study membership. A run is part of only a single study.

"Output data": peak amplitude data.

Goals of DB Migration

  • Provide improved read performance for users of CyberShake data
  • Separate production data from the data of completed studies
  • Permit easy extension to support the UGMS web site

Status of DB resources following migration

  • Hardware swapped between moment and focal
  • On the read-only server, 2 databases: one with Study 15.4 data and one with Study 15.12 data
  • On the production server, 1 database with all CyberShake data, including Studies 15.4 and 15.12
  • Once the above is complete, migrate the older studies to an alternative format and delete them from the production server

Detailed Procedure for CyberShake DB Migration

  1. Run mysqldump on the entire DB on focal, generating dumpfiles for all the input data, for each study's runs and output data, and for the runs and output data that are not part of any study (see the dump sketch after this list).
  2. Delete the database on moment.
  3. Reconfigure the DB on moment (single file per table, etc.; see the configuration sketch after this list).
  4. Load the Study 15.12, Study 15.4, and non-study data into the DB on moment using the InnoDB engine.
  5. Confirm the reload onto moment was successful.
  6. Convert the older studies' data from MySQL dump files into SQLite format (see the conversion sketch after this list).
  7. Confirm the reloads into SQLite format were successful.
  8. Delete the database on focal.
  9. Load the input data, the Study 15.12 runs+output data, and the Study 15.4 runs+output data onto focal for read-only access, using the MyISAM engine, with each study in a separate database (see the MyISAM sketch after this list).
  10. Swap the names of focal and moment so we don't have to change all our scripts.
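
The following is a minimal sketch of the step 1 dumps. The database name (CyberShake), the table names (Ruptures, Rupture_Variations, CyberShake_Sites, CyberShake_Runs, PeakAmplitudes), and the Study_ID value are assumptions for illustration; the real schema and study IDs would need to be substituted.

  # Dump the table definitions once and the data separately, so the
  # per-study files can later be loaded into a shared database without
  # clobbering each other.
  mysqldump --no-data CyberShake > schema.sql

  # Input data (shared between studies); table names are hypothetical.
  mysqldump --no-create-info CyberShake \
      Ruptures Rupture_Variations CyberShake_Sites > input_data.sql

  # One study's runs and output data, filtered by study membership
  # (assuming Study_ID=5 for Study 15.4; likewise for Study 15.12).
  mysqldump --no-create-info \
      --where="Run_ID IN (SELECT Run_ID FROM CyberShake_Runs WHERE Study_ID=5)" \
      CyberShake CyberShake_Runs PeakAmplitudes > study_15_4.sql

  # Runs and output data that belong to no study.
  mysqldump --no-create-info \
      --where="Run_ID IN (SELECT Run_ID FROM CyberShake_Runs WHERE Study_ID IS NULL)" \
      CyberShake CyberShake_Runs PeakAmplitudes > non_study.sql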
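
For steps 3 through 5, the "single file per table" setting is MySQL's innodb_file_per_table option; the sketch below reuses the hypothetical file and table names from the previous sketch, and forces the InnoDB engine by rewriting the ENGINE clause in the schema dump.

  # In my.cnf on moment, enable one tablespace file per InnoDB table
  # ("single file per table" in step 3), then restart mysqld:
  #   [mysqld]
  #   innodb_file_per_table = 1

  # Step 4: recreate the schema with InnoDB tables and load the data.
  mysql -e "CREATE DATABASE CyberShake"
  sed 's/ENGINE=[A-Za-z]*/ENGINE=InnoDB/g' schema.sql | mysql CyberShake
  for f in input_data.sql study_15_12.sql study_15_4.sql non_study.sql; do
      mysql CyberShake < "$f"
  done

  # Step 5 spot check: row counts on moment should match the source on focal.
  mysql -h focal -N -e "SELECT COUNT(*) FROM CyberShake.PeakAmplitudes"
  mysql -N -e "SELECT COUNT(*) FROM CyberShake.PeakAmplitudes"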
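
For steps 6 and 7, one possible route is the open-source mysql2sqlite converter script piped into sqlite3; this is an assumption, not necessarily the tool the project used, and the dump and table names are again hypothetical.

  # Step 6: convert an older study's MySQL dump into a standalone SQLite file.
  ./mysql2sqlite older_study.sql | sqlite3 older_study.sqlite

  # Step 7 spot check: the SQLite row count should match the source table,
  # which is still intact on focal until step 8.
  sqlite3 older_study.sqlite "SELECT COUNT(*) FROM PeakAmplitudes;"
  mysql -h focal -N -e "SELECT COUNT(*) FROM CyberShake.PeakAmplitudes"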
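
Step 9 might look like the following sketch: one database per study on the read-only server, each holding its own full copy of the input data, with the ENGINE clause rewritten to MyISAM (same hypothetical names as above).

  # One database per study, each with a full copy of the input data.
  for s in 15_4 15_12; do
      mysql -e "CREATE DATABASE CyberShake_Study_${s}"
      sed 's/ENGINE=[A-Za-z]*/ENGINE=MyISAM/g' schema.sql | mysql "CyberShake_Study_${s}"
      mysql "CyberShake_Study_${s}" < input_data.sql
      mysql "CyberShake_Study_${s}" < "study_${s}.sql"
  done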

Since the input data is much smaller (~100x) than the output data, we will keep a full copy of it with each study. Identifying the subset of input data that applies to just one study would be far more time-intensive, and the extra space needed to keep all of it is trivial. However, for each study we will only keep the run data for runs associated with that study (see the spot check below).
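
As a spot check on that per-study filtering, each study database should contain no runs from other studies. A hypothetical check, reusing the assumed Study_ID value and table names from the sketches above:

  # Expect 0: no runs from other studies in the Study 15.4 database.
  mysql -N -e "SELECT COUNT(*) FROM CyberShake_Runs
               WHERE Study_ID IS NULL OR Study_ID <> 5" CyberShake_Study_15_4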