CyberShake Testing

The computational scale and complexity of the SCEC CyberShake system require automated and repeatable system-level testing capabilities. CyberShake testing must be capable of end-to-end testing, showing that all elements, including inputs, earth models, computational codes, and data processing and reduction codes, work together.

The CyberShake Testing system combines a distributed, workflow-based HPC software testing harness with a database of reference problems and expected solutions.
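
A minimal sketch of how a top-level driver might tie the harness and the reference database together, assuming hypothetical file, table, and test names; the pegasus-plan invocation stands in for whatever submission mechanism the harness actually uses.

  import sqlite3
  import subprocess

  def run_acceptance_test(test_id, dax_file, oracle_db="cybershake_test_oracle.db"):
      # Plan and submit the test workflow through the workflow harness;
      # in practice the driver would poll pegasus-status until the run completes.
      subprocess.check_call(["pegasus-plan", "--dax", dax_file, "--submit"])

      # Look up the expected results recorded for this test in the oracle database.
      conn = sqlite3.connect(oracle_db)
      rows = conn.execute(
          "SELECT output_file, expected_value, tolerance FROM expected_results "
          "WHERE test_id = ?", (test_id,)).fetchall()
      conn.close()

      # Compare each workflow output against its expected value within tolerance.
      failures = []
      for output_file, expected_value, tolerance in rows:
          actual = float(open(output_file).read().strip())
          if abs(actual - expected_value) > tolerance:
              failures.append((output_file, expected_value, actual))
      return failures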

CyberShake Testing Requirements

  1. Must support a broad range of HPC codes, including calculations too large to run on SCEC computers
  2. Must support end-to-end tests that chain together a series of calculations
  3. Must support multiple test evaluations, including file-based comparisons, integer and floating-point comparisons with tolerances, and relational database entry comparisons (see the comparison sketch after this list)
  4. Must be modular, capable of testing the performance of alternative codes on equivalent calculations
  5. Must support and help manage a test repository that contains reference problems and reference results
  6. Must support provisioning of, and job submission to, multiple HPC resource providers, including USC HPCC, TeraGrid, and possibly DOE computing resources
  7. Must provide a well-defined metadata description of every test result that describes the code under test, the input parameters used, the reference results used in comparisons, and the final test results
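
As a sketch of the comparison styles named in requirement 3, the following shows two hypothetical evaluators: an exact file-based comparison and a floating-point comparison with a tolerance. The function names and default tolerance are illustrative, not part of the actual test harness.

  import filecmp

  def compare_files_exact(reference_path, test_path):
      """File-based comparison: byte-for-byte equality against the reference file."""
      return filecmp.cmp(reference_path, test_path, shallow=False)

  def compare_files_numeric(reference_path, test_path, tolerance=1e-6):
      """Floating-point comparison: corresponding values must agree within a tolerance."""
      with open(reference_path) as ref, open(test_path) as test:
          ref_values = [float(x) for line in ref for x in line.split()]
          test_values = [float(x) for line in test for x in line.split()]
      if len(ref_values) != len(test_values):
          return False
      return all(abs(r - t) <= tolerance for r, t in zip(ref_values, test_values))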

Required Evaluation Tests

  1. Rupture Generator
  2. SGT Calculation
  3. Mesh Maker
  4. Distance Calculation
  5. Site-Rupture Set Determination
  6. Ten moderate earthquakes distributed around California
  7. List of Sites

CyberShake Test Harness

We require a workflow-based system capable of automating multiple CyberShake HPC calculations. This system is modeled on the virtual data processing model of Pegasus.
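
A minimal sketch of how one CyberShake test workflow might be expressed with the Pegasus DAX3 Python API; the transformation names, file names, and two-stage structure are illustrative placeholders, not the actual CyberShake workflow definition.

  from Pegasus.DAX3 import ADAG, Job, File, Link

  dax = ADAG("cybershake-acceptance-test")

  # Placeholder stage 1: mesh generation for a small reference volume.
  mesh = File("test_mesh.bin")
  mesh_job = Job(name="MeshMaker")
  mesh_job.uses(mesh, link=Link.OUTPUT)
  dax.addJob(mesh_job)

  # Placeholder stage 2: SGT calculation that consumes the mesh.
  sgt = File("test_sgt.bin")
  sgt_job = Job(name="SGTCalculation")
  sgt_job.uses(mesh, link=Link.INPUT)
  sgt_job.uses(sgt, link=Link.OUTPUT)
  dax.addJob(sgt_job)

  # Data dependency: the SGT calculation runs after the mesh maker.
  dax.depends(parent=mesh_job, child=sgt_job)

  # Write the abstract workflow for planning with pegasus-plan.
  with open("cybershake_test.dax", "w") as f:
      dax.writeXML(f)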

CyberShake Test Oracle

We require a reference database that describes specific test problems, their input and output files, and the expected results.
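
One possible layout for such an oracle database, sketched here with SQLite; the table and column names are assumptions for illustration, not the actual CyberShake schema.

  import sqlite3

  # Hypothetical oracle schema: one table of reference test problems and one
  # table of expected results keyed by test, with a tolerance for numeric checks.
  conn = sqlite3.connect("cybershake_test_oracle.db")
  conn.executescript("""
  CREATE TABLE IF NOT EXISTS test_problems (
      test_id         INTEGER PRIMARY KEY,
      name            TEXT NOT NULL,       -- e.g. 'SGT Calculation, site USC'
      code_under_test TEXT NOT NULL,       -- code name and version being tested
      input_files     TEXT NOT NULL        -- comma-separated reference input files
  );

  CREATE TABLE IF NOT EXISTS expected_results (
      result_id       INTEGER PRIMARY KEY,
      test_id         INTEGER NOT NULL REFERENCES test_problems(test_id),
      output_file     TEXT NOT NULL,       -- file produced by the test workflow
      expected_value  REAL,                -- expected numeric value, if applicable
      tolerance       REAL DEFAULT 0.0     -- allowed absolute difference
  );
  """)
  conn.commit()
  conn.close()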

See Also

Main Page