== CyberShake Computational Estimates ==
Latest revision as of 15:52, 20 June 2014

We will describe our current best estimates for the CyberShake computational and data requirements as we progress in our simulation planning and testing. These estimates will help us identify which aspects of the CyberShake computational system need to be optimized to work within our time and resource constraints.

The UCERF 3 estimates assume that the number of ruptures increases from 15,000 to 350,000, but that the average number of rupture variations per rupture remains the same.

The 0.5 Hz numbers are taken from Study 14.2.

The node-hours are estimates based on the XE6 (CPU) and XK7 (GPU) nodes on Blue Waters.

== 1.0 Hz, 3 component ==

SGTs: At 0.5 Hz, the SGT calculation requires 38 GPU node-hrs per component.

 (38 GPU node-hrs per component) x (3 components) x (8 times the gridpoints) x (2 times the timesteps) x 0.8 (20% more efficient due to more work per GPU) = 1460 node-hrs per site.

SGTs account for 23% of the per-site node-hours.

PP: At 0.5 Hz, post-processing (PP) requires 41 CPU node-hrs per component.

 (41 CPU node-hrs per component) x (3 components) x (25 times the rupture points) x (2 times the timesteps) x 0.8 (20% more efficient due to rupture generator improvements) = 4920 node-hrs per site.

PP accounts for 77% of the per-site node-hours.

Each site requires about 550,000 rupture variations (410,000 x 4/3 for rupture variations v3.3).

'''6380''' node-hours per 3-component site (181k core-hours)

'''1.82M''' node-hours for the standard 3-component So Cal 286-site map (51.7M core-hours)

'''5.73M''' node-hours for the increased-density 3-component So Cal 898-site map (162M core-hours)

'''8.93M''' node-hours for the statewide adaptive 3-component California 1400-site map (253M core-hours)
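The 1.0 Hz arithmetic above can be checked with a short script. This is a sketch: the baseline numbers and scaling factors are the ones quoted in this section, and the quoted figures (1460, 4920, 6380) are rounded.

```python
# Sketch check of the 1.0 Hz per-site arithmetic, using the 0.5 Hz
# baselines and scaling factors quoted in this section.

SGT_BASE = 38   # GPU node-hrs per component at 0.5 Hz (from Study 14.2)
PP_BASE = 41    # CPU node-hrs per component at 0.5 Hz
COMPONENTS = 3

# SGTs: 8x gridpoints, 2x timesteps, 20% efficiency gain (factor 0.8)
sgt = SGT_BASE * COMPONENTS * 8 * 2 * 0.8    # 1459.2, quoted as ~1460
# PP: 25x rupture points, 2x timesteps, 20% rupture-generator gain
pp = PP_BASE * COMPONENTS * 25 * 2 * 0.8     # 4920.0

per_site = sgt + pp                          # 6379.2, quoted as 6380
print(f"SGT share: {sgt / per_site:.0%}, PP share: {pp / per_site:.0%}")

# Map totals scale linearly with site count.
for name, sites in [("286-site So Cal map", 286),
                    ("898-site dense So Cal map", 898),
                    ("1400-site statewide map", 1400)]:
    print(f"{name}: {sites * per_site / 1e6:.2f}M node-hrs")
```

This reproduces the quoted 23%/77% SGT/PP split and the 1.82M / 5.73M / 8.93M map totals.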

== 2.0 Hz ==

SGTs: At 1.0 Hz, the SGT calculation requires 485 GPU node-hrs per component.

 (485 GPU node-hrs per component) x (3 components) x (8 times the gridpoints) x (2 times the timesteps) = 23.3k node-hrs per site.

PP: At 1.0 Hz, post-processing requires 1640 CPU node-hrs per component.

 (1640 CPU node-hrs per component) x (3 components) x (2 times the timesteps) = 9.8k node-hrs per site.

'''33.1k''' node-hours per 3-component site (686k core-hours)

'''9.47M''' node-hours for the standard 3-component So Cal 286-site map (196M core-hours)

'''117M''' node-hours for the increased-density 3-component So Cal 3545-site map (2.4B core-hours)

'''46.3M''' node-hours for the statewide adaptive 3-component California 1400-site map (960M core-hours)
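The 2.0 Hz arithmetic can be checked the same way. A sketch, using the 1.0 Hz baselines quoted above; note that the PP scaling in this section only doubles the timesteps (no rupture-point growth factor is quoted at 2.0 Hz).

```python
# Sketch check of the 2.0 Hz per-site arithmetic from the 1.0 Hz baselines.

SGT_BASE_1HZ = 485    # GPU node-hrs per component at 1.0 Hz
PP_BASE_1HZ = 1640    # CPU node-hrs per component at 1.0 Hz
COMPONENTS = 3

sgt = SGT_BASE_1HZ * COMPONENTS * 8 * 2   # 23,280 node-hrs, quoted as 23.3k
pp = PP_BASE_1HZ * COMPONENTS * 2         # 9,840 node-hrs, quoted as 9.8k
per_site = sgt + pp                       # 33,120 node-hrs, quoted as 33.1k

for name, sites in [("286-site So Cal map", 286),
                    ("3545-site dense So Cal map", 3545),
                    ("1400-site statewide map", 1400)]:
    print(f"{name}: {sites * per_site / 1e6:.1f}M node-hrs")
```

This reproduces the quoted 9.47M / 117M / 46.3M map totals to within rounding.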