Blue Waters

From SCECpedia

*[http://www.ncsa.illinois.edu/BlueWaters/ Blue Waters Home Page]

== Blue Waters System Specs ==

https://wiki.ncsa.illinois.edu/display/BWpublic/User+Information

Blue Waters is designed to meet the compute-intensive, memory-intensive, and data-intensive needs of a wide range of scientists and engineers. It will deliver sustained performance of one petaflop (one quadrillion calculations per second). The Blue Waters team will demonstrate sustained-petascale performance using eight scientific applications: codes used in everything from earthquake prediction to the study of how a virus enters a cell to modeling severe storms and climate change.

* Cray XE6 cabinets: >235
* Cray XK6 cabinets: >30
* Total cabinets, including storage and server cabinets: >300
* Compute nodes: >25,000
* Usable storage bandwidth: >1 TB/s
* Aggregate system memory: >1.5 PB
* Memory per core: 4 GB
* Gemini network cables: over 9,000 (~4,500 km of single wires)
* Interconnect topology: 3D torus
* Number of disks: >17,000
* Number of memory DIMMs: >190,000
* Usable storage: >25 PB
* Peak performance: >11.5 PF
* Number of AMD processors: >49,000
* Number of AMD x86 cores: >380,000
* Number of NVIDIA GPUs: >3,000
* External network bandwidth: 100 Gb/s, scaling to 300 Gb/s
* Integrated near-line environment: scaling to 500 PB
* Bandwidth to near-line storage: 100 GB/s

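The listed figures can be cross-checked against each other with simple arithmetic. A minimal sketch, assuming the ">" values are lower bounds (so the products are approximate, not exact):

```python
# Sanity-check the published Blue Waters spec figures against each other.
# Assumption: the ">" entries in the list above are lower bounds, so these
# derived numbers are approximate.

cores = 380_000          # AMD x86 cores (lower bound from the spec list)
mem_per_core_gb = 4      # memory per core, GB

# cores x memory-per-core should land near the listed aggregate memory
aggregate_pb = cores * mem_per_core_gb / 1_000_000  # GB -> PB (decimal units)
print(f"implied aggregate memory: {aggregate_pb:.2f} PB")  # ~1.52 PB vs listed ">1.5 PB"

# The sustained-petaflop target as a fraction of listed peak performance
peak_pf = 11.5           # peak performance, PF
sustained_pf = 1.0       # sustained-performance target, PF
print(f"sustained/peak target: {sustained_pf / peak_pf:.1%}")  # roughly 9% of peak
```

The close agreement between cores x per-core memory and the listed aggregate memory suggests the two entries describe the same CPU partition.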
== User Access ==

*[https://wiki.ncsa.illinois.edu/display/BWpublic/User+Information User Account Logins]

Revision as of 18:23, 3 December 2012