ATS CSU

Machine configurations and how to run on them

saddleback

8 nodes, 192 cores total
3.06 GHz Intel 6-core Xeon processors
12 MB cache per processor
48 GB RAM per node
1 TB drive per node
40 Gb/s InfiniBand network
CentOS Linux 5.5
PGI Cluster Development Kit with Fortran & PGI MPICH
OpenFabrics InfiniBand libraries
TotalView

Saddleback is a Linux cluster owned by the Randall group. The master node is "hogback"; it is not a compute node. The compute nodes are named sb1, sb2, and so on.
The cluster runs TORQUE, an open-source Portable Batch System (PBS). To request an account, email Kelley.
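Jobs are submitted to the compute nodes through TORQUE. The sketch below is a minimal, hypothetical job script; the job name, node and core counts, walltime, and executable are assumptions, and the actual limits and any queue names should be confirmed with Kelley.

    #!/bin/bash
    # Hypothetical TORQUE/PBS job script for saddleback (resource requests are assumptions)
    #PBS -N test_job               # job name
    #PBS -l nodes=2:ppn=6          # request 2 compute nodes, 6 cores per node
    #PBS -l walltime=01:00:00      # 1-hour wall-clock limit
    #PBS -j oe                     # merge stdout and stderr into one file

    cd $PBS_O_WORKDIR              # start in the directory the job was submitted from
    mpirun -np 12 ./my_model       # launch with MPI; replace ./my_model with your executable

Save the script (for example as job.pbs), submit it with "qsub job.pbs", and check its status with "qstat".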

An ~80 TB RAID, mounted as /pond, is attached to hogback via InfiniBand. It is accessible from other computers upon request.
A 164 TB RAID, mounted as /pool, is on the network in the same rack. It is also accessible from other computers upon request.

Specifics and install notes on the cluster can be found here.

Keck cluster

484 Intel Sandy Bridge cores
31 compute nodes
8 GPU nodes
5,120 GB RAM
52 TB storage
InfiniBand interconnect
Univa Grid Engine scheduling (see the example job script below)

The College of Engineering Keck cluster is a "condo"-model system housed in the state-of-the-art data center atop the Scott Bioengineering Building on main campus. All College software, including MATLAB, is available on it.
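Jobs on Keck are submitted through Univa Grid Engine. The sketch below is a minimal, hypothetical submission script; the parallel environment name ("mpi"), slot count, and run-time limit are assumptions, so check the Keck documentation for the actual values.

    #!/bin/bash
    # Hypothetical Univa Grid Engine job script for the Keck cluster (PE name and limits are assumptions)
    #$ -N test_job             # job name
    #$ -cwd                    # run in the directory the job was submitted from
    #$ -pe mpi 16              # request 16 slots in an assumed "mpi" parallel environment
    #$ -l h_rt=01:00:00        # 1-hour wall-clock limit
    #$ -j y                    # merge stdout and stderr into one file

    mpirun -np 16 ./my_model   # replace ./my_model with your executable

Submit with "qsub job.sh" and monitor with "qstat".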

To learn more about the Keck cluster, click here.

cray & summit

cray:
Model XE6
2,688 compute cores
2.5 TB main memory
3D torus interconnect
23 TB disk storage

summit:
380 Intel Haswell nodes, 303.2 peak TF
10 GPU nodes, 15.3 peak CPU TF + 71.1 peak GPU TF
5 high-memory nodes, 6.5 peak TF
Omni-Path interconnect
1 PB scratch storage
Located at CU-Boulder
Fiber-connected to CSU's network

The CSU ISTeC Cray High Performance Computing System supports large and complex problems in science and engineering, especially data-intensive applications; adds greater physical fidelity to existing models; facilitates the application of computing to new areas of research and discovery; and supports training to attract new researchers to computational science, engineering, and mathematics.

The CSU/CU "condo"-model summit HPC system is available for use via an application process.

To learn more about the cray and summit, click here.

National labs and supercomputer centers

The Randall group makes heavy use of supercomputers at national labs and supercomputing centers around the country, including Yellowstone and Cheyenne at NCAR's supercomputing center in Cheyenne, Wyoming. Talk to Kelley for more information on what is available.