Engineering Network Services - CSU
Engineering Compute Clusters
This compute cluster is part of the College of Engineering's high performance computing (HPC) resources for scientific research and teaching. It consists of state-of-the-art compute nodes with substantial processing power and memory, job-scheduling software, and general-purpose engineering applications.
W. M. Keck High Performance Computing Cluster
- For a complete machine list please see here
- This cluster consists of 750 CPU cores and a total of 7936 GB of RAM, including:
- A Master Node for submitting jobs, with 2 x Intel Xeon 6-core processors and 128 GB RAM
- A Sandbox Node for testing code and other projects, with 2 x Intel Xeon 6-core processors and 256 GB RAM
- 42 regular compute nodes
- 22 nodes with 2 x Intel Xeon 6-core processors and 256 GB RAM
- 7 nodes with 2 x Intel Xeon 6-core processors and 64 GB RAM
- 1 node with 2 x Intel Xeon 14-core processors and 64 GB RAM
- 12 nodes with 2 x Intel Xeon 8-core processors and 64 GB RAM
- 12 GPU compute nodes
- 6 nodes with 2 x Intel Xeon 6-core processors, 64 GB RAM, and 3 x GTX 780 GPUs
- 4 nodes with 2 x Intel Xeon 6-core processors, 64 GB RAM, and 4 x GTX 1080 GPUs
- 1 node with 2 x Intel Xeon 6-core processors, 64 GB RAM, 3 x GTX 780 GPUs, and 1 x Titan X
- 1 node with 2 x Intel Xeon 6-core processors, 64 GB RAM, 3 x GTX 780 GPUs, and 1 x Tesla K40
Software is installed as users request it. For non-freeware software, users should ensure that it is licensed appropriately; ENS can help with this and with obtaining software. Most software can be found in /usr/local. The programs shown in bold are loaded and accessed with environment modules, and the available versions can be listed by typing "module avail" at the command prompt.
- ANSYS (research license)
- CUDA for GPU programming
- GCC (Multiple Versions)
- LAMMPS (with GPU support)
- MATLAB (Multiple Versions) (research license)
- Open MPI (Multiple Versions)
- Python with numerous modules, such as NumPy and SciPy
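As an example, a typical session using the module-managed packages above might look like the following. The specific module names and version numbers are illustrative; run "module avail" on the cluster to see what is actually installed.

```shell
# List every module (and version) available on the cluster
module avail

# Load a compiler and an MPI stack
# (the version numbers shown here are examples, not guaranteed to exist)
module load gcc/7.2.0
module load openmpi/2.1.1

# Show what is currently loaded, and unload a module when done
module list
module unload openmpi/2.1.1
```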
If you have any questions, or need help submitting jobs to the clusters, please email ENS at firstname.lastname@example.org. We are happy to help with your HPC needs.
Parallel Jobs in MATLAB
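A minimal Grid Engine submission script for a parallel MATLAB job might look like the sketch below. The queue, parallel-environment name, module name, and script name are all assumptions; check with ENS for the cluster's actual settings.

```shell
#!/bin/bash
# run_matlab.sh -- sketch of a Grid Engine batch script for a parallel
# MATLAB job (PE name, slot count, and module name are illustrative)
#$ -N matlab_job          # job name shown in the queue
#$ -cwd                   # run from the current working directory
#$ -pe smp 12             # request 12 slots on one node (PE name may differ)
#$ -l h_rt=02:00:00       # 2-hour wall-clock limit

module load matlab        # exact module name may differ; see "module avail"

# Run a MATLAB script in batch mode; inside it, a parallel pool can be
# sized to the granted slots, e.g. parpool(str2double(getenv('NSLOTS')))
matlab -nodisplay -nosplash -r "my_script; exit"
```

The script would then be submitted with `qsub run_matlab.sh`.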
Parallel Jobs in Fluent
The user guide for Fluent is not available online; it can be accessed by clicking the Help button when starting Fluent. It gives a detailed and clear explanation of running jobs in Fluent.
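Fluent can also be run non-interactively from a journal file of TUI commands, which is the usual way to run parallel Fluent jobs under a scheduler. A sketch, where the journal file name and core count are placeholders (consult the built-in Help for the options your Fluent version supports):

```shell
# Run a 3D double-precision Fluent case on 8 cores without the GUI,
# driven by a journal file (my_case.jou is a placeholder name)
fluent 3ddp -g -t8 -i my_case.jou > fluent.log 2>&1
```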
Univa Grid Engine
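Jobs on the cluster are managed through standard Grid Engine commands. A typical workflow looks like the following, where the script name and job ID are placeholders:

```shell
qsub my_job.sh       # submit a batch script to the scheduler
qstat                # list your pending and running jobs
qstat -j <job_id>    # show detailed information about one job
qdel <job_id>        # remove a job from the queue
```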
This document last modified Tuesday July 25, 2017