Engineering Network Services

Engineering Compute Clusters

This compute cluster is part of the College of Engineering's high-performance computing (HPC) resources for scientific research and teaching. It consists of state-of-the-art compute nodes with substantial processing power and memory, job-scheduling software, and general-purpose engineering applications.

Specifications

W. M. Keck High Performance Computing Cluster

  • For a complete machine list, please see here
  • This cluster consists of 750 CPU cores and a total of 7936 GB RAM including:
    • A Master Node, used to submit jobs, with 2 Intel Xeon 6-core processors and 128 GB RAM
    • A Sandbox Node, used to test code and other projects, with 2 Intel Xeon 6-core processors and 256 GB RAM
    • 42 regular compute nodes
      • 22 nodes with 2 x Intel Xeon 6-core processors and 256 GB RAM
      • 7 nodes with 2 x Intel Xeon 6-core processors and 64 GB RAM
      • 1 node with 2 x Intel Xeon 14-core processors and 64 GB RAM
      • 12 nodes with 2 x Intel Xeon 8-core processors and 64 GB RAM
    • 12 GPU compute nodes
      • 6 nodes with 2 x Intel Xeon 6-core processors, 64 GB RAM, and 3 x GTX 780 GPUs
      • 4 nodes with 2 x Intel Xeon 6-core processors, 64 GB RAM, and 4 x GTX 1080 GPUs
      • 1 node with 2 x Intel Xeon 6-core processors, 64 GB RAM, 3 x GTX 780 GPUs, and 1 x Titan X
      • 1 node with 2 x Intel Xeon 6-core processors, 64 GB RAM, 3 x GTX 780 GPUs, and 1 x Tesla K40
  • Storage
    • 44 TB storage
  • Connectivity
    • Infiniband interconnect
  • Scheduler
    • Univa Grid Engine (see the sample submission script below)
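
Jobs are submitted from the Master Node through the Grid Engine scheduler. As a rough sketch of what a minimal submission script might look like (the parallel environment name "mpi", the module name "openmpi", and the resource limits are illustrative placeholders; check with ENS for the values configured on this cluster):

    #!/bin/bash
    #$ -N myjob              # job name
    #$ -cwd                  # run the job from the submission directory
    #$ -j y                  # merge stdout and stderr into one output file
    #$ -pe mpi 24            # parallel environment and slot count (site-specific)
    #$ -l h_rt=01:00:00      # hard wall-clock limit of one hour

    module load openmpi      # load an MPI implementation
    mpirun -np $NSLOTS ./my_mpi_program

The script would be submitted with "qsub myjob.sh"; "qstat" shows queued and running jobs, and "qdel" removes a job. These are standard Grid Engine commands.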

Software

Software is installed upon user request. For non-freeware software, users should ensure that it is licensed appropriately; ENS can help with this and with obtaining software. Most software can be found in /usr/local. Many programs are loaded and accessed through environment modules, and the installed versions can be listed by typing "module avail" at the command prompt (see the example after the list below).

  • ANSYS (research license) including:
    • Fluent
    • CFD
  • Atlas
  • Boost/Bjam
  • Blitz
  • CUDA for GPU programming
  • GCC (Multiple Versions)
  • GROMACS
  • LAMMPS (with GPU support)
  • MATLAB (Multiple Versions) (research license)
  • MPI
    • MPICH2
    • MPICH3
    • MVAPICH2
    • Open MPI (Multiple Versions)
  • NWChem
  • PETSc
  • Phenix
  • Python with numerous modules such as SciPy, NumPy, etc.
  • STAR-CCM+
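
As a brief illustration of the environment-modules workflow mentioned above (the module names shown are placeholders; "module avail" lists what is actually installed):

    $ module avail           # list every installed module and version
    $ module load gcc        # load a compiler into the environment
    $ module load openmpi    # load an MPI implementation
    $ module list            # show the modules currently loaded
    $ module unload gcc      # remove a module when no longer needed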

Need Help?

If you have any questions, or need help submitting jobs to the clusters, please email ENS at gridhelp@engr.colostate.edu. We are happy to help with your HPC needs.

Useful Links

General

MPICH

OpenMP

Parallel Jobs in MATLAB

Parallel Jobs in Fluent

The user guide for Fluent is not available online; it can be accessed by clicking the Help button when starting Fluent. It gives a detailed, clear explanation of running jobs in Fluent.
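
For batch runs on the cluster, Fluent is typically driven by a journal file rather than the GUI. A minimal sketch, assuming a journal file named run.jou and the 3-D double-precision solver (option details vary by Fluent version, so consult the built-in Help described above):

    # run Fluent without the GUI, using the slot count provided by the scheduler
    fluent 3ddp -g -t$NSLOTS -i run.jou > fluent.log 2>&1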

GPU Computing

Univa Grid Engine

 

This document last modified Tuesday July 25, 2017

