For a complete machine list, please see here.
- This cluster consists of 750 CPU cores and 7936 GB of RAM in total, including:
- A Master Node for submitting jobs, with 2 Intel Xeon 6-core processors and 128 GB RAM
- A Sandbox Node for testing code and other work, with 2 Intel Xeon 6-core processors and 256 GB RAM
- 42 regular compute nodes
- 22 nodes with 2 x Intel Xeon 6-core processors and 256 GB RAM
- 7 nodes with 2 x Intel Xeon 6-core processors and 64 GB RAM
- 1 node with 2 x Intel Xeon 14-core processors and 64 GB RAM
- 12 nodes with 2 x Intel Xeon 8-core processors and 64 GB RAM
- 12 GPU compute nodes
- 6 nodes with 2 x Intel Xeon 6-core processors, 64 GB RAM, and 3 x GTX 780 GPUs
- 4 nodes with 2 x Intel Xeon 6-core processors, 64 GB RAM, and 4 x GTX 1080 GPUs
- 1 node with 2 x Intel Xeon 6-core processors, 64 GB RAM, 3 x GTX 780 GPUs, and 1 x Titan X
- 1 node with 2 x Intel Xeon 6-core processors, 64 GB RAM, 3 x GTX 780 GPUs, and 1 x Tesla K40
- 44 TB storage
- InfiniBand interconnect
- Univa Grid Engine
The Keck cluster uses Univa Grid Engine, a variant of Grid Engine, as its general job scheduler. This is the software that reserves your resources and runs the jobs you submit; when we speak of using the cluster, we usually mean using the job scheduler to interact with it. There are multiple Grid Engine tutorials online, or you can look directly at Univa's User's Guide. However, the best way to get started is to come talk with us in person, and we can walk you through logging in and running your first few jobs.
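As a taste of what the scheduler expects, here is a minimal submit file. This is only a sketch: the `#$` directives shown are standard Grid Engine options, but job name, queues, and defaults vary by site.

```shell
#!/bin/bash
# Minimal Grid Engine submit file (a sketch, not a site-specific template).
#$ -N hello_test       # job name, shown in qstat output
#$ -cwd                # run the job from the directory it was submitted from
#$ -j y                # merge stderr into stdout
#$ -o hello_test.out   # write the job's output to this file

echo "Running on $(hostname)"
```

The `#$` lines are ordinary comments to the shell but options to the scheduler; you would submit this with `qsub hello_test.sh` and watch it with `qstat`.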
Software is installed as users request it. For non-freeware software, users should ensure that it is licensed appropriately; please contact us if you need help getting something installed. Most software can be found in /usr/local. Several programs are loaded and accessed with environment modules, and the latest versions can be seen by typing "module avail" at the command prompt.
- ANSYS (research license), including Fluent
- CUDA for GPU programming
- GCC (Multiple Versions)
- LAMMPS (with GPU support)
- MATLAB (Multiple Versions) (research license)
- Open MPI (Multiple Versions)
- Python with numerous packages such as NumPy and SciPy
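As an illustration of the module workflow described above, a session might look like the following. The module names here are only examples; run `module avail` to see what is actually installed.

```
$ module avail              # list every installed module and version
$ module load apps/matlab   # add MATLAB to your environment
$ module list               # confirm which modules are currently loaded
$ module unload apps/matlab # remove it from your environment again
```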
Getting Started Overview (Overview In Flow Chart Form)
- Request an account by contacting us. This user account is separate from your Engineering account.
- Once you receive an account, connect to the submit host for the cluster.
- Write your code or create input files for the applications you wish to use.
- Write a submit file to submit the job.
- Submit the job using the qsub command.
- (Optional) Check the status of the job while it's running using the qstat command.
- (Optional) Log out, and your job will continue to run.
- Check your output.
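Put together, steps 5 through 8 above might look like the following session. The script name and job ID are placeholders, and the output file name assumes Grid Engine's default jobname.o&lt;job-id&gt; naming.

```
$ qsub my_job.sh            # step 5: submit the job script
Your job <job-id> ("my_job") has been submitted
$ qstat                     # step 6: check on the job while it runs
$ exit                      # step 7: log out; the job keeps running
...
$ cat my_job.o<job-id>      # step 8: inspect the output file
```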
Try it yourself!
Connect to the cluster, then enter the commands below to set up a simple MATLAB test job.
module load apps/matlab
mkdir ~/my_matlab_job && cd ~/my_matlab_job
cp /usr/local/examples/matlab/* ~/my_matlab_job
Submit the copied job with the qsub command; when it finishes, check my_matlab_output.txt for the results. You can use the copied files as a starting point, or find more examples by navigating to /usr/local/examples on the cluster.
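If you later want to write your own wrapper rather than reuse the copied one, a sketch might look like this. The script name my_matlab_job.m is an assumption; the module name matches the one loaded above, and -nodisplay/-nosplash/-r are standard MATLAB command-line flags.

```
#!/bin/bash
#$ -N matlab_test    # job name
#$ -cwd              # run from the submission directory
#$ -j y              # merge stderr into stdout

module load apps/matlab
# Run MATLAB without a GUI and execute the named script non-interactively.
matlab -nodisplay -nosplash -r "my_matlab_job; exit" > my_matlab_output.txt
```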
- Getting Connected to the Keck Cluster
- Job Submission (A More Detailed Guide)
- The Cyberinfrastructure Tutor
- Message Passing Interface (MPI)
- A Portable Implementation of MPI (An old paper but a good starting point)
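For MPI jobs under Grid Engine, you request a parallel environment and let the scheduler tell MPI how many slots it granted. A sketch follows; the parallel environment name "mpi", the module name, and the program name are assumptions (`qconf -spl` lists the PEs actually configured on a cluster).

```
#!/bin/bash
#$ -N mpi_test
#$ -cwd
#$ -j y
#$ -pe mpi 16              # request 16 slots in the "mpi" parallel environment

module load apps/openmpi   # assumed module name; check `module avail`
# $NSLOTS is set by Grid Engine to the number of slots actually granted.
mpirun -np $NSLOTS ./my_mpi_program
```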
Parallel Jobs in MATLAB
- Parallel Computing Toolbox
- The MathWorks: Webinars
- The MathWorks site has a number of very useful webinars on parallel computing and GPU computing. Create a user account to view these.
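To pair the Parallel Computing Toolbox with the scheduler, one approach is to request a shared-memory parallel environment so all workers land on one node, then size the MATLAB pool from $NSLOTS. This is a sketch: the PE name "smp" and the script name are assumptions.

```
#!/bin/bash
#$ -N matlab_par
#$ -cwd
#$ -j y
#$ -pe smp 8     # assumed shared-memory PE; requests 8 cores on one node

module load apps/matlab
# Inside the script, open a pool matching the granted slots before any parfor:
#   parpool(str2num(getenv('NSLOTS')))
matlab -nodisplay -nosplash -r "my_parallel_script; exit"
```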
Parallel Jobs in Fluent
- The user guide for Fluent is not available online; it can be accessed by clicking the Help button after starting Fluent. It gives a detailed, clear explanation of running jobs in Fluent.
Univa Grid Engine