Asha Cluster Node List

Infrastructure and Control Nodes

Name         Role
hpc-submit   Master (login/submission) node
hpcstore     Storage node

Compute Nodes (CPU Only)

To submit a job to a specific node, you must submit it to a partition that contains that node. To see the partitions available to you, run the "sinfo" command (or its alias "overview") on the cluster.
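The workflow above can be sketched as a short Slurm session. The partition name "coe" below is a hypothetical placeholder, not a partition confirmed by this document; substitute a name from your own sinfo output. This is a sketch of a job script, not a definitive site configuration, so it carries no runnable test.

```shell
# List the partitions you can submit to ("overview" shows the same information):
sinfo

# Write a minimal batch script that targets one partition.
# NOTE: "coe" is a hypothetical partition name; replace it with one from sinfo.
cat > myjob.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=coe       # partition containing the node(s) you want
#SBATCH --ntasks=1            # a single task
#SBATCH --cpus-per-task=4     # each node has 40 cores total
#SBATCH --mem=8G              # each node has 192 GB total
#SBATCH --time=01:00:00       # wall-clock limit
srun hostname                 # report which node ran the job
EOF

# Submit the script to the scheduler:
sbatch myjob.sh
```

`sbatch` prints the assigned job ID on success; `squeue -u $USER` then shows the job's state and the node it landed on.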

Name CPU (Total # of cores) Memory
coe-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
coe-node2 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
coe-node3 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
coe-node4 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
coe-node5 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
hur-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
sys-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
sys-node2 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
sys-node3 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
sys-node4 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
wei-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
wei-node2 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
wei-node3 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
wei-node4 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
mal-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
mal-node2 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
jat-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
jat-node2 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
pie-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
pie-node2 2 x Intel Xeon Gold 6148 CPU (40) 192 GB
rug-node1 2 x Intel Xeon Gold 6148 CPU (40) 192 GB

Compute Nodes (CPU + GPU)

As with the CPU-only nodes, submit your job to the partition that contains the node you need. To see the partitions available to you, run the "sinfo" command (or its alias "overview") on the cluster.

Name CPU (Total # of cores) GPU Memory
sys-gpu1 2 x Intel Xeon Gold 6148 CPU (40) Tesla V100 32 GB 192 GB
sys-gpu2 2 x Intel Xeon Gold 6148 CPU (40) Tesla V100 32 GB 192 GB
chi-gpu1 2 x Intel Xeon Gold 6148 CPU (40) Tesla V100S 32 GB 192 GB
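Requesting one of the GPUs above adds a generic-resource request to the batch script. The partition name "gpu" below is a hypothetical placeholder, not a partition confirmed by this document; check sinfo for the real one. This is a sketch of a job script, not a definitive site configuration, so it carries no runnable test.

```shell
# Write a batch script that requests one GPU.
# NOTE: "gpu" is a hypothetical partition name; replace it with one from sinfo.
cat > gpujob.sh <<'EOF'
#!/bin/bash
#SBATCH --partition=gpu       # partition containing the GPU nodes
#SBATCH --gres=gpu:1          # request one GPU (V100 / V100S on these nodes)
#SBATCH --cpus-per-task=8     # CPU cores to go with the GPU
#SBATCH --mem=32G
#SBATCH --time=02:00:00
nvidia-smi                    # confirm which GPU was allocated
EOF

sbatch gpujob.sh
```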