The Modules System
The cluster uses a modules system to manage access to the software packages installed on the cluster. Different users and their programs need different versions and implementations of particular tools; the modules system lets each user access what they need without interfering with others. When you need a particular package, its module needs to be 'loaded':
module load compilers/gcc9.2.0
You can also 'unload' a module when you no longer need it:
module unload compilers/gcc9.2.0
If an error message is printed, it usually indicates that you have tried to load a module that conflicts with one already loaded. Unload the conflicting module first, then load the module you originally wanted.
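For example, suppose loading the GCC 9.2.0 module fails because an older compiler module is already loaded. The older module name below is illustrative; run module avail to see what your cluster actually provides. Note that many module implementations (Environment Modules, Lmod) also offer module swap as a one-step alternative:

```shell
# Hypothetical conflict: an older compiler module is already loaded.
module unload compilers/gcc8.3.0   # unload the conflicting module first
module load compilers/gcc9.2.0     # then load the one you want

# Equivalent one-step form, if your modules implementation supports it:
module swap compilers/gcc8.3.0 compilers/gcc9.2.0
```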
You can get a list of the modules you currently have loaded with the module list command:
module list
Currently Loaded Modulefiles:
1) modules 2) use.own 3) compilers/gcc9.2.0
If you want to see what modules are available, use the module avail command:
module avail
mpi/openmpi4.0.2-gcc9.2.0
modules
use.own
compilers/gcc9.2.0
mpi/mpich3.3.1-gcc9.2.0
If you find yourself loading the same modules every session, put the module load commands, exactly as you would type them in the terminal, in your ~/.bashrc file. Otherwise you will have to load the modules manually each time you log in to the cluster.
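For example, the end of your ~/.bashrc might look like this. The module names are the ones listed above; adjust them to whatever module avail shows on your cluster:

```shell
# ~/.bashrc -- sourced automatically at login
module load compilers/gcc9.2.0
module load mpi/openmpi4.0.2-gcc9.2.0
```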
Environment Variables
If your job needs certain environment variables to run, you can use the normal bash syntax:
export VARIABLE=VALUE
This sets the variable only for your current session. If you set a variable in your current shell and then submit a job, that variable will be passed along to the node running your job.
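You can see this inheritance locally with a quick sketch (the variable name here is purely illustrative): a variable exported in the parent shell is visible to any child process it starts, which is essentially how your environment reaches the job.

```shell
# Export a variable in the current (parent) shell.
export GREETING="hello from the login shell"   # hypothetical variable

# A child process (a subshell standing in for the job script)
# inherits the exported variable:
bash -c 'echo "job sees: $GREETING"'
# prints: job sees: hello from the login shell
```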
As with modules, if you find yourself setting the same variables often, it is best to set them in the .bashrc file located in your home folder.
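For example, a ~/.bashrc entry that sets a variable at every login might look like this (the variable name and value are illustrative):

```shell
# ~/.bashrc -- runs at every login, so the variable is always set
export SCRATCH_DIR="$HOME/scratch"   # hypothetical variable and path
```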