A submission script is a file listing the options and execution steps you “submit” to the cluster to schedule and run. The Asha cluster uses the Slurm Job Scheduler. Here is a sample script that compiles and runs a basic MPI program.
#!/bin/bash
#SBATCH --partition=all
#SBATCH --job-name=test-mpi
mpicc ./mpi_test.c -o mpi_test
mpiexec ./mpi_test
- The first line must be “#!/bin/bash”. This is called a shebang.
- Option lines start with “#SBATCH”.
- In the example we choose the “all” partition and name our job “test-mpi”.
- The steps to be executed are written just as you would type them in a Unix terminal.
- In this example, those are the two lines starting with “mpicc” and “mpiexec”.
After writing your submission script, you can submit it to the scheduler with the command “sbatch”. For example, “sbatch sample_submission.sh”.
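For example, assuming the script above is saved as sample_submission.sh, a typical submit-and-check sequence looks like the following sketch (the job ID in the output will differ):
sbatch sample_submission.sh   # prints "Submitted batch job <jobid>" on success
squeue -u $USER               # list your jobs that are still queued or running
scancel <jobid>               # cancel a job if you need to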
Common options for your submission script.
Option | Short Description | Example Line in Script |
---|---|---|
--partition | Choose which partition (similar to a queue) of the cluster to submit your job to. More information is available in the Asha cluster partition documentation. | #SBATCH --partition=all |
--job-name | Name your job to identify it during the run. | #SBATCH --job-name=test-mpi |
--ntasks | Use this option when doing multiprocessing (MPI, Fluent, Matlab) to reserve a number of cores for your job. Can be combined with the cpus-per-task option, but usually just use one or the other depending on the job (see the MPI example below). | #SBATCH --ntasks=100 |
--cpus-per-task | Use this option when doing multithreading (OpenMP, Tensorflow) to reserve a number of CPU cores (see the OpenMP example below). | #SBATCH --cpus-per-task=4 |
--nodelist/--exclude | Specify a list of nodes to run your job on, or a list of nodes to exclude from running on. Please note that --nodelist may include other nodes if your job requires more resources than are available from the list provided. If you need to restrict your job to certain nodes, use --exclude instead. | #SBATCH --nodelist=coe-node[1-3],coe-gpu1 |
--mail-user | Used together with --mail-type, this option has Slurm email you at the end of your job. | #SBATCH --mail-user=myemail@colostate.edu |
--output | Name your output file. All standard output from the job is written to this file. | #SBATCH --output=results.txt |
--time | Restrict your job to a certain time limit. The default is set on a per-partition basis. Time formats include "minutes", "minutes:seconds", "hours:minutes:seconds", "days-hours", "days-hours:minutes" and "days-hours:minutes:seconds". | #SBATCH --time=1-3:45 |
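As referenced in the table, the two sketches below show how these options are commonly combined. The program names (mpi_test, omp_test) and the resource counts are placeholders; adjust them to match your own job. As in the sample script at the top of this page, the MPI sketch assumes the cluster’s MPI is Slurm-aware and starts one process per task.
A multiprocessing (MPI) job using --ntasks:
#!/bin/bash
#SBATCH --partition=all
#SBATCH --job-name=mpi-example
#SBATCH --ntasks=100                  # reserve 100 cores for MPI processes
#SBATCH --time=1-3:45                 # 1 day, 3 hours, 45 minutes
#SBATCH --output=results.txt
#SBATCH --mail-user=myemail@colostate.edu
#SBATCH --mail-type=END               # email when the job finishes
mpiexec ./mpi_test                    # launches one MPI process per reserved task
A multithreading (OpenMP) job using --cpus-per-task:
#!/bin/bash
#SBATCH --partition=all
#SBATCH --job-name=omp-example
#SBATCH --cpus-per-task=4             # reserve 4 CPU cores on one node for threads
#SBATCH --output=results.txt
export OMP_NUM_THREADS=$SLURM_CPUS_PER_TASK   # use as many threads as reserved cores
./omp_test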
Important Note for Windows Users
Line endings differ between Windows and Linux. If you write your submission script without accounting for this, it may result in odd errors. There are multiple options to mitigate this:
- Use a terminal editor on the cluster:
  - micro
  - nano
  - vim (harder to learn)
  - emacs (harder to learn)
- Use a graphical editor on a Linux or OSX machine, such as the Linux Compute Servers.
- Configure your editor in Windows to use Linux line endings. Notepad++ (not Notepad or Wordpad) and other editors can do this for you.
- Run the command “dos2unix <filename>” on the cluster for each file you move over from Windows.
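For example, assuming your script is named sample_submission.sh, you can check it for Windows line endings with the file command and convert it in place with dos2unix:
file sample_submission.sh       # reports "with CRLF line terminators" if it has Windows line endings
dos2unix sample_submission.sh   # converts the file to Unix (LF) line endings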