...
- partition (required) : CPU partitions: cpu_mini, cpun1, cpun2, cpun4, cpun8, cpun16; GPU partitions: gpux1, gpux2, gpux3, gpux4, gpux8, gpux12, gpux16.
- [cpu_per_gpu] (optional) : CPUs per GPU; 16 by default, range from 16 to 40.
- [time] (optional) : walltime in hours; 24 by default, range from 1 to 72.
- [singularity] (optional): specify a Singularity container (name only) to use from the $HAL_CONTAINER_REGISTRY.
- [reservation] (optional): specify a reservation name, if any.
- [version] (optional): Display Slurm wrapper suite and Slurm versions.
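These options combine on a single command line. As a sketch, a request for two GPUs with 24 CPUs per GPU and a 12-hour walltime (the values are illustrative, not recommendations):

```shell
# 2 GPUs, 24 CPUs per GPU, 12-hour walltime (illustrative values)
swrun -p gpux2 -c 24 -t 12
```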
Examples
To request a full node: 4 GPUs, 160 CPUs (40 CPUs per GPU × 4 GPUs = 160 CPUs), 72 hours of walltime:
```
swrun -p gpux4 -c 40 -t 72
```
Or, to use a container image (dummy.sif) on a CPU-only node with the default walltime of 24 hours:
```
swrun -p cpun1 -s dummy
```
Note: the second example runs inside a Singularity container image. To run a custom container from somewhere other than the default container registry location, point the registry at your own path by first exporting the environment variable:
```
export HAL_CONTAINER_REGISTRY="/path/to/custom/registry"
```
Script Mode
```
swbatch [-h] RUN_SCRIPT [-v]
```
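Since swbatch wraps Slurm's sbatch, a RUN_SCRIPT is an ordinary batch script. A minimal sketch, assuming standard #SBATCH directives with the partition names listed above (the file name, directive set, and payload command here are illustrative, not confirmed by this page):

```shell
#!/bin/bash
# demo.swb - hypothetical batch script; the directives below are
# standard #SBATCH options, assumed (not confirmed) to be honored here
#SBATCH --job-name="demo"
#SBATCH --partition=gpux1
#SBATCH --time=24

# payload command (illustrative)
python train.py
```

The script would then be submitted with `swbatch demo.swb`.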
...