...
- swrun -p <partition_name> -c <cpu_per_gpu> -t <walltime> -r <reservation_name>
- <partition_name> (required) : cpun1, cpun2, cpun4, cpun8, gpux1, gpux2, gpux3, gpux4, gpux8, gpux12, gpux16.
- <cpu_per_gpu> (optional) : 16 CPUs per GPU (default), range from 16 to 40 CPUs per GPU.
- <walltime> (optional) : 24 hours (default), range from 1 hour to 72 hours.
- <reservation_name> (optional) : reservation name granted to the user.
- example: swrun -p gpux4 -c 40 -t 72 (request a full node: 1x node, 4x GPUs, 160x CPUs, 72 hours)
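As a smaller sketch using only the flags documented above (the partition and walltime here are illustrative, not a site recommendation), a short single-GPU interactive session could be requested as:

```shell
# Request an interactive session in the "gpux1" partition:
# 1 GPU, 16 CPUs per GPU (the default), with a 2-hour walltime.
swrun -p gpux1 -c 16 -t 2
```

Omitting `-c` and `-t` falls back to the defaults listed above (16 CPUs per GPU, 24 hours).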
- swbatch <run_script>
- <run_script> (required) : a batch script in the same format as a standard Slurm batch (sbatch) script.
- <job_name> (optional) : job name.
- <output_file> (optional) : output file name.
- <error_file> (optional) : error file name.
- <partition_name> (required) : cpun1, cpun2, cpun4, cpun8, gpux1, gpux2, gpux3, gpux4, gpux8, gpux12, gpux16.
- <cpu_per_gpu> (optional) : 16 CPUs per GPU (default), range from 16 to 40 CPUs per GPU.
- <walltime> (optional) : 24 hours (default), range from 1 hour to 72 hours.
- <reservation_name> (optional) : reservation name granted to the user.
- example: swbatch demo.swb
demo.swb:

```bash
#!/bin/bash
#SBATCH --job-name="demo"
#SBATCH --output="demo.%j.%N.out"
#SBATCH --error="demo.%j.%N.err"
#SBATCH --partition=gpux1

srun hostname
```
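The same pattern extends to the multi-GPU partitions; the sketch below is an assumed variant (not an official template) that simply switches the partition name to request more resources per the partition definitions:

```bash
#!/bin/bash
# Hypothetical 4-GPU batch script: the "gpux4" partition maps to
# 1 node with 4 GPUs; job name and log file names are examples.
#SBATCH --job-name="demo4"
#SBATCH --output="demo4.%j.%N.out"
#SBATCH --error="demo4.%j.%N.err"
#SBATCH --partition=gpux4

srun hostname
```

As with the single-GPU example, submit it with `swbatch <script_name>`.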
- swqueue
- example: swqueue
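The commands combine into a simple workflow: submit a batch script, then check its status in the queue. A minimal sketch, reusing the `demo.swb` script from the example above:

```shell
# Submit the demo batch script, then inspect the queue
# to confirm the job is pending or running.
swbatch demo.swb
swqueue
```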
...
| Script Name | Job Type | Partition | Walltime | Nodes | CPU | GPU | Memory | Description |
|---|---|---|---|---|---|---|---|---|
| run_gpux1_16cpu_24hrs.sh | interactive | gpux1 | 24 hrs | 1 | 16 | 1 | 19.2 GB | submit interactive job, 1x node for 24 hours w/ 16x CPU 1x GPU task in "gpux1" partition. |
| run_gpux2_32cpu_24hrs.sh | interactive | gpux2 | 24 hrs | 1 | 32 | 2 | 38.4 GB | submit interactive job, 1x node for 24 hours w/ 32x CPU 2x GPU task in "gpux2" partition. |
| sub_gpux1_16cpu_24hrs.swb | batch | gpux1 | 24 hrs | 1 | 16 | 1 | 19.2 GB | submit batch job, 1x node for 24 hours w/ 16x CPU 1x GPU task in "gpux1" partition. |
| sub_gpux2_32cpu_24hrs.swb | batch | gpux2 | 24 hrs | 1 | 32 | 2 | 38.4 GB | submit batch job, 1x node for 24 hours w/ 32x CPU 2x GPU task in "gpux2" partition. |
| sub_gpux4_64cpu_24hrs.swb | batch | gpux4 | 24 hrs | 1 | 64 | 4 | 76.8 GB | submit batch job, 1x node for 24 hours w/ 64x CPU 4x GPU task in "gpux4" partition. |
| sub_gpux8_128cpu_24hrs.swb | batch | gpux8 | 24 hrs | 2 | 128 | 8 | 153.6 GB | submit batch job, 2x nodes for 24 hours w/ 128x CPU 8x GPU task in "gpux8" partition. |
| sub_gpux16_256cpu_24hrs.swb | batch | gpux16 | 24 hrs | 4 | 256 | 16 | 307.2 GB | submit batch job, 4x nodes for 24 hours w/ 256x CPU 16x GPU task in "gpux16" partition. |
...