...
Partition Name | Priority | Max Walltime | Nodes Allowed | Min-Max CPUs Per Node Allowed | Min-Max Mem Per Node Allowed | GPU Allowed | Local Scratch | Description |
---|---|---|---|---|---|---|---|---|
gpu-debug | high | 4 hrs | 1 | 12-144 | 18-144 GB | 4 | none | designed to access 1 node to run short debugging jobs. |
gpux1 | normal | 72 hrs | 1 | 12-36 | 18-54 GB | 1 | none | designed to access 1 GPU on 1 node to run sequential and/or parallel jobs. |
gpux2 | normal | 72 hrs | 1 | 24-72 | 36-108 GB | 2 | none | designed to access 2 GPUs on 1 node to run sequential and/or parallel jobs. |
gpux3 | normal | 72 hrs | 1 | 36-108 | 54-162 GB | 3 | none | designed to access 3 GPUs on 1 node to run sequential and/or parallel jobs. |
gpux4 | normal | 72 hrs | 1 | 48-144 | 72-216 GB | 4 | none | designed to access 4 GPUs on 1 node to run sequential and/or parallel jobs. |
cpu | normal | 72 hrs | 1 | 96-96 | 144-144 GB | 0 | none | designed to access 96 CPUs on 1 node (no GPUs) to run sequential and/or parallel jobs. |
gpux8 | low | 72 hrs | 2 | 48-144 | 72-216 GB | 8 | none | designed to access 8 GPUs across 2 nodes to run sequential and/or parallel jobs. |
gpux12 | low | 72 hrs | 3 | 48-144 | 72-216 GB | 12 | none | designed to access 12 GPUs across 3 nodes to run sequential and/or parallel jobs. |
gpux16 | low | 72 hrs | 4 | 48-144 | 72-216 GB | 16 | none | designed to access 16 GPUs across 4 nodes to run sequential and/or parallel jobs. |
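As a sketch of how the partition limits above map onto batch directives, the following hypothetical `#SBATCH` header requests the full "cpu" partition allocation (1 node, 96 CPUs, 144 GB, no GPUs). The partition name and resource figures come from the table; the job name and the final `srun` command are placeholders, and the exact directive set accepted on a given cluster may differ.

```shell
#!/bin/bash
#SBATCH --job-name=cpu_example     # placeholder job name
#SBATCH --partition=cpu            # CPU-only partition (0 GPUs)
#SBATCH --nodes=1                  # the "cpu" partition allows exactly 1 node
#SBATCH --ntasks-per-node=96       # the partition's fixed CPU count per node
#SBATCH --mem=144g                 # the partition's fixed memory per node
#SBATCH --time=24:00:00            # must not exceed the 72-hour walltime limit

srun hostname                      # replace with the actual workload
```

Note that the requested time (24 hours here) can be anything up to the partition's maximum walltime; shorter requests generally schedule sooner.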
HAL Wrapper Suite Example Job Scripts
New users should consult the example job scripts in "/opt/samples/runscripts" and request only the resources their jobs need.
Script Name | Job Type | Partition | Walltime | Nodes | CPU | GPU | Memory | Description |
---|---|---|---|---|---|---|---|---|
run_gpux1_12cpu_24hrs.sh | interactive | gpux1 | 24 hrs | 1 | 12 | 1 | 18 GB | submits an interactive job: 1 node for 24 hours with 12 CPUs and 1 GPU in the "gpux1" partition. |
run_gpux2_24cpu_24hrs.sh | interactive | gpux2 | 24 hrs | 1 | 24 | 2 | 36 GB | submits an interactive job: 1 node for 24 hours with 24 CPUs and 2 GPUs in the "gpux2" partition. |
sub_gpux1_12cpu_24hrs.sb | batch | gpux1 | 24 hrs | 1 | 12 | 1 | 18 GB | submits a batch job: 1 node for 24 hours with 12 CPUs and 1 GPU in the "gpux1" partition. |
sub_gpux2_24cpu_24hrs.sb | batch | gpux2 | 24 hrs | 1 | 24 | 2 | 36 GB | submits a batch job: 1 node for 24 hours with 24 CPUs and 2 GPUs in the "gpux2" partition. |
sub_gpux4_48cpu_24hrs.sb | batch | gpux4 | 24 hrs | 1 | 48 | 4 | 72 GB | submits a batch job: 1 node for 24 hours with 48 CPUs and 4 GPUs in the "gpux4" partition. |
sub_gpux8_96cpu_24hrs.sb | batch | gpux8 | 24 hrs | 2 | 96 | 8 | 144 GB | submits a batch job: 2 nodes for 24 hours with 96 CPUs and 8 GPUs in the "gpux8" partition. |
sub_gpux16_192cpu_24hrs.sb | batch | gpux16 | 24 hrs | 4 | 192 | 16 | 288 GB | submits a batch job: 4 nodes for 24 hours with 192 CPUs and 16 GPUs in the "gpux16" partition. |
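For orientation, a batch script like `sub_gpux1_12cpu_24hrs.sb` in the table above plausibly contains directives along the following lines. This is a sketch only, with a placeholder job name and workload; the authoritative versions live in "/opt/samples/runscripts".

```shell
#!/bin/bash
#SBATCH --job-name=gpux1_example   # placeholder job name
#SBATCH --partition=gpux1          # 1 GPU on 1 node
#SBATCH --nodes=1                  # single node, as the partition requires
#SBATCH --ntasks-per-node=12       # 12 CPUs, within the partition's 12-36 range
#SBATCH --gres=gpu:1               # request the partition's single GPU
#SBATCH --mem=18g                  # 18 GB, the partition's minimum
#SBATCH --time=24:00:00            # under the 72-hour limit

srun python train.py               # placeholder workload
```

Scaling up to a larger partition (e.g. "gpux4") means raising `--ntasks-per-node`, `--gres=gpu:N`, and `--mem` together so the request stays inside that partition's allowed ranges.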
Native SLURM style
Submit Interactive Job with "srun"
...