...
Partition Name | Priority | Max Walltime | Nodes Allowed | Min-Max CPUs Per Node Allowed | Min-Max Mem Per Node Allowed | GPUs Allowed | Local Scratch | Description
---|---|---|---|---|---|---|---|---
gpux1 | normal | 72 hrs | 1 | 18-36 | 18-54 GB | 1 | none | designed to access 1 GPU on 1 node to run sequential and/or parallel jobs
gpux2 | normal | 72 hrs | 1 | 24-72 | 36-108 GB | 2 | none | designed to access 2 GPUs on 1 node to run sequential and/or parallel jobs
gpux3 | normal | 72 hrs | 1 | 36-108 | 54-162 GB | 3 | none | designed to access 3 GPUs on 1 node to run sequential and/or parallel jobs
gpux4 | normal | 72 hrs | 1 | 48-144 | 72-216 GB | 4 | none | designed to access 4 GPUs on 1 node to run sequential and/or parallel jobs
gpux8 | normal | 72 hrs | 2 | 48-144 | 72-216 GB | 8 | none | designed to access 8 GPUs on 2 nodes to run sequential and/or parallel jobs
gpux12 | normal | 72 hrs | 3 | 48-144 | 72-216 GB | 12 | none | designed to access 12 GPUs on 3 nodes to run sequential and/or parallel jobs
gpux16 | normal | 72 hrs | 4 | 48-144 | 72-216 GB | 16 | none | designed to access 16 GPUs on 4 nodes to run sequential and/or parallel jobs
cpu_mini | normal | 72 hrs | 1 | 8-8 | | 0 | none |
cpun1 | normal | 72 hrs | 1 | 96-96 | 144-144 GB | 0 | none | designed to access 96 CPUs on 1 node to run sequential and/or parallel jobs
cpun2 | normal | 72 hrs | 2 | | | 0 | none |
cpun4 | normal | 72 hrs | 4 | | | 0 | none |
cpun8 | normal | 72 hrs | 8 | | | 0 | none |
cpun16 | normal | 72 hrs | 16 | | | 0 | none |
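As a sketch of how a job targets one of these partitions, a Slurm batch script might look like the following. This assumes standard `sbatch` directives; the job name and the final command are placeholders, not part of the table above.

```shell
#!/bin/bash
#SBATCH --job-name=gpu_test        # placeholder job name
#SBATCH --partition=gpux1          # the 1-GPU, 1-node partition from the table above
#SBATCH --time=72:00:00            # must not exceed the partition's 72-hour limit
#SBATCH --nodes=1                  # gpux1 allows exactly 1 node
#SBATCH --ntasks-per-node=18       # within the 18-36 CPUs-per-node range for gpux1
#SBATCH --gres=gpu:1               # request 1 GPU, matching the partition's GPU limit

srun hostname                      # replace with the actual application command
```

Requesting resources outside a partition's allowed ranges (for example, more than 36 CPUs on gpux1) will cause the scheduler to reject or indefinitely queue the job.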
HAL Wrapper Suite Example Job Scripts
...