...
New Job Queues (SWSuite only)
Partition Name | Priority | Max Walltime | Nodes Allowed | Min-Max CPUs Per Node | Min-Max Mem Per Node | GPUs Allowed | Local Scratch | Description
---|---|---|---|---|---|---|---|---
gpux1 | normal | 24 hrs | 1 | 16-40 | 19.2-48 GB | 1 | none | access 1 GPU on 1 node to run sequential and/or parallel jobs
gpux2 | normal | 24 hrs | 1 | 32-80 | 38.4-96 GB | 2 | none | access 2 GPUs on 1 node to run sequential and/or parallel jobs
gpux3 | normal | 24 hrs | 1 | 48-120 | 57.6-144 GB | 3 | none | access 3 GPUs on 1 node to run sequential and/or parallel jobs
gpux4 | normal | 24 hrs | 1 | 64-160 | 76.8-192 GB | 4 | none | access 4 GPUs on 1 node to run sequential and/or parallel jobs
gpux8 | normal | 24 hrs | 2 | 64-160 | 76.8-192 GB | 8 | none | access 8 GPUs across 2 nodes to run sequential and/or parallel jobs
gpux12 | normal | 24 hrs | 3 | 64-160 | 76.8-192 GB | 12 | none | access 12 GPUs across 3 nodes to run sequential and/or parallel jobs
gpux16 | normal | 24 hrs | 4 | 64-160 | 76.8-192 GB | 16 | none | access 16 GPUs across 4 nodes to run sequential and/or parallel jobs
cpun1 | normal | 24 hrs | 1 | 96-96 | 115.2-115.2 GB | 0 | none | access 96 CPUs on 1 node to run sequential and/or parallel jobs
cpun2 | normal | 24 hrs | 2 | 96-96 | 115.2-115.2 GB | 0 | none | access 96 CPUs per node on 2 nodes to run sequential and/or parallel jobs
cpun4 | normal | 24 hrs | 4 | 96-96 | 115.2-115.2 GB | 0 | none | access 96 CPUs per node on 4 nodes to run sequential and/or parallel jobs
cpun8 | normal | 24 hrs | 8 | 96-96 | 115.2-115.2 GB | 0 | none | access 96 CPUs per node on 8 nodes to run sequential and/or parallel jobs
cpun16 | normal | 24 hrs | 16 | 96-96 | 115.2-115.2 GB | 0 | none | access 96 CPUs per node on 16 nodes to run sequential and/or parallel jobs
cpu_mini | normal | 24 hrs | 1 | 8-8 | 9.6-9.6 GB | 0 | none | access 8 CPUs on 1 node to run TensorBoard jobs
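The partition, walltime, node, and GPU columns above follow standard Slurm conventions, so a job targeting one of these queues can be submitted with a batch script along these lines. This is a minimal sketch, not a site-verified template: the job name, script name, and workload command are placeholders, and the cluster may additionally provide its own submission wrappers.

```shell
#!/bin/bash
#SBATCH --job-name=gpu_test        # placeholder job name
#SBATCH --partition=gpux1          # 1 GPU on 1 node, per the table above
#SBATCH --time=24:00:00            # must not exceed the 24 hr max walltime
#SBATCH --nodes=1                  # gpux1 allows exactly 1 node
#SBATCH --gres=gpu:1               # request the single GPU gpux1 permits
#SBATCH --cpus-per-task=16         # within the 16-40 CPUs-per-node range
#SBATCH --mem=19200M               # within the 19.2-48 GB per-node range

# Placeholder workload: list the GPUs Slurm granted this job
srun nvidia-smi --list-gpus
```

Submit with `sbatch <script-name>.sh`; for a CPU-only queue such as cpun1, drop the `--gres` line and raise `--cpus-per-task` and `--mem` to match that row's fixed 96-CPU / 115.2 GB allocation.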
...