The HAL documentation has moved to https://docs.ncsa.illinois.edu/systems/hal/. Please update any bookmarks you may have.

Table of Contents

For complete SLURM documentation, see https://slurm.schedmd.com/. Here we only show simple examples with system-specific instructions.

...

The HAL Slurm Wrapper Suite was designed to help users work with the HAL system easily and efficiently. The current version, "swsuite-v0.14", includes:

srun (slurm command) → swrun : request resources to run interactive jobs.

sbatch (slurm command) → swbatch : request resources to submit a batch script to Slurm.

squeue (slurm command) → swqueue : check current running jobs and computational resource status.


Info

The Slurm Wrapper Suite is designed with people new to Slurm in mind and simplifies many aspects of job submission in favor of automation. For advanced use cases, the native Slurm commands are still available for use.
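
As a quick orientation, the sketch below strings the three wrapper commands together; the partition and script names are only examples taken from the sections that follow.

Code Block
languagebash
titleswsuite quick start (illustrative)
# request an interactive session with the default resources in the gpux1 partition
swrun -p gpux1

# submit a batch script written in the swsuite format (see demo.swb below)
swbatch demo.swb

# check current running jobs and computational resource status
swqueue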


Rule of Thumb

  • Minimizes the required input options.
  • Stays consistent with the original Slurm run-script format.
  • Submits the job to a suitable partition based on the number of GPUs requested (or the number of nodes for the CPU partitions).

Usage

Warning

Request Only As Much As You Can Make Use Of

Many applications require some modification to make use of more than one GPU for computation. Almost all programs require nontrivial optimization to run efficiently on more than one node (partitions gpux8 and larger). Monitor your usage and avoid occupying resources that you cannot make use of.

  • swrun -p <partition_name> -c <cpu_per_gpu> -t <walltime> -r <reservation_name>
    • <partition_name> (required) : cpun1, cpun2, cpun4, cpun8, gpux1, gpux2, gpux3, gpux4, gpux8, gpux12, gpux16.
    • <cpu_per_gpu> (optional) : 16 cpus (default), range from 16 cpus to 40 cpus.
    • <walltime> (optional) : 4 hours (default), range from 1 hour to 24 hours in integer format.
    • <reservation_name> (optional) : reservation name granted to user.
    • example: swrun -p gpux4 -c 40 -t 24 (request a full node: 1x node, 4x gpus, 160x cpus, 24x hours)
    • Using interactive jobs to run long-running scripts is not recommended. If you are going to walk away from your computer while your script is running, consider submitting a batch job instead. Unattended interactive sessions can remain idle until they run out of walltime and thus block resources from other users. We will issue warnings when we find resource-heavy idle interactive sessions, and repeated offenses may result in revocation of access rights. (A minimal interactive-session sketch follows this list.)
  • swbatch <run_script>
    • <run_script> (required) : same as original slurm batch.
    • <job_name> (optional) : job name.
    • <output_file> (optional) : output file name.
    • <error_file> (optional) : error file name.
    • <partition_name> (required) : cpun1, cpun2, cpun4, cpun8, gpux1, gpux2, gpux3, gpux4, gpux8, gpux12, gpux16.
    • <cpu_per_gpu> (optional) : 16 cpus (default), range from 16 cpus to 40 cpus.
    • <walltime> (optional) : 24 hours (default), range from 1 hour to 24 hours in integer format.
    • <reservation_name> (optional) : reservation name granted to user.
    • example: swbatch demo.swb

      Code Block
      languagebash
      titledemo.swb
      #!/bin/bash
      
      #SBATCH --job-name="demo"
      #SBATCH --output="demo.%j.%N.out"
      #SBATCH --error="demo.%j.%N.err"
      #SBATCH --partition=gpux1
      #SBATCH --time=4
      
      srun hostname


  • swqueue
    • example: running swqueue displays the current running jobs and overall computational resource status. (screenshot omitted)
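
As a concrete illustration of the swrun usage above, the following sketch requests the default interactive allocation, checks the assigned GPU, and exits promptly; the nvidia-smi call is just a stand-in for your actual workload.

Code Block
languagebash
titleinteractive session sketch
# request the default interactive allocation in the gpux1 partition
# (1x GPU, 16x CPU, 4-hour walltime)
swrun -p gpux1

# once the shell starts on the compute node, verify the GPU is visible
nvidia-smi

# exit as soon as you are done so the resources are released for other users
exit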

New Job Queues (SWSuite only)

Info

Under current policy, jobs requesting more than 5 nodes require a reservation; otherwise, they will be held by the scheduler and will not execute. (A reservation example follows the table below.)



Partition Name | Priority | Max Walltime | Nodes Allowed | Min-Max CPUs Per Node Allowed | Min-Max Mem Per Node Allowed | GPU Allowed | Local Scratch | Description
gpux1 | normal | 24 hrs | 1 | 16-40 | 19.2-48 GB | 1 | none | designed to access 1 GPU on 1 node to run sequential and/or parallel jobs.
gpux2 | normal | 24 hrs | 1 | 32-80 | 38.4-96 GB | 2 | none | designed to access 2 GPUs on 1 node to run sequential and/or parallel jobs.
gpux3 | normal | 24 hrs | 1 | 48-120 | 57.6-144 GB | 3 | none | designed to access 3 GPUs on 1 node to run sequential and/or parallel jobs.
gpux4 | normal | 24 hrs | 1 | 64-160 | 76.8-192 GB | 4 | none | designed to access 4 GPUs on 1 node to run sequential and/or parallel jobs.
gpux8 | normal | 24 hrs | 2 | 64-160 | 76.8-192 GB | 8 | none | designed to access 8 GPUs on 2 nodes to run sequential and/or parallel jobs.
gpux12 | normal | 24 hrs | 3 | 64-160 | 76.8-192 GB | 12 | none | designed to access 12 GPUs on 3 nodes to run sequential and/or parallel jobs.
gpux16 | normal | 24 hrs | 4 | 64-160 | 76.8-192 GB | 16 | none | designed to access 16 GPUs on 4 nodes to run sequential and/or parallel jobs.
cpun1 | normal | 24 hrs | 1 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 1 node to run sequential and/or parallel jobs.
cpun2 | normal | 24 hrs | 2 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 2 nodes to run sequential and/or parallel jobs.
cpun4 | normal | 24 hrs | 4 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 4 nodes to run sequential and/or parallel jobs.
cpun8 | normal | 24 hrs | 8 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 8 nodes to run sequential and/or parallel jobs.
cpun16 | normal | 24 hrs | 16 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 16 nodes to run sequential and/or parallel jobs.
cpu_mini | normal | 24 hrs | 1 | 8-8 | 9.6-9.6 GB | 0 | none | designed to access 8 CPUs on 1 node to run tensorboard jobs.
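
For the multi-node partitions covered by the reservation policy above (for example cpun8 and cpun16, which span more than 5 nodes), the reservation is passed with the -r option; the reservation name below is a placeholder.

Code Block
languagebash
titlereservation example (placeholder name)
# cpun8 spans 8 nodes, so under current policy it requires a reservation
swrun -p cpun8 -r my_reservation    # "my_reservation" is a hypothetical reservation name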

HAL Wrapper Suite Example Job Scripts

New users should check the example job scripts at "/opt/samples/runscripts" and request adequate resources. (A short sketch of copying and submitting one of these samples follows the table.)

Script Name | Job Type | Partition | Walltime | Nodes | CPU | GPU | Memory | Description
run_gpux1_16cpu_24hrs.sh | interactive | gpux1 | 24 hrs | 1 | 16 | 1 | 19.2 GB | submit interactive job, 1x node for 24 hours w/ 16x CPU 1x GPU task in "gpux1" partition.
run_gpux2_32cpu_24hrs.sh | interactive | gpux2 | 24 hrs | 1 | 32 | 2 | 38.4 GB | submit interactive job, 1x node for 24 hours w/ 32x CPU 2x GPU task in "gpux2" partition.
sub_gpux1_16cpu_24hrs.swb | batch | gpux1 | 24 hrs | 1 | 16 | 1 | 19.2 GB | submit batch job, 1x node for 24 hours w/ 16x CPU 1x GPU task in "gpux1" partition.
sub_gpux2_32cpu_24hrs.swb | batch | gpux2 | 24 hrs | 1 | 32 | 2 | 38.4 GB | submit batch job, 1x node for 24 hours w/ 32x CPU 2x GPU task in "gpux2" partition.
sub_gpux4_64cpu_24hrs.swb | batch | gpux4 | 24 hrs | 1 | 64 | 4 | 76.8 GB | submit batch job, 1x node for 24 hours w/ 64x CPU 4x GPU task in "gpux4" partition.
sub_gpux8_128cpu_24hrs.swb | batch | gpux8 | 24 hrs | 2 | 128 | 8 | 153.6 GB | submit batch job, 2x node for 24 hours w/ 128x CPU 8x GPU task in "gpux8" partition.
sub_gpux16_256cpu_24hrs.swb | batch | gpux16 | 24 hrs | 4 | 256 | 16 | 307.2 GB | submit batch job, 4x node for 24 hours w/ 256x CPU 16x GPU task in "gpux16" partition.
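
A common way to start from one of these samples is to copy it into your working directory, adjust the #SBATCH options, and submit it with swbatch; the sketch below assumes the sample file names match the table above.

Code Block
languagebash
titleusing a sample run script
# copy a sample batch script into the current directory
cp /opt/samples/runscripts/sub_gpux1_16cpu_24hrs.swb .

# adjust job name, output files, partition, etc. as needed, then submit
swbatch sub_gpux1_16cpu_24hrs.swb

# monitor the job and overall resource status
swqueue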

Native SLURM style

Available Queues

Name | Priority | Max Walltime | Max Nodes | Min/Max CPUs | Min/Max RAM | Min/Max GPUs | Description
cpu | normal | 24 hrs | 16 | 1-96 | 1.2 GB per CPU | 0 | Designed for CPU-only jobs
gpu | normal | 24 hrs | 16 | 1-160 | 1.2 GB per CPU | 0-64 | Designed for jobs utilizing GPUs
debug | high | 4 hrs | 1 | 1-160 | 1.2 GB per CPU | 0-4 | Designed for single-node, short jobs. Jobs submitted to this queue receive higher priority than other jobs of the same user.

Submit Interactive Job with "srun"

Code Block
# interactive shell on 1 debug node: 16 tasks, 1200 MB per CPU, 1x V100 GPU, 1.5-hour limit
srun --partition=debug --pty --nodes=1 \
     --ntasks-per-node=16 --cores-per-socket=4 \
     --threads-per-core=4 --sockets-per-node=1 \
     --mem-per-cpu=1200 --gres=gpu:v100:1 \
     --time 01:30:00 --wait=0 \
     --export=ALL /bin/bash

Submit Batch Job

...

Code Block
scancel [job_id] # cancel job with [job_id]
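
Alongside scancel, the standard Slurm status commands work as expected; this is a generic Slurm sketch rather than HAL-specific output.

Code Block
languagebash
squeue -u $USER            # list your own pending and running jobs
scontrol show job [job_id] # show detailed information for a specific job
sinfo                      # show partition and node status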

Job Queues

...

Max CPUs
Per Node

...

Max Memory
Per CPU (GB)

...

HAL Example Job Scripts

New users should check the example job scripts at "/opt/apps/samples-runscript" and request adequate resources.

...

Max
Walltime

...

PBS style

Some PBS commands are supported by SLURM.
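
Whether the PBS-style wrapper commands are installed depends on the site configuration; if they are available, the most common ones map roughly as in this sketch (run_script.pbs is a placeholder file name).

Code Block
languagebash
qsub  run_script.pbs   # submit a batch script (analogous to sbatch)
qstat -u $USER         # list your jobs (analogous to squeue)
qdel  [job_id]         # cancel a job (analogous to scancel)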

...