
The HAL Slurm Wrapper Suite is designed to help users run jobs on the HAL system easily and efficiently. The current version is "swsuite-v0.14", which includes:

srun (slurm command) → swrun : request resources to run interactive jobs.

sbatch (slurm command) → swbatch : request resources to submit a batch script to Slurm.

squeue (slurm command) → swqueue : check current running jobs and computational resource status.


Info

The Slurm Wrapper Suite is designed with people new to Slurm in mind and simplifies many aspects of job submission in favor of automation. For advanced use cases, the native Slurm commands are still available for use.


Rule of Thumb

  • Minimize the required input options.
  • Stay consistent with the original Slurm run-script format.
  • Submit jobs to a suitable partition based on the number of GPUs needed (number of nodes for CPU partitions).

Usage

Warning

Request Only As Much As You Can Make Use Of

Many applications require some amount of modification to make use of more than one GPU for computation. Almost all programs require nontrivial optimizations to run efficiently on more than one node (partitions gpux8 and larger). Monitor your usage and avoid occupying resources that you cannot make use of.

  • swrun -p <partition_name> -c <cpu_per_gpu> -t <walltime> -r <reservation_name>
    • <partition_name> (required) : cpun1, cpun2, cpun4, cpun8, gpux1, gpux2, gpux3, gpux4, gpux8, gpux12, gpux16.
    • <cpu_per_gpu> (optional) : 16 cpus (default), range from 16 cpus to 40 cpus.
    • <walltime> (optional) : 4 hours (default), range from 1 hour to 24 hours in integer format.
    • <reservation_name> (optional) : reservation name granted to user.
    • example: swrun -p gpux4 -c 40 -t 24 (request a full node: 1x node, 4x gpus, 160x cpus, 24x hours)
    • Using interactive jobs to run long-running scripts is not recommended. If you are going to walk away from your computer while your script is running, consider submitting a batch job. Unattended interactive sessions can remain idle until they run out of walltime and thus block out resources from other users. We will issue warnings when we find resource-heavy idle interactive sessions and repeated offenses may result in revocation of access rights.
  • swbatch <run_script>
    • <run_script> (required) : same as original slurm batch.
    • <job_name> (optional) : job name.
    • <output_file> (optional) : output file name.
    • <error_file> (optional) : error file name.
    • <partition_name> (required) : cpun1, cpun2, cpun4, cpun8, gpux1, gpux2, gpux3, gpux4, gpux8, gpux12, gpux16.
    • <cpu_per_gpu> (optional) : 16 cpus (default), range from 16 cpus to 40 cpus.
    • <walltime> (optional) : 24 hours (default), range from 1 hour to 24 hours in integer format.
    • <reservation_name> (optional) : reservation name granted to user.
    • example: swbatch demo.swb

      Code Block
      languagebash
      titledemo.swb
      #!/bin/bash
      
      #SBATCH --job-name="demo"
      #SBATCH --output="demo.%j.%N.out"
      #SBATCH --error="demo.%j.%N.err"
      #SBATCH --partition=gpux1
      #SBATCH --time=4
      
      srun hostname


  • swqueue
    • example: swqueue (displays current running jobs and overall computational resource status)

New Job Queues (SWSuite only)

Info
titleUnder current policy, jobs requesting more than 5 nodes require a reservation. Otherwise, they will be held by the scheduler and will not execute.



Partition Name | Priority | Max Walltime | Nodes Allowed | Min-Max CPUs Per Node Allowed | Min-Max Mem Per Node Allowed | GPU Allowed | Local Scratch | Description
gpu-debug | high | 4 hrs | 1 | 12-144 | 18-144 GB | 4 | none | designed to access 1 node to run debug jobs.
gpux1 | normal | 24 hrs | 1 | 16-40 | 19.2-48 GB | 1 | none | designed to access 1 GPU on 1 node to run sequential and/or parallel jobs.
gpux2 | normal | 24 hrs | 1 | 32-80 | 38.4-96 GB | 2 | none | designed to access 2 GPUs on 1 node to run sequential and/or parallel jobs.
gpux3 | normal | 24 hrs | 1 | 48-120 | 57.6-144 GB | 3 | none | designed to access 3 GPUs on 1 node to run sequential and/or parallel jobs.
gpux4 | normal | 24 hrs | 1 | 64-160 | 76.8-192 GB | 4 | none | designed to access 4 GPUs on 1 node to run sequential and/or parallel jobs.
gpux8 | normal | 24 hrs | 2 | 64-160 | 76.8-192 GB | 8 | none | designed to access 8 GPUs on 2 nodes to run sequential and/or parallel jobs.
gpux12 | normal | 24 hrs | 3 | 64-160 | 76.8-192 GB | 12 | none | designed to access 12 GPUs on 3 nodes to run sequential and/or parallel jobs.
gpux16 | normal | 24 hrs | 4 | 64-160 | 76.8-192 GB | 16 | none | designed to access 16 GPUs on 4 nodes to run sequential and/or parallel jobs.
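The GPU partition limits above follow a simple scaling rule, roughly 16-40 CPUs per GPU and 1.2 GB of memory per CPU. As a quick sanity check (a sketch; the rule is inferred from the table, not an official formula), the gpux4 row can be reproduced with shell arithmetic:

```shell
# Sanity-check the gpux4 limits, assuming 16-40 CPUs per GPU and 1.2 GB per CPU.
gpus=4
min_cpus=$((16 * gpus))    # minimum CPUs for the partition
max_cpus=$((40 * gpus))    # maximum CPUs for the partition
min_mem=$(awk -v c="$min_cpus" 'BEGIN { printf "%.1f", c * 1.2 }')
max_mem=$(awk -v c="$max_cpus" 'BEGIN { printf "%.1f", c * 1.2 }')
echo "gpux${gpus}: ${min_cpus}-${max_cpus} CPUs, ${min_mem}-${max_mem} GB"
```

The same rule reproduces the other gpux rows when `gpus` is changed accordingly.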


Job Queues

Partition Name | Priority | Max Walltime | Nodes Allowed | Min-Max CPUs Per Node Allowed | Min-Max Mem Per Node Allowed | GPU Allowed | Local Scratch | Description
cpun1 | normal | 24 hrs | 1 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 1 node to run sequential and/or parallel jobs.
cpun2 | normal | 24 hrs | 2 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 2 nodes to run sequential and/or parallel jobs.
cpun4 | normal | 24 hrs | 4 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 4 nodes to run sequential and/or parallel jobs.
cpun8 | normal | 24 hrs | 8 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 8 nodes to run sequential and/or parallel jobs.
cpun16 | normal | 24 hrs | 16 | 96-96 | 115.2-115.2 GB | 0 | none | designed to access 96 CPUs on 16 nodes to run sequential and/or parallel jobs.
cpu_mini | normal | 24 hrs | 1 | 8-8 | 9.6-9.6 GB | 0 | none | designed to access 8 CPUs on 1 node to run tensorboard jobs.

HAL Wrapper Suite

...

Example Job Scripts

New users should check the example job scripts at "/opt/samples/runscripts" and request adequate resources.


Script Name | Job Type | Partition | Max Walltime | Nodes | CPU | GPU | Memory | Description
run_gpux1_16cpu_24hrs.sh | interactive | gpux1 | 24 hrs | 1 | 16 | 1 | 19.2 GB | submit interactive job, 1x node for 24 hours w/ 16x CPU 1x GPU task in "gpux1" partition.
run_gpux2_32cpu_24hrs.sh | interactive | gpux2 | 24 hrs | 1 | 32 | 2 | 38.4 GB | submit interactive job, 1x node for 24 hours w/ 32x CPU 2x GPU task in "gpux2" partition.
sub_gpux1_16cpu_24hrs.swb | batch | gpux1 | 24 hrs | 1 | 16 | 1 | 19.2 GB | submit batch job, 1x node for 24 hours w/ 16x CPU 1x GPU task in "gpux1" partition.
sub_gpux2_32cpu_24hrs.swb | batch | gpux2 | 24 hrs | 1 | 32 | 2 | 38.4 GB | submit batch job, 1x node for 24 hours w/ 32x CPU 2x GPU task in "gpux2" partition.
sub_gpux4_64cpu_24hrs.swb | batch | gpux4 | 24 hrs | 1 | 64 | 4 | 76.8 GB | submit batch job, 1x node for 24 hours w/ 64x CPU 4x GPU task in "gpux4" partition.
sub_gpux8_128cpu_24hrs.swb | batch | gpux8 | 24 hrs | 2 | 128 | 8 | 153.6 GB | submit batch job, 2x nodes for 24 hours w/ 128x CPU 8x GPU task in "gpux8" partition.
sub_gpux16_256cpu_24hrs.swb | batch | gpux16 | 24 hrs | 4 | 256 | 16 | 307.2 GB | submit batch job, 4x nodes for 24 hours w/ 256x CPU 16x GPU task in "gpux16" partition.
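The sample scripts follow the same layout as the demo.swb shown earlier. A minimal run script can also be written by hand, as in this sketch (the job name `my_job` and the 4-hour walltime are illustrative, not one of the provided samples):

```shell
# Sketch: write a minimal gpux1 run script in the same format as the samples.
# Job name, output patterns, and walltime are illustrative values.
cat > my_job.swb <<'EOF'
#!/bin/bash
#SBATCH --job-name="my_job"
#SBATCH --output="my_job.%j.%N.out"
#SBATCH --error="my_job.%j.%N.err"
#SBATCH --partition=gpux1
#SBATCH --time=4
srun hostname
EOF
# On HAL, the script would then be submitted with: swbatch my_job.swb
```

The `%j` and `%N` patterns in the output/error file names expand to the job ID and the node name, so repeated runs do not overwrite each other's logs.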

Native SLURM style

Available Queues

Name | Priority | Max Walltime | Max Nodes | Min/Max CPUs | Min/Max RAM | Min/Max GPUs | Description
cpu | normal | 24 hrs | 16 | 1-96 | 1.2GB per CPU | 0 | Designed for CPU-only jobs.
gpu | normal | 24 hrs | 16 | 1-160 | 1.2GB per CPU | 0-64 | Designed for jobs utilizing GPUs.
debug | high | 4 hrs | 1 | 1-160 | 1.2GB per CPU | 0-4 | Designed for single-node, short jobs. Jobs submitted to this queue receive higher priority than other jobs of the same user.

Submit Interactive Job with "srun"

Code Block
srun --partition=debug --pty --nodes=1 \
     --ntasks-per-node=16 --cores-per-socket=4 \
     --threads-per-core=4 --sockets-per-node=1 \
     --mem-per-cpu=1200 --gres=gpu:v100:1 \
     --time 01:30:00 --wait=0 \
     --export=ALL /bin/bash

Submit Batch Job

Code Block
sbatch [job_script]

Check Job Status

Code Block
squeue                # check all jobs from all users 
squeue -u [user_name] # check all jobs belong to user_name

Cancel Running Job

Code Block
scancel [job_id] # cancel job with [job_id]

PBS style

Some PBS commands are supported by SLURM.

...