Partition Name | Priority | Max Walltime | Max Nodes/Job | Description
---|---|---|---|---
debug | high | 12 hrs | 1 | access to 1 GPU for debugging and short jobs
solo | normal | 72 hrs | 1 | access to 1 GPU for long-running jobs
batch | normal | 72 hrs | 16 | access to up to 64 GPUs for parallel jobs
Start an interactive session on the debug partition:

```shell
srun --partition=debug --pty --nodes=1 \
     --ntasks-per-node=8 --gres=gpu:v100:1 \
     -t 01:30:00 --wait=0 \
     --export=ALL /bin/bash
```
- `sbatch [job_script]` — submit a batch job
- `squeue -u [username]` — list your queued and running jobs
- `scancel [job_id]` — cancel a job (use `scancel -u [username]` to cancel all of your jobs)
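A minimal batch script to pass to `sbatch` might look like the following sketch. The job name, partition, and resource values are illustrative choices, not requirements; the `#SBATCH` directives mirror the flags shown in the `srun` example above.

```shell
#!/bin/bash
#SBATCH --job-name=test_job        # illustrative job name
#SBATCH --partition=solo           # one of the partitions listed above
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=8
#SBATCH --gres=gpu:v100:1          # request one V100 GPU
#SBATCH --time=01:30:00            # walltime, within the partition limit

# The #SBATCH lines are shell comments; SLURM parses them at submit time.
hostname
```

Submit it with `sbatch test_job.sh`; by default, standard output is written to `slurm-<jobid>.out` in the submission directory.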
SLURM also supports several PBS-compatible commands:
- `pbsnodes` — show the status of the cluster's nodes
- `qstat -f [job_number]` — show detailed information about a job
- `qstat` — list queued and running jobs
- `qdel [job_number]` — delete a job
```shell
$ cat test.pbs
#!/usr/bin/sh
#PBS -N test
#PBS -l nodes=1
#PBS -l walltime=10:00
hostname
$ qsub test.pbs
107
$ cat test.pbs.o107
hal01.hal.ncsa.illinois.edu
```
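For comparison, a sketch of the same job written with native SLURM directives (the filename `test.slurm` is illustrative; each directive is annotated with the PBS option it replaces):

```shell
#!/bin/bash
#SBATCH --job-name=test      # PBS: -N test
#SBATCH --nodes=1            # PBS: -l nodes=1
#SBATCH --time=10:00         # PBS: -l walltime=10:00
hostname
```

Submitted with `sbatch test.slurm`, the hostname would appear in `slurm-<jobid>.out` rather than in a `test.pbs.o<jobid>` file.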