Quick Start
Contact Us
Request for HAL Access
Please fill out the following form. Make sure to follow the link on the application confirmation page to request an actual system account.
Submit Tech-Support Ticket
Please submit a tech-support ticket to the admin team.
Join HAL Slack User Group
Please join the HAL Slack user group.
Check System Status
Please visit the following website to monitor real-time system status.
User Guide
Connect to HAL Cluster
There are two ways to log in to the HAL system. The first is to SSH from a terminal:
ssh <username>@hal.ncsa.illinois.edu
and the second is to visit the HAL OnDemand web page:
https://hal.ncsa.illinois.edu:8888
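If you connect often, a host alias in `~/.ssh/config` saves typing the full hostname (a minimal sketch; the alias name `hal` is our choice, and `<username>` is your HAL account name):

```
# ~/.ssh/config — optional alias for the HAL login node
Host hal
    HostName hal.ncsa.illinois.edu
    User <username>
```

With this entry in place, `ssh hal` is equivalent to the full command above.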
Submit Jobs Using Slurm Wrapper Suite (Recommended)
Submit an interactive job using the Slurm Wrapper Suite:
swrun -p gpux1
Submit a batch job using the Slurm Wrapper Suite:
swbatch run_script.swb
The run_script.swb example:

#!/bin/bash
#SBATCH --job-name="hostname"
#SBATCH --output="hostname.%j.%N.out"
#SBATCH --error="hostname.%j.%N.err"
#SBATCH --partition=gpux1

srun /bin/hostname # this is our "application"
Submit Jobs Using Native Slurm
Submit an interactive job using Slurm directly.
srun --partition=gpux1 --pty --nodes=1 --ntasks-per-node=12 \
     --cores-per-socket=3 --threads-per-core=4 --sockets-per-node=1 \
     --gres=gpu:v100:1 --mem-per-cpu=1500 --time=4:00:00 \
     --wait=0 --export=ALL /bin/bash
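As a quick sanity check on the request above (a sketch, not part of the HAL docs), the per-unit flags multiply out to the task count and memory total that Slurm will grant:

```shell
#!/bin/sh
# Recompute the totals implied by the srun flags above.
sockets=1          # --sockets-per-node=1
cores=3            # --cores-per-socket=3
threads=4          # --threads-per-core=4
mem_per_cpu=1500   # --mem-per-cpu=1500 (MB)

tasks=$((sockets * cores * threads))   # hardware threads per node
total_mem=$((tasks * mem_per_cpu))     # memory ceiling for the job (MB)

echo "tasks=$tasks"                    # 12, matching --ntasks-per-node=12
echo "total_mem=${total_mem} MB"       # 18000 MB (~18 GB)
```

The POWER9 cores on HAL run four hardware threads each, which is why --threads-per-core=4 appears in the request.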
Submit a batch job using Slurm directly.
sbatch run_script.sb
The run_script.sb example:

#!/bin/bash
#SBATCH --job-name="hostname"
#SBATCH --output="hostname.%j.%N.out"
#SBATCH --error="hostname.%j.%N.err"
#SBATCH --partition=gpux1
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=1
#SBATCH --export=ALL
#SBATCH -t 04:00:00

srun /bin/hostname # this is our "application"
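Once the job is submitted, standard Slurm commands (not HAL-specific) can track and manage it; `<jobid>` below is the ID that sbatch prints on submission:

```
squeue -u $USER      # list your queued and running jobs
scancel <jobid>      # cancel a job
```

Output and error files land in the submission directory, named per the --output and --error patterns in the script (%j expands to the job ID, %N to the node name).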
Submit Jobs Using HAL OnDemand (New)
Log in with your HAL username and password.
https://hal.ncsa.illinois.edu:8888
After logging in, the Files, Jobs, and Clusters apps are available from the dashboard.
Documentation
System Overview
Hardware Information
Login node
Component | Manufacturer | Model | Quantity |
---|---|---|---|
Login Node | IBM | 9006-12P | 1x |
CPU | IBM | POWER9 16 Cores | 2x |
Network | Mellanox | 2-Port EDR InfiniBand | 1x |
Compute node
Component | Manufacturer | Model | Quantity |
---|---|---|---|
Compute Node | IBM | AC922 8335-GTH | 16x |
CPU | IBM | POWER9 20 Cores | 2x |
GPU | NVidia | V100 16GB Memory | 4x |
Network | Mellanox | 2-Port EDR InfiniBand | 1x |
Storage Node
Component | Manufacturer | Model | Quantity |
---|---|---|---|
Storage Node | IBM | 9006-22P | 1x |
CPU | IBM | POWER9 20 Cores | 2x |
Storage | WD | NFS | 1x |
Network | Mellanox | 2-Port EDR InfiniBand | 1x |
Software Information
Manufacturer | Software Package | Version |
---|---|---|
IBM | RedHat Linux | 7.6 |
NVidia | CUDA | 10.1.105 |
NVidia | PGI Compiler | 19.4 |
IBM | Advance Toolchain | 12.0 |
IBM | XLC/XLF | 16.1.1 |
IBM | PowerAI | 1.6.1 |
SchedMD | Slurm | 19.05.2 |
OSC | Open OnDemand | 1.6.20 |