System Description

Host name: nano.ncsa.illinois.edu (141.142.227.55)

Hardware

  • 8x SuperMicro SYS-4028GR-TR
    • X10DRG-O+-CPU motherboard
    • 2x Intel Xeon CPU E5-2620 v3 @ 2.40GHz
    • 128 GB DDR4 (8x 16 GB Micron 2133 MHz 36ASF2G72PZ-2G1A2)
    • 8 PCI-E 3.0 ports, switched
    • Mellanox MT27500 Family [ConnectX-3] QDR IB
    • 1x 256 GB Samsung SSD 850

Nodes 1-5

  • 8x AMD Fiji Radeon R9 FURY / NANO Series
    • 4096 cores
    • 4 GB HBM

Node 6

  • 1x NVIDIA P100 GPU

Node 7

  • 4x NVIDIA K40 GPUs

Node 8

  • 8x NVIDIA M40 GPUs

  • NFS-mounted 30TB /home (2x 6-drive RAID z2 with 4TB drives)
  • GlusterFS w/ 2-node fault tolerance - 45TB usable

Software

  • CentOS 7
  • CUDA 9.2/10.0
  • PGI 16.10
  • Intel ICC 16
  • gcc 4.8
  • gcc 5.3 via 'scl enable devtoolset-4 bash'

To request access, please fill out this form. (Use the link on the confirmation page to sign up for a new account. The same link is also included in the confirmation email.)

Instructions for running Jupyter Notebooks on compute nodes
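
In short, the workflow looks like the following minimal sketch (not the official instructions: it assumes the Torque qsub syntax shown under Usage notes below and that Jupyter is installed in your environment; nano7, port 8888, and "username" are placeholders to replace with your own):

  # 1) from the head node, request an interactive session on a compute node
  qsub -I -l nodes=nano7:ppn=1:gpus=1,walltime=3600
  # 2) on the compute node, start a notebook server without opening a browser
  jupyter notebook --no-browser --ip=0.0.0.0 --port=8888
  # 3) on your local machine, tunnel through the head node to the compute node
  ssh -L 8888:nano7:8888 username@nano.ncsa.illinois.edu
  # 4) then open http://localhost:8888 in your local browser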

Usage notes:

  • nano (141.142.227.55) is the head node of the cluster; it should not be used for any computations!
  • nodes 1-5 do not have NVIDIA GPUs; they can be used for CPU-only workloads
  • node 6 has 1 NVIDIA P100 GPU
  • node 7 has 4 NVIDIA K40 GPUs
  • node 8 has 8 NVIDIA M40 GPUs. These are very good for deep learning if the code is single-precision.
  • please
    • use qsub, e.g.,
      • to get one GPU and one CPU core on node 7 for 1 hour of interactive use:
        • qsub -I -l nodes=nano7:ppn=1:gpus=1,walltime=3600
      • to get the entire node 1 for 1 hour of exclusive interactive use:
        • qsub -I -l nodes=nano1:ppn=12,walltime=3600
    • better yet, do not allocate nodes for interactive use; instead just submit batch jobs (see the sketch after this list, and the Job Scripts section at https://kb.iu.edu/d/avmy for details). This is a much better way to share computing resources.
    • interactive jobs are limited to 12 hours maximum walltime per job.
    • batch jobs are limited to 96 hours
    • submit request to staff for longer batch jobs (up to 240 hours)
    • to see what’s running on the cluster, just run qstat
    • this is a shared resource; please keep in mind that other users are using it as well, and do not take over the system beyond what you really need.
    • home directory is cross-mounted, but storage space is very limited
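
A minimal batch script sketch to go with the note above (it reuses the Torque/PBS resource syntax from the interactive examples; job.pbs, example_job, and my_program are placeholders):

  #!/bin/bash
  #PBS -N example_job                  # job name (placeholder)
  #PBS -l nodes=nano7:ppn=1:gpus=1     # one CPU core and one GPU on node 7
  #PBS -l walltime=01:00:00            # 1 hour; batch jobs may run up to 96 hours
  cd "$PBS_O_WORKDIR"                  # start in the directory the job was submitted from
  ./my_program                         # placeholder for your actual command

Submit it with qsub job.pbs and monitor it with qstat.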

DL frameworks

  • TensorFlow 1.10
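
A quick way to confirm that TensorFlow sees a GPU (a sketch; run it inside a GPU job, never on the head node, and it assumes TensorFlow is on the default python path):

  python -c "from tensorflow.python.client import device_lib; print(device_lib.list_local_devices())"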

Node configuration (see login message for the exact configuration of nano1-nano8):

  • 2x Intel Xeon CPU E5-2620 v3 @ 2.40GHz (all nodes)
  • 2x NVIDIA P100 GPUs
    • 3584 cores
    • 16 GB HBM2
  • CUDA 11.6
  • UNSCHEDULABLE - reserved for project



Contact us

Request access to ISL resources: Application

Contact ISL staff: Email Address

Visit: NCSA, room 3050E