
System Description

Host name: nano.ncsa.illinois.edu

Hardware

  • SuperMicro SYS-4028GR-TR
    • X10DRG-O+-CPU motherboard
    • 2x Intel Xeon CPU E5-2620 v3 @ 2.40GHz
    • 128 GB DDR4 (8x 16 GB Micron 2133 MHz 36ASF2G72PZ-2G1A2)
    • 8 PCIe 3.0 slots, switched
  • Mellanox MT27500 Family [ConnectX-3] QDR InfiniBand
  • 1x 256 GB Samsung SSD 850

Nodes 1-5

  • 8x AMD Fiji Radeon R9 FURY / NANO Series
    • 4096 stream processors per GPU
    • 4 GB HBM per GPU

Node 6

  • 1x NVIDIA P100 GPU

Node 7

  • 4x NVIDIA P100 GPUs

Node 8

  • 8x NVIDIA M40 GPUs
    • 3072 CUDA cores per GPU
    • 24 GB GDDR5 per GPU

Software

  • CentOS 7
  • CUDA 8.0
  • PGI 16.10
  • Intel ICC 16
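
For reference, here is a minimal sketch of typical compiler invocations for this software stack (the source file names are hypothetical, and the exact PATH/module setup on nano may differ):

  # CUDA 8.0: sm_60 targets the P100s on nodes 6-7, sm_52 the M40s on node 8
  nvcc -O2 -arch=sm_60 saxpy.cu -o saxpy
  # PGI 16.10 with OpenACC offload to NVIDIA GPUs
  pgcc -acc -ta=tesla saxpy.c -o saxpy_acc
  # Intel ICC 16 for CPU-only builds (e.g., on nodes 1-5)
  icc -O2 -xHost saxpy.c -o saxpy_cpu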

Usage notes:

  • nano is the head node of the cluster; it should not be used for any computation!
  • nodes 1-5 do not have NVIDIA GPUs; they can be used for CPU-only workloads
  • node 6 has 1 NVIDIA P100 GPU
  • node 7 has 4 NVIDIA P100 GPUs
  • node 8 has 8 NVIDIA M40 GPUs. These are very good for deep learning when the code runs in single precision.
  • to get access to a particular node for interactive use, use qsub. E.g., to get one GPU on node 7 for 1 hour of interactive use: qsub -I -l nodes=nano7:gpus=1,walltime=3600
  • better yet, do not allocate nodes for interactive use; instead, submit batch jobs (see the sample script below, and the Job Scripts section at https://kb.iu.edu/d/avmy for details). This is a much better way to share computing resources.
  • to see what is running on the cluster, run qstat
  • this is a shared resource; please keep in mind that other users are using it as well, and do not take over more of the system than you really need.
  • your home directory is cross-mounted, and it contains a soft link to a space set aside for you on the Lustre file system. Put all your work on Lustre.
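
As a minimal sketch of a batch job (the job name and the binary it runs are hypothetical; the resource requests mirror the interactive example above):

  #!/bin/bash
  #PBS -N gpu_test            # job name (hypothetical)
  #PBS -l nodes=nano7:gpus=1  # one GPU on node 7
  #PBS -l walltime=01:00:00   # one hour
  #PBS -j oe                  # merge stdout and stderr into one output file

  cd $PBS_O_WORKDIR           # start in the directory the job was submitted from
  ./saxpy                     # run your GPU binary (hypothetical name)

Save this as, e.g., job.pbs, submit it with qsub job.pbs, and monitor it with qstat.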


Projects
