
System Description

Host name: nano.ncsa.illinois.edu

Hardware

  • 8x SuperMicro SYS-4028GR-TR
    • X10DRG-O+-CPU motherboard
    • 128 GB DDR4 (8x 16 GB Micron 2133 MHz 36ASF2G72PZ-2G1A2)
    • 8 PCI-E 3.0 ports, switched
    • Mellanox MT27500 Family [ConnectX-3] QDR IB
    • 1x 256 GB Samsung SSD 850
  • NFS-mounted 30 TB /home (2x 6-drive RAID-Z2 with 4 TB drives)
  • GlusterFS with 2-node fault tolerance - 62 TB usable

Software

  • CentOS 7
  • CUDA 9.2/10.0
  • PGI 16.10
  • Intel ICC 16
  • gcc 4.8
  • gcc 5.3 via 'scl enable devtoolset-4 bash'

To request access, please fill out this form. (Use the link on the confirmation page to sign up for a new account. The same link is also included in the confirmation email.)

Instructions for running Jupyter Notebooks on compute nodes
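
The linked instructions are authoritative; as a rough sketch, the usual approach is to start the notebook server inside a job on a compute node and tunnel the port back through the head node. In the example below the node name nano7 and port 8888 are placeholders, and Jupyter is assumed to be available in your environment:

  # on your workstation: log in to the head node
  ssh username@nano.ncsa.illinois.edu

  # on nano: request an interactive session on a compute node (nano7 is just an example)
  qsub -I -l nodes=nano7:ppn=1:gpus=1,walltime=3600

  # on the compute node: start the notebook server without a browser (port 8888 is an assumption)
  jupyter notebook --no-browser --ip=0.0.0.0 --port=8888

  # on your workstation, in a second terminal: forward the port through the head node,
  # then open http://localhost:8888 in a local browser
  ssh -L 8888:nano7:8888 username@nano.ncsa.illinois.edu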

Usage notes:

  • nano (141.142.204.5) is the head node of the cluster; it should not be used for any computations!
  • to connect to the cluster, ssh username@nano.ncsa.illinois.edu
  • to get access to a particular node for interactive use, use qsub, e.g.,
    • to get one GPU and one CPU core on node 7 for 1 hour for interactive use:
      • qsub -I -l nodes=nano7:ppn=1:gpus=1,walltime=3600
    • to get entire node 1 for 1 hour for exclusive interactive use:
      • qsub -I -l nodes=nano1:ppn=12,walltime=3600 


  • better yet, do not allocate nodes for interactive use; instead, submit batch jobs (see, for example, the Job Scripts section at https://kb.iu.edu/d/avmy, and the sample script after this list). This is a much better way to share computing resources.
  • interactive jobs are limited to a maximum walltime of 12 hours per job
  • batch jobs are limited to 96 hours
  • submit a request to staff for longer batch jobs (up to 240 hours)
  • to see what’s running on the cluster, just run qstat
  • this is a shared resource; please keep in mind that other users are using it as well, and do not take over the system beyond what you really need
  • home directory is cross-mounted and accessible from all nodes
  • Current System Status: https://nano.ncsa.illinois.edu:3000/d/3QVrDIFmz/nano-status
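
As an illustration of the batch-job approach mentioned above, here is a minimal Torque/PBS job script sketch; the job name, resource request, walltime, and the my_program command are placeholders to replace with your own:

  #!/bin/bash
  #PBS -N example_job                 # job name (placeholder)
  #PBS -l nodes=1:ppn=1:gpus=1        # one CPU core and one GPU on any node
  #PBS -l walltime=04:00:00           # 4 hours, well under the 96-hour batch limit
  #PBS -j oe                          # merge stdout and stderr into a single output file

  cd $PBS_O_WORKDIR                   # run from the directory the job was submitted from
  ./my_program                        # placeholder for your actual command

  # submit with:  qsub job.sh
  # monitor with: qstat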

DL frameworks

  • TensorFlow 1.10
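
To confirm that TensorFlow can actually see a GPU, a quick generic check (run inside an interactive or batch job on a compute node, not on the head node) is:

  python -c "import tensorflow as tf; print(tf.test.is_gpu_available())"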

Node configuration (see login message for the exact configuration):

nano1, nano2, nano3, nano4, nano5, nano6, nano7, nano8


Contact us

Request access to ISL resources: Application

Contact ISL staff: Email Address

Visit: NCSA, room 3050E
