The NCSA Allocations page has moved! Please update your bookmarks.

https://docs.ncsa.illinois.edu/en/latest/allocations/index.html


Are you an Illinois researcher looking for access to compute or support resources? We have great news — there is now an easy and free way to get access to computing and support resources on campus, through the Illinois Research Computing and Data effort.  Connect with us to get started!

For more information, see our Illinois Computes page.

Illinois Allocation Requests

The table below contains resources available to Illinois users through the NCSA XRAS portal. For a complete list of resources, see the NCSA Resources section below.

To get started with Illinois allocations, request an NCSA Kerberos account at this link.  Account creation may take up to 24 hours once requested.

Once your user account has been created, you can submit proposals for open Illinois allocation requests by visiting the NCSA XRAS proposal submission portal.

Resource | Open Request Period | Access

Delta

Open Continuously

Allocation awarded by NCSA to University of Illinois Urbana campus researchers. 

See the Delta Allocations wiki page for more details.

Radiant

Open Continuously

Resources available based on service fee. 

See the Radiant wiki page for more details.

HOLL-I

Open Continuously

Resources available based on service fee. 

See the HOLL-I wiki page for more details.

Nightingale

Open Continuously

Resources available based on service fee, for projects handling sensitive data.

See the Nightingale read-the-docs page for more details.

Hydro

Open Continuously

Resources available based on service fee.

To inquire about access, please contact Hydro Support (help+hydro@ncsa.illinois.edu).

ACCESS Allocation Requests

To get started with ACCESS allocations, see the Get Started with ACCESS page on the ACCESS website.

ACCESS resources are allocated on an ongoing basis through the following allocation opportunities: Explore, Discover, and Accelerate; and periodically through the Maximize allocations review process. The amount you are able to request increases with each successive opportunity. See the following page for more information: https://allocations.access-ci.org/prepare-requests

Delta is currently the only NCSA resource available through ACCESS.  For a complete list of resources available through ACCESS, see the Available Resources page on the ACCESS website. 

 


NCSA Resources

NCSA offers access to a variety of resources that can be requested by University of Illinois users through our Illinois allocations or through the ACCESS program.

Name/URL | Type | Description | Primary Use Cases | Hardware/Storage | Access | User Documentation and Support






Delta






HPC





A computing and data resource that balances cutting-edge graphics processor and CPU architectures to shape the future of advanced research computing. Made possible by the National Science Foundation, Delta is the most performant GPU computing resource in NSF's portfolio.






Coming soon!

  • 124 CPU nodes
  • 100 quad A100 GPU nodes
  • 100 quad A40 GPU nodes
  • Five eight-way A100 GPU nodes
  • One MI100 GPU node
  • Eight utility nodes will provide login access, data transfer capability and other services
  • 100 Gb/s HPE SlingShot network fabric
  • 7 PB of disk-based Lustre storage
  • 3 PB of flash-based storage for data-intensive workloads to be deployed in the fall of 2021




Allocation awarded by University of Illinois or the ACCESS program - see the Delta Allocations wiki page for more details






Delta User Guide

help@ncsa.illinois.edu



Radiant



HPC



Radiant is a private cloud computing service operated by NCSA for the benefit of NCSA and UI faculty and staff.  Customers can purchase VMs, computing time in cores, storage of various types, and public IPs for use with their VMs.




Radiant Use Cases

  • 140 nodes
  • 3,360 cores
  • 35 TB of memory
  • 25 GbE/100 GbE backing network
  • 185 TB of usable flash capacity
  • Access to NCSA's 10 PB+ (and growing) center-wide storage infrastructure/archive
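Radiant VMs can also be managed programmatically. The snippet below is a minimal sketch, assuming Radiant exposes an OpenStack-compatible API and that a cloud profile named "radiant" has been configured in your local clouds.yaml; those names are illustrative assumptions, so see the Radiant wiki page for the actual connection details.

```python
# Minimal sketch: list your project's VMs on an OpenStack-style private cloud.
# Assumes the openstacksdk package is installed and a "radiant" entry exists
# in clouds.yaml with valid credentials (an illustrative assumption).
import openstack

conn = openstack.connect(cloud="radiant")  # hypothetical cloud profile name

# Print each server (VM) in the current project along with its status.
for server in conn.compute.servers():
    print(f"{server.name}: {server.status}")
```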


Cost varies by the Radiant resource requested - see the Radiant wiki page for more details



Radiant User Documentation

help@ncsa.illinois.edu



HOLL-I



AI-HPC

HOLL-I (Highly Optimized Logical Learning Instrument)

This is a batch computing cluster that provides access to a Cerebras CS-2 Wafer Scale Engine for high-performance machine learning work. It has local home storage in addition to access to the Taiga center-wide storage system.

Extreme-scale machine learning with select TensorFlow and PyTorch models

  • Cerebras CS-2 Wafer Scale Engine
    • 850,000 cores
    • 40 GB RAM
    • 1200 GbE connectivity
  • 9 service nodes
  • Taiga project space


Access and costs listed in HOLL-I User Documentation


HOLL-I User Documentation

help@ncsa.illinois.edu





NCSA Illinois Campus Cluster Investment






HPC




NCSA has purchased 20 nodes that affiliates may request access to: https://campuscluster.illinois.edu/new_forms/user_form.php

Alternatively, individuals, groups, and campus units can invest in compute and storage resources on the cluster or purchase compute time on demand or storage space by the terabyte/month.






ICCP Use Cases

  • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (E5-2670 v2 CPU), Tesla K40M GPU
  • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (E5-2670 v2 CPU), no GPU
  • 4 nodes with: 256 GB memory, InfiniBand interconnect, 24 cores (E5-2690 v3 CPU), no GPU





Cost to purchase nodes, storage, or usage on-demand





Illinois Campus Cluster Program Resources

help@campuscluster.illinois.edu

Illinois Computes Campus Cluster Investment

HPC

Illinois Computes has purchased 16 nodes that join the previous NCSA investment in Campus Cluster. The NCSA queue has been merged into the IllinoisComputes queue on Campus Cluster.


ICCP Use Cases

  • 16 nodes with: 512GB memory, 128 cores/node (AMD 7713 CPU)
  • 4 nodes each with 4 Nvidia A100s arriving soon. 

Illinois Computes access request


Illinois Campus Cluster Program Resources

help@campuscluster.illinois.edu



Illinois HTC Program





HTC


The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort between Research IT, Engineering IT Shared Services, and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise idle workstations in Linux Workstation labs.




The HTC service is not intended to run MPI jobs

  • 300 compute nodes with 12-core Intel Xeon X5650 @ 2.67 GHz and 24 GB RAM (2 have 48 GB RAM and 1 has 96 GB RAM)


Allocation awarded by University of Illinois Urbana campus



HTC User Documentation

htc@lists.illinois.edu








Nightingale








HIPAA HPC





Nightingale is a high-performance compute cluster for sensitive data. It offers researchers a secure system for data storage and powerful computation.  The system is compliant with the Health Insurance Portability and Accountability Act (HIPAA) privacy and security rules for using Protected Health Information (PHI). Nightingale is not limited to the health domain and accommodates projects that require this level of security or less, such as compliance with Controlled Unclassified Information (CUI) policies. It resides in the NCSA National Petascale Computing Facility and is audited yearly by an outside entity to ensure secure operation (SOC 2, Type 2).






Projects working with HIPAA, CUI, and other protected or sensitive data.

  • 4 interactive compute/login nodes with dual 64-core AMDs and 512 GB of RAM
  • 6 interactive nodes with 1 A100, dual 32-core AMDs, and 256 GB of RAM
  • 5 interactive nodes with 1 A40, dual 32-core AMDs, and 512 GB of RAM
  • Batch system:
    • 16 dual 64-core AMD systems with 1 TB of RAM
    • 2 dual-A100 compute nodes with 32-core AMDs and 512 GB of RAM
  • 880 TB of high-speed parallel Lustre-based storage







Cost to purchase nodes and storage








Nightingale Documentation

help@ncsa.illinois.edu

Research IT - Research Computing Collaborative Services

Support

A partnership between NCSA and Research IT that helps maximize the efficiency of your computational workflows, codes, and simulations.


Coming soon!


N/A

Allocation awarded by campus Research IT

Research Computing Collaborative Services

research-it@illinois.edu



Granite






Archive Storage


Granite is NCSA's tape archive system, closely integrated with Taiga, that provides users with a place to store longer-term archive datasets. Access to the tape system is available directly via tools such as scp, Globus, and S3 (an illustrative S3 sketch follows the hardware list below).  Data written to Granite is replicated to two tapes for mirrored protection in case of tape failure.

  • Storage of infrequently accessed data
  • Disaster Recovery
  • Archive Datasets
  • 19-frame Spectra TFinity library
  • 40 PB of replicated capacity on TS1140 (JAG 7) media
  • Managed by Versity's ScoutFS/ScoutAM products
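As an illustration of the S3 access path mentioned above, the sketch below uses Python's boto3 client to upload and list archive objects. The endpoint URL, bucket name, and credential placeholders are assumptions for illustration only; consult the Taiga & Granite documentation for Granite's actual S3 endpoint and credential process.

```python
# Minimal sketch: archive a dataset to Granite over its S3 interface.
# The endpoint URL, bucket name, and credentials below are placeholders,
# not real Granite values; see the Taiga & Granite documentation.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://granite-s3.example.ncsa.illinois.edu",  # hypothetical
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload a tarball to an archive bucket, then list what the bucket holds.
s3.upload_file("dataset-2024.tar.gz", "my-archive-bucket", "dataset-2024.tar.gz")
for obj in s3.list_objects_v2(Bucket="my-archive-bucket").get("Contents", []):
    print(obj["Key"], obj["Size"])
```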



Contact Support



Taiga & Granite Documentation

set@ncsa.illinois.edu




Taiga




Storage

Taiga is NCSA's global file system that integrates with all non-HIPAA environments in the National Petascale Computing Facility.  Built with SSUs (Scalable Storage Units) spec'd by NCSA engineers with DDN, it provides a center-wide, single-namespace file system available for use across multiple platforms at NCSA.  This allows researchers to access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud, and container resources.  Taiga is also well integrated with the Granite tape archive, allowing users to readily stage out data to their tape allocation for long-term, cold storage.


  • Active Research and Project Data
  • Visualization Data


  • 18 PB of hybrid NVMe/HDD storage based on two Taiga SSUs
  • Backed by HDR InfiniBand
  • Running DDN's Lustre ExaScaler appliance



Contact Support




Taiga & Granite Documentation

set@ncsa.illinois.edu

HAL

HPC

A computer system built to efficiently run deep learning frameworks. The system consists of 16 IBM POWER9 servers with 4 NVIDIA V100 GPUs each, interconnected with Mellanox EDR InfiniBand fabric, and a DDN all-flash storage array. The system is tailored towards efficient execution of the IBM Watson Machine Learning enterprise software stack that combines popular open-source deep learning frameworks.  HAL enables scaling of deep neural networks to produce state-of-the-art performance results.


  • Deep Learning Frameworks
    • TensorFlow
    • PyTorch
  • 16 IBM POWER9 nodes with 4 NVIDIA V100 GPUs per node
  • 244 TB usable NVMe SSD-based storage by DDN
  • Peak system bandwidth ~100 GB/s
  • 10 GbE external
  • Dedicated 10 GbE for data transfer
  • Dual-channel EDR InfiniBand internal
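As a quick illustration of running one of these frameworks on HAL's GPUs, the hedged PyTorch sketch below simply checks that the node's V100s are visible and runs a small matrix multiply on one of them. It assumes a CUDA-enabled PyTorch environment is already active (for example, a Watson ML conda environment) and does not reflect any HAL-specific configuration.

```python
# Minimal sketch: verify GPU visibility in PyTorch and run a small workload.
# Assumes a CUDA-enabled PyTorch installation is available in the active
# environment; nothing here is specific to HAL's software stack.
import torch

print("CUDA available:", torch.cuda.is_available())
print("GPU count:", torch.cuda.device_count())
for i in range(torch.cuda.device_count()):
    print(f"  device {i}: {torch.cuda.get_device_name(i)}")

# Run a small matrix multiply on the first GPU (falls back to CPU if absent).
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
x = torch.randn(1024, 1024, device=device)
y = torch.randn(1024, 1024, device=device)
print("Result shape:", (x @ y).shape, "on", device)
```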




Fill out the HAL Request Form




HAL Documentation

help@ncsa.illinois.edu

Innovative Systems Lab (ISL)





Hydro

HPC

The Hydro cluster is an HPC platform made available by NFI, offering CPU and GPU options for AI/ML/MPI workloads.  The GPUs available are NVIDIA A100 80 GB variants. Priority use is given to NFI projects; additional time is available.
  • 70 total nodes
  • CPU: Sandy Bridge, Rome, Milan
  • Memory: 256-384 GB per node
  • GPU: 18 NVIDIA A100 (9 nodes)
  • 40-100 GbE Ethernet to WAN
  • FDR InfiniBand
  • 4 PB of Lustre-based storage
  • 2 login nodes

Contact Support


Hydro User Documentation

help+hydro@ncsa.illinois.edu



Please contact help@ncsa.illinois.edu if you have any questions or need help getting started with NCSA resources.


