
Are you an Illinois researcher looking for access to compute or support resources? We have great news: there is now an easy, free way to get access to computing and support resources on campus through the Illinois Research Computing and Data effort. Connect with us to get started!

For more information, see our Illinois Computes page.

Table of Contents


Illinois Allocation Requests

The table below contains resources available to Illinois users through the NCSA XRAS portal. For a complete list of resources, see the NCSA Resources section below.

To get started with Illinois allocations, request an NCSA Kerberos account at this link.  Account creation may take up to 24 hours once requested.

Once your user account has been created, you can submit proposals for open Illinois allocation requests through the NCSA XRAS proposal submission portal.

Resource

Open Request Period

Access

Delta

Open Continuously

Allocation awarded by NCSA to University of Illinois Urbana campus researchers.

See the Delta Allocations wiki page for more details.

Radiant

Open Continuously

Resources available based on service fee. 

See the Radiant wiki page for more details.

HOLL-I

Open Continuously

Resources available based on service fee.

See the HOLL-I wiki page for more details.

Nightingale

Open Continuously

Resources available based on service fee, for projects handling sensitive data.

See the Nightingale read-the-docs page for more details.

Hydro

Open Continuously

Resources available based on service fee.

To inquire about access, please contact Hydro Support (help+hydro@ncsa.illinois.edu).

ACCESS Allocation Requests

To get started with ACCESS allocations, see the Get Started with ACCESS page on the ACCESS website.

ACCESS resources are allocated on an ongoing basis through the following allocation opportunities: Explore, Discover, and Accelerate; and periodically through the Maximize allocations review process. Each successive opportunity increases the amount you are able to request. See the following website for more information: https://allocations.access-ci.org/prepare-requests

Delta is currently the only NCSA resource available through ACCESS.  For a complete list of resources available through ACCESS, see the Available Resources page on the ACCESS website. 

 

Note that new users are strongly encouraged to start with an Explore allocation before requesting one of the larger opportunities.


NCSA Resources

NCSA offers access to a variety of resources that can be requested through the ACCESS program or, by University of Illinois users, through our Illinois allocations.

Name/URL

Type

Description

Primary Use Cases

Hardware/Storage

Allocation Period

Access

User Documentation and Support






Delta (ACCESS)






HPC





A computing and data resource that balances cutting-edge graphics processor and CPU architectures that will shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will be the most performant GPU computing resource in NSF's portfolio.






Coming soon!

  • 124 CPU nodes
  • 100 quad A100 GPU nodes
  • 100 quad A40 GPU nodes
  • Five eight-way A100 GPU nodes
  • One MI100 GPU node
  • Eight utility nodes will provide login access, data transfer capability and other services
  • 100 Gb/s HPE SlingShot network fabric
  • 7 PB of disk-based Lustre storage
  • 3 PB of flash based storage for data intensive workloads to be deployed in the fall of 2021
Open Continuously

Allocation awarded by the University of Illinois or the ACCESS program - see the Delta Allocations wiki page for more details






Delta User Guide

help@ncsa.illinois.edu

help@xsede.org

Delta Illinois 

HPC

A computing and data resource that balances cutting-edge graphics processor and CPU architectures that will shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will be the most performant GPU computing resource in NSF's portfolio.

Coming soon!

  • 124 CPU nodes
  • 100 quad A100 GPU nodes
  • 100 quad A40 GPU nodes
  • Five eight-way A100 GPU nodes
  • One MI100 GPU node
  • Eight utility nodes will provide login access, data transfer capability and other services
  • 100 Gb/s HPE SlingShot network fabric
  • 7 PB of disk-based Lustre storage
  • 3 PB of flash based storage for data intensive workloads to be deployed in the fall of 2021

Biannual Delta Illinois Allocation Period

Allocation awarded by NCSA - see Illinois Allocations section below

Coming soon!

Coming soon!



Radiant



HPC



Radiant is a new private cloud computing service operated by NCSA for the benefit of NCSA and UI faculty and staff. Customers can purchase VMs, computing time in cores, storage of various types, and public IPs for use with their VMs.




Radiant Use Cases

  • 140 nodes
  • 3360 cores
  • 35TB Memory
  • 25GbE/100GbE backing network
  • 185TB Usable flash capacity
  • access to NCSA’s 10PB+ (and growing) center-wide storage infrastructure/archive

Open Continuously


Cost varies by the Radiant resource requested - see the Radiant wiki page for more details



Radiant

User Documentation

help@ncsa.illinois.edu



HOLL-I



AI-HPC

HOLL-I (Highly Optimized Logical Learning instrument)

This is a batch computing cluster that provides access to a Cerebras CS-2 Wafer Scale Engine for high-performance machine learning work. It will have local home storage in addition to access to the Taiga center-wide storage system.

Extreme Scale Machine Learning with select Tensorflow and Pytorch models

  • Cerebras CS-2 Wafer Scale Engine
    • 850,000 cores
    • 40GB RAM
    • 1200Gbe connectivity
  • 9 Service nodes
  • TAIGA project space


Access and costs listed in HOLL-I User Documentation


HOLL-I User Documentation

help@ncsa.illinois.edu






NCSA Illinois Campus Cluster Investment






HPC




NCSA has purchased 20 nodes that affiliates may request access to: https://campuscluster.illinois.edu/new_forms/user_form.php

Alternatively, individuals, groups, and campus units can invest in compute and storage resources on the cluster or purchase compute time on demand or storage space by the terabyte/month.






ICCP Use Cases

  • 8 nodes with: 64GB memory, InfiniBand interconnect, 20 cores (E2670V2 CPU), Tesla K40M GPU
  • 8 nodes with: 64GB memory, InfiniBand interconnect, 20 cores (E2670V2 CPU), no GPU
  • 4 nodes with: 256GB memory, InfiniBand interconnect, 24 cores (E2690V3 CPU), no GPU

Open Continuously




Cost to purchase nodes, storage, or usage on-demand





Illinois Campus Cluster Program Resources

help@campuscluster.illinois.edu

Illinois Computes Campus Cluster Investment

HPC

Illinois Computes has purchased 16 nodes that join the previous NCSA investment in Campus Cluster. The NCSA queue has merged into the IllinoisComputes queue on Campus Cluster.


ICCP Use Cases

  • 16 nodes with: 512GB memory, 128 cores/node (AMD 7713 CPU)
  • 4 nodes each with 4 Nvidia A100s arriving soon. 

Illinois Computes access request


Illinois Campus Cluster Program Resources

help@campuscluster.illinois.edu



Illinois HTC Program





HTC


The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort between Research IT, Engineering IT Shared Services, and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise idle workstations in Linux Workstation labs.




The HTC service is not intended to run MPI jobs.

  • 300 compute nodes with 12-core Intel Xeon X5650 @ 2.67GHz and 24 GB RAM (of those, 2 have 48 GB RAM and 1 has 96 GB RAM)

Open Continuously


Allocation awarded by University of Illinois Urbana campus



HTC User Documentation

htc@lists.illinois.edu








Nightingale








HIPAA HPC

HIPAA secure computation environment

Open Continuously

Cost to purchase nodes and storage

XSEDE Startup Allocation

HPC

Startup allocations, along with Trial allocations, are one of the fastest ways to gain access to and start using XSEDE-allocated resources. We recommend that all new XSEDE users begin by requesting a Startup allocation.

XSEDE Use Cases

XSEDE ecosystem

Open Continuously

Allocation awarded to new users by XSEDE

Getting Started on XSEDE

help@xsede.org

Campus Champion Allocation

HPC

Your local Campus Champion can share their XSEDE allocation; find out who your local Campus Champion is.

XSEDE Use Cases

XSEDE ecosystem

Open Continuously

Allocation awarded by your Campus Champion

Getting Started on XSEDE

help@xsede.org

XSEDE Research Allocation

HPC

The XSEDE ecosystem encompasses a broad portfolio of resources operated by members of the XSEDE Service Provider Forum. These resources include multi-core and many-core high-performance computing (HPC) systems, distributed high-throughput computing (HTC) environments, visualization and data analysis systems, large-memory systems, data storage, and cloud systems. Some resources provide unique services for Science Gateways. Some of these resources are made available to the user community through a central XSEDE-managed allocations process, while many other resources operated by Forum members are linked to other parts of the ecosystem.

XSEDE Use Cases

XSEDE ecosystem

XSEDE Quarterly Allocation

Allocation awarded by XSEDE

Getting Started on XSEDE

help@xsede.org

XSEDE Education Allocation

HPC

Education allocations are for academic courses or training activities that have specific begin and end dates. Instructors may request a single resource or a combination of resources. Education requests have the same allocation size limits as Startup requests; per-resource limits are in the Startup Limits table. As with Startup requests, Education requests are limited to no more than three separate computational resources, unless the abstract explicitly justifies the need for each resource to the reviewers' satisfaction.

XSEDE Use Cases

XSEDE ecosystem

Open Continuously

Allocation awarded by XSEDE

Getting Started on XSEDE

help@xsede.org

Research IT Software Collaborative Services

Support

Getting Hands-On Programming Support for performance analysis, software optimization, efficient use of accelerators, I/O optimization, data analytics, visualization, use of research computing resources by science gateways, and workflows





Nightingale is a high-performance compute cluster for sensitive data. It offers researchers a secure system for data storage and powerful computation. The system is compliant with the Health Insurance Portability and Accountability Act (HIPAA) privacy and security rules for using Protected Health Information (PHI). Nightingale is not limited to the health domain and accommodates projects that require this level of security or less, such as compliance with Controlled Unclassified Information (CUI) policies. It resides in the NCSA National Petascale Facility and is audited yearly by an outside entity to ensure secure operation (SOC 2, Type 2).






Projects working with HIPAA, CUI, and other protected or sensitive data.

  • 4 interactive compute/login nodes with dual 64-core AMDs and 512 GB of RAM
  • 6 interactive nodes with 1 A100, dual 32-core AMDs, and 256GB RAM
  • 5 interactive nodes with 1 A40, dual 32-core AMDs, and 512GB RAM
  • Batch system: 16 dual 64-core AMD systems with 1 TB of RAM
  • 2 dual-A100 compute nodes with 32-core AMDs and 512 GB of RAM
  • 880 TB of high-speed parallel Lustre-based storage







Cost to purchase nodes and storage








Nightingale Documentation

help@ncsa.illinois.edu

Research IT - Research Computing Collaborative Services

Support

A partnership between NCSA and Research IT. Helps maximize the efficiency of your computational workflows, codes, and simulations.


Coming soon!


N/A

Open Continuously

Allocation awarded by campus Research IT

Research Computing Collaborative Services

research-it@illinois.edu



Granite






Archive Storage


Granite is NCSA's Tape Archive system, closely integrated with Taiga, providing users with a place to store longer-term archive datasets. Access to this tape system is available directly via tools such as scp, Globus, and S3. Data written to Granite is replicated to two tapes for mirrored protection in case of tape failure.
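As a rough illustration of the direct-access tools mentioned above, the commands below sketch copying an archive to Granite with scp and with the Globus CLI. The hostname, endpoint IDs, and paths are placeholders, not real Granite endpoints; consult the Taiga & Granite documentation for the actual values.

```shell
# Illustrative sketch only: the hostname, endpoint IDs, and paths below are
# placeholders, not real Granite endpoints -- see the Taiga & Granite
# documentation for the actual values.

# Copy an archive file to Granite over scp
scp results-2024.tar.gz myuser@granite.example.ncsa.illinois.edu:archive/

# Or submit an asynchronous transfer with the Globus CLI
# (source and destination are ENDPOINT_ID:PATH pairs)
globus transfer \
  "MY_LOCAL_ENDPOINT_ID:/data/results-2024.tar.gz" \
  "GRANITE_ENDPOINT_ID:/archive/results-2024.tar.gz"
```

Globus is generally the better choice for large archive datasets, since transfers are queued, checksummed, and resumed automatically rather than tied to an interactive session.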

  • Storage of infrequently accessed data
  • Disaster Recovery
  • Archive Datasets
  • 19 Frame Spectra TFinity Library
  • 40PB of replicated capacity on TS1140 (JAG 7) media
  • Managed by Versity's ScoutFS/ScoutAM products.

Open Continuously

Internal Rate: $16/TB/Year

External Rate: Contact Support



Taiga & Granite Documentation

set@ncsa.illinois.edu




Taiga




Storage

Taiga is NCSA's Global File System, able to integrate with all non-HIPAA environments in the National Petascale Computation Facility. Built with SSUs (Scalable Storage Units) spec'd by NCSA engineers with DDN, it provides a center-wide, single-namespace file system that is available to use across multiple platforms at NCSA. This allows researchers to access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud, and container resources. Taiga is also well integrated with the Granite Tape Archive, allowing users to readily stage data out to their tape allocation for long-term, cold storage.


  • Active Research and Project Data
  • Visualization Data

  • 18PB of hybrid NVME/HDD storage based on two Taiga SSUs
  • Backed by HDR InfiniBand
  • Running DDN's Lustre ExaScaler appliance

Open Continuously

Internal Rate: $32/TB/Year

External Rate: Contact Support




Taiga & Granite Documentation

set@ncsa.illinois.edu

HAL

HPC

A computer system built to efficiently run deep learning frameworks. The system consists of 16 IBM POWER9 servers with 4 NVIDIA V100 GPUs each, interconnected with Mellanox EDR InfiniBand fabric, and a DDN all-flash storage array. The system is tailored toward efficient execution of the IBM Watson Machine Learning enterprise software stack, which combines popular open-source deep learning frameworks. HAL enables scaling of deep neural networks to produce state-of-the-art performance results.


  • Deep Learning Frameworks
    • TensorFlow
    • PyTorch

  • 16 IBM POWER9 nodes with 4 NVIDIA V100 GPUs per node
  • 244 TB usable, NVMe SSD-based storage by DDN
  • Peak system bandwidth ~100GB/s
  • 10GbE external
  • Dedicated 10GbE for data transfer
  • Dual-channel EDR IB internal




Fill out the HAL Request Form




HAL Documentation

help@ncsa.illinois.edu

Innovative Systems Lab (ISL)

SPIN





Hydro

HPC

The Hydro HPC cluster is an HPC platform made available by NFI, offering CPU and GPU options for AI/ML/MPI workloads. The GPUs available are Nvidia A100 80GB variants. Priority use is given to NFI projects; additional time is available.
  • 70 total nodes
  • CPU: Sandy Bridge, Rome, Milan
  • Mem: 256-384 GB per node
  • GPU: 18 Nvidia A100 (9 nodes)
  • 40-100 GbE Ethernet to WAN
  • FDR IB
  • 4 PB of Lustre-based storage
  • 2 Login nodes

Contact Support


Hydro User Documentation

help+hydro@ncsa.illinois.edu

DCCR

Open Storage Network (OSN)

VLAD

Kingfisher





Please contact help@ncsa.illinois.edu if you have any questions or need help getting started with NCSA resources.
