...
Name/URL | Type | Description | Primary Use Cases | Hardware/Storage | Access | User Documentation | User Support |
---|---|---|---|---|---|---|---|
Delta | HPC | A computing and data resource that balances cutting-edge graphics processor and CPU architectures that will shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will be the most performant GPU computing resource in NSF's portfolio. | Coming soon! | | Allocation awarded by University of Illinois or the ACCESS program | | |
Radiant | HPC | Radiant is a new private cloud computing service operated by NCSA for the benefit of NCSA and UI faculty and staff. Customers can purchase VMs, computing time in cores, storage of various types, and public IPs for use with their VMs. | | | Cost varies by the Radiant resource requested; see the Radiant wiki page for more details | | |
HOLL-I | AI-HPC | HOLL-I (Highly Optimized Logical Learning Instrument) is a batch computing cluster that provides access to a Cerebras CS-2 Wafer Scale Engine for high-performance machine learning work. It has local home storage in addition to access to the Taiga center-wide storage system. | Extreme-scale machine learning with select TensorFlow and PyTorch models | | Costs listed in | | |
Illinois Campus Cluster | HPC | NCSA has purchased 20 nodes that affiliates may request access to: https://campuscluster.illinois.edu/new_forms/user_form.php Alternatively, individuals, groups, and campus units can invest in compute and storage resources on the cluster, purchase compute time on demand, or purchase storage space by the terabyte per month. | | | | | |
HTC Pilot | HTC | The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort among Research IT, Engineering IT Shared Services, and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise idle workstations in Linux workstation labs. | The HTC service is not intended to run MPI jobs | | Allocation awarded by University of Illinois Urbana campus | | |
Nightingale | HIPAA HPC | HIPAA-secure computation environment | Projects working with HIPAA, CUI, and other protected or sensitive data | Interactive compute nodes; storage | Cost to purchase nodes and storage | Nightingale Documentation | help@ncsa.illinois.edu |
Research IT Software Collaborative Services | Support | Hands-on programming support for performance analysis, software optimization, efficient use of accelerators, I/O optimization, data analytics, visualization, use of research computing resources by science gateways, and workflows | Coming soon! | N/A | Allocation awarded by campus Research IT | Research Software Collaborative Services | |
Granite | Archive Storage | Granite is NCSA's tape archive system, closely integrated with Taiga, providing users with a place to store longer-term archive datasets. Access to this tape system is available directly via tools such as scp, Globus, and S3. Data written to Granite is replicated to two tapes for mirrored protection in case of tape failure. | | | Internal rate: $16/TB/year; external rate: contact support | | |
Taiga | Storage | Taiga is NCSA's global file system, which integrates with all non-HIPAA environments in the National Petascale Computation Facility. Built with Scalable Storage Units (SSUs) specified by NCSA engineers with DDN, it provides a center-wide, single-namespace file system available across multiple platforms at NCSA. This allows researchers to access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud, and container resources. Taiga is also well integrated with the Granite tape archive, allowing users to readily stage data out to their tape allocation for long-term cold storage. | | | Internal rate: $32/TB/year; external rate: contact support | | |
HAL | |||||||
ISL | |||||||
DCCR | |||||||
Open Storage Network (OSN) | |||||||
VLAD | |||||||
Kingfisher |||||||
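The internal storage rates above ($16/TB/year for Granite tape archive, $32/TB/year for Taiga) make cost planning straightforward. A minimal sketch of the arithmetic, assuming only the rates listed in the table (the function and dictionary names are illustrative, not part of any NCSA tooling):

```python
# Annual cost comparison for NCSA storage tiers, using the internal
# rates listed in the table above. External rates require contacting
# support, so they are not modeled here.

# Internal rates in USD per terabyte per year (from the table).
INTERNAL_RATES = {
    "granite": 16,  # tape archive, for long-term cold storage
    "taiga": 32,    # global parallel file system, for active data
}

def annual_cost(system: str, terabytes: float) -> float:
    """Return the yearly internal-rate cost in USD for the given system."""
    return INTERNAL_RATES[system] * terabytes

# Example: 50 TB held on tape vs. kept live on the file system.
print(annual_cost("granite", 50))  # 800
print(annual_cost("taiga", 50))    # 1600
```

Staging cold data out of Taiga into a Granite allocation halves the per-terabyte carrying cost, which is the workflow the table's Taiga entry describes.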
...