...
Once your user account has been created, you can submit proposals for open Illinois allocation requests by visiting the NCSA XRAS Submit portal.
The table below lists the resources available to Illinois users through the NCSA XRAS portal, along with their allocation request periods. For a complete list of available resources, see the NCSA Resources section below.
Resource | Open Request Period | Access
---|---|---
Delta | Spring & Fall Semesters | Allocation awarded by NCSA to University of Illinois Urbana campus researchers
Radiant | Open Continuously | Resources available based on service fee. See the Radiant wiki page for more details.
HOLL-I | Open Continuously | Resources available based on service fee. See the HOLL-I wiki page for more details.
Nightingale | Open Continuously | Resources available based on service fee. Coming soon to NCSA XRAS! To inquire about access, please contact Maria Jaromin (mjaromin@illinois.edu).
...
ACCESS Allocation Requests
To get started with ACCESS allocations, see the Get Started with ACCESS page on the ACCESS website.
ACCESS resources are allocated through Research and Education allocations on a quarterly allocation schedule:
...
Note that new users are strongly encouraged to seek a Startup Allocation before requesting a Research Allocation.
...
on an ongoing basis through Explore, Discover, and Accelerate allocations, and periodically through Maximize allocations.
...
NCSA Resources
NCSA offers access to a variety of resources that can be requested through the ACCESS program or, by University of Illinois users, through our Illinois allocations.
Name/URL | Type | Description | Primary Use Cases | Hardware/Storage | Access | User Documentation | User Support
---|---|---|---|---|---|---|---
Delta | HPC | A computing and data resource that balances cutting-edge graphics processor and CPU architectures to shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will be the most performant GPU computing resource in NSF's portfolio. | Coming soon! | 124 CPU nodes; 100 quad A100 GPU nodes; 100 quad A40 GPU nodes; five eight-way A100 GPU nodes; one MI100 GPU node; eight utility nodes providing login access, data transfer, and other services; 100 Gb/s HPE Slingshot network fabric; 7 PB of disk-based Lustre storage; 3 PB of flash-based storage for data-intensive workloads (deployed fall 2021) | Biannual Delta Illinois Allocation Period: allocation awarded by the University of Illinois (see the Illinois Allocations section) or the ACCESS program | Coming soon! | Coming soon!
Radiant | HPC | Radiant is a new private cloud computing service operated by NCSA for the benefit of NCSA and UI faculty and staff. Customers can purchase VMs, computing time in cores, storage of various types, and public IPs for use with their VMs. | | | Cost varies by the Radiant resource requested; see the Radiant wiki page for more details | |
HOLL-I | AI-HPC | HOLL-I (Highly Optimized Logical Learning Instrument) is a batch computing cluster that provides access to a Cerebras CS-2 Wafer Scale Engine for high-performance machine learning work. It has local home storage in addition to access to the Taiga center-wide storage system. | Extreme-scale machine learning with select TensorFlow and PyTorch models | | Costs listed in | |
Illinois Campus Cluster | HPC | NCSA has purchased 20 nodes that affiliates may request access to: https://campuscluster.illinois.edu/new_forms/user_form.php. Alternatively, individuals, groups, and campus units can invest in compute and storage resources on the cluster, or purchase compute time on demand or storage space by the terabyte/month. | | 8 nodes with 64 GB memory, InfiniBand interconnect, 20 cores (E2670V2 CPU), and a Tesla K40M GPU; 8 nodes with 64 GB memory, InfiniBand interconnect, 20 cores (E2670V2 CPU), no GPU; 4 nodes with 256 GB memory, InfiniBand interconnect, 24 cores (E2690V3 CPU), no GPU | | |
HTC Pilot | HTC | The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort between Research IT, Engineering IT Shared Services, and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise idle workstations in Linux workstation labs. | The HTC service is not intended to run MPI jobs | 300 compute nodes with 12-core Intel Xeon X5650 @ 2.67 GHz and 24 GB RAM; of those, ~2 have 48 GB RAM and ~1 has 96 GB RAM | Allocation awarded by University of Illinois Urbana campus | |
Nightingale | HIPAA HPC | HIPAA-secure computation environment. | Projects working with HIPAA, CUI, and other protected or sensitive data | 4 interactive compute/login nodes with dual 64-core AMDs and 512 GB of RAM; 16 dual 64-core AMD systems with 1 TB of RAM; 880 TB of high-speed parallel Lustre-based storage | Cost to purchase nodes and storage | Nightingale Documentation | help@ncsa.illinois.edu
XSEDE resources | HPC | The XSEDE ecosystem encompasses a broad portfolio of resources operated by members of the XSEDE Service Provider Forum. These include multi-core and many-core high-performance computing (HPC) systems, distributed high-throughput computing (HTC) environments, visualization and data analysis systems, large-memory systems, data storage, and cloud systems. Some resources provide unique services for Science Gateways. Some of these resources are made available to the user community through a central XSEDE-managed allocations process, while many other resources operated by Forum members are linked to other parts of the ecosystem. | | | Allocation awarded by XSEDE | | help@xsede.org
Research IT Software Collaborative Services | Support | Hands-on programming support for performance analysis, software optimization, efficient use of accelerators, I/O optimization, data analytics, visualization, use of research computing resources by science gateways, and workflows | Coming soon! | N/A | Allocation awarded by campus Research IT | Research Software Collaborative Services |
Granite | Archive Storage | Granite is NCSA's tape archive system, closely integrated with Taiga, providing users a place to store longer-term archive datasets. Access to the tape system is available directly via tools such as scp, Globus, and S3. Data written to Granite is replicated to two tapes for mirrored protection in case of tape failure. | | | Internal rate: $16/TB/year; external rate: contact support | |
Taiga | Storage | Taiga is NCSA's global file system, able to integrate with all non-HIPAA environments in the National Petascale Computation Facility. Built with SSUs (Scalable Storage Units) spec'd by NCSA engineers with DDN, it provides a center-wide, single-namespace file system available across multiple platforms at NCSA. This lets researchers access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud, and container resources. Taiga is also well integrated with the Granite tape archive, so users can readily stage data out to their tape allocation for long-term, cold storage. | | | Internal rate: $32/TB/year; external rate: contact support | |
HAL | | | | | | |
ISL | | | | | | |
DCCR | | | | | | |
Open Storage Network (OSN) | | | | | | |
VLAD | | | | | | |
Kingfisher | | | | | | |
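Granite can be reached directly with standard tools such as scp and S3-compatible clients. The dry-run sketch below only prints the commands it would run; the hostname, remote path, and bucket name are hypothetical placeholders, not confirmed endpoints, so consult the Granite documentation for the real ones before transferring data.

```shell
# Dry-run sketch of archiving a dataset to a tape archive such as Granite.
# GRANITE_HOST, DEST, and the bucket name are hypothetical placeholders.
GRANITE_HOST="granite.example.edu"
SRC="results.tar"
DEST="/archive/results.tar"

# scp-style transfer (printed rather than executed, since the host is a placeholder):
echo "scp $SRC $GRANITE_HOST:$DEST"

# S3-style transfer against an S3-compatible endpoint (also a placeholder):
echo "aws s3 cp $SRC s3://my-archive/$SRC --endpoint-url https://$GRANITE_HOST"
```

Globus transfers work through its web interface or CLI instead, which is generally the better choice for large archive datasets because transfers resume automatically after interruptions.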
...
Please contact help@ncsa.illinois.edu if you have any questions or need help getting started with NCSA resources.
...