

Open Illinois Allocation Requests

To get started with Illinois allocations, request an NCSA Kerberos account at this link.  Account creation may take up to 24 hours once requested.

Once your user account has been created, you can submit proposals for open allocation requests by visiting the NCSA XRAS Submit portal.


Open Resource Allocations | Open Request Period | Access
Delta Illinois | June 1 to August 15 | Allocation awarded by NCSA to University of Illinois Urbana campus researchers

XSEDE Allocations

To get started with XSEDE allocations, see the Getting Started with XSEDE page on the XSEDE User Portal.

XSEDE resources are allocated through Research and Education allocations on a quarterly allocation schedule:

Submission Period | Meeting Date | Users Notified | Allocation Begin Date
Dec 15 thru Jan 15 | Early March | March 15 | April 1
Mar 15 thru Apr 15 | Early June | June 15 | Jul 1
Jun 15 thru Jul 15 | Late August/Early September | September 15 | Oct 1
Sep 15 thru Oct 15 | Early December | December 15 | Jan 1

Note that new users are strongly encouraged to seek a Startup Allocation before requesting a Research Allocation.

You can also obtain access to XSEDE resources through your Campus Champion.  You can find out who your local Campus Champion is at this link.

Resources

NCSA offers access to a variety of resources that can be requested through the XSEDE program or, for University of Illinois users, through our Illinois allocations.


Each resource entry below lists its Name/URL, Type, Description, Primary Use Cases, Hardware/Storage, Allocation Period, Access, User Documentation, and User Support.

Delta XSEDE
Type: HPC
Description: A computing and data resource that balances cutting-edge graphics processor and CPU architectures that will shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will be the most performant GPU computing resource in NSF's portfolio.
Primary Use Cases: Coming soon!
Hardware/Storage:
  • 124 CPU nodes
  • 100 quad A100 GPU nodes
  • 100 quad A40 GPU nodes
  • Five eight-way A100 GPU nodes
  • One MI100 GPU node
  • Eight utility nodes providing login access, data transfer capability, and other services
  • 100 Gb/s HPE Slingshot network fabric
  • 7 PB of disk-based Lustre storage
  • 3 PB of flash-based storage for data-intensive workloads, to be deployed in the fall of 2021
Allocation Period: XSEDE Quarterly Allocation
Access: Allocation awarded by XSEDE
User Documentation: Getting Started on XSEDE
User Support: help@xsede.org




Delta Illinois
Type: HPC
Description: A computing and data resource that balances cutting-edge graphics processor and CPU architectures that will shape the future of advanced research computing. Made possible by the National Science Foundation, Delta will be the most performant GPU computing resource in NSF's portfolio.
Primary Use Cases: Coming soon!
Hardware/Storage:
  • 124 CPU nodes
  • 100 quad A100 GPU nodes
  • 100 quad A40 GPU nodes
  • Five eight-way A100 GPU nodes
  • One MI100 GPU node
  • Eight utility nodes providing login access, data transfer capability, and other services
  • 100 Gb/s HPE Slingshot network fabric
  • 7 PB of disk-based Lustre storage
  • 3 PB of flash-based storage for data-intensive workloads, to be deployed in the fall of 2021
Allocation Period: Biannual Delta Illinois Allocation Period
Access: Allocation awarded by NCSA - see the Open Illinois Allocation Requests section above
User Documentation: Coming soon!
User Support: Coming soon!



Radiant
Type: HPC
Description: Radiant is a new private cloud computing service operated by NCSA for the benefit of NCSA and UI faculty and staff. Customers can purchase VMs, computing time in cores, storage of various types, and public IPs for use with their VMs. (See the sketch after this entry.)
Primary Use Cases: Radiant Use Cases
Hardware/Storage:
  • 140 nodes
  • 3,360 cores
  • 35 TB memory
  • 25 GbE/100 GbE backing network
  • 185 TB usable flash capacity
  • Access to NCSA's 10 PB+ (and growing) center-wide storage infrastructure/archive
Allocation Period: Open Continuously
Access: Cost varies by the Radiant resource requested - see the Radiant wiki page for more details
User Documentation: Radiant
User Support: Radiant
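
The entry above notes that Radiant customers manage their own VMs, cores, storage, and public IPs. If Radiant exposes an OpenStack-compatible API (an assumption here, not something stated on this page), a project's resources could be inspected programmatically along the lines of this minimal sketch; the cloud name "radiant" and the clouds.yaml credentials are illustrative:

```python
# Minimal sketch, assuming Radiant exposes an OpenStack-compatible API and that
# a clouds.yaml entry named "radiant" holds your project credentials (both assumed).
import openstack

conn = openstack.connect(cloud="radiant")

# List the virtual machines running in your Radiant project.
for server in conn.compute.servers():
    print(server.name, server.status)

# List block-storage volumes charged against your allocation.
for volume in conn.block_storage.volumes():
    print(volume.name, volume.size, "GB")
```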


NCSA Illinois Campus Cluster Investment
Type: HPC
Description: NCSA has purchased 20 nodes that affiliates may request access to: https://campuscluster.illinois.edu/new_forms/user_form.php
Alternatively, individuals, groups, and campus units can invest in compute and storage resources on the cluster, purchase compute time on demand, or purchase storage space by the terabyte/month.
Primary Use Cases: ICCP Use Cases
Hardware/Storage:
  • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (E5-2670 v2 CPUs), Tesla K40M GPU
  • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (E5-2670 v2 CPUs), no GPU
  • 4 nodes with: 256 GB memory, InfiniBand interconnect, 24 cores (E5-2690 v3 CPUs), no GPU
Allocation Period: Open Continuously
Access: Cost to purchase nodes, storage, or usage on-demand
User Documentation: Illinois Campus Cluster Program Resources
User Support: help@campuscluster.illinois.edu



Illinois HTC Program
Type: HTC
Description: The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort between Research IT, Engineering IT Shared Services, and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise idle workstations in Linux workstation labs.
Primary Use Cases: The HTC service is not intended to run MPI jobs.
Hardware/Storage: 300 compute nodes with 12-core Intel Xeon X5650 @ 2.67 GHz and 24 GB RAM. Of those, ~2 have 48 GB RAM and ~1 has 96 GB RAM.
Allocation Period: Open Continuously
Access: Allocation awarded by University of Illinois Urbana campus
User Documentation: HTC User Documentation
User Support: htc@lists.illinois.edu

Nightingale
Type: HIPAA HPC
Description: HIPAA-secure computation environment
Allocation Period: Open Continuously
Access: Cost to purchase nodes and storage

XSEDE Startup Allocation
Type: HPC
Description: Startup allocations, along with Trial allocations, are one of the fastest ways to gain access to and start using XSEDE-allocated resources. We recommend that all new XSEDE users begin by requesting a Startup allocation.
Primary Use Cases: XSEDE Use Cases
Hardware/Storage: XSEDE ecosystem
Allocation Period: Open Continuously
Access: Allocation awarded to new users by XSEDE
User Documentation: Getting Started on XSEDE
User Support: help@xsede.org

Campus Champion Allocation
Type: HPC
Description: Your local Campus Champion can share their XSEDE allocation with you; see the XSEDE Allocations section above to find out who your local Campus Champion is.
Primary Use Cases: XSEDE Use Cases
Hardware/Storage: XSEDE ecosystem
Allocation Period: Open Continuously
Access: Allocation awarded by your Campus Champion
User Documentation: Getting Started on XSEDE
User Support: help@xsede.org




XSEDE Research Allocation
Type: HPC
Description: The XSEDE ecosystem encompasses a broad portfolio of resources operated by members of the XSEDE Service Provider Forum. These resources include multi-core and many-core high-performance computing (HPC) systems, distributed high-throughput computing (HTC) environments, visualization and data analysis systems, large-memory systems, data storage, and cloud systems. Some resources provide unique services for Science Gateways. Some of these resources are made available to the user community through a central XSEDE-managed allocations process, while many other resources operated by Forum members are linked to other parts of the ecosystem.
Primary Use Cases: XSEDE Use Cases
Hardware/Storage: XSEDE ecosystem
Allocation Period: XSEDE Quarterly Allocation
Access: Allocation awarded by XSEDE
User Documentation: Getting Started on XSEDE
User Support: help@xsede.org


XSEDE Education Allocation
Type: HPC
Description: Education allocations are for academic courses or training activities that have specific begin and end dates. Instructors may request a single resource or a combination of resources. Education requests have the same allocation size limits as Startup requests; per-resource limits are in the Startup Limits table. As with Startup requests, Education requests are limited to no more than three separate computational resources, unless the abstract explicitly justifies the need for each resource to the reviewers' satisfaction.
Primary Use Cases: XSEDE Use Cases
Hardware/Storage: XSEDE ecosystem
Allocation Period: Open Continuously
Access: Allocation awarded by XSEDE
User Documentation: Getting Started on XSEDE
User Support: help@xsede.org

Research IT Software Collaborative Services
Type: Support
Description: Getting hands-on programming support for performance analysis, software optimization, efficient use of accelerators, I/O optimization, data analytics, visualization, use of research computing resources by science gateways, and workflows.
Primary Use Cases: Coming soon!
Hardware/Storage: N/A
Allocation Period: Open Continuously
Access: Allocation awarded by campus Research IT
User Documentation: Research Software Collaborative Services
User Support: research-it@illinois.edu



Granite
Type: Archive Storage
Description: Granite is NCSA's tape archive system, closely integrated with Taiga, that provides users with a place to store longer-term archive datasets. Access to this tape system is available directly via tools such as scp, Globus, and S3. Data written to Granite is replicated to two tapes for mirrored protection in case of tape failure. (See the sketch after this entry.)
Primary Use Cases:
  • Storage of infrequently accessed data
  • Disaster recovery
  • Archive datasets
Hardware/Storage:
  • 19-frame Spectra TFinity library
  • 40 PB of replicated capacity on TS1140 (JAG 7) media
  • Managed by Versity's ScoutFS/ScoutAM products
Allocation Period: Open Continuously
Access: Internal Rate: $16/TB/Year; External Rate: Contact Support
User Documentation: Taiga & Granite Documentation
User Support: set@ncsa.illinois.edu
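
As noted in the Granite entry above, data can reach the archive via scp, Globus, or S3. A minimal sketch of the S3 path follows, assuming you have been issued S3 credentials and an endpoint for Granite; the endpoint URL, bucket, and object names are placeholders, not documented values:

```python
# Minimal sketch of S3-style access to Granite using boto3.
# The endpoint URL, credentials, bucket, and key names are placeholders;
# the real values come from NCSA (set@ncsa.illinois.edu).
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://granite-s3.example.ncsa.illinois.edu",  # placeholder endpoint
    aws_access_key_id="YOUR_ACCESS_KEY",
    aws_secret_access_key="YOUR_SECRET_KEY",
)

# Upload an archive tarball into a project bucket (illustrative names).
s3.upload_file("results_2021.tar.gz", "my-project-archive", "results/results_2021.tar.gz")

# List what is currently stored under that prefix.
listing = s3.list_objects_v2(Bucket="my-project-archive", Prefix="results/")
for obj in listing.get("Contents", []):
    print(obj["Key"], obj["Size"])
```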




Taiga
Type: Storage
Description: Taiga is NCSA's global file system, which integrates with all non-HIPAA environments in the National Petascale Computation Facility. Built with SSUs (Scalable Storage Units) spec'd by NCSA engineers with DDN, it provides a center-wide, single-namespace file system that is available across multiple platforms at NCSA. This allows researchers to access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud, and container resources. Taiga is also well integrated with the Granite tape archive, allowing users to readily stage data out to their tape allocation for long-term, cold storage. (See the sketch after this entry.)
Primary Use Cases:
  • Active research and project data
  • Visualization data
Hardware/Storage:
  • 10 PB of hybrid NVMe/HDD storage based on two Taiga SSUs
  • Backed by HDR InfiniBand
  • Running DDN's Lustre ExaScaler appliance
Allocation Period: Open Continuously
Access: Internal Rate: $32/TB/Year; External Rate: Contact Support
User Documentation: Taiga & Granite Documentation
User Support: set@ncsa.illinois.edu
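
The Taiga entry above mentions staging data out to a Granite tape allocation. One way to do that is with a Globus transfer; the sketch below uses the globus-sdk Python package, with the access token, collection UUIDs, and paths as placeholders rather than documented values:

```python
# Minimal sketch of staging a Taiga directory to Granite via Globus (globus-sdk).
# The access token, collection UUIDs, and paths are placeholders, not documented values.
import globus_sdk

TRANSFER_TOKEN = "..."                          # obtained through a Globus login flow
TAIGA_COLLECTION = "taiga-collection-uuid"      # placeholder UUID
GRANITE_COLLECTION = "granite-collection-uuid"  # placeholder UUID

tc = globus_sdk.TransferClient(
    authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
)

# Describe a recursive copy of a finished project directory to the tape archive.
tdata = globus_sdk.TransferData(
    tc,
    source_endpoint=TAIGA_COLLECTION,
    destination_endpoint=GRANITE_COLLECTION,
    label="stage project data to Granite",
)
tdata.add_item("/taiga/my_project/results/", "/my_project/results/", recursive=True)

task = tc.submit_transfer(tdata)
print("Submitted Globus transfer task:", task["task_id"])
```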

The following resources are also listed; details are to be added: HAL, ISL, SPIN, DCCR, Open Storage Network (OSN), VLAD, Kingfisher.

Please contact help@ncsa.illinois.edu if you have any questions or need help getting started with NCSA resources.


