Last update: November 4, 2022

Service Description 

The National Center for Supercomputing Applications (NCSA) has worked for years on a systems environment built for high-performance computing in the healthcare space.  Nightingale is the campus culmination of that effort: a small cluster built in the NCSA Advanced Computing Health Enclave (ACHE) where researchers can interact with protected data and perform analysis.  Nightingale is modeled on many other successful shared clusters on campus but is built in a much more secure environment; numerous controls protect the system and the data, and a yearly external security audit is in place.  What Nightingale offers researchers and research partners is a safe place to store protected health information (PHI) and do computing work.

The current system comprises standard batch computing options and interactive compute nodes.  Most of the nodes contain GPUs suited to either double-precision work (A100) or single-precision work (A40).  These systems will be made available to users as dedicated servers or shared environments based on the needs of each research project.  In addition, Nightingale will offer database services to support long-term data storage and management, with the ability to share and access relational databases on the system.  With 880 TB of high-speed parallel Lustre-based storage, the system will also have plenty of space for researchers to work with their data and to create and store results.

Nightingale Technical Specifications

Batch System:

16 dual 64-core AMD systems with 1 TB of RAM

2 dual-A100 compute nodes with 32-core AMDs and 512 GB of RAM


Interactive Nodes:

4 interactive compute/login nodes with dual 64-core AMDs and 512 GB of RAM

6 interactive nodes with 1 A100, dual 32-core AMDs, and 256 GB of RAM

5 interactive nodes with 1 A40, dual 32-core AMDs, and 512 GB of RAM


Nightingale Service Options:

Batch Computing (HPC or single node jobs through a scheduler)

This model allows users to purchase standard compute units of processor time, billed by the core hour, through the standard SLURM scheduler.  It is meant to look like RCaaS on the campus cluster but will have a slightly higher cost, as it needs to cover more operations than the campus cluster.  Standard software and SLURM services will run on a set of nodes for the batch queues, with secondary access to any purchased nodes via a short-walltime queue.

Interactive Compute Node Usage (GPU and non-GPU)

This service gives users and groups of users access to an interactive node for working with their data.  The node is shared with other users and can include GPU or CPU usage.  The shared-node service lets users work on an interactive node with limited or no guarantee of the speed of service.  For work such as Python scripting or viewing image results, this is a great way to use the system.  The lower cost of this model assumes that you use only your share of the node: usage will be measured, and users who create a high load on the system will be required to upgrade to a dedicated node.  With the expectation that users share a node with at least 5 other people and use their share of the node's capacity over working hours, this works out to be a great price for small teams working with HIPAA data.

Storage services

Nightingale will start with 880 TB of high-performance parallel Lustre storage.  This file system will be mounted on all Nightingale nodes and will be available for both long-term storage and parallel HPC work.  Currently there is no off-site or other backup of the storage system, but as a large parallel file system it protects data against most hardware failures.  Storage will be billed by the terabyte, with an expectation of at least 6 months of usage based on user requests.


Database services

Nightingale will offer database storage and administration of data on an as-needed basis for research projects.  Database setup and ongoing management will be charged to the research project, and some data-update and operational tasks will carry additional costs based on the complexity of the data and the time required to manage and administer the system.  In addition, if data is provided as a service to users, those users may be charged to help manage permissions and access controls on the stored data.  This is not expected to be a large fee but may be supplemented with individual consulting as needed.  Note that data provided as a service to users must be controlled by the standard security processes.

Charging Policy

  • Internal Users 

    • Batch Computing (HPC or single node jobs through a scheduler) - $0.02715 per core hour
    • Interactive Compute Node Usage (GPU and non-GPU) - $267.50 per node per month
    • Storage Services - $126.45 per TB per year, or $10.54 per month
    • Database Services - $968.50 per user per year, or $80.71 per month
  • External Users 

    • Batch Computing (HPC or single node jobs through a scheduler) - $0.043 per core hour
    • Interactive Compute Node Usage (GPU and non-GPU) - $424.26 per node per month
    • Storage Services - $200.56 per TB per year, or $16.71 per month
    • Database Services - $1,536.03 per user per year, or $128.00 per month
    • An external user is defined as an organization or individual whose ultimate source of funds is outside of the U of I System. External users include students and any members of faculty or staff acting in a personal capacity, and the general public. Affiliated hospitals or other universities are considered external users unless the System has subcontracted with them as part of a grant or contract.
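
As a rough illustration of how these rates combine, the short Python sketch below estimates a project's annual charge using the internal rates listed above.  The usage figures in the example (core hours, interactive node months, terabytes of storage, database users) are hypothetical placeholders to be replaced with a project's own estimates; this is only an estimate, not an official quote.

    # Internal rates from the Charging Policy above (in dollars).
    BATCH_PER_CORE_HOUR = 0.02715        # batch computing, per core hour
    INTERACTIVE_PER_NODE_MONTH = 267.50  # interactive node, per node per month
    STORAGE_PER_TB_YEAR = 126.45         # Lustre storage, per TB per year
    DATABASE_PER_USER_YEAR = 968.50      # database services, per user per year

    def estimate_annual_cost(core_hours, interactive_node_months, storage_tb, db_users):
        """Return an estimated annual cost in dollars for the given usage."""
        batch = core_hours * BATCH_PER_CORE_HOUR
        interactive = interactive_node_months * INTERACTIVE_PER_NODE_MONTH
        storage = storage_tb * STORAGE_PER_TB_YEAR
        database = db_users * DATABASE_PER_USER_YEAR
        return batch + interactive + storage + database

    # Hypothetical example: 50,000 core hours of batch work, one shared
    # interactive node for 12 months, 10 TB of storage, and 2 database users.
    total = estimate_annual_cost(core_hours=50_000,
                                 interactive_node_months=12,
                                 storage_tb=10,
                                 db_users=2)
    print(f"Estimated annual cost: ${total:,.2f}")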

Contacts

For Technical Questions, please reach out to Douglas Fein (genius@illinois.edu).

For Financial and Billing Concerns, please reach out to the NCSA Business Office (faaccount@ncsa.illinois.edu) or Jessica Leemon (lannon2@illinois.edu).

For Proposal Inclusions, please reach out to the Proposal Development Staff (NCSA-proposals@lists.ncsa.illinois.edu).
