Exported from Confluence, Fri, 29 Mar 2024.
The archive is essentially made up of four main parts: the Granite servers, the disk cache, the Library, and its export nodes. The Granite servers connect to the Library via multiple 2Gb FC connections, to the cache via 12G SAS, and to the NFS export nodes via 2x100 GbE.
File System Block Size: 1 MB
For a balance of throughput performance and file space efficiency, a block size of 1 MB has been chosen for the ScoutFS file system that acts as the Granite front-end disk cache.
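As a rough illustration (not part of any NCSA tooling), the space cost of a fixed block size can be estimated by rounding each file up to a whole number of blocks; here 1 MB is assumed to mean 2**20 bytes:

```python
import math

BLOCK_SIZE = 1024 * 1024  # 1 MB ScoutFS block size (assumed binary, 2**20 bytes)

def on_disk_size(file_bytes: int) -> int:
    """Estimate cache space consumed when file data is rounded up to whole blocks."""
    if file_bytes == 0:
        return 0
    return math.ceil(file_bytes / BLOCK_SIZE) * BLOCK_SIZE

# A 1-byte file still consumes a full 1 MB block (why many tiny files
# are space-inefficient), while a large streaming file wastes at most
# one block of slack.
print(on_disk_size(1))                    # 1048576
print(on_disk_size(10 * BLOCK_SIZE + 1))  # 11534336
```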
The ScoutAM software supports sectioning media into fixed-size chunks to mitigate the overhead of writing small files to tape. We currently use a section size of 10GB.
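To sketch why sectioning helps, the following toy packer groups files into 10 GB sections so many small files become one large sequential tape write. This is a first-fit illustration under our own assumptions; ScoutAM's actual packing policy is not described on this page:

```python
SECTION_SIZE = 10 * 10**9  # 10 GB section size (per this page)

def pack_into_sections(file_sizes, section_size=SECTION_SIZE):
    """Greedily group file sizes (bytes) into sections no larger than
    section_size, so small files are streamed to tape in big chunks.
    Illustrative only; the real ScoutAM policy may differ."""
    sections, current, used = [], [], 0
    for size in file_sizes:
        if current and used + size > section_size:
            sections.append(current)
            current, used = [], 0
        current.append(size)
        used += size
    if current:
        sections.append(current)
    return sections

# Three 4 GB files need two 10 GB sections (8 GB + 4 GB).
print(len(pack_into_sections([4 * 10**9] * 3)))  # 2
```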
There are currently three methods that will be implemented to store and retrieve data: Globus, SCP, and a DR solution provided by the SET team.
Globus is a third-party service that NCSA leverages across a number of its storage systems.
Granite is accessible via Globus at the endpoint named "NCSA Granite", and the endpoint is open to the public internet for transfers from anywhere with another Globus endpoint. Authentication to the endpoint is handled by NCSA's CILogon service and requires two-factor authentication via Duo. For shared collections or other questions, submit a ticket to help+globus@ncsa.illinois.edu. More information about Globus can be found at https://www.globus.org
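For users scripting transfers, a Globus CLI invocation can be assembled as below. The endpoint UUIDs and paths here are placeholders, not real identifiers; find the actual "NCSA Granite" collection ID in the Globus web app or with `globus endpoint search` before running anything:

```python
# Placeholder (hypothetical) endpoint UUIDs -- substitute real ones.
SOURCE_EP = "11111111-1111-1111-1111-111111111111"
GRANITE_EP = "22222222-2222-2222-2222-222222222222"

def build_globus_transfer(src_ep, src_path, dst_ep, dst_path):
    """Build (but do not run) a globus-cli transfer command list,
    suitable for subprocess.run()."""
    return ["globus", "transfer", f"{src_ep}:{src_path}", f"{dst_ep}:{dst_path}"]

cmd = build_globus_transfer(SOURCE_EP, "/data/run1.tar", GRANITE_EP, "/projects/abc/run1.tar")
print(" ".join(cmd))
```

Running the command requires an authenticated globus-cli session (CILogon + Duo, per the paragraph above).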
Users will be able to target granite-scp.ncsa.illinois.edu from any public or privately routed resource within NPCF to send data to or retrieve data from tape. Any retrieval or storage from non-NCSA-vetted resources will require 2FA authentication.
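A minimal sketch of pushing a file to the SCP endpoint named above; the username and paths are hypothetical placeholders, and the command is built but not executed here:

```python
import subprocess  # only needed if you uncomment the run() call below

GRANITE_SCP_HOST = "granite-scp.ncsa.illinois.edu"  # hostname from this page

def build_scp_push(local_path: str, user: str, remote_path: str):
    """Build (but do not run) an scp command that pushes a file to Granite.
    The user and paths are caller-supplied placeholders."""
    return ["scp", local_path, f"{user}@{GRANITE_SCP_HOST}:{remote_path}"]

cmd = build_scp_push("results.tar", "jdoe", "~/archive/results.tar")
print(" ".join(cmd))
# To actually run it (requires an NCSA account and, from non-vetted
# resources, Duo 2FA):
# subprocess.run(cmd, check=True)
```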
DR-Backup:
NCSA's Storage Team has developed and maintains DR software to facilitate the backup of large portions of POSIX file systems, especially those that see lower levels of churn (e.g. not /scratch). Projects that leverage and pay for these DR services are able to use Granite as a backend target for their backups, and the software has been configured to optimize storage on Granite.
Ratio: 1 TB Quota to 10,000 inodes
An inode is a record that describes a file, directory, or link. To ensure good streaming performance of the tape archive subsystem and faster file recovery, this quota is enforced for all projects.
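The ratio above translates directly into a file-count allowance per project, sketched here (the function name is ours, not part of any NCSA tool):

```python
INODES_PER_TB = 10_000  # 1 TB of quota grants 10,000 inodes (per this page)

def inode_limit(quota_tb: float) -> int:
    """Inode allowance implied by the 1 TB : 10,000-inode ratio."""
    return int(quota_tb * INODES_PER_TB)

# A 50 TB allocation may hold at most 500,000 files, directories, and links.
print(inode_limit(50))  # 500000
```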
Quota is currently controlled physically: ScoutAM and the filesystem it is based on (ScoutFS) currently only allow assigning a pool of tapes to users/groups. Each tape holds 4TB, so quotas are set in 4TB chunks, up to and slightly over (1-3GB) your allocated quota. We intend for this to be controlled more precisely at the filesystem level in the near future.
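The tape-chunk rounding described above can be sketched as follows; this is our own illustration of rounding a request up to whole 4 TB tapes, not the actual ScoutAM assignment logic:

```python
import math

TAPE_TB = 4  # each tape holds 4 TB (per this page)

def tapes_for_quota(requested_tb: float):
    """Round a requested quota up to whole 4 TB tapes, as the current
    physical (tape-pool) quota enforcement effectively does.
    Returns (tape_count, effective_quota_tb)."""
    tapes = math.ceil(requested_tb / TAPE_TB)
    return tapes, tapes * TAPE_TB

# A 10 TB request is backed by three tapes, for an effective 12 TB.
print(tapes_for_quota(10))  # (3, 12)
```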