

Granite is built around a single 19-frame Spectra T-Finity tape library, currently capable of holding ~32 PB of replicated storage. Thirty tape drives are currently used to read and write data to and from the archive. Four nodes form Granite's primary infrastructure. Each node connects to 8 tape drives, though two of the nodes give up one drive connection to host the QIP interface used for library control. These nodes also connect to two NetApp E2600 couplets: one houses the archive metadata, and the other holds data in the form of a disk cache. Each Granite node is connected to each NetApp twice, once to each of the unit's controllers. Each Granite node is also connected at 2 x 100GbE to the SET aggregation switch, which in turn uplinks at 2 x 100GbE to the NPCF core network.

Data Mover Nodes

Granite shares its data mover nodes with Taiga, which allows quicker and more direct Globus transfers between the two systems. The tape archive is mounted via NFS directly on the Globus mover nodes, giving them direct access to it.
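As a minimal sketch of what such a mount could look like, the entry below shows an NFS mount of the archive on a mover node. The server name, export path, and mount options here are illustrative assumptions, not values taken from the actual deployment.

```
# Hypothetical /etc/fstab entry on a Globus data mover node.
# Server name, export path, and options are examples only; the real
# values are deployment-specific.
granite-nfs.example.org:/archive  /mnt/granite  nfs  rw,hard,vers=4.2,rsize=1048576,wsize=1048576  0 0
```

Hard mounts (`hard`) are the usual choice for an archive like this, since a soft mount could silently return I/O errors to transfer tools during a temporary server outage.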
