Quick Overview

ROGER is configured to provide users with three subsystems, each suited to different computing needs:

  • Standard high-performance computing (HPC) system (based on Unix) for batch computing
  • Hadoop system
  • OpenStack system for a cloud environment

Each subsystem is explained below, with a direct link to its ROGER user guide.


HPC Configuration

Click Here for HPC User Guide

Generally speaking, all supercomputers are high-performance computing (HPC) systems. By utilizing multiple nodes (individual computers), the entire supercomputer cluster can operate on one very large computing task all at the same time, an approach termed parallel computing. This is the traditional computing paradigm for supercomputing.
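As an illustrative sketch (plain Python, not ROGER-specific), the idea of splitting one large task across many workers can be shown with the standard multiprocessing module, where each process stands in for a compute node:

```python
from multiprocessing import Pool

def partial_sum(chunk):
    """Each worker (standing in for one compute node) handles one slice of the task."""
    return sum(x * x for x in chunk)

if __name__ == "__main__":
    data = range(1_000_000)
    # Split the single large task into chunks, one per "node".
    chunks = [data[i::4] for i in range(4)]
    # All four workers run on their chunks at the same time.
    with Pool(processes=4) as pool:
        total = sum(pool.map(partial_sum, chunks))
    print(total)
```

On a real HPC system the workers are separate nodes coordinated by a batch scheduler and a library such as MPI rather than processes on one machine, but the division of one task into concurrently processed pieces is the same idea.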

Hadoop Configuration

Click Here for Hadoop User Guide

In the era of Big Data, new technologies have emerged to support and optimize the computational needs of researchers. Hadoop is one such tool; it incorporates two key components: HDFS (the Hadoop Distributed File System) and MapReduce. The file system is unique in that it breaks data up into blocks distributed across nodes. MapReduce then brings the processing software to these data blocks, instead of the other way around.
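As a conceptual sketch (plain Python, not Hadoop's actual Java API), the two MapReduce phases of the classic word-count job look like this, with each string standing in for one HDFS data block:

```python
from collections import defaultdict
from itertools import chain

def map_phase(block):
    # Map: run locally on one data block, emitting a (word, 1) pair per word.
    return [(word, 1) for word in block.split()]

def reduce_phase(pairs):
    # Reduce: group the emitted pairs by key and sum the counts per word.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

# Two "blocks", as HDFS would split a larger file.
blocks = ["to be or not", "to be"]
mapped = chain.from_iterable(map_phase(b) for b in blocks)
word_counts = reduce_phase(mapped)
print(word_counts)  # prints {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In a real Hadoop job, each map task runs on the node that already stores its block, and a shuffle step routes each word's pairs to a reducer; only the small intermediate pairs move over the network, not the raw data.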

Users may request access to the Hadoop services on ROGER from the CyberGIS staff by sending an email request to help+roger@ncsa.illinois.edu.

OpenStack Configuration

Click Here for OpenStack User Guide

OpenStack is an open-source cloud computing framework designed for controlling large pools of compute, storage, and network resources. With the OpenStack tools, users create new virtual machine instances on ROGER's hardware and have full control over their resources. OpenStack follows a modular architecture: it is composed of many different software packages, each providing its own service, that together form the entire OpenStack framework. Using OpenStack on ROGER provides an effective way to create, manage, teach with, and share data and results, especially with collaborators who lack access to supercomputing hardware.