
10 Steps To A Quick Analysis

Included on this page is a brief step-by-step guide for logging in, transferring files, compiling code, and submitting a batch job via the HPC subsystem.

For a more complete set of instructions and explanations, please refer to: Detailed Subsystem User Guides.

How can I get an account?

Login information, such as the username and password for individual users, is supplied by the CyberGIS Center to the research group's coordinator upon allocation.

Don't have an allocation with ROGER yet? Please refer to How to Request an Account for more information regarding requests.  

How do I login to ROGER?

The first step is to open your command console, type in your CONNECTION STRING (an ssh command containing your username and the ROGER login hostname), and hit Enter.
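A minimal sketch of the connection command, assuming a hypothetical username and login hostname (use the values supplied with your allocation, not these):

```shell
# Hypothetical username and hostname -- replace with the values
# supplied by your group's coordinator.
USER=jdoe
HOST=roger-login.example.edu

# Build the connection string; drop the 'echo' to actually connect.
echo "ssh ${USER}@${HOST}"
```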



It will then prompt you to supply a unique password. Type in the password associated with the username you entered, hit enter.

Upon successful login you should see a welcome message along with some basic information about the ROGER cluster. You are now using a bash shell on the LOGIN NODE of the cluster.


Windows Users!

Login is unavailable via the Windows command prompt. We suggest installing MobaXterm for securely logging in; a free download is available online.

Understanding the file-system

Now that you have connected to ROGER and are on the LOGIN NODE you have access to 3 different directories, or 'spaces'.

The spaces are important because each space is restricted to a specific size.

  1. Home Directory - you can think of this as the top drawer in your office desk.
      • Only you and the project coordinator have access to this directory.
      • Limit: 10 GB
  2. Scratch Directory - this is where the heavy computing should be done.
  3. Project Directory - you can think of this as the filing room at your work.
      • The entire group has access to this space.
      • Soft limit: 10 TB


DO NOT use the Home Directory for computing! This space is not sized to handle any sort of computation.
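To see how close you are to the home-directory limit, a quick sketch using the standard du utility:

```shell
# Report the total size of your home directory, to compare
# against the 10 GB home quota. Errors from unreadable
# subdirectories are suppressed.
du -sh "$HOME" 2>/dev/null || true
```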

Customizing your environment in the shell

The Bash shell is the default shell environment users are placed in when initially logging in. The shell can be configured in a way that optimizes and supports your unique computational needs. In order to do so, we may use the module command.

To see what modules are currently available to load into your shell:

module avail

To see what modules are currently loaded into your environment:

module list 
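Beyond listing modules, you will typically load one before running software. A sketch, guarded in case the module command is not present on your system; the module name is illustrative, so pick a real one from `module avail`:

```shell
# 'module load' adds a software package to your shell environment.
# 'python' below is an illustrative name, not a guaranteed module.
if command -v module >/dev/null 2>&1; then
  module load python
  module list
else
  echo "environment modules not available on this system"
fi
```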

Housekeeping Tips

  • It is critical that you do not run any jobs on the login node. Please ensure that you are allocating jobs to the appropriate compute nodes.
  • Be aware of how many resources your job will require, and try to avoid overloading the system. For example, know how much memory is on the compute nodes and roughly how much memory your job will need.
  • Utilize the file system appropriately. As it is designed, large files are best stored in the project and/or scratch space.
  • Do not share your personal login information with others.
  • Remember to back-up your data often!

Copying files onto ROGER

To start working with your own data the files must be located on one of ROGER's nodes. In order to transfer these files from your personal desktop to the ROGER cluster, a management tool must be used that will copy files from one to the other based on specific parameters given by the user.

Linux & Mac Users:

Use the scp tool as follows, replacing username and <roger-hostname> with your own login details:

To copy a single file from your desktop to ROGER:

$ scp myfile username@<roger-hostname>:/path/to/destination

To copy a single file from ROGER to your desktop:

$ scp username@<roger-hostname>:/path/to/myfile .
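Whole directories can be uploaded as well with scp's -r flag. A sketch using a hypothetical username, hostname, and paths; the echo only prints the command, so drop it to run the transfer:

```shell
# Hypothetical values -- substitute your own username, host, and paths.
USER=jdoe
HOST=roger-login.example.edu

# Recursively upload a directory; drop the 'echo' to execute.
echo "scp -r ./mydata ${USER}@${HOST}:/projects/myproject/"
```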
Windows Users:

MobaXterm (as well as other SSH clients) supports the transferring of files for Windows users with the same commands scp and sftp.

Basics of nodes & the queue system

Since computing is not done on the login node, users must send their jobs off to the compute nodes, which will then take care of the heavy lifting. To do so, we use a QUEUE SYSTEM along with Torque, a resource manager that handles job submissions. Torque responds to PBS commands, which are included in what we call a job script.

ROGER has two queues in place: batch and interactive. The batch queue is most common as the interactive queue is only available during weekday business hours.
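For the interactive queue, Torque's qsub -I flag requests an interactive session. A sketch with illustrative resource values (the echo just prints the command without submitting anything):

```shell
# Sketch: request an interactive session on the interactive queue.
# The node, core, and walltime values are illustrative.
echo "qsub -I -q interactive -l nodes=1:ppn=20,walltime=01:00:00"
```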

Writing the job script

Before sending your compiled code to Torque, a script must be created: your JOB SCRIPT. Begin by creating a new file in your text editor (you can give it whatever name you like) and typing the commands below.

On ROGER, you can use a text editor like nano (recommended for new users) or vi (for experienced users who know vi commands). You can create and start editing the job script with nano.

#!/bin/bash
#PBS -N Job_Name
#PBS -m bea -M your_email@example.com
#PBS -e localhost:/projects/name/$PBS_JOBNAME.err
#PBS -o localhost:/projects/name/$PBS_JOBNAME.log
#PBS -l nodes=2:ppn=20
#PBS -l mem=200gb
#PBS -l walltime=12:00:00
date
module load mpich python/2.7.10
python myscript.py


Specifies this is a bash script
The name of your job
Email you before and after, and if execution is aborted
The location to write the error log when the job is done
The location to write the output log when the job is done
Computing nodes to use : CPUs per node (always 20)
The amount of memory to request
Requested run-time for your job, where walltime=HH:MM:SS
Your job will be terminated if it exceeds this time!
Prints the current date and time in the output log
Loads environment modules to enable software and libraries
Begin executing your job
This example is a Python script, but could be anything.

Submit batch script

To submit your JOB SCRIPT, use the qsub command along with the file name of your script.


$ qsub <job_script_filename>

This is a very simple example. More complex jobs will likely require more nodes and cores, which users must specify in the resource request. For more details and advanced examples, please see the Queue System & Running Jobs guide.
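Once submitted, you can watch your job's progress with Torque's qstat command. A sketch, guarded in case qstat is not installed on the machine you are testing from:

```shell
# List your own queued and running jobs (qstat is Torque's
# job-status command); fall back to a message if it is absent.
if command -v qstat >/dev/null 2>&1; then
  qstat -u "$USER"
else
  echo "qstat not available on this system"
fi
```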
