
Overview

This document explores extending version 1.0 of the KNSG Architecture with semantic technologies that will improve the framework and add value to the user experience. KNSG is a domain-independent application framework built on Bard, PTPFlow, and MyProxy for setting up, launching, and managing HPC application workflows through an easy-to-use set of user interfaces. The framework has simple facilities for working with data, including import/export, annotation, and tagging. The user can create scenarios, add data to them, and then launch workflows on HPC machines using PTPFlow. There is also a facility for retrieving result data from completed jobs so the user can continue to work with it and, if possible, visualize it. The intent of this document is to lay the foundation for how the core components and views will be enhanced in version 2.0 by adding the semantic capabilities provided by Tupelo and replacing the current framework's repository system with a Tupelo context and Tupelo beans. It will also give users information on how to extend the framework for their domain-specific application.

Core Application Management

The central management piece for each KNSG application is KNSGFrame, an extension of BardFrame that registers Tupelo utility methods for the CETBeans used by the KNSG framework. In this document we will use the name BardFrame, since our extension only overrides the method for registering beans; the rest is the same. BardFrame provides an interface for working with the Tupelo semantic content repository and is responsible for managing contexts, bean sessions, data, etc. The use of beans will be a core concept for persisting information in the KNSG framework, so all beans will need to descend from CETBean. Because every application will have its own bean requirements, each KNSG application should have its own instance of BardFrame, as well as an ontology to define domain-specific concepts. All application bean types should register with BardFrame, and the IBardFrameService should provide the correct instance of BardFrame at runtime.
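To make the registration pattern concrete, here is a minimal self-contained sketch. BardFrame, KNSGFrame, and CETBean are real framework classes, but the stand-in API shown here (registerBeanType, lookup) is an assumption for illustration only, not the actual BardFrame interface:

```java
import java.util.HashMap;
import java.util.Map;

// Stand-ins for the real framework classes so this sketch compiles on its own.
class CETBean {}
class ScenarioBean extends CETBean {}

// Hypothetical registration API; the real BardFrame manages contexts and
// bean sessions as well.
class BardFrame {
    private final Map<String, Class<? extends CETBean>> beanTypes = new HashMap<>();
    public void registerBeanType(String name, Class<? extends CETBean> type) {
        beanTypes.put(name, type);
    }
    public Class<? extends CETBean> lookup(String name) {
        return beanTypes.get(name);
    }
}

// A KNSG application overrides only the registration step, as described above.
class KNSGFrame extends BardFrame {
    public KNSGFrame() {
        registerBeanType("scenario", ScenarioBean.class);
    }
}

public class RegistrationDemo {
    public static void main(String[] args) {
        BardFrame frame = new KNSGFrame();
        System.out.println(frame.lookup("scenario").getSimpleName());
    }
}
```

At runtime the IBardFrameService would hand each application its own configured instance, which is why registration happens in the application-specific subclass.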

Tupelo Beans


These are the bean classes that will be required for the KNSG framework. Where possible, the core CETBeans will be used to minimize the work required and maximize compatibility across projects. Some beans are marked optional because they are part of PTPFlow and it is uncertain whether they will be managed by Tupelo or continue to be managed by PTPFlow's current repository.

Scenario Bean

A scenario bean will be used to organize things such as user data and workflows specific to a scenario (or project). This will include datasets (input and output), workflows, and possibly the RMI service for launching jobs. A snippet of what the scenario bean might look like is below:

public class ScenarioBean extends CETBean implements Serializable, CETBean.TitledBean {
    private String title;                  // scenario title
    private String description;            // scenario description
    private Date date = new Date();        // date the scenario was created
    private PersonBean creator;            // scenario creator
    private Set<DatasetBean> dataSets;     // datasets associated with this scenario
    private List<WorkflowBean> workflows;  // workflows associated with this scenario; ideally this can wrap PTPFlow workflow XML files, otherwise we need our own bean type
}

This scenario bean will evolve as the application framework is built, and the documentation here will be updated as the design matures. The main parts of this bean are the Set of DatasetBeans, which manages all of the input/output datasets, and the WorkflowBean List, which contains the workflows associated with the scenario. A user might extend ScenarioBean if their application has other things that logically belong to a scenario, but this is unlikely; most changes will happen at the metadata level (e.g. this dataset is a mesh, a result, etc.).

Dataset Bean

This section describes the types of concepts that the ontology needs to capture. We will break this into two parts: general framework concepts (e.g. result) and eAIRS-specific concepts (e.g. mesh). We don't anticipate any changes to the DatasetBean class that is provided as part of the edu.uiuc.ncsa.cet.bean plug-in.

WorkflowBean & WorkflowStepBean

Below is an example of a PTPFlow workflow.xml file. This file cannot be altered, since it is understood by PTPFlow and outlines the steps in the workflow, including which resource to run on, the executables that will be launched, the input files to use, etc. Ideally, this file would be wrapped by the current WorkflowBean and/or WorkflowStepBean in the edu.uiuc.ncsa.cet.bean plug-in. If this is not possible, the KNSG framework will need its own workflow bean.

<workflow-builder name="eAIRS-Single" experimentId="singleCFDWorkflow" eventLevel="DEBUG">
  <!-- <global-resource>grid-abe.ncsa.teragrid.org</global-resource> -->
  <global-resource></global-resource>
  <scheduling>
    <profile name="batch">
      <property name="submissionType">
        <value>batch</value>
      </property>
    </profile>
  </scheduling>
  <execution>
    <profile name="mesh0">
      <property name="RESULT_LOC">
        <value>some-file-uri</value>
      </property>
      <property name="executable">
        <value>some-file-uri</value>
      </property>
      <property name="meshType">
        <value>some-file-uri</value>
      </property>
      <property name="inputParam">
        <value>some-file-uri</value>
      </property>
    </profile>
  </execution>
  <graph>
    <execute name="compute0">
      <scheduler-constraints>batch</scheduler-constraints>
      <execute-profiles>mesh0</execute-profiles>
      <payload>2DComp</payload>
    </execute>
  </graph>
  <scripts>
    <payload name="2DComp" type="elf">
      <elf>
        <serial-scripts>
          <ogrescript>
            <echo message="Result location = file:${RESULT_LOC}/${service.job.name} result directory is file:${runtime.dir}/result, copy target is file:${RESULT_LOC}/${service.job.name}"/>
            <simple-process execution-dir="${runtime.dir}" out-file="cfd.out" >
              <command-line>${executable} -mesh ${meshType} -param ${inputParam}</command-line>
             <!-- <command-line>${runtime.dir}/2D_Comp-2.0 -mesh ${meshType} -param ${inputParam}</command-line> -->
            </simple-process>
            <mkdir>
              <uri>file:${RESULT_LOC}/${service.job.name}</uri>
            </mkdir>
            <copy sourceDir="file:${runtime.dir}/result" target="file:${RESULT_LOC}/${service.job.name}"/>
          </ogrescript>
        </serial-scripts>
      </elf>
    </payload>
  </scripts>
</workflow-builder>
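If the existing WorkflowBean cannot wrap this file, one fallback is a thin KNSG bean that stores the workflow.xml verbatim (so PTPFlow can still consume it unchanged) and exposes a few attributes for display. The class name and accessors below are assumptions, sketched with the standard JAXP parser:

```java
import java.io.ByteArrayInputStream;
import java.nio.charset.StandardCharsets;
import javax.xml.parsers.DocumentBuilderFactory;
import org.w3c.dom.Element;

// Hypothetical fallback bean: holds the untouched workflow.xml and pulls out
// the workflow-builder attributes for use in the UI.
public class KNSGWorkflowBean {
    private final String xml;           // the untouched workflow.xml content
    private final String name;          // workflow-builder "name" attribute
    private final String experimentId;  // workflow-builder "experimentId" attribute

    public KNSGWorkflowBean(String xml) throws Exception {
        this.xml = xml;
        Element root = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(xml.getBytes(StandardCharsets.UTF_8)))
                .getDocumentElement();
        this.name = root.getAttribute("name");
        this.experimentId = root.getAttribute("experimentId");
    }

    public String getName() { return name; }
    public String getExperimentId() { return experimentId; }
    public String getXml() { return xml; }

    public static void main(String[] args) throws Exception {
        String sample = "<workflow-builder name='eAIRS-Single' "
                + "experimentId='singleCFDWorkflow' eventLevel='DEBUG'/>";
        KNSGWorkflowBean bean = new KNSGWorkflowBean(sample);
        System.out.println(bean.getName() + " / " + bean.getExperimentId());
    }
}
```

Keeping the raw XML intact is the key design point: PTPFlow remains the sole consumer of the file's contents, while Tupelo only manages it as an opaque, annotatable artifact.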

RMIService Info Bean (optional)

The information about each service installation will be stored in an RMIServiceBean, which will be used to launch and start the service. All of this information is currently used by PTPFlow and stored in XML files; bringing Tupelo into the service stack will allow us to store this information in Tupelo.

public class RMIServiceBean extends CETBean implements Serializable {
    // Service info
    private String name;
    private String platform;
    private String deployUsingURI;   // e.g. file:/
    private String launchUsingURI;
    private String installLocation;  // e.g. /home/user_home/ptpflow
    private String rmiContactURI;
    private int rmiPortLowerBound;
    private int rmiPortUpperBound;
    private int gridftpPortLowerBound;
    private int gridftpPortUpperBound;
    private Date installedDate;
    private boolean running;
    private Set<HostResourceBean> knownHosts;  // all of the known hosts associated with this service
}

Known Host Bean (optional)

Below is the bean structure that is anticipated:

A HostResourceBean defines the HPC host and its properties.

public class HostResourceBean extends CETBean implements Serializable {
    private String osName;        // host OS name
    private String osVersion;     // host OS version
    private String architecture;  // host architecture
    private String id;            // host id
    private Set<PropertyBean> envProperties;  // environment properties on the host
    private Set<NodeBean> nodes;              // properties of each node
    private Set<UserPropertyBean> users;      // user properties on the host - userHome, userNameOnHost, userName
}

A NodeBean defines an HPC node's properties, such as the protocols used and the node id.

public class NodeBean extends CETBean implements Serializable {
    private String nodeId;  // id of the node, e.g. grid-abe.ncsa.teragrid.org
    private List<FileProtocolBean> fileProtocols;
    private List<BatchProtocolBean> batchProtocols;
    private List<InteractiveProtocolBean> interactiveProtocols;
}

A UserPropertyBean defines the user's properties on the host.

public class UserPropertyBean extends CETBean implements Serializable {
    private String userHome;
    private String userName;
    private String userNameOnHost;
}

Metadata Requirements

General Framework Metadata

What we need to capture:

  • Is this dataset a result or output dataset?
  • Is this dataset an input dataset?

eAIRS Metadata

What we need to capture:

  • Is this dataset an eAIRS mesh?
  • Is this dataset an eAIRS input file?
  • Result files: coefhist.rlt, error.rlt, result.rlt, time.rlt, cp.rlt, force_com.rlt, result.vtk. We should capture enough information to know what output each of these files represents.

MIME types of the files that are output by the eAIRS workflow:

  • .rlt
  • .vtk
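Since .rlt has no registered MIME type, the framework will need its own extension-to-type table. The sketch below shows where such a table could live; the MIME strings themselves are assumptions chosen for illustration:

```java
import java.util.Map;

// Hypothetical extension-to-MIME-type mapping for eAIRS output files.
// The type strings are placeholders: .rlt is unregistered, and VTK data
// has no universally agreed MIME type.
public class EAirsMimeTypes {
    private static final Map<String, String> TYPES = Map.of(
            ".rlt", "text/plain",               // tabular result files
            ".vtk", "application/octet-stream"  // legacy VTK visualization data
    );

    public static String mimeTypeFor(String filename) {
        int dot = filename.lastIndexOf('.');
        String ext = dot >= 0 ? filename.substring(dot) : "";
        return TYPES.getOrDefault(ext, "application/octet-stream");
    }

    public static void main(String[] args) {
        System.out.println(mimeTypeFor("coefhist.rlt"));
    }
}
```

Whatever strings are chosen, recording them as metadata on the DatasetBean would let the Datasets View pick an appropriate viewer (e.g. hand .vtk results to the visualization facility).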

Views

Scenarios View

A primary view provided by the KNSG framework will be the ScenariosView, which displays the user's scenario(s) and all of their sub-parts (most likely in some kind of tree viewer). A scenario is similar to the concept of a project and is simply a way of organizing things that belong together. The scenario is responsible for managing all of the pieces it contains, including input datasets, output datasets, and workflows. Users will be able to launch jobs on available HPC machines through an RMI service (provided by PTPFlow) that uses the inputs in their scenario, and when a job completes, the outputs should be added back to that scenario (possibly through a thread that polls the Tupelo server for new data). A user can have multiple scenarios open at once, close scenarios, or even delete scenarios from their scenario view (deleted from the view, but still in the repository), so we'll need to manage which scenarios are in a session and possibly their current state (open/closed). It is anticipated that new applications might extend this view to organize it differently for their specific domain (e.g. use different icons, organize data into different categories, etc.).

RMI Service Registry View

This view shows all of the machines available to the user for installing the RMI service and the PTPFlow plugins required to run HPC jobs and return status information to the client. The RMIServiceBean above is a partial specification of what an RMI service stored as a Tupelo bean might look like. This is not a requirement for version 2.0.

Datasets View

This view will display the datasets stored in the Tupelo context that the system is connected to. Users should be able to import/export datasets from this view, tag datasets, etc.

Rather than a single repository view, this will be multiple views, each configured to show a particular type of bean coming from a content provider. The content provider would get the required data from the configured Tupelo context(s). For example, we will need a "Dataset Repository View" that shows all datasets (e.g. input/output datasets) and a way to manipulate them (e.g. add tags, annotations, etc.), a "Workflow Repository View" that shows all imported workflows, a "Scenario Repository View" that shows all saved scenarios, a "Service Repository View" that shows the defined RMI service endpoints for launching jobs, and a "Known Hosts View" that shows the known hosts that can accept jobs. This is too much disparate information to display in a single view. All repository views will descend from BardFrameView, since the BardFrame will be required to get the data for each view.
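The per-view content-provider idea can be sketched generically: each repository view is configured with the bean class it displays and filters the beans held in the context. This is a self-contained illustration in which a plain collection stands in for the Tupelo context, and all class names other than CETBean, DatasetBean, and WorkflowBean are assumptions:

```java
import java.util.Collection;
import java.util.List;
import java.util.stream.Collectors;

// Stand-ins for the CET bean hierarchy so the sketch compiles on its own.
class CETBean {}
class DatasetBean extends CETBean {}
class WorkflowBean extends CETBean {}

// One provider class serves every repository view; only the configured
// bean type differs. The real provider would query the Tupelo context(s).
class RepositoryContentProvider<T extends CETBean> {
    private final Class<T> beanType;

    RepositoryContentProvider(Class<T> beanType) {
        this.beanType = beanType;
    }

    List<T> getElements(Collection<CETBean> context) {
        return context.stream()
                .filter(beanType::isInstance)
                .map(beanType::cast)
                .collect(Collectors.toList());
    }
}

public class ViewDemo {
    public static void main(String[] args) {
        List<CETBean> context =
                List.of(new DatasetBean(), new WorkflowBean(), new DatasetBean());
        RepositoryContentProvider<DatasetBean> datasetView =
                new RepositoryContentProvider<>(DatasetBean.class);
        System.out.println(datasetView.getElements(context).size());
    }
}
```

This keeps each view trivially configurable: the "Workflow Repository View" is the same provider parameterized with WorkflowBean, and so on for scenarios, services, and hosts.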

Functional Requirements

  1. Import datasets that will be used as input to HPC workflows, such as mesh files and input files (e.g. Mach number, Poisson ratio, etc.)
  2. Store output datasets from workflow runs; some workflows will be parameterized and have multiple outputs
  3. Export datasets
  4. Dataset tagging, annotation, etc.
  5. Other functionality?

Workflow view

This view will display the workflows that have been run and the parameters they were run with (possibly for re-running a workflow).

Functional Requirements

  1. Store workflow XML files (Ogrescript) with the parameters used
  2. Provide delete, tag, annotate, etc.
  3. Other functionality?

Overview view

This view will provide general information about what is selected (e.g. a scenario, a dataset, etc). It will display things like who the creator was, date created, etc. If possible, it will provide a preview of what is selected (e.g. a dataset).

Tag view

View for working with tags. It will display all tags associated with the current selection and allow users to manage the tags.

Known Host View

This view contains a list of the defined HPC hosts on which the user can launch jobs. It will provide the user with the ability to view, change, and add properties such as environment settings, user information for the host (username, user home, etc.), the host operating system, node properties, new hosts, etc. These changes should be propagated to the defined RMI services so they can be used immediately.
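One way the "propagate host changes to the RMI services" requirement could work is a simple listener pattern: each running service registers for host-change notifications and refreshes its configuration when told. All names in this sketch are illustrative, not framework API:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical notification hook for RMI services interested in host edits.
interface HostChangeListener {
    void hostChanged(String hostId);
}

// Hypothetical registry backing the Known Host View. Edits made in the view
// land here and are pushed to every registered service immediately.
class KnownHostRegistry {
    private final List<HostChangeListener> listeners = new ArrayList<>();
    private final Map<String, Map<String, String>> hostProperties = new HashMap<>();

    void addListener(HostChangeListener l) {
        listeners.add(l);
    }

    // Updating a property notifies every registered RMI service.
    void setProperty(String hostId, String key, String value) {
        hostProperties.computeIfAbsent(hostId, k -> new HashMap<>()).put(key, value);
        for (HostChangeListener l : listeners) {
            l.hostChanged(hostId);
        }
    }
}

public class PropagationDemo {
    public static void main(String[] args) {
        KnownHostRegistry registry = new KnownHostRegistry();
        registry.addListener(hostId -> System.out.println("refresh " + hostId));
        registry.setProperty("grid-abe.ncsa.teragrid.org", "userHome", "/home/user");
    }
}
```

Whether the actual propagation happens over RMI or by the service re-reading HostResourceBeans from Tupelo is an open design question; the listener boundary stays the same either way.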

Analysis Framework


The analysis framework will allow users to register HPC workflows, modify the workflow inputs through a graphical user interface, and execute HPC jobs when all inputs are satisfied.
