Trebuchet is a multi-scheme file-transfer API and client library written in Java.
Purpose
The principal aim of this library is to provide an abstraction layer over the standard file-related operations (such as directory listing, directory creation, file transfer and file deletion) which allows switching between protocols without alteration of either source code or scripting. The library is also designed to be extensible so that new protocol support can be added in a reasonably clean manner.
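The abstraction can be pictured as a single operations interface with one implementation per protocol. The sketch below is illustrative only (the names are not Trebuchet's actual API); it shows how calling code stays the same while the protocol behind it changes:

```java
import java.util.List;

// Illustrative only, not Trebuchet's actual API: one interface covers the
// standard file operations, and each protocol supplies an implementation.
interface FileOperations {
    List<String> ls(String path);
    void mkdir(String path);
    void rm(String path);
    void cp(String source, String target);
}

// A trivial in-memory "protocol" standing in for, e.g., an SSH- or GRIDFTP-backed client.
class LocalOperations implements FileOperations {
    public List<String> ls(String path)  { return List.of(path + "/a.dat", path + "/b.dat"); }
    public void mkdir(String path)       { /* create the directory */ }
    public void rm(String path)          { /* delete the path */ }
    public void cp(String src, String t) { /* copy src to t */ }
}

public class AbstractionDemo {
    // Calling code depends only on the interface; switching protocols means
    // handing in a different implementation, not rewriting this method.
    static int countEntries(FileOperations ops, String dir) {
        return ops.ls(dir).size();
    }

    public static void main(String[] args) {
        System.out.println(countEntries(new LocalOperations(), "/data")); // 2
    }
}
```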
Currently, there are two ways in which Trebuchet can be utilized:
- As normal Java package imports (i.e., programmatic calls to the library by other Java code);
- Through the available Ogrescript tasks: see the Ogrescript Trebuchet plugin.
Certain capabilities, such as restarting operations and inspecting the binary "cache" files, are also available from the command line. An Eclipse-based RCP Trebuchet client for easy management of file transfers across multiple hosts is planned for the near future.
Features
The following are some of the more salient features offered by the Trebuchet library/tasks:
- Full support for UNIX-style operations (`ls`, `touch`, `mkdir`, `cp`, `mv`, `rm`) locally and via the SSH/SCP protocols.
- Support for all of these operations except `touch` via GRIDFTP and WEBDAV.
- GSI/certificate-based authentication/authorization (SSH and GRIDFTP).
- Automatic one-hop handling of third-party transfers over mixed protocols (e.g., SCP on host A to GRIDFTP on host B).
- Two ways of achieving file transfer or deletion:
- By specifying exact locations/paths;
- By scanning or listing.
- Fully recursive pattern-based scanning (using the '*' and '**' wildcard characters; see UriPattern).
- All operations can be customized (using the available settings appropriate to the given protocol) via a general-purpose configuration object.
- All GRIDFTP options available in the `jglobus` library are exposed for configurability; in particular, optimization settings such as:
  - TCP buffer size;
  - setting active mode on the target.
- Automated support for both LIST and MLST/MLSD (GRIDFTP); options for forcing existence checking through the LIST command.
- Automated staging of files from UNITREE tape archive using GRIDFTP (= MSSFTP).
- Full access (i.e., by non-Trebuchet-related code), if so desired, to source and target paths during and after operations.
- Thread-pooled parallel copy operations.
- Automated use of multiple GRIDFTP connections for a given endpoint (as specified by the SPAS command), when available, for non-striped operations.
- Fail-over and retry capabilities on a file-by-file basis.
- Flexibility in the kind and number of events the user can opt to receive.
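The `*` and `**` wildcards of `UriPattern` follow the usual glob convention: `*` matches within a single path segment, while `**` crosses directory boundaries. As a rough analogy only (this uses the standard `java.nio.file.PathMatcher`, not Trebuchet's `UriPattern`):

```java
import java.nio.file.FileSystems;
import java.nio.file.Path;
import java.nio.file.PathMatcher;

public class GlobDemo {
    public static void main(String[] args) {
        // '*' matches within a single path segment; '**' crosses segment boundaries.
        PathMatcher single = FileSystems.getDefault().getPathMatcher("glob:data/*.dat");
        PathMatcher recursive = FileSystems.getDefault().getPathMatcher("glob:data/**/*.dat");

        Path shallow = Path.of("data/run1.dat");
        Path deep = Path.of("data/2024/jan/run1.dat");

        System.out.println(single.matches(shallow));    // true
        System.out.println(single.matches(deep));       // false: '*' stops at '/'
        System.out.println(recursive.matches(deep));    // true: '**' spans subdirectories
    }
}
```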
Design overview
Scheme-to-protocol mapping
The basis for Trebuchet's multi-protocol functionality lies in mapping (via the Eclipse-RCP extension-point mechanism) URI schemes to a set of implementations.
As an example, let us consider the `ssh` protocol; in order to support the available operations (in this case, all of them), the following classes needed to be implemented:
| Function | Abstract Class | Concrete Class |
|---|---|---|
| exists, is file, is dir | | |
| ls | | |
| touch | | |
| mkdir | | |
| rm | | |
| cp, mv | | |
Then a scheme-to-client mapping needed to be provided via extensions to the `ncsa.tools.trebuchet.core.clientTypes` extension point:
| Operation | Source Scheme | Target Scheme | Client |
|---|---|---|---|
| verify | | | |
| list | | | |
| touch | | | |
| mkdir | | | |
| delete | | | |
| copy | | | |
| copy | | | |
| copy | | | |
This mapping is referred to when Trebuchet processes a URI or URIs for a given operation, so that the URI schemes indicate which client to use for the operation.
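Conceptually, the extension-point registrations amount to a lookup table keyed by operation and URI scheme(s). The following is a hypothetical sketch of that lookup, not the library's actual registry code:

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of the scheme-to-client lookup Trebuchet performs when
// processing a URI; the names here are illustrative, not the library's API.
public class ClientRegistry {
    private final Map<String, String> clients = new HashMap<>();

    private static String key(String op, String srcScheme, String dstScheme) {
        return op + "|" + srcScheme + "|" + dstScheme;
    }

    public void register(String op, String srcScheme, String dstScheme, String clientClass) {
        clients.put(key(op, srcScheme, dstScheme), clientClass);
    }

    /** Resolve the client for an operation from the schemes of its URIs. */
    public String resolve(String op, String srcScheme, String dstScheme) {
        String client = clients.get(key(op, srcScheme, dstScheme));
        if (client == null)
            throw new IllegalArgumentException(
                "no client registered for " + op + " " + srcScheme + " -> " + dstScheme);
        return client;
    }
}
```

In use, the schemes would come from `java.net.URI#getScheme()` on the operation's source and target URIs.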
There are two other classes, the `PooledClientGenerator` and the `ListToCopyConverter`, which also need to be mapped for each protocol, but the default implementations will usually suffice. Depending on the file system, a special parser may also be necessary for interpreting directory-listing lines, though in most cases the core parsers will work. Finally, for each scheme associated with the protocol, a small definition class implementing `ncsa.tools.trebuchet.schemes.IScheme` needs to be created; this class defines, for Trebuchet's internal use, the underlying protocol used for the operation, representing the operations which the protocol can support.
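A scheme definition class of this kind might look roughly as follows. This is illustrative only: the real `IScheme` interface has its own signature; the sketch just shows the idea of a scheme declaring its protocol and the operations that protocol supports:

```java
import java.util.EnumSet;
import java.util.Set;

// Illustrative only: the real ncsa.tools.trebuchet.schemes.IScheme differs in
// detail. A scheme names its protocol and the operations it supports.
public class SshScheme {
    public enum Operation { VERIFY, LIST, TOUCH, MKDIR, DELETE, COPY }

    public String getScheme()   { return "ssh"; }
    public String getProtocol() { return "ssh"; }

    // SSH/SCP supports the full operation set; a WEBDAV or GRIDFTP scheme,
    // per the feature list above, would return this set minus TOUCH.
    public Set<Operation> getSupportedOperations() {
        return EnumSet.allOf(Operation.class);
    }
}
```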
The schemes which have been implemented in the current version of Trebuchet are listed here.
Operation Caches
It is not necessary here to describe all the layers which constitute Trebuchet's architecture, but some notion of the bottom-most layer is useful for understanding how Trebuchet works. This layer consists of a binary file for the operation, accessed using Java's `NIO` library and abstracted as a Trebuchet `Cache` object. The name is admittedly something of a misnomer, since no entries are actually cached in memory, and therefore no fixed size is maintained by evicting entries; but it is cache-like in that it provides a single access point through which all aspects of an operation pass, and, under normal conditions (successful termination of the entire operation), it is transient, i.e., deleted. The cache can optionally be retained after the operation; if the operation is incomplete or failed, it can be restarted from its cache(s) without having to regenerate the listings from scratch or redo the successful transfers.
There are two standard caches, one for listing or scanning operations, and one for copy or transfer operations. When a copy operation relies on scanning or listing to provide it with the source locations, there is a conversion procedure (supplied by the `ListToCopyConverter` mentioned above) for creating the copy cache entries from the associated list cache entries. Scanned `touch`, `delete` and `copy` operations by default do the conversion asynchronously using a listener API: as a list entry is added to the list cache, one listener passes it to the converter to be added to the copy cache, while another listener is responsible for passing off the copy entries to the appropriate client as they become available (multiple clients are pooled, and this single listener agent assigns work to them as they become free). There is an option to override this behavior so that the entire listing or scanning is done first, but in most cases the parallelized list-convert-copy is preferable.
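The asynchronous list-convert-copy flow can be modeled as a producer-consumer pipeline. The following toy sketch (using a `BlockingQueue` rather than Trebuchet's actual listener API) shows how copying begins before the scan has finished, with a pool of workers draining converted entries as they arrive:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Toy model of the parallelized list-convert-copy flow; not Trebuchet's API.
public class PipelineSketch {
    private static final String POISON = "<done>";

    static int run(int files, int workers) throws InterruptedException {
        BlockingQueue<String> copyQueue = new ArrayBlockingQueue<>(16);
        AtomicInteger copied = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // Copy workers: each takes converted entries until it sees the poison pill.
        for (int i = 0; i < workers; i++) {
            pool.submit(() -> {
                try {
                    String entry;
                    while (!POISON.equals(entry = copyQueue.take()))
                        copied.incrementAndGet(); // a real client would transfer the file here
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            });
        }

        // "Lister": each scanned path is converted to a copy entry and queued
        // immediately, rather than only after the whole listing completes.
        for (int i = 0; i < files; i++)
            copyQueue.put("src/file" + i + " -> dst/file" + i);
        for (int i = 0; i < workers; i++)
            copyQueue.put(POISON); // one pill per worker for a clean shutdown

        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return copied.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(run(100, 3)); // 100
    }
}
```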
The reasons for making all operations rest on a disk-I/O layer are primarily:
- Greater scalability: large or deeply recursive directory copies, for instance, can be handled without risk of running out of memory;
- Greater reliability: because the cache serves as a full operation log, the failed parts of the operation can be retried simply by pointing to the original cache; moreover, the cache will be there should the JVM in which the operation was running crash.
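The restart capability rests on each cache entry carrying a status. The following is a hypothetical sketch (the names are not Trebuchet's API) of the selection step a restart performs: only entries that did not succeed are replayed.

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of restart-from-cache: because the cache records a status
// per entry, a restarted operation retries only what did not complete.
public class RestartSketch {
    enum Status { SUCCEEDED, FAILED, PENDING }

    record Entry(String source, String target, Status status) {}

    static List<Entry> entriesToRetry(List<Entry> cache) {
        List<Entry> retry = new ArrayList<>();
        for (Entry e : cache)
            if (e.status() != Status.SUCCEEDED)
                retry.add(e); // completed transfers are skipped, not redone
        return retry;
    }

    public static void main(String[] args) {
        List<Entry> cache = List.of(
            new Entry("a", "A", Status.SUCCEEDED),
            new Entry("b", "B", Status.FAILED),
            new Entry("c", "C", Status.PENDING));
        System.out.println(entriesToRetry(cache).size()); // 2
    }
}
```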
As stated above, the underlying cache file is written in binary. The following tables describe the byte structure of their respective entries. As can be seen, these are organized similarly to network packets.
LIST CACHE ENTRY

Fixed-length entry "header" = 65 bytes. The subscripted properties are specific to the metadata returned by a given file system. "Length" refers to the number of bytes in a variable-length segment of the entry itself; size refers to file size in bytes.

| CONTENTS | TYPE | BYTE POSITION |
|---|---|---|
| status | | 0 |
| entry id | | 1 |
| previous id | | 9 |
| type | | 17 |
| symlinked parent | | 18 |
| mode | | 19 |
| links | | 21 |
| size | | 25 |
| modified | | 33 |
| user length | | 41 |
| group length | | 45 |
| relative dir length | | 49 |
| name length | | 53 |
| symlink length | | 57 |
| n = num properties | | 61 |
| property name i length | | 65 + 8i |
| property value i length | | 69 + 8i |
| user | | 65 + 8n |
| group | | 65 + 8n + user length |
| relative dir | | 65 + 8n + user length + group length |
| name | | 65 + 8n + user length + group length + relative dir length |
| symlink | | 65 + 8n + user length + group length + relative dir length + name length |
| property name i | | 65 + 8n + user length + group length + relative dir length + name length + symlink length + Σ[0 <= k < i] property name k length |
| property value i | | 65 + 8n + user length + group length + relative dir length + name length + symlink length + Σ[0 <= k < n] property name k length + Σ[0 <= k < i] property value k length |
| (end) | | 65 + 8n + user length + group length + relative dir length + name length + symlink length + Σ[0 <= k < n] property name k length + Σ[0 <= k < n] property value k length |
COPY CACHE ENTRY

Fixed-length entry "header" = 90 bytes. As before, "length" refers to the number of bytes in a variable-length segment of the entry itself; size refers to file size in bytes.

| CONTENTS | TYPE | BYTE POSITION |
|---|---|---|
| entry id | | 0 |
| previous id | | 8 |
| status | | 16 |
| type | | 17 |
| first update | | 18 |
| last update | | 26 |
| retry count | | 34 |
| duplicate target tag | | 38 |
| source size | | 42 |
| tmp size | | 50 |
| target size | | 58 |
| source modified | | 66 |
| source length | | 74 |
| symlink length | | 78 |
| tmp length | | 82 |
| target length | | 86 |
| source | | 90 |
| symlink | | 90 + source length |
| tmp target | | 90 + source length + symlink length |
| target | | 90 + source length + symlink length + tmp length |
| (end) | | 90 + source length + symlink length + tmp length + target length |
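The copy-entry layout can be exercised with `java.nio.ByteBuffer`. The sketch below follows the offsets in the table above but is only a model of the layout, not the cache implementation itself; the placeholder values written into the header fields are arbitrary:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Model of the copy-entry byte layout from the table above; offsets follow the
// table, but the real cache implementation may differ in detail.
public class CopyEntryLayout {
    static ByteBuffer encode(String source, String symlink, String tmp, String target) {
        byte[] src = source.getBytes(StandardCharsets.UTF_8);
        byte[] sym = symlink.getBytes(StandardCharsets.UTF_8);
        byte[] tmpB = tmp.getBytes(StandardCharsets.UTF_8);
        byte[] tgt = target.getBytes(StandardCharsets.UTF_8);

        ByteBuffer buf = ByteBuffer.allocate(90 + src.length + sym.length + tmpB.length + tgt.length);
        buf.putLong(0, 1L);          // entry id             @ 0
        buf.putLong(8, -1L);         // previous id          @ 8
        buf.put(16, (byte) 0);       // status               @ 16
        buf.put(17, (byte) 0);       // type                 @ 17
        buf.putLong(18, 0L);         // first update         @ 18
        buf.putLong(26, 0L);         // last update          @ 26
        buf.putInt(34, 0);           // retry count          @ 34
        buf.putInt(38, 0);           // duplicate target tag @ 38
        buf.putLong(42, 0L);         // source size (file size in bytes) @ 42
        buf.putLong(50, 0L);         // tmp size             @ 50
        buf.putLong(58, 0L);         // target size          @ 58
        buf.putLong(66, 0L);         // source modified      @ 66
        buf.putInt(74, src.length);  // source length        @ 74
        buf.putInt(78, sym.length);  // symlink length       @ 78
        buf.putInt(82, tmpB.length); // tmp length           @ 82
        buf.putInt(86, tgt.length);  // target length        @ 86

        buf.position(90);            // variable-length data starts after the header
        buf.put(src);                // source  @ 90
        buf.put(sym);                // symlink @ 90 + source length
        buf.put(tmpB);               // tmp     @ 90 + source length + symlink length
        buf.put(tgt);                // target  @ ... + tmp length
        return buf;
    }

    public static void main(String[] args) {
        ByteBuffer buf = encode("src/a.dat", "", "dst/.a.dat.tmp", "dst/a.dat");
        // Recover the target string by walking the length fields, as a reader would.
        int off = 90 + buf.getInt(74) + buf.getInt(78) + buf.getInt(82);
        byte[] tgt = new byte[buf.getInt(86)];
        buf.position(off);
        buf.get(tgt);
        System.out.println(new String(tgt, StandardCharsets.UTF_8)); // dst/a.dat
    }
}
```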
The contents of a cache can be printed in human-readable form either by calling the `print` method on the cache object (programmatically), or by using the `TrebuchetCacheReader` from the command line and pointing it at the cache file. The output can be viewed in either full (verbose) or abbreviated format. There is also an Ogrescript task, `<print-cache>`.