
UNDER CONSTRUCTION: the agenda below is not final.

This event is supported by INRIA, UIUC, NCSA, ANL, and the French Ministry of Foreign Affairs.

Main Topics

Schedule

Time | Speaker | Affiliation | Type of presentation | Title (tentative)

Sunday Nov. 24th: Dinner Before the Workshop

7:00 PM | Dinner (only people registered for the dinner)

Workshop Day 1: Monday Nov. 25th

TITLES ARE TEMPORARY (except if in bold font)

08:00 | Registration

Welcome and Introduction (Auditorium 1122; Chair: Franck Cappello)

08:30 | Marc Snir + Franck Cappello | INRIA & UIUC & ANL | Background | Welcome, workshop objectives and organization
08:45 | Peter Schiffer, UIUC Vice Chancellor for Research | UIUC | Background | Welcome from the UIUC Vice Chancellor for Research
09:00 | Ed Seidel, NCSA director | UIUC | Background | NCSA update and vision of the collaboration
09:15 | Michel Cosnard, Inria CEO and President | Inria | Background | INRIA update and vision of the collaboration
09:30 | Marc Snir, Director of Argonne/MCS and co-director of the joint lab | ANL | Background | Argonne update and vision of the collaboration
09:45 | Franck Cappello, co-director of the joint lab | ANL | Background | Joint-Lab, New Joint-Lab, PUF articulation
10:15 | Break

Extreme Scale Systems and Infrastructures (Auditorium 1122; Chair: Marc Snir)

10:45 | Pete Beckman | ANL | - | Extreme Scale Computing & Co-design Challenges
11:15 | John Towns | UIUC | - | Applications Challenges in the XSEDE Environment
11:45 | Gabriel Antoniu | INRIA | Plenary talk | -
12:15 | Lunch
13:45 | Bill Kramer | UIUC | Blue Waters | Is Petascale Completely Done? What Should We Do Now?
14:15 | Marc Snir | UIUC | - | G8 ECS and international collaboration toward extreme scale climate simulation
14:45 | Rob Ross | ANL | - | Thinking Past POSIX: Persistent Storage in Extreme Scale Systems
15:15 | François Pellegrini | INRIA | Plenary talk | -
15:45 | Break
16:15 | Pavan Balaji | ANL | - | -
16:45 | Wen-Mei Hwu | UIUC | Plenary talk | -
17:15 | Adjourn
18:45 | Bus for dinner

Workshop Day 2: Tuesday Nov. 26th

Applications, I/O, Visualization, Big Data (Auditorium 1122; Chair: Rob Ross)

08:30 | Greg Bauer | UIUC | - | Applications and their challenges on Blue Waters
09:00 | Matthieu Dorier | INRIA | Joint result, submitted | CALCioM: Mitigating I/O Interferences in HPC Systems through Cross-Application Coordination
09:30 | Dries Kimpe | ANL | - | Mercury: Enabling Remote Procedure Call for High-Performance Computing
10:00 | Venkat Vishwanath | ANL | - | Plenary talk
10:30 | Break
11:00 | Babak Behzad | UIUC | ACM/IEEE SC13 | Taming Parallel I/O Complexity with Auto-Tuning
11:30 | Kenton Guadron McHenry | UIUC | - | NSF CIF21 DIBBs: Brown Dog
12:00 | Lunch

Mini Workshop 1: Resilience (Room 1030; Chair: Yves Robert)

13:30 | Leonardo | ANL | Joint result | -
14:00 | Tatiana Martsinkevich | INRIA | Joint result | On the feasibility of message logging in hybrid hierarchical FT protocols
14:30 | Mohamed Slim Bouguerra | INRIA | Joint result, submitted | Failure prediction: what to do with unpredicted failures?
15:00 | Ana Gainaru | UIUC | Joint result, submitted | Topology and behaviour aware failure prediction for Blue Waters
15:30 | Break
16:00 | Sheng Di | INRIA | Joint result, submitted | Optimization of Multi-level Checkpoint Model for Large Scale HPC Applications
16:30 | Yves Robert | INRIA | - | Assessing the impact of ABFT & Checkpoint composite strategies
17:00 | Wesley Bland | ANL | - | Fault Tolerant Runtime Research at ANL
17:30 | Adjourn
19:00 | Bus for dinner

Mini Workshop 2: Numerical Algorithms (Room 1040; Chair: Bill Gropp)

13:30 | Luke Olson | UIUC | - | -
14:00 | Prasanna Balaprakash | ANL | - | Active-Learning-based Surrogate Models for Empirical Performance Tuning
14:30 | Yushan Wang | INRIA | - | Solving 3D incompressible Navier-Stokes equations on hybrid CPU/GPU systems
15:00 | Jed Brown | ANL | - | Fast solvers for implicit Runge-Kutta systems
15:30 | Break
16:00 | Pierre Jolivet | INRIA | Best Paper nominee, IEEE/ACM SC13 | Scalable Domain Decomposition Preconditioners For Heterogeneous Elliptic Problems
16:30 | Vincent Baudoui | Total & ANL | - | Round-off error propagation and non-determinism in parallel applications
17:00 | TBD | - | - | TBD
17:30 | Adjourn
19:00 | Bus for dinner

Workshop Day 3: Wednesday Nov. 27th

Mini Workshop 3: Programming Models, Compilation and Runtime (Room 1030; Chair: Marc Snir)

08:30 | Grigori Fursin | INRIA | - | -
09:00 | Maria Garzaran | UIUC | - | Optimization by Run-time Specialization for Sparse Matrix-Vector Multiplication
09:30 | Jean-François Mehaut | INRIA | - | From Multicores to Manycores Processors: Challenging Programming Issues with the MPPA/KALRAY
10:00 | Break
10:30 | Frederic Vivien | INRIA | - | Scheduling tree-shaped task graphs to minimize memory and makespan
11:00 | Rafael Tesser | INRIA | Joint result, PDP 2013 | -
11:30 | Emmanuel Jeannot | INRIA | Joint result, IEEE Cluster 2013 | Communication and Topology-aware Load Balancing in Charm++ with TreeMatch
12:00 | Closing
12:30 | Lunch
18:00 | Bus for dinner

Mini Workshop 4: Large Scale Systems and Their Simulators (Room 1040; Chair: Bill Kramer)

08:30 | Sanjay Kale | - | - | -
09:00 | Arnaud Legrand | - | - | SMPI: Toward Better Simulation of MPI Applications
09:30 | Kate Keahey | - | - | -
10:00 | Break
10:30 | Gilles Fedak | - | - | -
11:00 | Jeremy Enos | - | - | Application Runtime Consistency and Performance Challenges on a shared 3D torus
11:30 | TBD
12:00 | Closing (Auditorium 1122)
12:30 | Lunch
18:00 | Bus for dinner

Abstracts

Kenton McHenry

NSF CIF21 DIBBs: Brown Dog

The objective of this project is to construct a service that will allow for past and present un-curated data to be utilized by science while simultaneously demonstrating the novel science that can be conducted from such data. The proposed effort will focus on the large distributed and heterogeneous bodies of past and present un-curated data, what is often referred to in the scientific community as long-tail data, data that would have great value to science if its contents were readily accessible. The proposed framework will be made up of two re-purposable cyberinfrastructure building blocks referred to as a Data Access Proxy (DAP) and Data Tilling Service (DTS). These building blocks will be developed and tested in the context of three use cases that will advance science in geoscience, biology, engineering, and social science. The DAP will aim to enable a new era of applications that are agnostic to file formats through the use of a tool called a Software Server which itself will serve as a workflow tool to access functionality within 3rd party applications. By chaining together open/save operations within arbitrary software the DAP will provide a consistent means of gaining access to content stored across the large numbers of file formats that plague long tail data. The DTS will utilize the DAP to access data contents and will serve to index unstructured data sources (i.e. instrument data or data without text metadata). Building off of the Versus content based comparison framework and the Medici extraction services for auto-curation the DTS will assign content specific identifiers to untagged data allowing one to search collections of such data. The intellectual merit of this work lies in the proposed solution which does not attempt to construct a single piece of software that magically understands all data, but instead aims at utilizing every possible source of automatable help already in existence in a robust and provenance preserving manner to create a service that can deal with as much of this data as possible. This proverbial “super mutt” of software, or Brown Dog, will serve as a low level data infrastructure to interface with digital data contents and through its capabilities enable a new era of science and applications at large. The broader impact of this work is in its potential to serve not just the scientific community but the general public, as a DNS for data, moving civilization towards an era where a user’s access to data is not limited by a file’s format or un-curated collections.
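
To make the format-chaining idea concrete, here is a toy sketch (all application names and formats are invented; Brown Dog's actual DAP and Software Server are web services, not Python): conversions are found as paths in a graph whose edges are the open/save capabilities of third-party applications.

```python
# Toy model of the Data Access Proxy idea: treat each application as a node
# that can open some formats and save others, then convert between formats
# by chaining open/save operations along a path. Everything here is
# hypothetical, for illustration only.

from collections import deque

# app name -> (formats it can open, formats it can save)
APPS = {
    "ImageTool": ({"tiff", "png"}, {"png", "jpg"}),
    "CadViewer": ({"dwg"}, {"tiff"}),
    "WebExport": ({"jpg"}, {"pdf"}),
}

def conversion_path(src, dst):
    """Breadth-first search over formats reachable via open/save chains."""
    queue, seen = deque([(src, [])]), {src}
    while queue:
        fmt, path = queue.popleft()
        if fmt == dst:
            return path
        for app, (opens, saves) in APPS.items():
            if fmt in opens:
                for out in saves - seen:
                    seen.add(out)
                    queue.append((out, path + [(app, fmt, out)]))
    return None  # no chain of applications can perform this conversion

print(conversion_path("dwg", "pdf"))
# e.g. [('CadViewer','dwg','tiff'), ('ImageTool','tiff','jpg'), ('WebExport','jpg','pdf')]
```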


Emmanuel Jeannot, Esteban Meneses-Rojas, Guillaume Mercier, François Tessier and Gengbin Zheng

Communication and Topology-aware Load Balancing in Charm++ with TreeMatch

Abstract: Programming multicore or manycore architectures is a hard challenge, particularly if one wants to fully exploit their computing power. Moreover, a hierarchical topology implies that communication performance is heterogeneous, and this characteristic should also be exploited. We developed two load balancers for Charm++ that take both aspects into account, depending on whether the application is compute-bound or communication-bound. This work is based on our TreeMatch library, which computes process placements that reduce an application's communication cost based on the hardware topology. We show that the proposed load-balancing schemes manage to improve the execution times for the two classes of parallel applications.
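
For intuition, here is a minimal sketch of topology-aware placement in the spirit of TreeMatch; it is not the TreeMatch algorithm or API, just a greedy toy that maps the most heavily communicating process pairs onto nearby cores of an assumed tree topology (the tree arities are invented).

```python
# Hypothetical sketch of topology-aware process placement: greedily map the
# most heavily communicating process pairs onto cores that are close in a
# hardware tree. Arities (machine > socket > core group) are made up.

def hop_distance(core_a, core_b, arity=(2, 2, 4)):
    """Tree distance: number of levels to climb before the paths merge."""
    dist, size = 0, 1
    for a in reversed(arity):          # walk up from the leaves
        if core_a // size == core_b // size:
            return dist
        size *= a
        dist += 1
    return dist

def greedy_placement(comm, n_cores):
    """comm[i][j] = traffic between processes i and j (symmetric)."""
    n = len(comm)
    pairs = sorted(((comm[i][j], i, j) for i in range(n) for j in range(i)),
                   reverse=True)
    placement, free = {}, list(range(n_cores))
    for _, i, j in pairs:              # chattiest pairs first
        for p in (i, j):
            if p not in placement:
                # pick the free core minimizing traffic-weighted distance
                # to the processes already placed
                best = min(free, key=lambda c: sum(
                    hop_distance(c, placement[q]) * comm[p][q]
                    for q in placement))
                placement[p] = best
                free.remove(best)
    return placement

comm = [[0, 10, 1, 1],
        [10, 0, 1, 1],
        [1, 1, 0, 9],
        [1, 1, 9, 0]]
print(greedy_placement(comm, 16))      # chatty pairs land on nearby cores
```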


Matthieu Dorier

CALCioM: Mitigating I/O Interferences in HPC Systems through Cross-Application Coordination

Unmatched computation and storage performance in new HPC systems has led to a plethora of I/O optimizations, ranging from application-side collective I/O to network- and disk-level request scheduling on the file system side. As we deal with ever larger machines, the interference produced by multiple applications accessing a shared parallel file system concurrently becomes a major problem. Interference often breaks single-application I/O optimizations, dramatically degrading application I/O performance and, as a result, lowering machine-wide efficiency.
This talk will focus on CALCioM, a framework that aims to mitigate I/O interference through the dynamic selection of appropriate scheduling policies. CALCioM allows several applications running on a supercomputer to communicate and coordinate their I/O strategy in order to avoid interfering with one another. In this work, we examine four I/O strategies that can be accommodated in this framework: serializing, interrupting, interfering and coordinating. Experiments on Argonne's BG/P Surveyor machine and on several clusters of the French Grid'5000 testbed show how CALCioM can be used to efficiently and transparently improve the scheduling strategy between two otherwise interfering applications, given specified metrics of machine-wide efficiency.
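
As a toy illustration of the coordination idea (not the CALCioM implementation), the sketch below lets two applications ask a shared coordinator before each I/O phase; the coordinator applies the simplest of the four strategies, serialization, by handing out a single I/O token.

```python
# Toy cross-application I/O coordination: compute phases run in parallel,
# but I/O phases are serialized through a shared coordinator so the two
# applications never hit the file system at the same time.

import threading

class IOCoordinator:
    """Grants a single I/O token so applications never interfere."""
    def __init__(self):
        self._token = threading.Lock()

    def request_io(self, app):
        self._token.acquire()        # wait: another app is doing I/O
        print(f"{app}: I/O phase granted")

    def release_io(self, app):
        print(f"{app}: I/O phase done")
        self._token.release()

coord = IOCoordinator()

def app(name, phases):
    for _ in range(phases):
        # ... compute phase would run here, fully in parallel ...
        coord.request_io(name)       # coordination point before I/O
        # ... write checkpoint / analysis output here ...
        coord.release_io(name)

t1 = threading.Thread(target=app, args=("appA", 3))
t2 = threading.Thread(target=app, args=("appB", 3))
t1.start(); t2.start(); t1.join(); t2.join()
```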


Babak Behzad

Taming Parallel I/O Complexity with Auto-Tuning

We present an auto-tuning system for optimizing I/O performance of HDF5 applications and demonstrate its value across platforms, applications, and at scale. The system uses genetic algorithms to search a large space of tunable parameters and to identify effective settings at all layers of the parallel I/O stack. The parameter settings are applied transparently by the auto-tuning system via dynamically intercepted HDF5 calls. To validate our auto-tuning system, we applied it to three I/O benchmarks (VPIC, VORPAL, and GCRM) that replicate the I/O activity of their respective applications. We tested the system with different weak-scaling configurations (128, 2048, and 4096 CPU cores) that generate 30 GB to 1 TB of data, and executed these configurations on diverse HPC platforms (Cray XE6, IBM BG/P, and Dell Cluster). In all cases, the auto-tuning framework identified tunable parameters that substantially improved write performance over default system settings. We consistently demonstrate I/O write speedups between 2x and 100x for test configurations.
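
A minimal sketch of the search component, assuming a made-up parameter space and a synthetic cost function in place of real benchmark runs; the actual system tunes HDF5/MPI-IO/file-system parameters and measures real write times.

```python
# Genetic-algorithm search over a parallel-I/O parameter space. Parameter
# names and the cost function are illustrative stand-ins for "apply the
# settings, run the benchmark, report the write time".

import random

SPACE = {
    "stripe_count":   [4, 8, 16, 32, 64],
    "stripe_size_mb": [1, 2, 4, 8, 16],
    "cb_nodes":       [8, 16, 32, 64],
    "chunk_size_mb":  [1, 4, 16, 64],
}

def random_config():
    return {k: random.choice(v) for k, v in SPACE.items()}

def cost(cfg):  # synthetic: pretend the sweet spot is known
    return (abs(cfg["stripe_count"] - 32) + abs(cfg["cb_nodes"] - 32)
            + abs(cfg["stripe_size_mb"] - 8) + abs(cfg["chunk_size_mb"] - 16))

def crossover(a, b):
    return {k: random.choice((a[k], b[k])) for k in SPACE}

def mutate(cfg, rate=0.2):
    return {k: (random.choice(SPACE[k]) if random.random() < rate else v)
            for k, v in cfg.items()}

pop = [random_config() for _ in range(20)]
for gen in range(15):
    pop.sort(key=cost)
    parents = pop[:5]                  # elitist selection
    pop = parents + [mutate(crossover(*random.sample(parents, 2)))
                     for _ in range(15)]
print("best configuration:", min(pop, key=cost))
```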


Yves Robert, ENS Lyon, INRIA & Univ. Tenn. Knoxville

Assessing the impact of ABFT & Checkpoint composite strategies
 

Algorithm-specific fault-tolerance approaches promise unparalleled scalability and performance in failure-prone environments. With the advances in the theoretical and practical understanding of algorithmic traits enabling such approaches, a growing number of frequently used algorithms (including all widely used factorization kernels) have been proven capable of such properties. These algorithms provide a temporal section of the execution during which the data is protected by its own intrinsic properties and can be algorithmically recomputed without checkpoints. However, while typical scientific applications spend a significant fraction of their execution time in library calls that can be ABFT-protected, they interleave sections that are difficult or even impossible to protect with ABFT. As a consequence, the only fault-tolerance approach currently used for these applications is checkpoint/restart. In this talk, we propose a model and a simulator to investigate the behavior of a composite protocol that alternates between ABFT and checkpoint/restart protection for each phase of an iterative application composed of ABFT-aware and ABFT-unaware sections. We show that this approach drastically increases the performance delivered by the system, especially at scale, by making checkpoints rarer while simultaneously decreasing the volume of data that needs to be saved in each checkpoint.
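
For intuition, here is a back-of-the-envelope model (my own simplification, not the speaker's model or simulator) of why the composite helps: if a fraction alpha of each iteration is ABFT-protected, only the remaining fraction needs checkpoint protection, so the first-order checkpoint waste shrinks accordingly.

```python
# Simplified first-order waste comparison: pure checkpoint/restart versus a
# composite protocol where the ABFT fraction `alpha` needs no checkpoints.
# Assumes ABFT recovery cost is negligible, which is an oversimplification.

import math

def cr_waste(mtbf, ckpt_cost):
    """Young's first-order waste for plain checkpoint/restart."""
    period = math.sqrt(2 * ckpt_cost * mtbf)      # optimal period
    return ckpt_cost / period + period / (2 * mtbf)

def composite_waste(mtbf, ckpt_cost, alpha):
    """Only the non-ABFT fraction (1 - alpha) pays the checkpoint waste."""
    return (1 - alpha) * cr_waste(mtbf, ckpt_cost)

mtbf, ckpt = 3600.0, 60.0    # platform MTBF and checkpoint time, seconds
for alpha in (0.0, 0.5, 0.8):
    print(f"ABFT fraction {alpha:.0%}: "
          f"waste {composite_waste(mtbf, ckpt, alpha):.1%}")
```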


Prasanna Balaprakash

Active-Learning-based Surrogate Models for Empirical Performance Tuning

Performance models have a profound impact on hardware-software co-design, architectural exploration, and performance tuning of scientific applications. Developing algebraic performance models is becoming an increasingly challenging task. In such situations, a statistical surrogate-based performance model, fitted to a small number of input-output points obtained from empirical evaluation on the target machine, provides a range of benefits. Accurate surrogates can emulate the output of the expensive empirical evaluation at new inputs and therefore can be used to test and/or aid search, compiler, and autotuning algorithms. We present an iterative parallel algorithm that builds surrogate performance models for scientific kernels and workloads on single-core, multicore, and multinode architectures. We tailor to our unique parallel environment an active-learning heuristic popular in the literature on the sequential design of computer experiments, in order to identify the code variants whose evaluations have the best potential to improve the surrogate. We use the proposed approach in a number of case studies to illustrate its effectiveness.
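
The sketch below shows the general shape of such an active-learning loop (a generic illustration, not the speaker's algorithm): a bootstrap ensemble of simple regressors serves as the surrogate, and the next empirical evaluation is spent where the ensemble disagrees most.

```python
# Active-learning loop for surrogate-based tuning: measure a few variants,
# fit an ensemble on bootstrap resamples, then measure the variant where
# the ensemble's predictions have the highest variance.

import random
import statistics

def empirical_run(x):          # stand-in for timing a code variant
    return (x - 0.3) ** 2 + random.gauss(0, 0.01)

def fit_linear(pts):           # least-squares line through sampled points
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    slope = sum((x - mx) * (y - my) for x, y in pts) / max(
        sum((x - mx) ** 2 for x, _ in pts), 1e-12)
    return lambda x, a=my - slope * mx, b=slope: a + b * x

candidates = [i / 100 for i in range(101)]      # normalized variant space
data = [(x, empirical_run(x)) for x in random.sample(candidates, 5)]

for _ in range(10):
    # Bootstrap ensemble: each member sees a resampled copy of the data.
    ensemble = [fit_linear(random.choices(data, k=len(data)))
                for _ in range(10)]
    def disagreement(x):
        return statistics.pvariance([m(x) for m in ensemble])
    x_next = max(candidates, key=disagreement)  # most informative point
    data.append((x_next, empirical_run(x_next)))

print("best measured variant:", min(data, key=lambda p: p[1])[0])
```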


Greg Bauer

Applications and their challenges on Blue Waters
The leadership-class Blue Waters system is providing petascale-level computational and I/O capabilities to its partners. To date, approximately 32 teams are using Blue Waters to pursue their science and engineering on 22,640 Cray XE CPU compute nodes and 4,224 Cray XK GPU nodes with a 26 PB, 1 TB/s filesystem. The challenges encountered by the teams are as varied as the applications running on Blue Waters. This talk will provide an overview of the Blue Waters system, its recent upgrade in GPU computing capability and network dimension, and a discussion of the applications and their challenges computing at scale on Blue Waters.


Yushan Wang
Solving 3D incompressible Navier-Stokes equations on hybrid CPU/GPU systems.

The Navier-Stokes equations are the foundation of many computational fluid dynamics problems. In this presentation, we will talk about a hybrid multicore/GPU solver for the incompressible Navier-Stokes equations with constant coefficients, discretized by the finite difference method. We use the prediction-projection method, which transforms the Navier-Stokes problem into Helmholtz-like and Poisson problems. Efficient solvers for the two subproblems will be presented, with implementations that take advantage of GPU accelerators. We will also provide numerical experiments on a current hybrid machine.
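
For reference, a condensed statement of the standard prediction-projection (Chorin-Temam style) splitting the abstract refers to, written for a semi-implicit time discretization; the talk's exact scheme may differ in details.

```latex
% Prediction: a Helmholtz-like problem for the velocity predictor u*
\frac{u^{*} - u^{n}}{\Delta t} - \nu \Delta u^{*}
    = -\,(u^{n} \cdot \nabla)\, u^{n} - \nabla p^{n}
% Projection: a Poisson problem restoring incompressibility
\Delta \phi = \frac{1}{\Delta t}\, \nabla \cdot u^{*}, \qquad
u^{n+1} = u^{*} - \Delta t\, \nabla \phi, \qquad
p^{n+1} = p^{n} + \phi
```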


Arnaud Legrand
SMPI: Toward Better Simulation of MPI Applications
We will present our latest results on the SMPI/SimGrid framework. SMPI now implements all the collective algorithms and selection logics of both OpenMPI and MPICH, and even a few other collective algorithms from STAR-MPI. Together with a flexible network model and a topology description mechanism, this has allowed us to obtain almost perfect predictions of NASPB and BigDFT on Ethernet/TCP-based clusters. We are currently working on extending this work to other kinds of networks, as well as on combining the emulation capability of SMPI with the trace-replay mechanism. We are also working on improving the replay mechanism so that it handles classical trace formats seamlessly.

Wesley Bland
Fault Tolerant Runtime Research at ANL
Fault tolerance has been presented as an emerging problem for decades, with researchers often claiming that the next generation of hardware will introduce new levels of failure rates that will destroy productivity and cause applications to become unusable. While it is true that resilience has become more and more of a concern as machines have scaled, there are issues already affecting applications at current scales. Process failure remains a concern, though primarily for applications that can run at the largest scales or on very unstable hardware. For smaller applications, however, there are other concerns, such as soft errors, performance loss, etc. This talk will cover some of the research being performed in the Programming Models and Runtime Systems group at Argonne National Laboratory to study these phenomena.


Jed Brown and Debojyoti Ghosh  

Fast solvers for implicit Runge-Kutta systems
Implicit Runge-Kutta methods offer very high order accuracy, excellent stability properties, and optional symplecticity, at the expense of needing to solve a coupled system of equations. In the past, this has been seen as a drawback, and implicit RK methods have received little attention in the large-scale computing world, apart from recent interest in Spectral Deferred Correction (SDC) methods, a particular iterative method for solving implicit RK systems; but the work scales quadratically in the number of stages, and SDC is rarely more efficient than conventional sequential time stepping. Implicit RK systems have tensor product structure $$ S \otimes I + I \otimes J $$ where $S = (h A)^{-1}$ comes from the $s \times s$ Butcher table $A$, and $J$ is the (typically sparse) Jacobian of the spatial discretization. Diagonalization of $S$ was proposed independently by Butcher (1976) and Bickart (1977) as a solution method, leading to $s$ decoupled sparse systems, each with a different (complex-valued) diagonal shift; this quickly became the standard approach in the ODE community. Instead of distributing the stages, we permute the multivector and solve all stages at once using preconditioned iterative methods that achieve much higher machine utilization, due to a computational structure similar to solving a single linear system with multiple right-hand sides.
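
A small NumPy illustration of the structure described above, solving all stages at once for a toy 1-D heat equation; sizes are tiny, so the Kronecker product is formed explicitly, which a real solver would never do.

```python
import numpy as np

# 2-stage Gauss-Legendre Butcher matrix (order 4) and step size
A = np.array([[1/4,                1/4 - np.sqrt(3)/6],
              [1/4 + np.sqrt(3)/6, 1/4]])
b = np.array([1/2, 1/2])
h = 0.01
S = np.linalg.inv(h * A)
s = A.shape[0]

# Toy stiff linear ODE u' = J u: 1-D heat equation on n interior points
n = 5
dx = 1.0 / (n + 1)
J = (np.diag(-2.0 * np.ones(n)) + np.diag(np.ones(n - 1), 1)
     + np.diag(np.ones(n - 1), -1)) / dx**2
u = np.sin(np.pi * np.linspace(dx, 1 - dx, n))

# All stages at once: (S (x) I - I (x) J) Z = (S (x) I)(1 (x) u).
# With the convention u' = J u the shift enters with a minus sign; the
# abstract's "+" corresponds to the opposite sign convention for J.
M = np.kron(S, np.eye(n)) - np.kron(np.eye(s), J)
rhs = np.kron(S, np.eye(n)) @ np.tile(u, s)
Z = np.linalg.solve(M, rhs).reshape(s, n)   # stage values

u_next = u + h * (J @ Z.T) @ b              # combine stage derivatives
print(u_next)
```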


Mohamed Slim Bouguerra

Failure prediction: what to do with unpredicted failures?

As large parallel systems increase in size and complexity, failures are inevitable and exhibit complex space and time dynamics. Several key results have demonstrated that recent advances in event log analysis can provide precise failure prediction. The state of the art in failure prediction achieves a ratio of correctly identified failures to all predicted failures of over 90%, and is able to discover around 50% of all failures in a system. However, a large fraction of failures is not predicted; these are false negatives. Therefore, developing efficient fault-tolerance strategies requires a good understanding of the characteristics of failure prediction. To understand the properties of false negative alerts, we conducted a statistical analysis of the probability distribution of such alerts and their impact on fault-tolerance techniques. Specifically, we studied failure logs from different HPC production systems. We show that (i) the false negative distribution has the same nature as the failure distribution, and (ii) after adding failure prediction, we were able to infer statistical models that describe the inter-arrival times between false negative alerts, so that current fault-tolerance techniques can still be applied to these systems. Moreover, we show that current failure traces exhibit a high correlation between failure inter-arrival times that can be used to improve the failure prediction mechanism. Another important result is that checkpoint intervals for unpredicted failures can be computed from Daly's existing high-order formula. We show how the proposed statistical model can be applied to combine proactive migration and preventive checkpoints. Trace-based simulations show that the proposed combination improves the useful work of the execution by more than 13% with a recall of only 45%.
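
Since the abstract leans on Daly's high-order checkpoint-interval formula, here is a small sketch of how it is typically applied to the stream of unpredicted failures; the formula is reproduced from memory and should be checked against Daly (FGCS 2006), and all numbers are invented.

```python
# Daly's higher-order estimate of the optimum checkpoint interval, applied
# to the failures that prediction misses. Constants reproduced from memory;
# verify against the original paper before relying on them.

import math

def daly_interval(delta, mtbf):
    """delta: checkpoint write time; mtbf: mean time between (unpredicted)
    failures. Returns the compute time between two checkpoints."""
    if delta >= 2 * mtbf:
        return mtbf
    r = delta / (2 * mtbf)
    return math.sqrt(2 * delta * mtbf) * (1 + math.sqrt(r) / 3 + r / 9) - delta

# If ~50% of failures are predicted and handled proactively, the MTBF seen
# by the checkpointing layer roughly doubles, stretching the interval:
delta = 300.0                      # 5-minute checkpoints
for mtbf in (3600.0, 7200.0):
    print(f"MTBF {mtbf/3600:.0f} h -> "
          f"checkpoint every {daly_interval(delta, mtbf)/60:.1f} min")
```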


Dries Kimpe
Mercury: Enabling Remote Procedure Call for High-Performance Computing
Remote procedure call (RPC) is a technique that has been largely adopted by distributed services. This technique, now more and more used in the context of high-performance computing (HPC), allows the execution of routines to be delegated to remote nodes, which can be set aside and dedicated to specific tasks. However, existing RPC frameworks assume a socket-based network interface (usually on top of TCP/IP), which is not appropriate for HPC systems, because this API does not typically map well to the native network transport used on those systems, resulting in lower network performance. In addition, existing RPC frameworks often do not support handling large data arguments, such as those found in read or write calls. We present an asynchronous RPC interface, called Mercury, specifically designed for use in HPC systems. The interface allows asynchronous transfer of parameters and execution requests and provides direct support for large data arguments. Mercury is generic, in order to allow any function call to be shipped. Additionally, the network implementation is abstracted, allowing easy porting to future systems and efficient use of existing native transport mechanisms.
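
The sketch below is a toy model of the RPC-with-bulk-data pattern the abstract describes; Mercury's real interface is C and callback-driven, so every name here is illustrative only.

```python
# Toy RPC-with-bulk-data pattern: small arguments travel with the request,
# while large buffers are exposed as "bulk handles" that the server pulls
# with an RDMA-like one-sided transfer only when it needs them.

class BulkHandle:
    """Stands in for a registered memory region the peer may read."""
    def __init__(self, buf):
        self.buf = buf
    def remote_read(self):        # server-initiated pull
        return self.buf

class Server:
    def __init__(self):
        self.rpcs = {}
    def register(self, name, fn):
        self.rpcs[name] = fn
    def handle(self, name, meta, bulk):
        return self.rpcs[name](meta, bulk)

server = Server()

def rpc_write(meta, bulk):
    data = bulk.remote_read()     # pull the payload only when needed
    print(f"writing {len(data)} bytes at offset {meta['offset']}")
    return len(data)

server.register("write", rpc_write)

# Client side: metadata goes in the RPC, the payload via the bulk handle.
payload = b"x" * (8 << 20)        # 8 MiB, too big to inline in a request
n = server.handle("write", {"offset": 0}, BulkHandle(payload))
print("server acknowledged", n, "bytes")
```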

Bill Kramer
Is Petascale Completely Done?  What Should We Do Now?
Abstract: As Blue Waters approaches its first anniversary of acceptance, this talk will present the 10 most surprising lessons we have learned so far from the world's first sustained-petascale system. The talk will then offer the 10 most surprising areas the HPC community should be addressing for future large-scale systems.

John Towns
Applications Challenges in the XSEDE Environment
XSEDE provides access to an evolving portfolio of high-end computing resources, among many other resources and services, to a large community of researchers. Currently, there are more than 7,000 open individual accounts across all XSEDE systems. In this talk, we will look at the leading platforms in recent times for XSEDE (Kraken and Stampede) and discuss some of the challenges faced in bringing applications up on them at scale.

 

Tatiana Martsinkevich

On the feasibility of message logging in hybrid hierarchical FT protocols


Frederic Vivien

Scheduling tree-shaped task graphs to minimize memory and makespan
This work investigates the execution of tree-shaped task graphs using multiple processors. Each edge of such a tree represents a large piece of data. A task can only be executed if all its input and output data fit into memory. Such trees arise in the multifrontal method of sparse matrix factorization. The maximum amount of memory needed depends on the execution order of the tasks. With one processor, the problem of finding the tree traversal with minimum required memory is well studied, and optimal polynomial algorithms have been proposed. Here, we extend the problem by considering multiple processors. With multiple processors comes the additional objective of minimizing the makespan. Not surprisingly, this problem proves to be much harder. We study its computational complexity and provide an inapproximability result even for unit-weight trees. Several heuristics are proposed, especially for the realistic problem of minimizing the makespan under a strong memory constraint. They are analyzed in an extensive experimental evaluation using realistic trees.
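
To make the memory model concrete, here is a simplified sketch (my own toy variant, not the paper's model or algorithms): executing a node requires all children outputs plus its own output to be resident at once, and the order in which subtrees are traversed determines the peak.

```python
# Sequential peak-memory computation for a tree of tasks. Each node v
# produces an output of size out[v]; executing v needs all children
# outputs plus out[v] in memory simultaneously.

def peak_memory(tree, out, v):
    """tree[v] = list of children. Returns (peak, retained) for subtree v,
    where `retained` is v's output size, kept for the parent."""
    children = tree.get(v, [])
    if not children:
        return out[v], out[v]
    results = [peak_memory(tree, out, c) for c in children]
    held, peak = 0, 0
    # Visit subtrees with the largest (peak - retained) first; an exchange
    # argument shows this order is best for this simple model.
    for c_peak, c_ret in sorted(results, key=lambda r: r[1] - r[0]):
        peak = max(peak, held + c_peak)    # child's peak on top of held data
        held += c_ret                      # keep child's output for v
    peak = max(peak, held + out[v])        # finally, execute v itself
    return peak, out[v]

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": []}
out = {"root": 1, "a": 2, "b": 6, "a1": 4, "a2": 4}
print("peak memory:", peak_memory(tree, out, "root")[0])   # -> 10
```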

Maria Garzaran

Optimization by Run-time Specialization for Sparse Matrix-Vector Multiplication
Abstract: Run-time specialization is the process of generating programs based on information available only at run time. This technique has the potential of generating highly efficient code at the expense of the overhead of run-time code generation. It is applicable when some input data is used repeatedly while other input data varies. In this talk, I explore the potential for obtaining speedups for sparse matrix-dense vector multiplication using run-time specialization, in the case where a single matrix is to be multiplied by many vectors. We experiment with several methods involving run-time specialization, comparing them to methods that do not (including Intel's MKL library). For this talk, my focus is the evaluation of the speedups that can be obtained with run-time specialization without considering the overheads of the code generation. Our experiments use several matrices from the Matrix Market and the University of Florida Sparse Matrix Collection and run on several machines. In most cases, the specialized code runs faster than any version without specialization. The best method depends on the matrix and machine; no method is best for all matrices and machines.
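
A minimal illustration of the technique (a generic sketch, not the code generator evaluated in the talk): for one fixed CSR matrix, emit source code with the row loops unrolled and the nonzero values baked in as constants, then compile it at run time.

```python
# Run-time specialization for SpMV: given a fixed CSR matrix, generate a
# Python function with no loops over the sparsity structure, so index
# arrays never need to be read during the multiply.

def specialize_spmv(indptr, indices, values):
    lines = ["def spmv(x, y):"]
    for row in range(len(indptr) - 1):
        terms = [f"{values[k]!r}*x[{indices[k]}]"
                 for k in range(indptr[row], indptr[row + 1])]
        lines.append(f"    y[{row}] = " + (" + ".join(terms) or "0.0"))
    ns = {}
    exec(compile("\n".join(lines), "<specialized>", "exec"), ns)
    return ns["spmv"]

# 3x3 example matrix in CSR form:
#   [[2, 0, 1],
#    [0, 3, 0],
#    [0, 4, 5]]
indptr = [0, 2, 3, 5]
indices = [0, 2, 1, 1, 2]
values = [2.0, 1.0, 3.0, 4.0, 5.0]
spmv = specialize_spmv(indptr, indices, values)

x, y = [1.0, 1.0, 1.0], [0.0] * 3
spmv(x, y)
print(y)   # [3.0, 3.0, 9.0]
```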

Jean-François Mehaut

From Multicores to Manycores Processors: Challenging Programming Issues with the MPPA/KALRAY
Joint work with M. Castro (UFRGS), E. Francesquini (USP), T. Messi (Yaoundé 1), J-F. Méhaut (UJF-CEA)

The exponential growth in processor performance seems to have reached a turning point. Nowadays, energy efficiency is as important as performance and has become a critical aspect to the development of scalable systems. These strict energy constraints paved the way for the development of multi and manycore processors. In this presentation we analyze a well-known irregular NP-complete problem, the Traveling-Salesman Problem (TSP). This study investigates two aspects of the TSP on multicore, NUMA, and many-core processors. First, we concentrate on the nontrivial task of adapting this application to a manycore, specifically the novel MPPA-256 manycore processor. Then, we analyze its performance and energy consumption on different platforms that comprise general-purpose and low-power multicores, a NUMA machine, and the MPPA-256 manycore. Our results show that applications able to fully use the resources of a manycore can have better performance and may consume 9.8 and 13 times less energy when compared to low-power and general-purpose multicore processors, respectively.


Pierre Jolivet

Scalable Domain Decomposition Preconditioners For Heterogeneous Elliptic Problems
Domain decomposition methods are, alongside multigrid methods, one of the dominant paradigms in contemporary large-scale partial differential equation simulation. I will present a lightweight implementation of a theoretically and numerically scalable preconditioner in the context of overlapping methods. The performance of this work is assessed by numerical simulations executed on thousands of cores, solving various highly heterogeneous elliptic problems in both 2D and 3D with billions of degrees of freedom. Such problems arise in computational science and engineering, in solid and fluid mechanics. This framework can also be used for building substructuring preconditioners and for pipelining communication during an iterative process such as a Krylov method.
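
As a pointer to what "overlapping methods" means here, a toy one-level additive Schwarz preconditioner on a 1-D Laplacian is sketched below; the talk's preconditioner additionally uses a coarse space and scales to thousands of cores, none of which this sketch attempts.

```python
# One-level overlapping additive Schwarz: split the unknowns into two
# overlapping subdomains, solve locally on each, and add the corrections.

import numpy as np

n = 40
A = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1))           # 1-D Laplacian
b = np.ones(n)

ovl = 4                                        # shared unknowns per side
doms = [np.arange(0, n // 2 + ovl), np.arange(n // 2 - ovl, n)]
local_inv = [np.linalg.inv(A[np.ix_(d, d)]) for d in doms]

def precond(r):
    z = np.zeros_like(r)
    for d, Ainv in zip(doms, local_inv):       # local solves, summed up
        z[d] += Ainv @ r[d]
    return z

# Damped preconditioned Richardson iteration as a minimal demonstration;
# in practice one would use CG or GMRES with this preconditioner.
x = np.zeros(n)
for it in range(1000):
    r = b - A @ x
    if np.linalg.norm(r) < 1e-8:
        break
    x += 0.5 * precond(r)
print("iterations:", it, "residual norm:", np.linalg.norm(b - A @ x))
```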


Vincent Baudoui

Round-off error propagation and non-determinism in parallel applications

Round-off errors coming from the finite precision of numerical calculations can lead to catastrophic losses of significance when they accumulate. Their propagation throughout a computation needs to be studied in order to ensure the accuracy of results. We present a round-off error estimation method based on first-order derivatives that can help follow error propagation in an execution graph and identify the sensitive sections of a code. It has been experimented on well-known LU decomposition algorithms. In a second part, we focus on the effects of non-determinism in parallel applications, where messages exchanged between processes are received in random order, possibly leading to different round-off error accumulations and subsequently to different results at each execution. We study the impact of this non-reproducibility on the convergence of stencil computations after a failure and recovery event.
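
A toy version of derivative-based error tracking (far simpler than the method in the talk): each value carries a first-order bound on its accumulated absolute error, updated through every arithmetic operation.

```python
# First-order round-off tracking: propagate error bounds through +, -, *
# using the derivatives of each operation, and add the fresh rounding
# error contributed by the operation itself.

import sys

EPS = sys.float_info.epsilon / 2     # unit round-off for binary64

class Tracked:
    def __init__(self, val, err=0.0):
        self.val, self.err = val, err

    def _new(self, val, propagated):
        # propagated error + rounding error of this operation's result
        return Tracked(val, propagated + abs(val) * EPS)

    def __add__(self, o):
        return self._new(self.val + o.val, self.err + o.err)

    def __sub__(self, o):
        return self._new(self.val - o.val, self.err + o.err)

    def __mul__(self, o):
        return self._new(self.val * o.val,
                         abs(self.val) * o.err + abs(o.val) * self.err)

# Catastrophic cancellation: (1 + t) - 1 for tiny t loses most digits.
t = Tracked(1e-12)
r = (Tracked(1.0) + t) - Tracked(1.0)
print(f"value {r.val:.3e}, error bound {r.err:.1e}, "
      f"relative {r.err / abs(r.val):.1e}")   # relative bound near 1e-4
```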

Jeremy Enos

Application Runtime Consistency and Performance Challenges on a shared 3D torus.

Early testing on Blue Waters revealed varied performance for some applications, making required walltimes unpredictable. Many potential causes were investigated, ultimately indicating that poor placement onto compute resources within the 3D torus network was a chief aggravating factor. Multiple thrusts of effort were launched to improve both application performance and consistency: a long-term topology-aware placement development plan, improved high-speed network monitoring, and immediate "stop gap" measures available within existing tools and methods.

Ana Gainaru

Topology and behaviour aware failure prediction for Blue Waters.
Failure prediction has made substantial progress in the last 5 years, and current studies have shown that failure avoidance techniques can provide high benefits when combined with classical fault-tolerance protocols. Understanding the properties of a prediction module and exploiting them to enhance fault-tolerance approaches and scheduling decisions is crucial for providing scalable solutions to deal with failures on future HPC systems.
Recently, we presented a novel methodology for truly online failure prediction for the Blue Waters system. In this talk, we describe the main bottlenecks and limitations faced in applying failure prediction on a petascale system and propose several solutions using topology-level information.
Moreover, we will show that on a real system, system failures do not very frequently translate into application failures. We will present how this influences application-level failure prediction and future system performance degradation analysis.

Sheng Di
Optimization of Multi-level Checkpoint Model for Large Scale HPC Applications
The HPC community projects that future extreme-scale systems will be much less stable than current petascale systems, thus requiring sophisticated fault tolerance to guarantee the completion of large-scale numerical computations. Execution failures may occur due to multiple factors at different scales, from transient uncorrectable memory errors localized in processes to massive system outages. Multi-level checkpoint/restart is a promising model that provides an elastic response to tolerate different types of failures. It stores checkpoints at different levels: e.g., local memory, remote memory using a software RAID, local SSD, remote file system. This talk will respond to two open questions: 1) how to optimize the selection of checkpoint levels based on failure distributions observed in a system, and 2) how to compute the optimal checkpoint intervals for each of these levels. (1) A mathematical model is formulated to fit the multi-level checkpoint/restart mechanism to large-scale applications with various types of failures. (2) The entire execution performance of each parallel application is theoretically optimized by selecting the best checkpoint level combination and the corresponding checkpoint intervals at different levels. (3) The proposed optimal solutions are evaluated using both simulations and a real environment, with real-world MPI programs running on hundreds of cores. Experiments show that optimized selections of levels, associated with optimal checkpoint intervals at each level, outperform other state-of-the-art solutions by 5-50 percent.
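
As a rough illustration of question 2, here is a first-order toy model (my own simplification; the talk's model is more sophisticated and couples the levels): treat each level independently, assign it the failures it is responsible for, and apply a Young-style optimal interval per level.

```python
# Per-level checkpoint intervals under a decoupled first-order model.
# Costs and failure rates are invented for illustration.

import math

levels = {
    "local memory": {"C": 5.0,   "lam": 1 / (6 * 3600)},   # frequent, cheap
    "partner copy": {"C": 30.0,  "lam": 1 / (24 * 3600)},
    "parallel FS":  {"C": 300.0, "lam": 1 / (72 * 3600)},  # rare, costly
}

total_waste = 0.0
for name, p in levels.items():
    mtbf = 1 / p["lam"]
    tau = math.sqrt(2 * p["C"] * mtbf)          # per-level Young interval
    waste = p["C"] / tau + tau / (2 * mtbf)     # overhead + lost work
    total_waste += waste
    print(f"{name:13s}: checkpoint every {tau/60:6.1f} min, "
          f"waste {waste:.2%}")
print(f"combined first-order waste: {total_waste:.2%}")
```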



 

 

 

