
UNDER CONSTRUCTION: the agenda below is not the final one.

This event is supported by INRIA, UIUC, NCSA, and ANL.

Main Topics

Schedule | Speaker | Affiliation | Type of presentation | Title (tentative) | Download

Dinner Before the Workshop
7:30 PM | Only people registered for the dinner | Valpré hotel

Workshop Day 1 - Wednesday June 12th

TITLES ARE TEMPORARY (except if in bold font)

08:00 | Registration

Welcome and Introduction (Amphitheatre)
08:30 | Marc Snir + Franck Cappello | INRIA & UIUC & ANL | Background | Welcome, workshop objectives and organization
08:45 | Bill Kramer | UIUC | Background | NCSA updates and vision of the collaboration
09:00 | Marc Snir | ANL | Background | ANL updates and vision of the collaboration
09:15 | Frederic Desprez | INRIA | Background | INRIA updates and vision of the collaboration

Big systems (Chair: Christian Perez)
09:30 | Bill Kramer | UIUC | Background | Update on Blue Waters
10:00 | Break
10:30 | Mitsuhisa Sato | U. Tsukuba & AICS | Background | AICS and the K computer | aics-130612.pptx
11:00 | Paul Gibbon | Juelich | Background | Meeting the Exascale Challenge at the Juelich Supercomputing Centre (CANCELED)

Resilience & fault tolerance and simulation (Chair: Franck Cappello)
11:00 | Marc Snir | ANL & UIUC | Report | ICIS report on Resilience
11:30 | Vincent Baudoui | Total & ANL | Joint Results | Round-off error and silent soft error propagation in exascale applications
12:00 | Lunch

Numerical Algorithms (Chair: Frederic Desprez)
13:30 | Bill Gropp | UIUC | Background | Topics for Collaboration in Numerical Libraries
14:00 | Paul Hovland | ANL | Background | Argonne strategic plan in applied math
14:30 | Marc Baboulin | INRIA | Background | Using condition numbers to assess numerical quality in high-performance computing applications
15:00 | Luke Olson | UIUC | Background | Opportunities in developing a more robust and scalable multigrid solver
15:30 | Break
16:00 | Frederic Nataf | INRIA & P6 | Background | Toward black-box adaptive domain decomposition methods

Resilience & fault tolerance and simulation (Chair: Franck Cappello)
16:30 | Bogdan Nicolae | IBM | Joint Result | AI-Ckpt: Leveraging Memory Access Patterns for Adaptive Asynchronous Incremental Checkpointing
17:00 | Martin Quinson | INRIA | Result | Improving Simulations of MPI Applications Using A Hybrid Network Model with Topology and Contention Support
17:30 | Adjourn
18:45 | Bus for dinner

Workshop Day 2 - Thursday June 13th

Programming Models (cont.) (Chair: Frederic Desprez)
08:30 | Jean-François Mehaut | INRIA | Result | Progress in the European FP7 Mont-Blanc 1 project and objectives of its follow-up: Mont-Blanc 2
09:00 | Rajeev Thakur | ANL | Background | Update on MPI and OS/R Activities at Argonne
09:30 | Andra Ecaterina Hugo | INRIA | Results | Composing multiple StarPU applications over heterogeneous machines: a supervised approach
10:00 | Celso Mendes | UIUC | Background | Dynamic Load Balancing for Weather Models via AMPI
10:30 | Break

Big Data, I/O, Visualization (Chair: Kate Keahey)
11:00 | Dries Kimpe | ANL | Results | Triton: Exascale Storage
11:30 | Gilles Fedak | INRIA | Result | Active Data: A Programming Model to Manage Data Life Cycle Across Heterogeneous Systems and Infrastructures
12:00 | Matthieu Dorier | INRIA | Joint Result | Data Analysis of Ensemble Simulations: an In Situ Approach using Damaris
12:30 | Ian Foster | ANL | Background | TBA
13:00 | Lunch

Mini Workshop 1: Resilience (Chair: Marc Snir)
14:00 | Ana Gainaru | UIUC | Results | Challenges in predicting failures on the Blue Waters system
14:30 | Xiang Ni | UIUC | Results | ACR: Automatic Checkpoint/Restart for Soft and Hard Error Protection
15:00 | Tatiana Martsinkevich | INRIA & ANL | Result | On the feasibility of message logging in hybrid hierarchical FT protocols
15:30 | Mohamed Slim Bouguerra | INRIA & ANL | Result | Investigating the probability distribution of false negative failure alerts in HPC systems
16:00 | Break
16:30 | Amina Guermouche | UVSQ | Result | Multi-criteria Checkpointing Strategies: Response-time versus Resource Utilization
17:00 | Thomas Ropars | EPFL | Result | Towards efficient replication of HPC applications to deal with crash failures
17:30 | Mehdi Diouri | INRIA | Result | ECOFIT: A Framework to Estimate Energy Consumption of Fault Tolerance Protocols for HPC Applications
18:00 | Adjourn

Mini Workshop 2: Numerical Algorithms and Libraries (Chair: Bill Gropp)
14:00 | Jean Utke | ANL | Result | Designing and implementing a tool-independent, adjoinable MPI wrapper library
14:30 | Laurent Hascoet | INRIA | Result | The adjoint of MPI one-sided communications
15:00 | Stefan Wild | ANL | Result | Loud computations? Noise in iterative solvers
15:30 | Jed Brown | ANL | Result | Vectorization, communication aggregation, and reuse in stochastic and temporal dimensions
16:00 | Break
16:30 | Yushan Wang | INRIA P11 | Result | TBA
17:00 | Frederic Hecht | INRIA/P6 | Result | TBA
18:00 | Adjourn

18:45 | Bus for dinner (Lyon)

Workshop Day 3 - Friday June 14th

Mini Workshop 1 (cont.): Resilience (Chair: Franck Cappello)
08:30 | Sheng Di | INRIA | Result | TBA
09:00 | Guillaume Aupy | INRIA | Result | On the Combination of Silent Error Detection and Checkpointing
09:30 | Discussion
10:00 | Break

Mini Workshop 3: Programming and Scheduling (Chair: Rajeev Thakur)
10:30 | Guillaume Mercier | INRIA | Result | TBA
11:00 | Vincent Lanore | INRIA | Result | Static 2D FFT adaptation through a component model based on Charm++
11:30 | Anne Benoit | INRIA | Result | Energy-efficient scheduling
12:00 | François Tessier | INRIA | Result | TBA
12:30 | Discussions
13:00 | Closing and Lunch

Mini Workshop 2 (cont.): Numerical Algorithms and Libraries (Chair: Paul Hovland)
08:30 | François Pellegrini | INRIA | Result | Shared memory parallel algorithms in Scotch 6
09:00 | Luc Giraud | INRIA | Result | TBA
09:30 | Discussions
10:00 | Break

Mini Workshop 4: Clouds (Chair: Frederic Desprez)
10:30 | Kate Keahey | ANL | Result | TBA
11:00 | Gabriel Antoniu | INRIA | Result | TBA
11:30 | Christian Perez | INRIA | Result | TBA
12:00 | Eddy Caron | INRIA | Result | TBA
12:30 | Discussions
13:00 | Closing and Lunch

Abstracts

Paul Gibbon

Meeting the Exascale Challenge at the Juelich Supercomputing Centre.

This talk will address recent developments in the field of supercomputing research at JSC, beginning with an overview of petascale hardware installed since 2009 together with our present user support infrastructure. Over the coming 5 years the JSC roadmap for exascale computing will leverage the work performed in three `Exascale Centres' - the Exascale Innovation Lab (with IBM), Exa-Cluster Lab (Intel, Partec) and NVIDIA Lab, jointly staffed with the respective industrial partners. Software support will continue to revolve around our `Simulation Laboratories' and Cross-Sectional Teams, providing high-level algorithmic expertise in a number of disciplines such as climate research, energy materials and life sciences, all strongly represented at FZ-Jülich. These and other selected research activities will be briefly reviewed.

 

Martin Quinson

Improving Simulations of MPI Applications Using A Hybrid Network Model with Topology and Contention Support

Proper modeling of collective communications is essential for understanding the behavior of medium-to-large scale parallel applications, and even minor deviations in implementation can adversely affect the prediction of real-world performance. We propose a hybrid network model extending LogP-based approaches to account for topology and contention in high-speed TCP networks. This model is validated within SMPI, an MPI implementation provided by the SimGrid simulation toolkit. With SMPI, standard MPI applications can be compiled and run in a simulated network environment, and traces can be captured without incurring errors from tracing overheads or poor clock synchronization as in physical experiments. SMPI provides features for simulating applications that require large amounts of time or resources, including selective execution, RAM folding, and off-line replay of execution traces. We validate our model by comparing traces produced by SMPI with those from other simulation platforms, as well as real-world environments.


Frederic Nataf

Toward black-box adaptive domain decomposition methods

Domain decomposition methods map in a natural and powerful way onto modern parallel architectures. In order to be scalable, these methods involve coarse spaces. These coarse spaces are specifically designed so that the two-level methods are scalable and robust with respect to the coefficients in the equation and the choice of the decomposition. We achieve this in an automatic way by solving generalized eigenvalue problems on the interfaces between subdomains to identify the modes which slow down convergence. This construction allows for a black-box implementation. Theoretical bounds for the condition numbers of the preconditioned operators, which depend only on a chosen threshold and the maximal number of neighbours of a subdomain, are presented and proved. Scalable implementations on HPC platforms make it possible to solve problems with several billions of unknowns in three dimensions using the FreeFem++ DSL for finite element simulations.
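As a rough illustration of the mode-selection idea only (the operators below are placeholders, not the method's actual subdomain/interface operators), a minimal Python/SciPy sketch:

```python
# Illustrative sketch: build a per-subdomain coarse space by keeping the
# eigenmodes of a generalized eigenproblem whose eigenvalues fall below a
# chosen threshold. A_s and B_s stand in for the real (problem-specific)
# operators of the method; their construction is not shown here.
import numpy as np
from scipy.linalg import eigh

def coarse_space_vectors(A_s, B_s, threshold):
    """Return eigenvectors of A_s v = lambda B_s v with lambda < threshold."""
    eigvals, eigvecs = eigh(A_s, B_s)   # generalized symmetric eigenproblem
    keep = eigvals < threshold          # modes that would slow convergence
    return eigvecs[:, keep]

# Toy example with random SPD matrices standing in for real operators.
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 20))
A_s = M @ M.T + 1e-3 * np.eye(20)
B_s = np.eye(20)
Z = coarse_space_vectors(A_s, B_s, threshold=0.5)
print("coarse space dimension:", Z.shape[1])
```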


Marc Baboulin

Using condition numbers to assess numerical quality in high-performance computing applications

We explain how condition numbers of problems can be used to assess the quality of a computed solution. We illustrate our approach by considering the example of overdetermined linear least squares (linear systems being a special case of the latter). Our method is based on deriving exact values or estimates for the condition number of these problems. We describe algorithms and software to compute these quantities using standard parallel libraries. We present numerical experiments in a physical application and we propose performance results using new routines on top of the multicore-GPU library MAGMA.
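A small, generic illustration of the kind of quantity involved (not the algorithms or MAGMA-based routines presented in the talk): the 2-norm condition number of a least squares matrix can be read off its singular values.

```python
# Minimal sketch: sensitivity of an overdetermined least squares solution
# assessed via the condition number of A, computed from its singular values.
import numpy as np

rng = np.random.default_rng(42)
A = rng.standard_normal((100, 5))
x_true = np.arange(1.0, 6.0)
b = A @ x_true + 1e-8 * rng.standard_normal(100)   # small perturbation

x, *_ = np.linalg.lstsq(A, b, rcond=None)
sigma = np.linalg.svd(A, compute_uv=False)
cond_A = sigma[0] / sigma[-1]                      # kappa_2(A) = sigma_max / sigma_min

# Rough rule of thumb: about cond_A * unit-roundoff relative error can be
# expected (the full least squares bound also has a residual-dependent
# kappa^2 term, which the talk's exact condition numbers account for).
print("cond(A) =", cond_A)
print("relative error =", np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```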


Jean-François Mehaut

Progress in the European FP7 Mont-Blanc 1 project and objectives of its follow-up: Mont-Blanc 2


Amina Guermouche

Multi-criteria Checkpointing Strategies: Response-time versus Resource Utilization

Failures are increasingly threatening the efficiency of HPC systems, and current projections of Exascale platforms indicate that rollback recovery, the most convenient method for providing fault tolerance to general-purpose applications, reaches its own limits at such scales. One of the reasons explaining this unnerving situation comes from the focus that has been given to per-application completion time, rather than to platform efficiency. In this talk, we discuss the case of uncoordinated rollback recovery where the idle time spent waiting for recovering processors is used to progress a different, independent application from the system batch queue. We then propose an extended model of uncoordinated checkpointing that can discriminate between idle time and wasted computation. We instantiate this model in a simulator to demonstrate that, with this strategy, per-application completion time under uncoordinated checkpointing is unchanged, while the strategy delivers near-perfect platform efficiency.


Anne Benoit

Energy-efficient scheduling

Jean Utke

Designing and implementing a tool-independent, adjoinable MPI wrapper library

The efficient computation of gradients by the "adjoint-mode" of algorithmic differentiation (AD) entails the inversion of MPI communication graphs. The logic to be implemented for adjoining non-blocking communication patterns is sufficiently complex to warrant  a design of components that is independent of the algorithmic differentiation tool that provides the context in which the adjoint communication is to take place. We discuss (i) how we account for the different data models  implied by the AD  tool as well as the target language, (ii) the implementation choices among the possible adjoint communications, and (iii) the currently known limitations of our approach. We hope for feedback from the community regarding this design particularly  with respect to performance and current developments in the MPI standard.
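A toy sketch of the underlying principle, using mpi4py for brevity (this is not the wrapper library's API): in adjoint mode the data flow of a send/receive pair is reversed, and the received adjoint is accumulated into the adjoint of the buffer that was originally sent.

```python
# Toy illustration with mpi4py (not the adjoinable-MPI wrapper itself):
# the forward sweep sends x from rank 0 to rank 1; the adjoint sweep reverses
# the communication and accumulates the incoming adjoint into x_bar on rank 0.
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
n = 4

if rank == 0:
    x = np.arange(1.0, n + 1.0)
    x_bar = np.zeros(n)                    # adjoint of x
    comm.Send(x, dest=1, tag=0)            # forward: send
    incoming = np.empty(n)
    comm.Recv(incoming, source=1, tag=1)   # adjoint: receive...
    x_bar += incoming                      # ...and increment (never overwrite)
    print("x_bar =", x_bar)
elif rank == 1:
    y = np.empty(n)
    comm.Recv(y, source=0, tag=0)          # forward: receive
    y_bar = 2.0 * y                        # adjoint seed from some local use of y
    comm.Send(y_bar, dest=0, tag=1)        # adjoint: send the adjoint back
```

Run with, e.g., mpiexec -n 2 python adjoint_send.py. The increment (rather than assignment) is what keeps the adjoint correct when a buffer contributes to several destinations; handling non-blocking patterns is where the real design complexity discussed in the talk comes in.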

Laurent Hascoet

The adjoint of MPI one-sided communications
Computing gradients of numerical models by the adjoint mode of algorithmic differentiation is a crucial ingredient for model optimization, sensitivity analysis, and uncertainty quantification of many large-scale science and engineering applications. The adjoint mode implies a reversal of the data dependencies and consequently a reversal of communications in parallelized models. Building on previous studies regarding the adjoining of MPI two-sided communications, we investigate the construction of adjoints for certain one-sided MPI communications.


Mehdi Diouri

ECOFIT: A Framework to Estimate Energy Consumption of Fault Tolerance Protocols for HPC Applications

Energy consumption and fault tolerance are two interrelated issues to address for designing future exascale systems. Fault tolerance protocols used for checkpointing have different energy consumption depending on parameters like application features, number of processes in the execution, and platform characteristics. Currently, the only way to select a protocol for a given execution is to run the application and monitor the energy consumption of the different fault tolerance protocols; this has to be repeated for any variation of the execution setting. To avoid this time- and energy-consuming process, we propose an energy estimation framework. It relies on an energy calibration of the considered platform and a user description of the execution setting. We evaluate the accuracy of our estimations with real applications running on a real platform with energy consumption monitoring. Results show that our estimations are highly accurate and allow selecting the best fault tolerance protocol without pre-executing the application.
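A deliberately simplified sketch of the kind of estimate such a framework produces (the power numbers and phase breakdown below are invented for illustration and are not ECOFIT's calibration or model):

```python
# Simplified sketch of an energy estimate for a fault tolerance protocol:
# calibrated power draws (hypothetical numbers) times the time spent in each
# phase of the execution.
CALIBRATION_WATTS = {          # hypothetical per-node power draw per activity
    "compute": 250.0,
    "checkpoint_io": 180.0,
    "message_logging": 210.0,
    "idle": 90.0,
}

def estimate_energy_joules(phases, nodes):
    """phases: list of (activity, seconds) describing one node's execution."""
    per_node = sum(CALIBRATION_WATTS[activity] * seconds
                   for activity, seconds in phases)
    return per_node * nodes

execution = [("compute", 3600.0), ("checkpoint_io", 120.0), ("idle", 60.0)]
print("estimated energy (MJ):", estimate_energy_joules(execution, nodes=128) / 1e6)
```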


Matthieu Dorier

Data Analysis of Ensemble Simulations: an In Situ Approach using Damaris
As we approach exascale, simulations running on ever more cores on supercomputers produce ever larger data that have to be stored for subsequent analysis. With storage performance unable to match computation performance, in situ analysis has been proposed as a way to run analysis tasks along with the running simulation. While this reduces the need to store massive amounts of raw data and lets scientists get a direct insight into their simulation, it does not allow comparing multiple runs of the same simulation (ensemble simulations), as these runs are not performed at the same moment. Thus in situ approaches remain limited, and ensemble simulations still require storing raw data. We present a complete framework for comparing data produced by different runs of the same simulation. This framework uses the Damaris I/O middleware to re-load data from previous experiments inside a running instance of the simulation, allowing a direct in situ comparison of data between older and current runs.


Gilles Fedak

Active Data: A Programming Model to Manage Data Life Cycle Across Heterogeneous Systems and Infrastructures
The Big Data challenge consists of managing, storing, analyzing, and visualizing huge and ever-growing data sets to extract sense and knowledge. As the volume of data grows exponentially, the management of these data becomes more complex in proportion. A key point is to handle the complexity of the data life cycle, i.e., the various operations performed on data: transfer, archiving, replication, deletion, etc. To alleviate the complexity of the data life cycle, we propose Active Data, a programming model to automate and improve the expressiveness of data management applications. We first introduce the concept of data life cycle and define a formal model that allows exposing data life cycles across heterogeneous systems and infrastructures. The Active Data programming model allows code execution at each stage of the data life cycle: routines provided by programmers are executed when a set of events (creation, replication, transfer, deletion) happen to any data. We implement and evaluate the model with four use cases: a storage cache to Amazon S3, a cooperative sensor network, an incremental implementation of the MapReduce programming model, and automated data provenance tracking across heterogeneous systems. Altogether, these scenarios illustrate the adequacy of the model for programming applications that manage distributed and dynamic data sets. We also show that applications that do not leverage the data life cycle can benefit from Active Data to improve their performance.
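A hypothetical sketch of the programming-model idea, with an invented API that is not Active Data's actual interface: handlers are attached to life cycle transitions and run whenever the corresponding event is published.

```python
# Hypothetical sketch (NOT the Active Data API): user routines are attached to
# data life cycle transitions and run when the corresponding event occurs.
from collections import defaultdict

class LifeCycle:
    EVENTS = {"created", "replicated", "transferred", "deleted"}

    def __init__(self):
        self._handlers = defaultdict(list)

    def on(self, event, handler):
        assert event in self.EVENTS
        self._handlers[event].append(handler)

    def fire(self, event, data_id, **info):
        for handler in self._handlers[event]:
            handler(data_id, **info)

lc = LifeCycle()
lc.on("created", lambda data_id, **info: print(f"index {data_id}"))
lc.on("replicated", lambda data_id, **info: print(f"track provenance of {data_id} at {info.get('site')}"))

# Events would normally be published by the storage/transfer systems themselves.
lc.fire("created", "dataset-42")
lc.fire("replicated", "dataset-42", site="amazon-s3")
```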

François Pellegrini

Shared memory parallel algorithms in Scotch 6

The Scotch software package comprises two libraries: the Scotch sequential library and the PT-Scotch parallel library. The latter is based on a distributed-memory paradigm and uses MPI to exchange data between processes. The advent of many-core, shared-memory machines forces us to reconsider this approach. The complexity of graph partitioning algorithms is low compared to factorization. A first solution is to reduce communication overhead by running graph partitioning only on a limited number of nodes. A second solution is to make graph partitioning algorithms more efficient, by reducing communication overhead and resorting to shared-memory parallelism. This talk will present our first experiments in this direction.

Vincent Baudoui

Round-off error and silent soft error propagation in exascale applications

Future exascale computers will open up new perspectives in numerical simulation, but they will also experience more errors because of their massive scale. We will focus here on round-off errors and on silent soft errors, whose propagation needs to be studied in order to ensure the accuracy of results. Round-off errors come from the finite precision of numerical calculations and can lead to catastrophic losses of significant digits when they accumulate. We will discuss the limits of existing error bounds when facing large-scale problems. Soft hardware errors can also perturb computations by randomly flipping memory bits. Some of these errors are automatically corrected, but others can propagate silently through the calculations. We will present some strategies to determine the sensitive sections of an application as part of future research work.
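A tiny, generic reminder of the round-off phenomena the talk is concerned with (unrelated to the specific applications studied in the work):

```python
# Generic floating-point behaviour, for illustration only.
print(0.1 + 0.2 == 0.3)        # False: the decimal literals are already rounded
print(sum([0.1] * 10) == 1.0)  # False: tiny errors accumulate across additions

# Cancellation: subtracting nearly equal numbers exposes the rounding error and
# wipes out significant digits.
x = 1.0 + 1e-15
print(x - 1.0)                 # close to, but not exactly, 1e-15
```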

Bogdan Nicolae

AI-Ckpt: Leveraging Memory Access Patterns for Adaptive Asynchronous Incremental Checkpointing

With the increasing scale and complexity of supercomputing and cloud computing architectures, faults are becoming a frequent occurrence, which makes reliability a difficult challenge. Although for some applications it is enough to restart failed tasks, there is a large class of applications where tasks run for a long time or are tightly coupled, thus making a restart from scratch unfeasible. Checkpoint-Restart (CR), the main method to survive failures for such applications, faces additional challenges in this context: not only does it need to minimize the performance overhead on the application due to checkpointing, but it also needs to operate with scarce resources. To this end, this work contributes a novel approach that leverages both the current and past memory access patterns in order to optimize the order in which memory pages are flushed to stable storage during asynchronous checkpointing. Large-scale experiments show up to 60% improvement when compared to state-of-the-art checkpointing approaches, all achievable with an extra memory requirement of less than 5% of the total application memory.
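A schematic illustration of the general idea, not AI-Ckpt's actual algorithm: during an asynchronous checkpoint, pages the application is unlikely to touch again soon are flushed first; here the "prediction" is simply the time since each page was last written.

```python
# Schematic only: order dirty pages so that the coldest ones are flushed first,
# reducing the chance that the application rewrites a page before it is saved.
import heapq
import time

def flush_order(dirty_pages, last_write_time):
    """Yield page ids from coldest (flush first) to hottest."""
    now = time.monotonic()
    # Max-heap on idle time: pages untouched the longest come out first.
    heap = [(-(now - last_write_time[p]), p) for p in dirty_pages]
    heapq.heapify(heap)
    while heap:
        _, page = heapq.heappop(heap)
        yield page

# Hypothetical usage: page 3 was written most recently, so it is flushed last.
t0 = time.monotonic()
last_write = {1: t0 - 30.0, 2: t0 - 5.0, 3: t0 - 0.1}
print(list(flush_order({1, 2, 3}, last_write)))   # [1, 2, 3]
```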

 

Bill Gropp

Topics for Collaboration in Numerical Libraries

This talk will discuss some open problems in numerical libraries for extreme scale systems, including issues facing some of the application teams currently using the Blue Waters sustained petascale system.


Luke Olson

Opportunities in developing a more robust and scalable multigrid solver

Multigrid methods have increased in robustness in recent years due to new algorithmic advances and new theoretical developments. The result is a more robust multilevel framework leading to improved convergence for a wider range of non-elliptic problems. Yet many of these developments have not been adopted at scale despite their intended use, while many of the optimizations could be strengthened by considering high-performance computing architectures more directly. In this talk, we discuss a particular example of these recent optimizations in multigrid, to define optimal interpolation, that moves toward a more general framework, and highlight some focused directions for collaboration in this respect. In addition, recent trends in high-throughput computing have motivated algorithmic changes in multigrid design. We will also highlight some directions to further advance multigrid solvers at scale based on this work, with collaboration through the Joint Lab.


Paul Hovland

Argonne strategic plan in applied math


Jed Brown

Vectorization, communication aggregation, and reuse in stochastic and temporal dimensions

Transformative computing in science and engineering involves problems posed in more than just the spatial domain: temporal, stochastic, and parameter spaces also play a role. Current methods for solving such problems are predominantly based on the concept that the fundamental building block is the solution of a deterministic PDE model, or perhaps one time step of a transient model. This is practical: it permits comfortable partitioning of mathematical analysis and relatively unintrusive software interfaces, but it eagerly chooses which dimensions are treated sequentially, which are distributed in parallel, etc. These imposed choices leave developers of the PDE models banging their heads against the familiar challenges of efficiently utilizing increasingly precious memory bandwidth, hiding and reducing synchronization costs, and obtaining vectorization. Meanwhile, the stochastic and temporal dimensions provide structure that is ideally suited to extreme-scale architectures, if only they could be promoted to first-class citizens, alongside the spatial dimensions, in algorithmic analysis and in software. Exploiting this structure in "full-space" methods will require crosscutting development: improved convergence theory, efficient hardware-adapted algorithms, high-quality software libraries, and programming tools and run-time systems to facilitate the development of libraries and applications. In this talk, I present several examples and propose a guideline for reasoning about efficient mappings of full-space analysis onto parallel computers.

Celso Mendes

Dynamic Load Balancing for Weather Models via AMPI

Load imbalances can severely limit the scalability of a parallel application. Typically, the solution adopted to overcome this problem is to change the application code in an attempt to distribute the load more uniformly across the available processors. This solution, however, requires deep knowledge of the application, and needs to be redone as new sources of imbalance arise. In this presentation, we show how an intelligent, adaptive runtime system can help in addressing this problem. Using Adaptive MPI (AMPI), an implementation of the MPI standard based on the Charm++ runtime system, we demonstrate how to achieve a better balance without requiring major changes or much knowledge about the application. As a case study, we show an application of this approach with weather forecasting models, which can suffer from severe imbalances due to several sources, including dynamic variations in the atmosphere. Besides presenting recent results, we also point to some remaining challenges, which present opportunities for further work in this area.

Xiang Ni

ACR: Automatic Checkpoint/Restart for Soft and Hard Error Protection.

As the scale of machines increases, the HPC community has seen a steady decrease in the reliability of the systems, and hence an increase in down time. Moreover, soft errors such as bit flips do not prevent execution but generate incorrect results. Checkpoint/restart is by far the most commonly used fault tolerance method for hard errors, and its efficiency and scalability have been improved by recent research. In this talk, we will discuss a holistic methodology for automatically detecting and recovering from soft or hard faults with minimal application intervention. This is demonstrated by ACR: an automatic checkpoint/restart framework that performs application replication and automatically adapts the checkpoint period using online information about the current failure rate. ACR performs an application- and user-oblivious recovery. We empirically test ACR by injecting failures that follow different distributions for five applications and show low overhead when scaled to 131,072 cores. We also analyze the interaction between soft and hard errors and propose three recovery schemes that explore the trade-off between performance and reliability requirements.

Thomas Ropars

Towards efficient replication of HPC applications to deal with crash failures


Ana Gainaru

Challenges in predicting failures on the Blue Waters system.

As the size of supercomputers increases, so does the probability of a single component failure within a time frame. With the growing operation cost of extreme scale supercomputers like Blue Waters, the act of predicting failures to prevent the loss of computation hours becomes cumbersome and presents challenges not encountered on smaller systems. The talk will focus on online failure prediction and on analyzing the Blue Waters system. We show to what extent online failure prediction is a possibility at petascale and what the challenges are in achieving an effective fault prevention mechanism for Blue Waters.


Mohamed Slim Bouguerra

Investigating the probability distribution of false negative failure alerts in HPC systems

 

As large parallel systems increase in size and complexity, failures are inevitable and exhibit complex space and time dynamics. Several key results have demonstrated that recent advances in event log analysis can provide precise failure prediction. The state of the art in failure prediction provides a ratio of correctly identified failures to the number of all predicted failures of over 90% and is able to discover around 50% of all failures in a system. However, a large part of failures is not predicted; these appear as false negative alerts. Therefore, developing efficient fault tolerance strategies requires a good perception and understanding of the properties and characteristics of failure prediction. In order to study and understand the properties and characteristics of false negative alerts, we conduct a statistical analysis to discover the probability distribution of such alerts and their impact on fault tolerance techniques. To this end, we study failure logs from different HPC production systems. We show that: (i) surprisingly, the false negative distribution has the same nature as the failure distribution; (ii) after adding failure prediction, we were able to infer statistical models that describe the inter-arrival time between false negative alerts, so current fault tolerance techniques can be applied to these systems; (iii) the current failure traces contain a high amount of correlation between failure inter-arrival times that can be used to improve the failure prediction mechanism. Another important result is that checkpoint intervals can still be computed from the existing first-order formula when the failure distribution is purely random.
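For reference, the first-order formula alluded to at the end is usually the classical Young/Daly approximation: with checkpoint cost C and mean time between failures M (assuming random, exponentially distributed failures and C much smaller than M), the waste is minimized by

```latex
% First-order waste model and the resulting Young/Daly checkpoint period.
% C: checkpoint cost, M: mean time between failures, T: checkpoint period.
W(T) \approx \frac{C}{T} + \frac{T}{2M},
\qquad
T_{\mathrm{opt}} \approx \sqrt{2\,C\,M}.
```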

 

Rajeev Thakur

Update on MPI and OS/R Activities at Argonne

This talk will give an update on MPI and OS/R activities at Argonne, including a big new project that is about to start in the area of exascale operating systems and runtime.


Andra Hugo

Composing multiple StarPU applications over heterogeneous machines: a supervised approach

Enabling HPC applications to perform efficiently when invoking multiple parallel libraries simultaneously is a great challenge. Even if a single runtime system is used underneath, scheduling tasks or threads coming from different libraries over the same set of hardware resources introduces many issues, such as resource oversubscription, undesirable cache flushes or memory bus contention. We present an extension of StarPU, a runtime system specifically designed for heterogeneous architectures, that allows multiple parallel codes to run concurrently with minimal interference. Such parallel codes run within scheduling contexts that provide confined execution environments which can be used to partition computing resources. Scheduling contexts can be dynamically resized to optimize the allocation of computing resources among concurrently running libraries. We introduce a hypervisor that automatically expands or shrinks contexts using feedback from the runtime system (e.g. resource utilization). We demonstrate the relevance of our approach using benchmarks invoking multiple high performance linear algebra kernels simultaneously on top of heterogeneous multicore machines. We show that our mechanism can dramatically improve the overall application run time (-34%), most notably by reducing the average cache miss ratio (-50%).


Vincent Lanore

Static 2D FFT adaptation through a component model based on Charm++

Adaptation algorithms for HPC applications can improve performance, but their implementation is often costly in terms of development and maintenance. Component models such as Gluon++, which is built on top of Charm++, propose to separate the business code, encapsulated in components, from the application structure, expressed through a component assembly. Adaptation of component-based HPC applications can then be achieved through optimization of the assembly. We have studied such an approach with the adaptation to network topology and data size of a Gluon++ 2D FFT application. In this talk, we present our work so far and comment on preliminary experimental results obtained on the Grid'5000 platform.


Stefan Wild

Loud computations? Noise in iterative solvers

Roundoff errors, discretizations, numerical solutions to systems of equations, and adaptive techniques can destroy the smoothness of processes underlying computations at scale. Such computational noise complicates optimization, sensitivity analysis, and other applications that depend on the simulation output. We present a method for analyzing computational noise and illustrate the insights it enables on a collection of problems based on Krylov solvers.


Guillaume Aupy

On the Combination of Silent Error Detection and Checkpointing 

In this talk, we revisit traditional checkpointing and rollback recovery strategies, with a focus on silent data corruption errors. Contrary to fail-stop failures, such latent errors cannot be detected immediately, and a mechanism to detect them must be provided. We consider two models: (i) errors are detected after some delay following a probability distribution (typically, an Exponential distribution); (ii) errors are detected through some verification mechanism. In both cases, we compute the optimal period in order to minimize the waste, i.e., the fraction of time where nodes do not perform useful computations. In practice, only a fixed number of checkpoints can be kept in memory, and the first model may lead to an irrecoverable failure. In this case, we compute the minimum period required for an acceptable risk. For the second model, there is no risk of irrecoverable failure, owing to the verification mechanism, but the corresponding overhead is included in the waste. Finally, both models are instantiated using realistic scenarios and application/architecture parameters.


Dries Kimpe

 

Triton: Exascale Storage

In this talk, I will present a status update of our work on Triton, a newly designed exascale-era storage system. In addition to Triton-specific information, the presentation will also include a brief discussion of the tools and techniques that help us in designing and implementing Triton. One such tool is the use of discrete event simulation to quickly evaluate algorithms at scale before implementing them in Triton.

Tatiana Martsinkevich

On the feasibility of message logging in hybrid hierarchical FT protocols

Hybrid hierarchical fault tolerance protocols are a promising solution for providing fault tolerance at large scale. A hybrid hierarchical protocol combines coordinated checkpointing and message logging. A lot of work has been done on more efficient implementations of checkpointing protocols, but some questions remain not fully studied with regard to message logging. Message logging requires some portion of the process memory, and logged data is flushed to safe storage together with the checkpoint. There are several possible strategies to take in the case where there is not enough memory to log messages between two checkpoints. Each of these strategies will be discussed.



 

 

 

 


