
Main Topics

Schedule

Each entry lists: time, speaker, affiliation, type of presentation, title (tentative), and download link.

Sunday June 8th

Dinner Before the Workshop
7:30 PM, Mercure Hotel
Only people registered for the dinner (included)

Workshop Day 1

Monday June 9th

TITLES ARE TEMPORARY (except if in bold font)

Registration
08:00, at Inria Sophia Antipolis

Welcome and Introduction (Amphitheatre)
08:30  Franck Cappello + Marc Snir + Yves Robert + Bill Kramer + Jesus Labarta (INRIA & UIUC & ANL & BSC), Background: Welcome, workshop objectives and organization

Plenary (Amphitheatre)
Chair: Franck Cappello
09:00  Jesus Labarta (BSC), Background: Presentation of BSC activities

Mini Workshop: Math app. (Room 1)
Chair: Paul Hovland
09:30  Bill Gropp (UIUC)
10:00  Jed Brown (ANL)
10:30  Break
11:00  Ian Masliah (Inria)
11:30  Luke Olson (UIUC)
12:00  Lunch

Chair: Bill Gropp
13:30  Vincent Baudoui (Inria)
14:00  Paul Hovland (ANL)
14:30  Stephane Lanteri (Inria): C2S@Exa: a multi-disciplinary initiative for high performance computing in computational sciences

Mini Workshop: I/O (Room 1)
Chair: Rob Ross
15:00  Wolfgang Frings (JSC)
15:30  Break
16:00  Jonathan Jenkins (ANL)
16:30  Matthieu Dorier (Inria): Omnisc'IO: A Grammar-Based Approach to Spatial and Temporal I/O Patterns Prediction
17:00  Adjourn
18:30  Bus for dinner (dinner included)

Mini Workshop: Runtime (Room 2)
Chair: Sanjay Kale
09:30  Pavan Balaji (ANL)
10:00  Augustin Degomme (Inria): Status Report on the Simulation of MPI Applications with SMPI/SimGrid
10:30  Break
11:00  Ronak Buch (UIUC)
11:30  Victor Lopez (BSC)
12:00  Lunch

Chair: Jesus Labarta
13:30  Xin Zhao (ANL)
14:00  Brice Videau (Inria)
14:30  Pieter Bellens (BSC)
15:00  Martin Quinson (Inria)
15:30  Break

Chair: Martin Quinson
16:00  Florentino Sainz (BSC) and Francois Tessier (Inria)
16:30  Jean-François Mehaud (Inria)
17:00  Adjourn
18:30  Bus for dinner (dinner included)

Workshop Day 2

Tuesday June 10th

Formal opening (Amphitheatre)
Chair: Bill Kramer
08:30  Marc Snir + Franck Cappello (INRIA & UIUC & ANL), Background
08:40  TBD (Inria), Background: Inria updates and vision of the collaboration (TBD)
08:50  Marc Snir (ANL), Background: ANL updates and vision of the collaboration (TBD)

Plenary (Amphitheatre)
09:00  Wolfgang Frings (JSC), Background: JSC activities in HPC (TBD)

Mini Workshop: I/O (Room 1)
Chair: Gabriel Antoniu
09:30  Rob Ross (ANL)
10:00  Guillaume Aupy (Inria): Scheduling the I/O of HPC applications under congestion
10:30  Break
11:00  Lokman Rahmani (Inria)
11:30  Hongyang Sun (Inria)
12:00  Lunch

Mini Workshop: Runtime (Room 2)
Chair: Jean-François Mehaud
09:30  Sanjay Kale (UIUC)
10:00  Francois Tessier (Inria) and Florentino Sainz (BSC): DEEP Collective offload
10:30  Break
11:00  Arnaud Legrand (Inria): Modeling and Simulation of a Dynamic Task-Based Runtime System for Heterogeneous Multi-Core Architectures
11:30  Grigori Fursin (Inria)
12:00  Lunch

Formal encouragements (Amphitheatre)
Chair: Franck Cappello
13:45  Ed Seidel (UIUC), Background: NCSA updates and vision of the collaboration

Plenary (Amphitheatre)
Chair: Wolfgang Frings
14:00  Yves Robert (Inria)
14:30  Marc Snir (ANL)
15:00  Break

Mini Workshop: Resilience (Room 1)
Chair: Franck Cappello
15:30  Luc Jaulmes (BSC)
16:00  Ana Gainaru (UIUC)
16:30  Tatiana Martsinkevich (Inria)
17:00  Adjourn

Mini Workshop: Cloud & Cyber-infrastructure (Room 2)
Chair: Kate Keahey
15:30  Justin Wozniak (ANL)
16:00  Shaowen Wang (UIUC): CyberGIS @ Scale
16:30  Christine Morin (Inria)
17:00  Adjourn

 

18:30  Bus for dinner (dinner included)

Workshop Day 3

Wednesday June 11th

Plenary (Amphitheatre)
Chair: Jesus Labarta
08:30  Bill Kramer (NCSA)

Mini Workshop: Resilience (Room 1)
Chair: Yves Robert
09:00  Leonardo Bautista Gomez (ANL)
09:30  Slim Bouguerra (Inria)
10:00  Break
10:30  Sheng Di (ANL)
11:00  Franck Cappello (ANL): Five open questions on Resilience for the Exascale era

Plenary (Amphitheatre)
11:30  Closing
12:00  Lunch (included)

Mini Workshop: Cloud & Cyber-infrastructure (Room 2)
Chair: Christine Morin
09:00  Kate Keahey (ANL)
09:30  Radu Tudoran (Inria)
10:00  Break
10:30  Sri Hari Krishna Narayanan (ANL)
11:00  Timothy Armstrong (ANL)


...

Guillaume Aupy

Scheduling the I/O of HPC applications under congestion

Abstract: A significant percentage of the computing capacity of large-scale platforms is wasted due to interference incurred by multiple applications that access a shared parallel file system concurrently. One solution for handling I/O bursts in large-scale HPC systems is to absorb them at an intermediate storage layer consisting of burst buffers. However, our analysis of Argonne's Mira system shows that burst buffers cannot prevent congestion at all times. As a consequence, I/O performance is dramatically degraded, showing in some cases a decrease in I/O throughput of 67%. In this paper, we analyze the effects of interference on application I/O bandwidth and propose several scheduling techniques to mitigate congestion. We show through extensive experiments that our global I/O scheduler is able to reduce the effects of congestion, even on systems where burst buffers are used, and can increase the overall system throughput by up to 56%. We also show that it outperforms current Mira I/O schedulers.
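As a rough illustration of why global coordination of I/O bursts can help (this is a toy model, not the paper's scheduler; the bandwidth, burst size, and interference penalty below are invented numbers, not Mira measurements):

```python
# Toy model: two applications share a file system with fixed bandwidth B.
# All constants are hypothetical, chosen only to illustrate the effect.
B = 100.0       # shared file-system bandwidth, GB/s (assumed)
burst = 200.0   # size of each application's I/O burst, GB (assumed)
penalty = 0.25  # assumed throughput loss when the two flushes interfere

# Uncoordinated access: both applications flush concurrently, each getting
# half the bandwidth, further reduced by interference.
concurrent_time = burst / (B / 2 * (1 - penalty))  # both finish at this time

# Globally scheduled access: the bursts are serialized, each at full
# bandwidth; the second application waits for the first to finish.
scheduled_times = [burst / B, 2 * burst / B]

print(f"concurrent: both apps finish at {concurrent_time:.2f} s")
print(f"scheduled:  apps finish at {scheduled_times[0]:.2f} s "
      f"and {scheduled_times[1]:.2f} s")
```

In this toy setting even the later-finishing application ends sooner under serialization (4.00 s versus 5.33 s), which is the intuition behind scheduling I/O globally rather than letting bursts collide.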

 

Florentino Sainz

DEEP Collective offload

Abstract: We present a new extension of the OmpSs programming model that allows users to dynamically offload C/C++ or Fortran code from one or many nodes to a group of remote nodes. Communication between remote nodes executing offloaded code is possible through MPI. The extension aims to improve the programmability of Exascale and present-day supercomputers, which combine different types of processors and interconnection networks that have to work together to obtain the best performance. A good example of such architectures is the DEEP project, which has two separate clusters (CPUs and Xeon Phis). With our technology, which works on any architecture that fully supports MPI, users can easily offload work from the CPU cluster to the accelerator cluster without having to fall back to the CPU cluster to perform MPI communications.
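The offload pattern described above can be mimicked in miniature (this is only an analogy: plain Python multiprocessing stands in for OmpSs and MPI, and every name below is invented for illustration). A "host" splits a kernel's work across a pool of "remote" workers and gathers the partial results:

```python
# Miniature analogy of collective offload (illustration only; does not use
# OmpSs or MPI).
from concurrent.futures import ProcessPoolExecutor

def offloaded_kernel(chunk):
    # Work that would run on the remote (accelerator) cluster.
    return sum(x * x for x in chunk)

def main():
    data = list(range(1000))
    # The "host" splits the work across 4 "remote nodes" ...
    chunks = [data[i::4] for i in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partials = list(pool.map(offloaded_kernel, chunks))
    # ... and gathers the partial results, much as an MPI reduction would.
    print(sum(partials))  # prints 332833500

if __name__ == "__main__":
    main()
```

The interesting part of the DEEP design is precisely what this analogy cannot show: the offloaded tasks can talk to each other over MPI directly on the remote cluster, rather than routing communication back through the host.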