Main Topics

Schedule

Speakers

Types of presentation

Titles (tentative)


Workshop Day 1 (Auditorium)

Monday Nov. 22nd

 


 

Welcome and Introduction

08:30

Franck Cappello, INRIA & UIUC, France, and Thom Dunning, NCSA, USA

Background

Workshop details

Post Petascale and Exascale Systems

08:45

Mitsuhisa Sato, U. Tsukuba, Japan

Trends in HPC

Next Gen and Exascale initiative in Japan

 

09:15

Marc Snir, UIUC, USA

Trends in HPC

Toward Exascale

 

09:45

Wen-Mei Hwu, UIUC, USA

Trends in HPC


Extreme-Scale Heterogeneous Computing

 

10:15

Arun Rodrigues, Sandia, USA

Trends in HPC

The UHPC X-Caliber Project

 

10:45

Break

 

 

Post Petascale Applications and System Software

11:15

Pete Beckman, ANL, USA

Trends in HPC

Exascale Software Center

 

11:45

Michael Norman, SDSC, USA

Trends in HPC / ENZO

Extreme Scale AMR for Hydrodynamic Cosmology

 

12:15

Eric Bohm, UIUC, USA

Trends in HPC

NAMD

 

12:30

Lunch


BLUE WATERS

14:00

Bill Kramer, NCSA, USA

Overview

Update on Blue Waters

Collaborations on System Software

14:30

Ana Gainaru, NCSA, USA

Early Results

A Framework for System Event Analysis

 

15:00

Thomas Ropars, INRIA, France

Results

Latest Progress on Rollback-Recovery Protocols for Send-Deterministic Applications

 

15:30

Esteban Meneses, UIUC, USA

Early Results

Clustering Message Passing Applications to Enhance Fault Tolerance Protocols

 

16:00

Break

 

 

Collaborations on System Software

16:30

Leonardo Bautista, Titech, Japan

Results/International collaboration with Japan

Transparent low-overhead checkpoint for GPU-accelerated clusters

 

17:00

Gabriel Antoniu, INRIA/IRISA, France

Results

Concurrency-optimized I/O for visualizing HPC simulations: An Approach Using Dedicated I/O cores

 

17:30

Mathias Jacquelin, INRIA/ENS Lyon, France

Results

Comparing archival policies for Blue Waters

 

18:00

Olivier Richard, Joseph Emeras, INRIA/U. Grenoble, France

Early Results

Studying the RJMS, applications and File System triptych: a first step toward an experimental approach

 

18:30

Torsten Hoefler, NCSA, USA

Potential collaboration

TBA


Workshop Day 2 (Auditorium)

Tuesday Nov. 23rd


Collaborations on System Software

08:30

Frédéric Vivien, INRIA/ENS Lyon, France

Potential collaboration

On Scheduling Checkpoints of Exascale Applications

Collaborations on Programming models

09:00

Thierry Gautier, INRIA, France

Early Results

TBA

 

09:30

Jean-François Méhaut, INRIA/U. Grenoble, France

Early Results

Charm++ on NUMA Platforms: the impact of SMP Optimizations and a NUMA-aware Load Balancing

 

10:00

Emmanuel Jeannot, INRIA/U. Bordeaux, France

Early Results

TBA

 

10:30

Break

 

 

 

11:00

Raymond Namyst, INRIA/U. Bordeaux, France

Early Results

TBA

 

11:30

Brian Amedro, INRIA/U. Nice, France

Potential collaboration

Improving asynchrony in an Active Object model

 

12:00

Christian Perez, INRIA/ENS Lyon, France

Early Results

High Performance Components with Charm++ and OpenAtom

 

12:30

Lunch

 

 

Collaborations on Numerical Algorithms and Libraries

14:00

Luke Olson, Bill Gropp, UIUC, USA

Early Results

On the status of algebraic (multigrid) preconditioners

 

14:30

Simplice Donfack, INRIA/U. Paris Sud, France

Early Results

TBA

 

15:00

Désiré Nuentsa, INRIA/IRISA, France

Early Results

Parallel Implementation of deflated GMRES in the PETSc package

 

15:30

Sébastien Fourestier, INRIA/U. Bordeaux, France

Early Results

TBA

 

16:00

Break

 

 

 

16:30

Marc Baboulin, INRIA/U. Paris Sud, France

Early Results

Accelerating linear algebra computations with hybrid GPU-multicore systems

 

17:00

Daisuke Takahashi, U. Tsukuba, Japan

Results/International collaboration with Japan

Optimization of a Parallel 3-D FFT with 2-D Decomposition

 

17:30

Alex Yee, UIUC, USA

Early Results

A Single-Transpose implementation of the Distributed out-of-order 3D-FFT

 

17:50

Jeongnim Kim, NCSA, USA

Early Results

Toward petaflop 3D FFT on clusters of SMPs


Workshop Day 3 (Auditorium)

Wednesday Nov. 24th


Break out sessions introduction

8:30

Cappello, Snir

Overview

Objectives of the break-out sessions, expected results
Collaboration mechanisms (internships, visits, etc.)

Topics

 

Participants

Other NCSA participants

 

Break out session 1

9:00-10:30

 

 

 

Routing, topology mapping, scheduling, perf. modeling

 

Snir, Hoefler, Vivien, Gautier, Jeannot, Kale

 

Room

3D-FFT

 

Cappello, Takahashi, Yee, Jeongnim

 

Room

Libraries

 

Gropp, Baboulin, Désiré, Simplice, Sébastien Fourestier

 

Room

10:15

Break

 

 

Break out session 2

10:30-12:00

 

 

 

Resilience

 

Kramer, Cappello, Gainaru, Ropars, Meneses, Bautista

 

Room

Programming models / GPU

 

Kale, Méhaut, Namyst, Hwu, Amedro, Perez, Hoefler, Jeannot

 

Room

I/O

 

Snir, Vivien, Jacquelin, Antoniu, Richard

 

 

Break out session report

12:00

Speakers: Snir, Cappello, Gropp, Kramer, Kale

 

Auditorium

Closing

12:30

Cappello, Snir

 

Auditorium

 

13:00

Lunch

 

 

...

Reaching the goal of exascale will require massive improvements in hardware performance, power efficiency, software scalability, and usability. To address these issues, Sandia National Labs is leading the UHPC X-Caliber Project, with the goal of producing a prototype single cabinet capable of a sustained petaflop by 2018. Our approach focuses on solving the data movement problem with advanced memory and silicon photonics, enabled by advances in fabrication and 3D packaging, and unified by a new execution model. I present the current X-Caliber architecture, our design space, and the co-design philosophy that will guide the project.


Michael Norman, SDSC

Extreme Scale AMR for Hydrodynamic Cosmology

Cosmological simulations present well-known difficulties in scaling to large core counts because of the large spatial inhomogeneities and the vast range of length scales induced by gravitational instability. These difficulties are compounded when baryonic physics is included, which introduces its own multiscale challenges. In this talk I review efforts to scale the Enzo adaptive mesh refinement (AMR) hydrodynamic cosmology code to O(100,000) cores, and I also discuss Cello, an extremely scalable AMR infrastructure under development at UCSD for the next generation of computer architectures, which will underpin petascale Enzo.


Thomas Ropars, INRIA

...