Invited Speakers


Cleve Ashcraft (LSTC, Livermore)

An industrial perspective on new solvers


LSTC creates multiphysics simulation software that is used in diverse applications. We have a customer base with a problem set that gives rise to certain large sparse linear systems, which are solved on diverse hardware/software environments. Direct methods for sparse linear systems have seen a great deal of evolution: from dense to banded methods, to frontal methods with profile orderings, to sparse columns with minimum degree, to dense submatrices with multifrontal, and now to a new type of sparsity, low-rank methods, including BLR, HODLR (special cases of H-matrices), and HSS/H2, in increasing order of complexity.

Which methods on this spectrum of direct methods, particularly the new ones, would be suitable for our problem set? We will present some results for direct and iterative methods using multifrontal, BLR and HSS, and discuss the role of low-rank methods in solving eigensystems using block-shifted Lanczos and AMLS.
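The low-rank formats mentioned above (BLR, HODLR, HSS) all rest on the observation that certain off-diagonal blocks of the factors are numerically low-rank. As a minimal, hedged illustration of that basic building block (not LSTC's implementation), the following sketch compresses an off-diagonal block of a smooth kernel matrix between two well-separated point clusters with a truncated SVD:

```python
import numpy as np

# Hypothetical example: a smooth kernel evaluated between two
# well-separated point clusters; such off-diagonal blocks are
# numerically low-rank, which BLR/HODLR/HSS formats exploit.
x = np.linspace(0.0, 1.0, 200)                 # source cluster
y = np.linspace(3.0, 4.0, 200)                 # target cluster (well separated)
A = 1.0 / np.abs(x[:, None] - y[None, :])      # 200 x 200 off-diagonal block

# Truncated SVD: keep singular values above a relative tolerance.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-8
k = int(np.sum(s > tol * s[0]))                # numerical rank at tolerance tol

# Low-rank factors (200 x k) and (k x 200) replace the dense block,
# reducing storage from n^2 to 2*n*k.
A_lr = (U[:, :k] * s[:k]) @ Vt[:k, :]
rel_err = np.linalg.norm(A - A_lr) / np.linalg.norm(A)

print(k, rel_err)
```

The numerical rank k is far below the block dimension, which is exactly the compression that distinguishes these formats from classical dense or multifrontal factors.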


Costas Bekas (IBM, Switzerland)

HPC Frontiers in Cognitive Computing

We are experiencing an unprecedented increase in the volume of data. Next to structured data, which originates from sensors, experiments and simulations, unstructured data in the form of text and images poses great challenges for computing systems. Cognitive computing aims at extracting knowledge from all kinds of data sources and applies powerful big data and analytics algorithms to help decision making and to create value. In this context, large scale machine learning and pattern recognition hold a central role. Advances in algorithms as well as in computing architectures are much needed in order to achieve the full potential of cognitive computing. We will discuss changing computing paradigms and algorithmic frontiers, and we will provide practical examples from recent state-of-the-art cognitive solutions in key areas such as novel materials design.


Thierry Deutsch (CEA, France)

Linear scaling method with Daubechies Wavelets for Electronic Structure Calculation

joint work with L. Genovese (CEA), S. Mohr (CASE Group, BSC), L. Ratcliff (ALCF, ANL), S. Goedecker (Basel University)

Since 2008, the BigDFT project consortium has developed an ab initio Density Functional Theory code based on Daubechies wavelets. These form a compact-support multiresolution basis, optimal for expanding localised information, and are one of the few examples of systematic real-space basis sets. In recent articles, we presented the linear scaling version of the BigDFT code [1], in which a minimal set of localized support functions is optimized in situ. Our linear scaling approach is able to generate support functions for systems under various boundary conditions, such as surface geometries or systems with a net charge.
The real-space description provided in this way allows one to build an efficient, clean method to treat systems in complex environments, and it is based on an algorithm which is universally applicable [2], requiring only a moderate amount of computing resources.

In this talk, we will present the linear scaling approach based on massively parallel algorithms and how the flexibility of this approach is helpful in providing a basis set that is optimally tuned to the chemical environment surrounding each atom.
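The suitability of a compact-support multiresolution basis for localized information can be illustrated with a minimal, hedged sketch. It uses the Haar wavelet (the simplest Daubechies wavelet, rather than the higher-order ones BigDFT employs) and a hypothetical localized "orbital-like" signal, and shows that almost all wavelet coefficients are negligible:

```python
import numpy as np

def haar_level(v):
    """One level of the Haar (db1) transform: coarse averages and details."""
    a = (v[0::2] + v[1::2]) / np.sqrt(2.0)   # coarse (scaling) coefficients
    d = (v[0::2] - v[1::2]) / np.sqrt(2.0)   # detail (wavelet) coefficients
    return a, d

# Hypothetical localized signal on a real-space grid: a narrow Gaussian bump.
n = 1024
x = np.linspace(-8.0, 8.0, n)
f = np.exp(-(x - 1.0) ** 2 / 0.1)

# Multilevel transform: recurse on the coarse part, collect the details.
coeffs = []
a = f.copy()
for _ in range(6):
    a, d = haar_level(a)
    coeffs.append(d)
coeffs.append(a)
c = np.concatenate(coeffs)

# Count coefficients that are non-negligible: localized information
# stays sparse in the multiresolution basis.
significant = int(np.sum(np.abs(c) > 1e-6 * np.abs(c).max()))
print(significant, c.size)
```

Only the coefficients whose basis functions overlap the bump survive, which is the property that lets a minimal set of in-situ-optimized, localized support functions scale linearly with system size.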

[1] J. Chem. Phys. 140, 204110 (2014)
[2] Phys. Chem. Chem. Phys., 2015, 17, 31360-31370


Jack Dongarra (University of Tennessee, Oak Ridge National Laboratory, and University of Manchester)


Current Trends in High-Performance Computing and Challenges for the Future


In this talk we examine how high performance computing has changed over the last ten years and look toward the future in terms of trends. These changes have had, and will continue to have, a major impact on our numerical scientific software. A new generation of software libraries and algorithms is needed for the effective and reliable use of (wide area) dynamic, distributed and parallel environments. Some of the software and algorithm challenges have already been encountered, such as management of communication and memory hierarchies through a combination of compile-time and run-time techniques, but the increased scale of computation, depth of memory hierarchies, range of latencies, and increased run-time environment variability will make these problems much harder.

