Start time:
09:00
End time:
09:20
Location:
Amphithéâtre Pierre Gilles de Gennes - APC
City:
Paris
Producer:
-

Duration:
19:42
Type:
video/mp4
Size:
183.55 MB
Format:
mp4
Resolution:
1280x720
Codec:
-

An EIM-based compression-extrapolation tool for efficient treatment of homogenized cross-section data

Nuclear reactor simulators implementing the widespread two-step deterministic calculation scheme tend to produce a large volume of intermediate data at the interface of their two subcodes – up to dozens or even hundreds of gigabytes – which can be so cumbersome that it hinders the overall performance of the code. The vast majority of this data consists of "few-group homogenized cross-sections", nuclear quantities stored in the form of tabulated multivariate functions which can be precomputed to a large extent.

It has been noticed in the past that few-group homogenized cross-sections are highly redundant - that is, they exhibit strong correlations, which paves the way for the use of compression techniques. We leverage this idea here by introducing a new coupled compression/surrogate-modeling tool based on the Empirical Interpolation Method (EIM), an algorithm originally developed in the framework of partial differential equations. This EIM compression method is based on the infinity norm and proceeds in a greedy manner, iteratively approximating the data and incorporating the chunks of information which cause the largest error. In the process, it generates a vector basis and a set of interpolation points, which together provide an elementary surrogate model that can approximate future data from little information. The algorithm is also well suited to parallelization and out-of-core computation (processing of data too large for the computer's RAM), and it is very easy to understand and implement.
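As a concrete illustration, here is a minimal Python/NumPy sketch of such a greedy, infinity-norm EIM loop applied to a data matrix whose columns are cross-section vectors. The function name eim_greedy and its arguments are illustrative assumptions, not the actual tool presented in the talk.

import numpy as np

def eim_greedy(snapshots, max_rank, tol=1e-8):
    """Greedy EIM on the columns of `snapshots` (an n x m data matrix).

    Returns a basis Q (n x r) and interpolation indices p (length r):
    any column u is then approximated by solving the small system
    Q[p] @ c = u[p] and reconstructing u ~= Q @ c.
    """
    n, m = snapshots.shape
    Q = np.empty((n, 0))
    p = []
    for _ in range(max_rank):
        if p:
            # Interpolate every snapshot on the current basis/points pair.
            coeffs = np.linalg.solve(Q[p, :], snapshots[p, :])
            residual = snapshots - Q @ coeffs
        else:
            residual = snapshots.copy()
        errors = np.abs(residual).max(axis=0)   # infinity norm per snapshot
        j = int(errors.argmax())                # worst-approximated snapshot
        if errors[j] < tol:                     # data already well captured
            break
        r = residual[:, j]
        i = int(np.abs(r).argmax())             # new interpolation point
        Q = np.column_stack([Q, r / r[i]])      # normalized new basis vector
        p.append(i)
    return Q, np.array(p)

By construction the matrix Q[p, :] is lower triangular with unit diagonal (in the order the points were selected), so the small interpolation systems are always solvable.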

This methodology enables us both to compress cross-sections efficiently and to spare a large fraction of the required physics calculations. We investigate its performance on heavy, realistic nuclear data replicating a well-known benchmark. Compression loss, memory savings and speed are analyzed both from a data-centric point of view, with applications in neutronics in mind, and by comparison with an existing and widely used method – randomized truncated SVD – to assess mathematical efficiency. We discuss the usage of our surrogate model and its sensitivity to the choice of the training set. The method is shown to be competitive in terms of accuracy and speed, to provide substantial memory savings, and to spare a large amount of physics-code computation.
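To make the "approximate future data from little information" usage concrete: under the same assumptions as the sketch above, a new cross-section vector can be reconstructed from only len(p) computed entries instead of all n. Here compute_entries is a hypothetical stand-in for the physics code evaluated at the interpolation points alone.

# Hypothetical usage of the (Q, p) pair as a surrogate model.
new_entries = compute_entries(p)           # r physics evaluations instead of n
c = np.linalg.solve(Q[p, :], new_entries)  # small r x r linear system
approx = Q @ c                             # full reconstructed cross-section vector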

Olivier Truffinet
