Remap is the transfer of numerical fields from one computational domain to another. It is said to be conservative when some extensive quantity is preserved during the transfer. For instance, one may want to preserve the material mass while remapping the density field defined on a source mesh to a target mesh. Remap is necessary for:

- ALE simulations, where fields computed on the old mesh must be transferred to the smoothed one;
- multi-physics applications, where scattered input data must be assimilated;
- code-to-code linking, where fields computed by one code are consumed by another.
Conservative schemes have long been of particular interest for ALE simulations. In such simulations, the mesh is allowed to evolve in time along with the material, as depicted in Figure 1. In that case, the mesh is smoothed to prevent cells from distorting or tangling, and all fields computed on the old mesh are remapped to the new one. Remap schemes such as advection-based and intersection-based remaps are often integrated within ALE hydrodynamics codes such as FLAG. They are also useful for other multi-physics applications such as Amanzi (to assimilate scattered input data from observation sources), and for code-to-code linking problems such as in Ingen. However, this tight integration has led to a proliferation of remap schemes that cannot easily be shared between simulation codes. To address this issue, standalone conservative remap software has been developed, such as the closed-source Overlink from Lawrence Livermore National Laboratory, the legacy REMAP3D code from Los Alamos National Laboratory, and the globally conservative open-source DTK code from Oak Ridge National Laboratory.
Portage is currently the only actively developed open-source library that performs locally conservative remap. It provides a lightweight and extensible interface that can easily be customized and integrated into simulation codes. Portage supports remap of fields on general polyhedral meshes up to second-order accuracy, while preserving integral quantities of interest and numerical bounds. It supports remap between particle fields as well, and provides the means to perform mesh remap using the particle remap engine. Portage is designed to scale to thousands of cores on distributed architectures through MPI and OpenMP (the latter via NVIDIA's Thrust library).
Portage supports three types of remap: intersection-based, advection-based, and particle-based.
Each step can be processed in parallel at the granularity of a single point or cell, as illustrated by the sketch below.
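As a purely illustrative example (the functor, field names, and setup below are assumptions, not Portage API), a per-cell remap step can be expressed as a data-parallel transform over target entities using Thrust's OpenMP backend, which Portage wraps for on-node parallelism:

```cpp
#include <thrust/transform.h>
#include <thrust/iterator/counting_iterator.h>
#include <thrust/system/omp/execution_policy.h>
#include <vector>

// Hypothetical per-cell kernel: each target cell is an independent unit
// of work, so no synchronization is needed between cells.
struct InterpolateCell {
  const double* src;  // read-only source field (no side effects)
  double operator()(int cell) const {
    return src[cell];  // placeholder for a real gather of weighted values
  }
};

int main() {
  const int ncells = 100000;
  std::vector<double> src(ncells, 1.0), tgt(ncells);

  // One logical task per cell, executed on Thrust's OpenMP backend.
  thrust::transform(thrust::omp::par,
                    thrust::counting_iterator<int>(0),
                    thrust::counting_iterator<int>(ncells),
                    tgt.begin(), InterpolateCell{src.data()});
}
```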
Portage has a modular design. It relies on extensive C++ templating of all remap steps, allowing client codes to extend, adapt, or replace them with customized ones. In addition, most of its core methods are designed to be free of side effects, which eases their parallelization and individual reuse; a sketch of such a component follows. Portage's components and their interactions are given in Figure 4.
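To give an idea of what "replacing a step with a customized one" could look like, here is a hedged sketch of a swappable search component. The call-operator shape, and the `num_cells()`/`cell_box()` mesh queries it relies on, are illustrative guesses at the general contract, not the exact Portage interface:

```cpp
#include <array>
#include <vector>

// Hypothetical axis-aligned bounding box in 2D.
struct Box { std::array<double, 2> lo, hi; };

inline bool overlap(Box const& a, Box const& b) {
  for (int d = 0; d < 2; ++d)
    if (a.hi[d] < b.lo[d] || b.hi[d] < a.lo[d]) return false;
  return true;
}

// Hypothetical custom search component: a class with this shape could be
// slotted in as a template argument in place of a built-in search. It holds
// only const references and has no side effects, so it can be invoked
// concurrently for distinct target cells.
template <typename SourceMesh, typename TargetMesh>
class BruteForceSearch {
 public:
  BruteForceSearch(SourceMesh const& src, TargetMesh const& tgt)
      : src_(src), tgt_(tgt) {}

  // Return all source cells whose boxes overlap the target cell's box.
  std::vector<int> operator()(int target_cell) const {
    std::vector<int> candidates;
    for (int s = 0; s < src_.num_cells(); ++s)
      if (overlap(src_.cell_box(s), tgt_.cell_box(target_cell)))
        candidates.push_back(s);
    return candidates;
  }

 private:
  SourceMesh const& src_;
  TargetMesh const& tgt_;
};
```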
Portage takes the source and target domains along with field data as inputs, and outputs remapped fields on the target domain. Here a domain can be a mesh or a point cloud. For multi-material fields, it also requires the material volume fractions on the source domain, as depicted in Figure 5, which give the proportion of each material in each cell (a sketch of one possible layout follows this paragraph). The remap workflow consists of six stages:
It is also possible to extrapolate values to empty cells in the target mesh.
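To make the multi-material input concrete, the sketch below shows one plausible flat-array representation of per-cell volume fractions. Both the layout and the type name are illustrative assumptions, not Portage's actual data structures:

```cpp
#include <cmath>
#include <vector>

// Hypothetical storage of material volume fractions:
// vf[cell * num_materials + m] is the fraction of `cell` occupied by
// material `m`. In a pure cell one entry is 1 and the rest are 0; in a
// multi-material cell the entries are the material proportions.
struct VolumeFractions {
  int num_cells = 0;
  int num_materials = 0;
  std::vector<double> vf;  // row-major, one row per cell

  double fraction(int cell, int material) const {
    return vf[cell * num_materials + material];
  }

  // Sanity check: fractions in every cell must sum to one.
  bool consistent(double tol = 1e-12) const {
    for (int c = 0; c < num_cells; ++c) {
      double sum = 0.0;
      for (int m = 0; m < num_materials; ++m) sum += fraction(c, m);
      if (std::fabs(sum - 1.0) > tol) return false;
    }
    return true;
  }
};
```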
A driver is the interface that exposes the remap capabilities to the client simulation code. Writing a driver allows client codes to mix, match, or extend specialized remap components for their particular needs. Portage comes with a few drivers to ease the design of custom ones, and several apps that demonstrate common remap use cases. Each driver is templated on the core components (interface reconstruction, search, weight computation, interpolation) of each remap method (intersection, advection, particle) and on the mesh type. If the simulation code provides a mesh with a set of queries that conforms to the mesh wrapper interface, then no data copying is involved. Portage embeds five built-in drivers:
A basic example of a single-material mesh remap using CoreDriver methods (with default parameters where possible) is given in Listing 1, and a schematic version is sketched after Table 1. The available options for remap components are listed in Table 1. Each of them is templated on the source and target domains as well as on field entity types.
| Step | Component | Description |
| --- | --- | --- |
| SEARCH | SearchSimple | search using bounding boxes (2D) |
| | SearchKDTree | search using a k-d tree |
| | SearchSimplePoints | basic quadratic search for particles |
| | SearchPointsByCells | linear search for particles using a virtual cell |
| WEIGHTS | IntersectRnD | compute exact n-polytope intersections |
| | IntersectSweptFace | compute moments of advected regions |
| | Accumulate | evaluate and sum shape functions/derivatives (options: shape kernels and geometry, basis, estimators) |
| INTERPOLATE | Interpolate_1stOrder | first-order interpolation of mesh values |
| | Interpolate_2ndOrder | second-order interpolation with limiters |
| | Estimate | n-th order approximation of particle values |
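To show how the components of Table 1 compose, here is a schematic single-material, cell-centered remap in the spirit of Listing 1. The wrapper types, header path, and exact template signatures are assumptions pieced together from the component names above, not verbatim Portage API:

```cpp
// Schematic sketch only: MeshWrapper/StateWrapper stand for the client's
// adaptor classes, and the driver calls paraphrase the components of
// Table 1 rather than reproduce the Portage headers exactly.
#include <portage/driver/coredriver.h>  // assumed header location

template <typename MeshWrapper, typename StateWrapper>
void remap_density(MeshWrapper const& src_mesh, StateWrapper& src_state,
                   MeshWrapper const& tgt_mesh, StateWrapper& tgt_state) {
  // 2D cell-centered driver templated on the domain wrappers.
  Portage::CoreDriver<2, Wonton::Entity_kind::CELL,
                      MeshWrapper, StateWrapper>
      driver(src_mesh, src_state, tgt_mesh, tgt_state);

  // Search: candidate source cells for each target cell (k-d tree).
  auto candidates = driver.template search<Portage::SearchKDTree>();

  // Weights: exact polytope intersection moments (IntersectRnD in Table 1).
  auto weights =
      driver.template intersect_meshes<Portage::IntersectRnD>(candidates);

  // Interpolate: first-order, conservative transfer of the field.
  driver.template interpolate_mesh_var<double, Portage::Interpolate_1stOrder>(
      "density", "density", weights);
}
```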
Portage is designed for high-performance computing clusters. It relies on both MPI and OpenMP to leverage the hybrid parallelism exposed by such architectures. Here we present scaling results on a simple multi-material problem in Figure 6. Tests are run on a cluster of 256 dual-socket nodes (Intel Broadwell, 18 cores per socket at 2.1 GHz). We consider a cell-centered three-material field remap on 3D Cartesian grids with a simple T-junction material distribution on the domain. The source and target grids have 40³ and 120³ cells, respectively. To ease memory pressure, we set a single MPI rank per node and 16 threads per rank, explicitly pinned to cores using KMP_AFFINITY=granularity=core,compact.
The total execution time and the remap time are depicted in black and red, respectively. The time spent on material interface reconstruction, which is only performed on multi-material cells, is shown in purple. Here, the workload per rank is affected by the uneven distribution of multi-material cells. Despite this imbalance, reasonable scaling is still achieved.
Portage is tested on Linux with GNU and Intel compilers. It provides over 200 unit and functional tests run as part of a Travis continuous integration setup using the GitHub workflow. In particular, these tests ensure that remap algorithms preserve the integral quantities of interest and respect numerical bounds; a minimal sketch of such a conservation check is given below.
The code coverage in the latest release is 67% as shown in Figure 7.
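For instance, the conservation property reduces to comparing field integrals before and after remap. Below is a minimal, self-contained sketch of such a check with made-up data (not one of Portage's actual tests):

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Discrete conservation: mass (density * volume) summed over the source
// mesh must match the sum over the target mesh to round-off.
double total_mass(const std::vector<double>& density,
                  const std::vector<double>& volume) {
  double mass = 0.0;
  for (std::size_t c = 0; c < density.size(); ++c)
    mass += density[c] * volume[c];
  return mass;
}

int main() {
  // Hypothetical pre/post-remap fields: two source cells merged into one
  // target cell of the same total volume.
  std::vector<double> rho_src = {1.0, 2.0}, vol_src = {0.5, 0.5};
  std::vector<double> rho_tgt = {1.5},      vol_tgt = {1.0};

  double err = std::abs(total_mass(rho_src, vol_src) -
                        total_mass(rho_tgt, vol_tgt));
  assert(err < 1e-12 && "remap is not conservative");
}
```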
Portage is designed for high-performance computing clusters; hence, it primarily targets Linux.
Portage is written in C++14.
All contributors are or were affiliated with Los Alamos National Laboratory.
Persistent identifier: https://github.com/laristra/portage/releases
Publisher: Angela Herring
Version published: 2.2.3
Date published: 27/04/2020
Date published: 01/09/2017
Support: we will provide support through the GitHub “issues” feature as well as by email to the maintainers (firstname.lastname@example.org).
Portage is an extensible and mesh-agnostic library. Its unique design allows it to be reused in a variety of applications such as:
Portage is actively developed, supported, and continuously released. Bugs, feature requests, and any questions related to the software can be reported through the issue tracker on GitHub. User support may also be reached by email at email@example.com. We welcome community contributions through pull requests.
We thank Benjamin Bergen, Brian Jean, Christoph Junghans, Davis Herring, Marc Charest and Mikhail Shashkov from Los Alamos National Laboratory for their help.
This work is supported by the U.S. Department of Energy (DoE) through Los Alamos National Laboratory which is operated by Triad National Security, LLC for the National Nuclear Security Administration of the DoE under contract 89233218CNA000001.
The authors have no competing interests to declare.
Hirt C, Amsden A, Cook JL. An arbitrary Lagrangian-Eulerian computing method for all flow speeds. Journal of Computational Physics. 1974; 14(3): 227–253. DOI: https://doi.org/10.1016/0021-9991(74)90051-5
Barlow A, Maire P-H, Rider W, Rieben R, Shashkov M. Arbitrary Lagrangian-Eulerian Methods for Modeling High-Speed Compressible Multimaterial Flows. Journal of Computational Physics. 2016; 322: 603–665. DOI: https://doi.org/10.1016/j.jcp.2016.07.001
Painter S, Coon E, Atchley A, Berndt M, Garimella R, Moulton D, Svyatskiy D, Wilson C. Integrated surface/subsurface permafrost thermal hydrology: Model formulation and proof-of-concept simulations. Water Resources Research. 2016; 52. DOI: https://doi.org/10.1002/2015WR018427
Grandy J. Conservative Remapping and Region Overlays by Intersecting Arbitrary Polyhedra. Journal of Computational Physics. 1999; 148(2): 433–466. DOI: https://doi.org/10.1006/jcph.1998.6125
Slattery SR, Wilson PPH, Pawlowski RP. The Data Transfer Kit: A Geometric Rendezvous-Based Tool for Multiphysics Data Transfer. International Conference on Mathematics and Computational Methods Applied to Nuclear Science & Engineering. 2013; 5–9.
Zalesak S. Fully multidimensional flux-corrected transport algorithms for fluids. Journal of Computational Physics. 1979; 31(3): 335–362. DOI: https://doi.org/10.1016/0021-9991(79)90051-2
Barth T, Jespersen D. The design and application of upwind schemes on unstructured meshes. 27th Aerospace Sciences Meeting; 1989. DOI: https://doi.org/10.2514/6.1989-366