We present inviscid and viscous simulations of a Kelvin-Helmholtz instability using a second-order accurate monotonized-central finite volume (FV) method and a fourth-order accurate discontinuous Galerkin (DG) method. The initial condition is given by [1]:
with $$B=\tanh \left( 15 y + 7.5 \right) - \tanh(15y-7.5).$$
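For concreteness, here is a minimal Julia sketch of this initial state in primitive variables, assuming the Kelvin-Helmholtz setup of [1] (the amplitudes below are our reading of that setup and should be checked against the reference):

```julia
# Kelvin-Helmholtz initial condition in primitive variables (rho, v1, v2, p),
# assuming the setup of [1]; B(y) is the smoothed shear-layer profile above.
B(y) = tanh(15y + 7.5) - tanh(15y - 7.5)

function initial_condition_khi(x, y)
    rho = 0.5 + 0.75 * B(y)      # denser strip inside the shear layer
    v1  = 0.5 * (B(y) - 1)       # shear velocity in x
    v2  = 0.1 * sin(2π * x)      # small perturbation seeding the instability
    p   = 1.0                    # uniform pressure
    return (rho, v1, v2, p)
end
```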
We first present the FV results at end time $t=3.7$, which are computed using a monotonized-central second-order discretization of the Euler equations of gas dynamics on uniform grids.
The next results use a fourth-order DG discretization of the Navier-Stokes equations on uniform grids with $Re=320{,}000$ at end time $t=3.7$. The highest resolution (4096² DOFs) is a direct numerical simulation (DNS) of the problem, where all scales are resolved.
It is remarkable that the numerical dissipation of the second-order FV scheme causes the inviscid simulation with 2048² DOFs to look very similar to the viscous DNS solution at $Re=320{,}000$.
[1] A. M. Rueda-Ramírez, G. J. Gassner (2021). A Subcell Finite Volume Positivity-Preserving Limiter for DGSEM Discretizations of the Euler Equations. arXiv:2102.06017. https://arxiv.org/pdf/2102.06017.pdf
We are interested in the accuracy of the finite volume scheme on an LGL subcell grid induced by an LGL-DGSEM discretization. For comparison, we look at a Kelvin-Helmholtz instability test problem, simulated with 4² LGL nodes per element and 32² elements, i.e. 128² spatial degrees of freedom. The high-order DGSEM uses the entropy-conserving split form with the flux of Chandrashekar in the volume and the HLLC flux at the element interfaces. Positivity is enforced by blending in the subcell FV scheme, with exactly the amount needed, wherever positivity would otherwise be violated. All FV discretizations use the HLLC flux at the element interfaces as well as at the subcell interfaces.
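Schematically, the blended spatial operator can be written as a convex combination of the DG and subcell FV operators (a sketch of the blending idea; the element-local blending factor is computed by the positivity check described above):

$$\mathbf{R}_{\mathrm{blend}} = (1-\alpha)\,\mathbf{R}_{\mathrm{DG}} + \alpha\,\mathbf{R}_{\mathrm{FV}}, \qquad \alpha \in [0,1],$$

where $\alpha = 0$ recovers the pure high-order DGSEM operator and $\alpha$ is increased only as much as needed to keep density and pressure positive.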
The first result shows the high-order DGSEM solution, which is formally 4th-order accurate for smooth problems:
The next result uses the subcell finite volume approximation on the LGL subcell grid directly, i.e. without spatial reconstruction (piecewise constant approximation). The result is very dissipative:
The last results show the improvement when using a piecewise linear reconstruction with a monotonized-central slope limiter on the LGL subcell grid. The reconstruction is inner-element only, as it does not use element neighbor information. Thus, no reconstruction is used at the very first and very last LGL subcell of each element. Still, the accuracy is recovered nicely by this simple modification of the subcell FV scheme:
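As an illustration, here is a minimal one-dimensional sketch of such a limited reconstruction (generic Julia code, not taken from the actual solver; the handling of the non-uniform LGL subcell widths is simplified, and the boundary subcells fall back to a zero slope as described above):

```julia
# Piecewise linear reconstruction with the monotonized-central (MC) slope limiter
# on a (possibly non-uniform) 1D subcell grid. `u` holds the subcell means,
# `dx` the subcell widths.
minmod(a, b, c) = sign(a) == sign(b) == sign(c) ? sign(a) * min(abs(a), abs(b), abs(c)) : zero(a)

function mc_slopes(u::AbstractVector, dx::AbstractVector)
    n = length(u)
    s = zeros(eltype(u), n)                 # zero slope = piecewise constant fallback
    for i in 2:n-1
        left    = (u[i]   - u[i-1]) / (0.5 * (dx[i-1] + dx[i]))
        right   = (u[i+1] - u[i])   / (0.5 * (dx[i]   + dx[i+1]))
        central = (u[i+1] - u[i-1]) / (0.5 * dx[i-1] + dx[i] + 0.5 * dx[i+1])
        s[i] = minmod(2 * left, central, 2 * right)
    end
    return s
end

# Reconstructed interface values handed to the Riemann solver (e.g. HLLC):
#   u_minus = u[i] - 0.5 * dx[i] * s[i]   (left face of subcell i)
#   u_plus  = u[i] + 0.5 * dx[i] * s[i]   (right face of subcell i)
```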
We compare the runtime performance of four different magnetohydrodynamics codes on a single compute node of the in-house high performance cluster ODIN. A compute node on ODIN consists of 16 cores. We run the ‘3D Alfvén wave’ test case up to a fixed simulation time and measure the elapsed wall clock time of each code, excluding initialization time and input/output operations. For each run the number of cores is successively increased. This gives us insight into the scaling behavior (speedup gained with an increasing number of cores) on a single compute node. Furthermore, we plot the performance index (PID) over the number of cores, which is a measure of throughput, i.e. how many millions of data points (degrees of freedom, DOF) per second are processed by each code.
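For reference, a small Julia sketch of how these two metrics can be computed from the measured wall-clock times (the function names and the exact PID normalization are our assumptions for illustration; wall-clock times exclude initialization and I/O as described above):

```julia
# Strong-scaling speedup on a single node, relative to the 1-core run.
speedup(t_1core, t_ncores) = t_1core / t_ncores

# Performance index as throughput: millions of DOF updates per second,
# counting every DOF in every time step of the run.
pid(ndofs, ntimesteps, walltime) = ndofs * ntimesteps / walltime / 1.0e6
```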
The four contestants are:
Flash [1], with Paramesh 4.0 and the five-wave Bouchut finite volume solver, written in Fortran
Fluxo [2], an MHD Discontinuous Galerkin Spectral Element code written in Fortran
Trixi [3], an MHD Discontinuous Galerkin Spectral Element code written in Julia
Nemo, an in-house prototype code providing a 2nd-order monotonized-central MHD-FV scheme (MCFV) and a hybrid MHD Discontinuous Galerkin Spectral Element / Finite Volume scheme (DGFV), written in Fortran
Trixi uses a thread-based parallelization approach, while Flash, Fluxo, and Nemo use the standard MPI approach. Furthermore, Nemo also provides OpenMP-based parallelization.
Simulation of the Shock-Bubble Interaction Problem with a hybrid entropy-stable DG/FV method and Adaptive Mesh Refinement (AMR) using Trixi (https://github.com/trixi-framework/Tr…).
The simulation uses a multicomponent model and computes the spatial operator as a blend of a first-order subcell FV method and a fourth-order DG scheme on a square domain.
Simulation of the Orszag-Tang vortex test with a hybrid entropy-stable DG/FV method and Adaptive Mesh Refinement (AMR) using FLUXO (https://github.com/project-fluxo/fluxo).
The simulation uses the GLM-MHD model to control div(B) errors and computes the spatial operator as a blend of a first-order subcell FV method and a fourth-order DG scheme.
The initial grid has 16×16 elements (64² DOFs), and the maximum resolution is achieved with three refinement levels (equivalent to 512² DOFs).
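As a reminder of how the divergence cleaning works: in its simplest (Dedner-type) form, the induction equation is augmented with the gradient of a scalar field $\psi$ that propagates and damps div(B) errors. This is shown schematically below; the GLM-MHD system used in FLUXO may differ in details (e.g. additional nonconservative terms), which are omitted here:

$$\frac{\partial \vec{B}}{\partial t} + \vec{\nabla}\cdot\left(\vec{v}\,\vec{B}^\intercal - \vec{B}\,\vec{v}^\intercal\right) + \vec{\nabla}\psi = 0, \qquad \frac{\partial \psi}{\partial t} + c_h^2\,\vec{\nabla}\cdot\vec{B} = -\frac{c_h^2}{c_p^2}\,\psi,$$

so divergence errors are transported out of the domain with speed $c_h$ and damped on a time scale controlled by $c_p$.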
We present a 2D Riemann problem (configuration 3) computed with a hybrid Discontinuous Galerkin / Finite Volume scheme on a uniform grid with 1024² degrees of freedom.
In this hybrid approach, we blend 8th-order, 4th-order, and 2nd-order DG operators with a 1st-order FV scheme.
One of the challenges when simulating astrophysical flows with self-gravity is to compute the gravitational forces. In contrast to the hyperbolic compressible Euler equations, the gravity field is described by an elliptic Poisson equation. In [1], we present a purely hyperbolic approach by reformulating the elliptic problem into a hyperbolic diffusion problem, which is solved in pseudotime using the same explicit high-order discontinuous Galerkin method we use for the flow solution.
We start with the Poisson equation for the Newtonian gravitational potential $\phi$,
$$-\vec{\nabla}^2\phi = -4\pi G \rho,$$
where $G$ is the universal gravitational constant and $\rho$ is the mass density. Following Nishikawa’s work, we convert the Poisson equation for the gravitational potential into the hyperbolic gravity equations,
where $(q_1,q_2)^\intercal\approx\vec{\nabla}\phi$ are auxiliary variables and $T_r$ is the relaxation time. The steady-state solution of the hyperbolic system is, in fact, the desired solution of the original Poisson problem.
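For concreteness, here is a sketch of the first-order system solved in pseudotime $\tau$, written in the style of Nishikawa’s hyperbolic diffusion formulation (the exact form and scaling used in [1] may differ slightly):

$$\frac{\partial}{\partial \tau}\begin{pmatrix}\phi \\ q_1 \\ q_2\end{pmatrix} + \frac{\partial}{\partial x}\begin{pmatrix}-q_1 \\ -\phi/T_r \\ 0\end{pmatrix} + \frac{\partial}{\partial y}\begin{pmatrix}-q_2 \\ 0 \\ -\phi/T_r\end{pmatrix} = \begin{pmatrix}-4\pi G \rho \\ -q_1/T_r \\ -q_2/T_r\end{pmatrix},$$

whose steady state satisfies $(q_1,q_2)^\intercal = \vec{\nabla}\phi$ and recovers the Poisson equation above.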
The flow and the gravity solvers operate on a joint hierarchical Cartesian mesh and are two-way coupled in each Runge-Kutta stage via the source terms. A key benefit of our strategy is that it allows the reuse of existing explicit hyperbolic solvers without modifications, while retaining their advanced features such as non-conforming and solution-adaptive grids.
We implemented this purely hyperbolic approach for self-gravitating gas dynamics in Trixi.jl, a tree-based numerical simulation framework for hyperbolic PDEs written in Julia [2]. The approach was first validated by computing the Jeans gravitational instability, which demonstrates excellent agreement of the numerical results with the analytical solution:
Evolution of the computed kinetic, internal, and potential energies for the Jeans instability using polynomial order N = 3 on a uniform 16 × 16 mesh. The analytical energies are included for reference.
In a second example, we consider a modification of the Sedov blast wave problem that incorporates the effects of gravitational acceleration and which involves strong shocks and complex fluid interactions. We include it to demonstrate the shock capturing and AMR capabilities of Trixi.jl to resolve the cylindrical Sedov blast wave:
Self-gravitating Sedov explosion approximated with polynomial order $N = 3$ on an AMR grid with four levels of possible refinement at $T = 0.5$ (left) and $T = 1.0$ (right). (First and third row) Two-dimensional plots of the density and gravitational potential at the two times. The white overlay of squares in the density plots shows the AMR grids used by both solvers. (Second and last row) One-dimensional slices of the solutions along a line from the origin to the edge of the domain in the positive x-direction.
Finally, we show the evolution of the solution together with the adaptive mesh, which is dynamically refined and coarsened to maintain high accuracy while reducing the overall computation time:
Simulation of the self-gravitating Sedov blast wave with adaptive mesh refinement, where the adaptation is performed after every time step of the compressible Euler solver. Compared to a uniformly refined mesh, the total run time is reduced by a factor of 19, while the solutions are virtually unchanged.
All results in the paper can be easily reproduced by using the resources provided in [3], including step-by-step instructions for how to obtain and install Trixi.jl.
References
[1] M. Schlottke-Lakemper, A. R. Winters, H. Ranocha, G. J. Gassner (2020). A purely hyperbolic discontinuous Galerkin approach for self-gravitating gas dynamics. Submitted to J. Comput. Phys. arXiv:2008.10593
[2] M. Schlottke-Lakemper, G. J. Gassner, H. Ranocha, A. R. Winters (2020). Trixi.jl: A tree-based numerical simulation framework for hyperbolic PDEs written in Julia. doi:10.5281/zenodo.3996439
[3] M. Schlottke-Lakemper, A. R. Winters, H. Ranocha, G. J. Gassner (2020). Self-gravitating gas dynamics simulations with Trixi.jl. doi:10.5281/zenodo.3996575
A new approach for approximating conservation laws has been tested: instead of tracking the evolution of cell means of a quantity within a control volume over time, as a finite volume method does, this method mimics the behavior of simple solutions.
First, the current state is split into waves $w$, each aligned with one of the eigenvectors of the flux Jacobian and living on its own grid. Then the corresponding grids are moved with their eigenvelocities, and finally all waves $w$ are overlaid and normalized, yielding the solution after one time step.
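The following is a minimal sketch of this idea for a linear, constant-coefficient system $u_t + A u_x = 0$ on a periodic, uniform grid (generic Julia code; the nearest-neighbor sampling merely stands in for the overlay-and-normalize step of the actual method, and the nonlinear case is not covered):

```julia
using LinearAlgebra

# One time step of the wave-transport idea for u_t + A*u_x = 0:
# split the state into characteristic waves, shift each wave's grid with its
# eigenvelocity, then recombine all waves on the common grid.
function wave_transport_step(u::Matrix{Float64}, A::Matrix{Float64},
                             x::Vector{Float64}, dt::Float64)
    E = eigen(A)                      # A = R * Diagonal(lambda) / R
    R, lambda = E.vectors, E.values
    w = R \ u                         # characteristic waves, one row per wave
    dxg = x[2] - x[1]                 # uniform grid spacing
    L = x[end] - x[1] + dxg           # periodic domain length
    wnew = similar(w)
    for p in 1:size(w, 1)
        # positions at which wave p must be sampled after moving its grid
        xs = @. mod(x - lambda[p] * dt - x[1], L) + x[1]
        for i in eachindex(x)
            # overlay: nearest-neighbor sampling of the moved wave
            wnew[p, i] = w[p, argmin(abs.(x .- xs[i]))]
        end
    end
    return R * wnew                   # recombine the waves
end
```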
The plots show the solution of the isothermal Euler equations at time $T=0.0000$ (initial condition) and $T=0.0025$ with $N=40$ grid points for each mesh.
A robust and easy way of simulating a hyperbolic test case with discontinuities is to use a 1st-order finite volume scheme. Using such a method for magnetohydrodynamics (MHD) problems like the Orszag-Tang vortex leads to the following results:
For this and the following examples we used a 4th-order time integration scheme and 256 degrees of freedom in each spatial direction.
A way to generate more accurate results is to increase the order of the scheme, which has to be treated with caution near discontinuities because oscillations may occur. To overcome this issue, one can use higher-order schemes in smooth regions and lower-order schemes in regions with discontinuities. An example of such an approach is our so-called DGFV scheme, which blends, e.g., a 4th-order Discontinuous Galerkin scheme with a 1st-order Finite Volume scheme.
Another way is to use a suitable 4th-order finite volume scheme for MHD with a fitting limiter.
Variant of the classical Boss-Bodenheimer test: We simulate the collapse and fragmentation of a rotating cloud core using a Discontinuous Galerkin / Finite Volume hybrid high-order code. The self-gravity of the gas is calculated via the fast multipole method.