A reliable highly parallelizable exponential integrator


Calculating the matrix exponential may seem quite abstract at first sight, but it is often very useful for numerical simulations. A short overview of a recently developed and improved method.

By Alexander Moriggl

When doing simulations, we rely on computers to do the work for us. Calculus helps us describe how something evolves in time; computers, however, only understand numbers and how to add or subtract them. For this reason, numerical methods are employed that transform calculus into simple addition and subtraction operations. Exponential integrators form one such class of numerical methods.

Most real-world problems can be modeled as partial differential equations (PDEs), which are equations that depend on the rate of change of physical quantities (for example pressure, temperature, ...) in space and time. These equations can very often not be solved by hand, so they are simulated numerically. A common strategy is to look for the solution only at (many) fixed points in space. This approach reduces the problem to solving a large system of equations that depends only on the evolution in time. In its simplest form, this system can be written as u'(t) = Au(t), where A is in general a large matrix defined by the physical model and where u'(t) tells us how the sought-after function u(t) (the physical quantity at the fixed points in space) changes over time. The exact solution of this simplified formulation of the problem is u(t) = e^{tA}u(0), where u(0) is the starting point of the simulation. In this article I present the method REXI, which stands for Rational EXponential Integrator, a recently developed algorithm to compute e^{tA}u(0) for matrices A that often arise from hyperbolic or highly oscillatory PDEs. Examples of such PDEs are the wave equation, which models the propagation of water waves, sound waves or seismic waves, and the Schrödinger equation, which is used to study and predict quantum mechanical systems. Furthermore, I describe the process of how we improved this method.
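To make this concrete, here is a minimal sketch of computing u(t) = e^{tA}u(0) with a general-purpose routine (SciPy's expm, not REXI). The small skew-symmetric matrix A and the initial state are made up for illustration; its purely imaginary spectrum mimics the wave-type problems mentioned above.

```python
import numpy as np
from scipy.linalg import expm

# Toy setting (not REXI): a small skew-symmetric matrix A, whose spectrum is
# purely imaginary, standing in for the large matrices arising from wave-type PDEs.
rng = np.random.default_rng(0)
n = 50
B = rng.standard_normal((n, n))
A = B - B.T                      # skew-symmetric => purely imaginary eigenvalues

u0 = rng.standard_normal(n)      # initial state u(0)
t = 0.1

u_t = expm(t * A) @ u0           # u(t) = e^{tA} u(0)
print(u_t[:5])
```

For the very large matrices produced by realistic discretizations, a general-purpose routine like this becomes impractical, which is where specialized methods such as REXI come in.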

Almost every matrix A can be decomposed into A = VDV^{-1}, where D is a diagonal matrix, i.e. a matrix in which only the entries on the diagonal are different from zero. The set of complex numbers on the diagonal of D is called the spectrum. We are interested in computing the matrix exponential e^A for matrices A whose spectrum is purely imaginary. It can be shown that e^A = V e^D V^{-1}, and therefore the problem can be reduced to evaluating e^D. Computing the exponential of a diagonal matrix can be done without any effort, since it only means evaluating the exponential of scalars, which in this case are purely imaginary numbers given the purely imaginary spectrum of interest. However, in most cases it is too expensive to compute the decomposition into V and D, so this strategy cannot be applied in practice. It does, however, suggest a way to develop new methods for computing the matrix exponential: first derive a method to compute the scalar exponential, and afterwards plug in the matrix as its argument.
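The following small sketch illustrates the decomposition idea numerically: for a toy matrix with purely imaginary spectrum, e^A computed as V e^D V^{-1} agrees with a direct computation. The matrix and its size are made up for illustration; for the large matrices arising from PDEs, forming V and V^{-1} in this way would be far too expensive.

```python
import numpy as np
from scipy.linalg import expm

# e^A = V e^D V^{-1} on a small toy matrix with purely imaginary spectrum.
rng = np.random.default_rng(1)
B = rng.standard_normal((6, 6))
A = B - B.T                                       # skew-symmetric toy matrix

lam, V = np.linalg.eig(A)                         # A = V D V^{-1}, D = diag(lam)
print(np.allclose(lam.real, 0.0, atol=1e-10))     # spectrum is (numerically) purely imaginary

expA_eig = (V * np.exp(lam)) @ np.linalg.inv(V)   # V e^D V^{-1}
expA_dir = expm(A)                                # reference computation

print(np.allclose(expA_eig, expA_dir))            # both agree up to round-off
```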

Thus the first step in deriving the REXI method is to develop an approximation of e^{ix}, where i is the imaginary unit and x a real number, since, as already mentioned, REXI computes the matrix exponential for matrices with a purely imaginary spectrum. To approximate e^{ix}, a basis is needed, that is, a collection of elementary building blocks from which the approximation is constructed. The properties of the method depend on the basis, and since many different bases exist in the literature, choosing an appropriate one is quite important.
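As a toy illustration of the "approximate the scalar exponential first, then plug in the matrix" idea, here is a sketch using the simplest possible rational approximation, e^z ≈ (1 + z/2)/(1 - z/2). This is not the REXI approximation, only a stand-in to show how a scalar formula turns into a matrix algorithm: the scalar division becomes a linear system solve.

```python
import numpy as np
from scipy.linalg import expm

# "Scalar first, then matrix": the scalar formula e^z ≈ (1 + z/2)/(1 - z/2)
# (NOT the REXI approximation) applied to z = tA.  The scalar division turns
# into solving a linear system with the matrix I - tA/2.
rng = np.random.default_rng(2)
B = rng.standard_normal((6, 6))
A = B - B.T                                # purely imaginary spectrum
u0 = rng.standard_normal(6)
t = 0.01                                   # this crude approximation only works for small t*A
I = np.eye(6)

u_approx = np.linalg.solve(I - 0.5 * t * A, (I + 0.5 * t * A) @ u0)
u_exact  = expm(t * A) @ u0
print(np.linalg.norm(u_approx - u_exact))  # small error for small t
```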

Since in the end the computer is doing the work for us, newly developed methods should be able to exploit its capabilities. Modern computers, and even mobile phones, nowadays have multiple computational cores. For example, if you buy a new smartphone, you will certainly hear the terms dual-core, quad-core or even octa-core, which means that 2, 4 or 8 computational cores are used by the device. These cores run in parallel to complete the requested task in less time than a single core would need. The most powerful computers on earth, so-called supercomputers, have millions of computational cores. For example, the currently strongest supercomputer, Fugaku, has more than 7 million such cores. The REXI method is designed to run on such supercomputers.

As basis, a collection of shifted Gaussian functions, e^{-(x-shift)^2}, is used, which has good approximation properties in this specific case and allows the approximation to be written as a sum in which all terms are independent of each other. This provides exactly the property needed to exploit computer systems with many cores, since each core can take care of one term of the sum. However, the Gaussian functions themselves are not useful at all for computers (we would be approximating e^{ix} with functions of the type e^{-x^2}), so they are in turn approximated by rational functions. The REXI method can be imagined as follows

[Figure: the REXI approximation formula, a sum of rational terms with coefficients b, β and λ, of which the real part Re(·) is taken]

where Re(·) denotes the real part of a complex number and the b, β and λ are coefficients of the method itself. Let me first mention that the accuracy of REXI depends on two parameters, h and M, which are defined by the selected basis. With this method, the approximation error in the scalar case is close to machine precision, and a precise relation between accuracy, x, h and M can be derived, as can be seen in the figures below (by scaling down h and increasing M the error becomes smaller, but then the sum above has more terms, which implies more work for the computer).

[Figures: approximation error of REXI in the scalar case for different values of h and M]
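The following sketch shows only the structure of such a computation when a matrix is plugged in instead of ix. The coefficients below are random placeholders, not the actual REXI coefficients (those follow from the Gaussian basis and the parameters h and M); the point is merely that every term of the sum is an independent linear system solve.

```python
import numpy as np

# Structural sketch only: alpha and beta are random placeholders, NOT the
# actual REXI coefficients.  The shape of the computation is what matters:
# every term is an independent linear system solve.
rng = np.random.default_rng(3)
n, n_terms = 50, 8

B = rng.standard_normal((n, n))
A = B - B.T                                # purely imaginary spectrum
u0 = rng.standard_normal(n)
I = np.eye(n)

alpha = rng.standard_normal(n_terms) + 1j * rng.standard_normal(n_terms)
beta  = rng.standard_normal(n_terms) + 1j * rng.standard_normal(n_terms)

u = np.zeros(n)
for a, b in zip(alpha, beta):
    # each iteration is independent of the others -> trivially parallel
    x = np.linalg.solve(A + a * I, u0)     # one rational term, evaluated at the matrix
    u += (b * x).real                      # take the real part, as in the formula above
```

In the actual method, each of these solves can be handed to a different computational core, which is what makes REXI highly parallelizable.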

Thus, as described above, one would expect the same error behavior when we plug in the matrix A instead of ix in the formula above. However, this is not the case. After approximately a month, I was able to figure out why the results were not as reliable as expected, and I fixed the problem. We thus ended up with an improved version, which we called REXII.

How the improved results were achieved

Initially, my supervisor told me to apply a backward error analysis to this method in the matrix case, which determines the parameters of the method (h and M) needed to reach a certain accuracy. To do this, we implemented the method and ran the code with random matrices to get a feeling for the parameters. While doing this, we discovered that the error of the method behaved quite differently for two different types of random matrices. At first we assumed that some internal computations with complex numbers were causing these results, and we proceeded to work on the backward error analysis.

However, since I was not able to produce anything useful and was, at that time, quite desperate about not being able to do that task, I started to analyze why I obtained such different results for these two classes of matrices. I figured out that the operation taking the real part in the original formulation causes these problems. By reformulating everything and avoiding this operation, the expected error was always obtained. This change gives much better results than the original method, as can be seen in the figures below. The error plot on the left-hand side is obtained with the original version REXI, while the error plot on the right-hand side is obtained with the improved version REXII. The new numerical algorithm is able to determine the values of h and M, as in the scalar case, to obtain an extremely accurate approximation, so in the end I succeeded, in some sense, in doing an error analysis. This relation is described by the red line in the graph. Moreover, a much smaller M is required for REXII to obtain accurate results, which implies that much less work has to be done.

[Figures: error plots in the matrix case; left: the original version REXI, right: the improved version REXII]

To conclude, let me emphasize that research is not always straightforward. Scientific results are often discovered by accident, as was also the case here.
For more details about this method, I refer to the official publication.
