Algebraic Multigrid

Conventional iterative solution algorithms, such as Jacobi, Gauss-Seidel, or ILU (Incomplete Lower-Upper), converge significantly more slowly as the mesh size increases. This slow convergence in turn leads to a quadratic increase in computational time. To accelerate solver convergence, Simcenter STAR-CCM+ employs the Algebraic Multigrid (AMG) method. Multigrid methods are based on the fact that an iterative solution algorithm efficiently reduces the numerical error components whose wavelengths correspond to the cell size (high-frequency errors). The long-wavelength (low-frequency) errors, however, are reduced only slowly by such a method.

Multigrid methods reduce the low-frequency errors through an iterative process on a hierarchy of successively coarsened linear systems. Algebraic multigrid derives the coarse-level systems without reference to the underlying grid geometry: the coarse-grid equations are derived from arithmetic combinations of the fine-grid coefficients. After several iterations, a multigrid algorithm transfers the computation from a fine linear system to a coarse linear system. These iterations are called smoothing iterations, because they leave the error function smooth, that is, free of high-frequency components. After the transfer of the solution process to a coarser linear system, the remaining errors are higher-frequency with respect to the cell size and can be reduced efficiently. To reduce the error of the fine-system solution on a coarser linear system, a defect equation is defined.

Multigrid algorithms apply the following steps:

  • Agglomerate cells to form coarse grid levels.
  • Transfer the residual from a fine level to a coarser level (known as restriction).
  • Transfer the correction from a coarse level back to a finer level (known as prolongation).
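
The restriction and prolongation transfers can be sketched in a few lines of pure Python. This is an illustration only: the agglomeration map and values are made up, and a real AMG solver builds the agglomeration from the matrix coefficients. Restriction sums the residuals of the fine cells grouped into each coarse cell; prolongation injects each coarse correction back into its fine cells:

```python
# Sketch of restriction and prolongation for an agglomeration-based AMG.
# Illustrative only: agglomeration[i] gives the coarse cell that fine
# cell i belongs to, and the values below are arbitrary.

def restrict(fine_residual, agglomeration):
    """Sum the residuals of the fine cells grouped into each coarse cell."""
    n_coarse = max(agglomeration) + 1
    coarse = [0.0] * n_coarse
    for fine_cell, coarse_cell in enumerate(agglomeration):
        coarse[coarse_cell] += fine_residual[fine_cell]
    return coarse

def prolongate(coarse_correction, agglomeration):
    """Inject each coarse-cell correction back into its fine cells."""
    return [coarse_correction[c] for c in agglomeration]

# Six fine cells agglomerated pairwise into three coarse cells
agglomeration = [0, 0, 1, 1, 2, 2]
r_fine = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]

r_coarse = restrict(r_fine, agglomeration)
correction = prolongate([0.1, 0.2, 0.3], agglomeration)
```

Restriction by summation and prolongation by injection mirror the coefficient-based, geometry-free construction described above.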

Simcenter STAR-CCM+ provides two methods for accelerating the linear solver: the preconditioned conjugate-gradient method for incompressible flows, and the bi-conjugate gradient stabilized method for all other cases.

Multigrid Cycling

Multigrid techniques enable a basic iteration scheme such as Gauss-Seidel to be greatly accelerated by using simple correction sweeps on a sequence of coarse meshes. The strategy for visiting the coarse levels can have a significant impact on the efficiency of the algorithm. Two cycling strategies are available in the AMG solver that is used in Simcenter STAR-CCM+—fixed and flexible:

  • Fixed cycles—A complete multigrid cycle represents the recursive application of a single cycle that is composed of the following steps:
    1. (Pre)smooth
    2. Restrict
    3. Cycle anew
    4. Prolongate
    5. (Post)smooth

    These steps are applied to a sequence of successively coarser grids or equation sets. To smooth means to apply a number of iterative relaxation sweeps to the equations on the current fine level to compute a new set of corrections. To restrict means to transfer the existing residuals down to the next-coarsest level, where a new cycle is applied. Following that, the resultant corrections are prolongated, that is, transferred back up to the current fine level, where smoothing is again applied.

    The three types of fixed cycles are:

    • F fixed cycles
    • V fixed cycles
    • W fixed cycles
  • Flexible cycles—This type of cycle is a more economical cycling strategy for linear systems that are not stiff. Rather than using all multigrid levels in a regular pattern, the residuals are monitored after every sweep on a given grid level. If the ratio of successive residuals exceeds a given threshold, indicating that the residuals are no longer being reduced efficiently, the solution continues on a coarser level. If the residual on a given level is reduced by more than a specified tolerance, the solution moves back to a finer level. In addition, a limit is imposed on the number of sweeps that are allowed at any level.

F Cycle

The F cycle is a variant of the W cycle. Illustrated below, this cycle involves fewer coarse-level sweeps than the W cycle, but still more than the V cycle.



V Cycle

The V cycle is the simplest type of fixed cycle, and has only two legs. In the first leg, one performs a number of relaxation sweeps on the finest level and transfers the residuals to the next level. This operation is then repeated on successively coarser levels until the coarsest level is reached. A coarse “grid” generally contains only a few “cells”. After finishing the sweeps on the coarsest level, the solution is used to correct the solution on the next finer level. Some relaxation sweeps are performed on that level before repeating the process until the finest level is reached. This procedure is illustrated below.



W Cycle

For stiff systems, the V cycle sometimes is not sufficient, and more coarse iterations are advantageous. The W cycle increases the number of coarse relaxation sweeps as illustrated below.
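
The visiting patterns of the fixed cycles can be modeled as a recursion in which the number of recursive visits to the next-coarser level distinguishes a V cycle (one visit) from a W cycle (two visits). The following structural sketch uses stubs that only record the operation sequence; the names `cycle` and `trace` are illustrative and not part of Simcenter STAR-CCM+:

```python
# Structural sketch of a fixed multigrid cycle. gamma = 1 gives a V cycle,
# gamma = 2 a W cycle. Only the level-visiting pattern is modeled here;
# smooth/restrict/prolongate are stubs recording which level is active.

def cycle(level, coarsest, gamma, trace):
    trace.append(("smooth", level))          # (pre)smooth on current level
    if level < coarsest:
        trace.append(("restrict", level))    # move residuals down one level
        for _ in range(gamma):               # visit the coarser level gamma times
            cycle(level + 1, coarsest, gamma, trace)
        trace.append(("prolongate", level))  # bring corrections back up
        trace.append(("smooth", level))      # (post)smooth

v_trace, w_trace = [], []
cycle(0, 2, gamma=1, trace=v_trace)  # V cycle on 3 levels
cycle(0, 2, gamma=2, trace=w_trace)  # W cycle on 3 levels
```

Counting the coarsest-level smooths in the two traces shows how the W cycle spends more work on the coarse levels than the V cycle, as the figures above indicate.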



AMG Solver

The result of the discretization approach that is described above is a linear system:

Figure 1. EQUATION_DISPLAY
A x  = b
(977)

representing the algebraic equations for each computational cell. The matrix A represents the coefficients of the linear system (for example, the coefficients a p and a n on the left-hand side of Eqn. (920)), the vector x represents the unknowns ( Δ ϕ in Eqn. (920)) in each cell, and the vector b represents the residuals (see Eqn. (921)) from each cell.

The resultant linear system of equations is solved using either the Jacobi, the Gauss-Seidel, or the Incomplete Lower Upper decomposition relaxation scheme (smoother).

Jacobi and Gauss-Seidel

The general principle behind iterative methods is that, given an approximate solution x k , a better approximation x k + 1 is sought, and then the process is repeated.

If the error at iteration k is defined as:

Figure 2. EQUATION_DISPLAY
e k = x - x k
(978)

where x represents the exact solution and the residual as:

Figure 3. EQUATION_DISPLAY
r k = b - Ax k
(979)

it follows that:

Figure 4. EQUATION_DISPLAY
A e k = r k
(980)

Therefore, continuing the iteration until the residual is driven to a small value also drives the error to a small value.
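
The defect relation can be verified numerically. In this small sketch (the 2×2 system and the values are illustrative only), the error and the residual are formed per Eqn. (978) and Eqn. (979), and applying A to the error reproduces the residual, as in Eqn. (980):

```python
# Numerical check of the defect relation A e_k = r_k on a hand-picked
# 2x2 system; the matrix and values are illustrative only.

def matvec(A, v):
    return [sum(a_ij * v_j for a_ij, v_j in zip(row, v)) for row in A]

A = [[4.0, 1.0],
     [1.0, 3.0]]
x_exact = [1.0, 2.0]            # exact solution
b = matvec(A, x_exact)          # right-hand side consistent with x_exact
x_k = [0.5, 1.5]                # some approximate solution

e_k = [xe - xa for xe, xa in zip(x_exact, x_k)]         # error, Eqn. (978)
r_k = [bi - axi for bi, axi in zip(b, matvec(A, x_k))]  # residual, Eqn. (979)
Ae_k = matvec(A, e_k)                                   # equals r_k, Eqn. (980)
```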

The most basic iterative methods are Jacobi and Gauss-Seidel iteration. These methods involve visiting each cell in sequence, and updating the value of x i in each cell i using the coefficients of its n neighbor cells as follows:

Figure 5. EQUATION_DISPLAY
x i = ( 1 / A i,i ) ( b i - ∑ neighbors n A i,n x n )
(981)

The difference between Jacobi and Gauss-Seidel iteration is subtle: Jacobi uses the “old” values of x n , while Gauss-Seidel uses the available values that have been updated, resulting in better convergence.
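
The distinction can be made concrete in a short sketch (the 2×2 system is illustrative only): both sweeps apply Eqn. (981) cell by cell, but Gauss-Seidel reuses values already updated within the current sweep:

```python
# One Jacobi sweep and one Gauss-Seidel sweep per Eqn. (981) on a small
# illustrative system. Jacobi reads only "old" values; Gauss-Seidel
# reuses entries already updated in the current sweep.

def jacobi_sweep(A, b, x):
    x_new = list(x)
    for i in range(len(b)):
        # sum over neighbors uses the old iterate x only
        s = sum(A[i][n] * x[n] for n in range(len(b)) if n != i)
        x_new[i] = (b[i] - s) / A[i][i]
    return x_new

def gauss_seidel_sweep(A, b, x):
    x = list(x)
    for i in range(len(b)):
        # x[n] for n < i has already been updated in this sweep
        s = sum(A[i][n] * x[n] for n in range(len(b)) if n != i)
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [6.0, 7.0]                  # exact solution is x = [1, 2]
x0 = [0.0, 0.0]

x_jacobi = jacobi_sweep(A, b, x0)
x_gs = gauss_seidel_sweep(A, b, x0)
```

For this system the exact solution is [1, 2]; after one sweep the Gauss-Seidel iterate is already closer in the second component, because it uses the freshly updated first component.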

Incomplete Lower Upper (ILU)

The ILU (Incomplete Lower-Upper) method (Saad [283]) solves iteratively for x k + 1 using:

Figure 6. EQUATION_DISPLAY
A ~ ( x k + 1 - x k ) = b - A x k
(982)

where

Figure 7. EQUATION_DISPLAY
A ~ = L U = ( D + L A ) ( I + D -1 U A )
(983)

where I is the identity matrix, L A and U A are the lower and upper triangular matrices of the original matrix A , and D is a diagonal matrix that is calculated using the MILU(0) method.

The ILU method is more computationally expensive but more robust than the Gauss-Seidel method. The robustness of the ILU method means that it can handle group sizes of 8 or more in three dimensions; the Gauss-Seidel method can only handle group sizes of 4. Use the ILU method for complex problems, and to improve convergence and parallel scalability. For three-dimensional problems, use it with a group size of 8.
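
One ILU relaxation step per Eqn. (982) and Eqn. (983) amounts to a forward solve with ( D + L A ) followed by a backward solve with ( I + D -1 U A ). As a simplifying assumption, the sketch below takes D as the diagonal of A rather than computing it with the MILU(0) method, which is not reproduced here:

```python
# Sketch of one ILU relaxation step, Eqn. (982) and Eqn. (983): a forward
# solve with (D + L_A), then a backward solve with (I + D^-1 U_A).
# Simplifying assumption: D = diag(A) instead of the MILU(0) diagonal
# that the actual solver computes.

def ilu_step(A, b, x):
    n = len(b)
    # Right-hand side of Eqn. (982): the current residual b - A x
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    # Forward solve (D + L_A) y = r
    y = [0.0] * n
    for i in range(n):
        s = sum(A[i][j] * y[j] for j in range(i))
        y[i] = (r[i] - s) / A[i][i]
    # Backward solve (I + D^-1 U_A) dx = y
    dx = [0.0] * n
    for i in reversed(range(n)):
        s = sum(A[i][j] * dx[j] for j in range(i + 1, n)) / A[i][i]
        dx[i] = y[i] - s
    return [x[i] + dx[i] for i in range(n)]

A = [[4.0, 1.0],
     [1.0, 3.0]]
b = [6.0, 7.0]                   # exact solution is x = [1, 2]
x1 = ilu_step(A, b, [0.0, 0.0])  # one relaxation step toward the solution
```

Compared with a Gauss-Seidel sweep, the extra backward solve also incorporates the upper-triangular coefficients, which is the source of the method's additional cost and robustness.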

Acceleration Methods

Preconditioned Conjugate-Gradient Method

To accelerate the convergence of the iterative solution of the linear system in the Pressure solver, a conjugate-gradient method is available that uses the AMG solver as the preconditioner. Use this method for incompressible flows (Constant Density model) with the Segregated Flow model.

To solve the linear system A x  = b , the preconditioned conjugate-gradient (PCG) method takes the following form:

Figure 8. EQUATION_DISPLAY
r 0 = b - Ax 0
(984)
Figure 9. EQUATION_DISPLAY
z 0 = AMG fixed cycle  ( A , r 0 )
(985)
Figure 10. EQUATION_DISPLAY
p 0 = z 0
(986)
Figure 11. EQUATION_DISPLAY
k = 0
(987)

Each iteration involves the calculations in Eqn. (988) through Eqn. (994):

Figure 12. EQUATION_DISPLAY
α k = ( r k T z k ) / ( p k T A p k )
(988)
Figure 13. EQUATION_DISPLAY
x k + 1 = x k + α k p k
(989)
Figure 14. EQUATION_DISPLAY
r k + 1 = r k - α k A p k
(990)

If | r k + 1 | < ε , then the iteration ends.

Figure 15. EQUATION_DISPLAY
z k + 1 = AMG fixed cycle  ( A , r k + 1 )
(991)
Figure 16. EQUATION_DISPLAY
β k = ( r k + 1 T z k + 1 ) / ( r k T z k )
(992)
Figure 17. EQUATION_DISPLAY
p k + 1 = z k + 1 + β k p k
(993)
Figure 18. EQUATION_DISPLAY
k = k + 1
(994)

The solution of the linear system is x k + 1 .

A necessary condition for the convergence of the PCG algorithm is that the matrix A (which here is also the preconditioner) is symmetric positive-definite (SPD). The matrix of the linear system of the Pressure solver in pure incompressible cases satisfies this condition.

Eqn. (985) and Eqn. (991) in the iteration process are preconditioner solver calculations, which involve one AMG fixed-cycle loop to approximate the auxiliary linear systems:

Figure 19. EQUATION_DISPLAY
Az k = r k            k = 0 , 1 ,
(995)

As implemented, the PCG accelerates convergence and improves the parallel scalability of the Pressure solver. For normal cases (no non-linear instability), the PCG is not expected to change significantly the overall non-linear convergence of Simcenter STAR-CCM+. The PCG can provide robust convergence for those cases where numerical instability results from incomplete convergence of the linear system of the Pressure solver.
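
The PCG loop of Eqn. (984) through Eqn. (994) can be sketched in pure Python. As an assumption made to keep the sketch self-contained, a single diagonal (Jacobi) solve stands in for the AMG fixed-cycle preconditioner:

```python
# Sketch of the preconditioned conjugate-gradient iteration (Eqn. (984)
# through Eqn. (994)). A diagonal (Jacobi) solve stands in for the AMG
# fixed cycle; that substitution is an assumption of this sketch.

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def precond(A, v):
    # Stand-in for "AMG fixed cycle (A, v)": one diagonal solve.
    return [vi / A[i][i] for i, vi in enumerate(v)]

def pcg(A, b, x0, eps=1e-10, max_iter=50):
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]        # Eqn. (984)
    z = precond(A, r)                                         # Eqn. (985)
    p = list(z)                                               # Eqn. (986)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = dot(r, z) / dot(p, Ap)                        # Eqn. (988)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]         # Eqn. (989)
        r_new = [ri - alpha * api for ri, api in zip(r, Ap)]  # Eqn. (990)
        if dot(r_new, r_new) ** 0.5 < eps:
            break
        z_new = precond(A, r_new)                             # Eqn. (991)
        beta = dot(r_new, z_new) / dot(r, z)                  # Eqn. (992)
        p = [zi + beta * pi for zi, pi in zip(z_new, p)]      # new search direction
        r, z = r_new, z_new
    return x

A = [[4.0, 1.0],
     [1.0, 3.0]]                 # symmetric positive-definite
x = pcg(A, [6.0, 7.0], [0.0, 0.0])   # converges toward [1, 2]
```

The SPD requirement stated above is what restricts this method to the pure incompressible Pressure-solver matrix.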

Bi-Conjugate Gradient Stabilized Method

The Bi-Conjugate Gradient Stabilized (BiCGStab) method has been developed to solve nonsymmetric (general) linear systems. Simcenter STAR-CCM+ implements an AMG-preconditioned BiCGStab algorithm to improve the robustness and the speed of convergence of linear systems that the solvers produce.

To solve the general linear system A x  = b , the preconditioned BiCGStab takes the following steps:

Figure 20. EQUATION_DISPLAY
r 0 = b - Ax 0
(996)
Figure 21. EQUATION_DISPLAY
r ~ = r 0
(997)
Figure 22. EQUATION_DISPLAY
i = 0
(998)

Each iteration i involves calculations of Eqn. (999) through Eqn. (1012):

Figure 23. EQUATION_DISPLAY
ρ i - 1 = r ~ T r i - 1
(999)

If ρ i - 1 = 0 , the method fails.

If i = 1 :

Figure 24. EQUATION_DISPLAY
p i = r i - 1
(1000)

Otherwise, as per Eqn. (1001) and Eqn. (1002):

Figure 25. EQUATION_DISPLAY
β i - 1 = ( ρ i - 1 / ρ i - 2 ) ( α i - 1 / ω i - 1 )
(1001)
Figure 26. EQUATION_DISPLAY
p i = r i - 1 + β i - 1 ( p i - 1 - ω i - 1 v i - 1 )
(1002)
Figure 27. EQUATION_DISPLAY
p ˆ = AMG fixed cycle  ( A , p i )
(1003)
Figure 28. EQUATION_DISPLAY
v i = A p ˆ
(1004)
Figure 29. EQUATION_DISPLAY
α i = ρ i - 1 / ( r ~ T v i )
(1005)
Figure 30. EQUATION_DISPLAY
S = r i - 1 - ( α i v i )
(1006)

If ( | S | < ε ) , then as per Eqn. (1007):

Figure 31. EQUATION_DISPLAY
x i = x i - 1 + α i p ˆ
(1007)

and the iteration ends.

Otherwise the iteration continues as follows:

Figure 32. EQUATION_DISPLAY
S ˆ = AMG fixed cycle  ( A , S )
(1008)
Figure 33. EQUATION_DISPLAY
t = A S ˆ
(1009)
Figure 34. EQUATION_DISPLAY
ω i = ( t T S ) / ( t T t )
(1010)
Figure 35. EQUATION_DISPLAY
x i = x i - 1 + α i p ˆ + ω i S ˆ
(1011)
Figure 36. EQUATION_DISPLAY
r i = S - ω i t
(1012)

If | r i | < ε , then the iteration ends.

If ω i ≠ 0 , then the iteration continues.

The solution of the linear system is x i .

Eqn. (1003) and Eqn. (1008) in the iteration process are the preconditioner solver calculations, which involve one AMG fixed-cycle loop to obtain an approximate solution of:

Figure 37. EQUATION_DISPLAY
A p ˆ = p i         and         A S ˆ = S         respectively
(1013)

As with the Preconditioned Conjugate-Gradient method, BiCGStab can improve parallel scalability and improve robustness for those cases where numerical instability was due to lack of sufficient convergence of the linear system.
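
The BiCGStab loop of Eqn. (996) through Eqn. (1012) can be sketched similarly. As before, a diagonal (Jacobi) solve stands in for the AMG fixed cycle as an assumption of the sketch, and the solution update uses the preconditioned vector, as is standard for preconditioned BiCGStab:

```python
# Sketch of the preconditioned BiCGStab iteration (Eqn. (996) through
# Eqn. (1012)). A diagonal (Jacobi) solve stands in for the AMG fixed
# cycle; that substitution is an assumption of this sketch.

def matvec(A, v):
    return [sum(a * vj for a, vj in zip(row, v)) for row in A]

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def precond(A, v):
    # Stand-in for "AMG fixed cycle (A, v)": one diagonal solve.
    return [vi / A[i][i] for i, vi in enumerate(v)]

def bicgstab(A, b, x0, eps=1e-10, max_iter=50):
    n = len(b)
    x = list(x0)
    r = [bi - axi for bi, axi in zip(b, matvec(A, x))]       # Eqn. (996)
    r_tilde = list(r)                                        # Eqn. (997)
    rho_prev = alpha = omega = 1.0
    v = [0.0] * n
    p = [0.0] * n
    for i in range(1, max_iter + 1):
        rho = dot(r_tilde, r)                                # Eqn. (999)
        if rho == 0.0:
            break                                            # method fails
        if i == 1:
            p = list(r)                                      # Eqn. (1000)
        else:
            beta = (rho / rho_prev) * (alpha / omega)        # Eqn. (1001)
            p = [ri + beta * (pi - omega * vi)
                 for ri, pi, vi in zip(r, p, v)]             # Eqn. (1002)
        p_hat = precond(A, p)                                # Eqn. (1003)
        v = matvec(A, p_hat)                                 # Eqn. (1004)
        alpha = rho / dot(r_tilde, v)                        # Eqn. (1005)
        s = [ri - alpha * vi for ri, vi in zip(r, v)]        # Eqn. (1006)
        if dot(s, s) ** 0.5 < eps:
            x = [xi + alpha * ph for xi, ph in zip(x, p_hat)]  # Eqn. (1007)
            break
        s_hat = precond(A, s)                                # Eqn. (1008)
        t = matvec(A, s_hat)                                 # Eqn. (1009)
        omega = dot(t, s) / dot(t, t)                        # Eqn. (1010)
        x = [xi + alpha * ph + omega * sh
             for xi, ph, sh in zip(x, p_hat, s_hat)]         # solution update
        r = [si - omega * ti for si, ti in zip(s, t)]        # Eqn. (1012)
        if dot(r, r) ** 0.5 < eps or omega == 0.0:
            break
        rho_prev = rho
    return x

A = [[4.0, 1.0],
     [2.0, 3.0]]                 # nonsymmetric matrix
x = bicgstab(A, [6.0, 8.0], [0.0, 0.0])   # converges toward [1, 2]
```

Unlike the PCG sketch, this loop places no symmetry requirement on A, which is why BiCGStab applies to the general (nonsymmetric) systems the other solvers produce.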

Preconditioned Generalized Minimal Residual Method

Similar to the Preconditioned Bi-Conjugate Gradient Stabilized method, the Generalized Minimal Residual (GMRes) method can solve general, not necessarily symmetric, linear systems. Simcenter STAR-CCM+ implements an AMG-preconditioned GMRes algorithm to improve the robustness and the speed of convergence of the linear systems that the solvers produce, following [283], chapter 9.3.1.

Note that, compared to the Preconditioned Conjugate-Gradient and Bi-Conjugate Gradient Stabilized methods, the preconditioned GMRes method has increased memory requirements.