Adjoint Topology Optimization Model

The topology optimization model determines the optimal distribution of material within a given design domain. The model is used in conjunction with the Topology Physics Model and the Adjoint model to perform topology optimization.

The model and its solver handle single-objective optimization problems with constraints by using adjoint-based sensitivities to evolve a level set equation that defines the distribution of the material. When this model is enabled, it automatically calculates the material distribution used by the topology physics model. See also: Topology Physics Model.

The goal of the optimizer is to vary the material distribution χ such that the objective is minimized (or maximized, depending on the chosen goal). The optimization is based on advancing a level set equation:

$$\frac{\partial \phi}{\partial \tau} + F\,|\nabla \phi| = S \tag{5124}$$

where τ is the pseudo-time, F is the velocity of the interface between the two phases, and S is the source term of the level set function.

ϕ is the level set variable, which varies between -1 (the secondary phase, solid) and 1 (the primary phase, fluid). The associated material distribution χ ∈ [0, 1] is defined using the hyperbolic tangent function of the level set variable:

$$\chi = 0.5\left(1 + \tanh\!\left(\frac{\phi}{\delta}\right)\right) \tag{5125}$$

where δ controls the thickness of the interface and is set to 0.056 within the code.
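As an illustration, the following minimal NumPy sketch evaluates Eqn. (5125) for an array of cell values of the level set variable. The function name and the use of NumPy are assumptions made for illustration, not part of the model itself.

    import numpy as np

    def material_distribution(phi, delta=0.056):
        """Map the level set variable phi in [-1, 1] to the material
        distribution chi in [0, 1], Eqn. (5125).

        chi near 0 : secondary phase (solid)
        chi near 1 : primary phase (fluid)
        """
        return 0.5 * (1.0 + np.tanh(phi / delta))

    # Example: the interface (phi = 0) maps to chi = 0.5
    phi = np.array([-1.0, -0.1, 0.0, 0.1, 1.0])
    print(material_distribution(phi))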

The search direction for the optimization is calculated based on two quantities:
  • The topology derivative $\partial L / \partial \chi$, where $L$ is the Lagrangian given by Eqn. (5132).
  • The ADAM (short for Adaptive Moment Estimation) update rule, which helps to avoid local minima (or maxima) by carrying momentum in the search direction from iteration to iteration. The formulation is as follows:
    $$m_{k+1} = \beta_1 m_k + (1 - \beta_1)\,\frac{\partial L}{\partial \chi} \tag{5126}$$

    $$\nu_{k+1} = \beta_2 \nu_k + (1 - \beta_2)\left(\frac{\partial L}{\partial \chi}\right)^{2} \tag{5127}$$

    $$\bar{v} = \frac{m_{k+1}}{\sqrt{\nu_{k+1}} + \epsilon} \tag{5128}$$

where the constant $\beta_1$ determines the amount of momentum applied to the search direction. Smaller values allow the optimizer to quickly change direction in response to gradient changes, but also increase the likelihood of getting stuck prematurely in a local optimum. ϵ is a small value used to prevent division by zero.

The constant $\beta_2$ determines how quickly the step size decays as the optimization progresses. Values near 1 ensure that the optimizer approaches the final result smoothly, but can also slow down convergence.

The default values of 0.5 and 0.75 for $\beta_1$ and $\beta_2$ provide a reasonable compromise between these competing factors. These settings also provide some natural step size decay, ensuring that the optimizer takes smaller steps as the optimization progresses.
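The moment updates of Eqns. (5126) to (5128) can be sketched as follows for a per-cell sensitivity field. The class name, the NumPy dependency, the default value of ϵ, and the state handling are illustrative assumptions, not the solver's implementation.

    import numpy as np

    class AdamDirection:
        """Illustrative ADAM-style search direction, Eqns. (5126)-(5128)."""

        def __init__(self, n_cells, beta1=0.5, beta2=0.75, eps=1e-8):
            self.beta1, self.beta2, self.eps = beta1, beta2, eps
            self.m = np.zeros(n_cells)   # first moment (momentum)
            self.nu = np.zeros(n_cells)  # second moment (step size decay)

        def update(self, dL_dchi):
            """Return the search direction v_bar from the topology derivative dL/dchi."""
            self.m = self.beta1 * self.m + (1.0 - self.beta1) * dL_dchi
            self.nu = self.beta2 * self.nu + (1.0 - self.beta2) * dL_dchi**2
            return self.m / (np.sqrt(self.nu) + self.eps)

With the defaults above, a smaller $\beta_1$ makes the direction track the latest gradients more closely, while a $\beta_2$ closer to 1 slows the decay of the step size, as described in the preceding paragraphs.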

To assist in step size selection (effectively the increment in τ), the velocity of the interface is set equal to the search direction scaled by the cell volume-to-area ratio:
$$F_i = \frac{V_i\,\bar{v}_i}{\sum_k |A_k|} \tag{5129}$$

where $V_i$ is the volume of cell $i$ and $\sum_k |A_k|$ is the sum of the face areas of the cell.

The source term is also proportional to the ADAM search direction. Its formulation is:

$$S_i = \omega\left(1 - \operatorname{sign}(\bar{v}_i)\,\phi\right)\bar{v}_i \tag{5130}$$

where ω is a user-defined constant, the Source Strength. This term is omitted (ω = 0) if the Allow Hole Formation option of the model is deactivated. In practical terms, when ω = 0, solid material can only form on existing boundaries or on initial portions of solid material in the domain. When ω > 0, pockets of solid can appear anywhere in the optimization domain.

With these rules applied to the source term $S_i$ and the interface velocity $F_i$, the optimizer can use large step sizes (the default value is 10) to drive the optimization, independent of the problem scale or the magnitude of the sensitivity values.
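A minimal sketch of Eqns. (5129) and (5130) is shown below, assuming per-cell NumPy arrays for the search direction, cell volumes, summed face areas, and level set values. The function and argument names are illustrative; the actual solver assembles these terms on its own mesh data structures.

    import numpy as np

    def interface_velocity(v_bar, cell_volume, face_area_sum):
        """Interface velocity F_i = V_i * v_bar_i / sum_k |A_k|, Eqn. (5129)."""
        return cell_volume * v_bar / face_area_sum

    def level_set_source(v_bar, phi, omega):
        """Source term S_i = omega * (1 - sign(v_bar_i) * phi) * v_bar_i, Eqn. (5130).

        Setting omega = 0 (Allow Hole Formation deactivated) switches the term off,
        so solid can only grow from existing boundaries or initial solid regions.
        """
        return omega * (1.0 - np.sign(v_bar) * phi) * v_bar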

The topology optimization solver can handle constrained problems of the form:
$$\min_{x} \; f(x) \quad \text{subject to} \quad c_i(x) \leq 0, \quad 0 \leq i \leq m \tag{5131}$$

where $f(x)$ is the adjoint objective, $c_i$ are the constraints, and $m$ is the total number of constraints.

The constraints are handled with the Augmented Lagrangian Method, which converts the constrained optimization problem into an unconstrained one. To perform the conversion, a Lagrangian function is introduced, in which the original objective function is augmented to account for the constraints.

The augmented Lagrangian function is formulated as follows:
$$L(x, \lambda) = f(x) + \sum_i \psi(c_i, \lambda_i^k, \mu) \tag{5132}$$

where $\lambda_i$ are the Lagrange multipliers and $k$ denotes the optimization iteration.

$$\psi_i(c_i, \lambda_i, \mu) =
\begin{cases}
\lambda_i c_i + \dfrac{\mu}{2} c_i^2 & \text{if } c_i \geq 0 \\[4pt]
\lambda_i c_i & \text{otherwise}
\end{cases} \tag{5133}$$

where μ is the penalty parameter. After each iteration, the Lagrange multiplier estimate is updated as:

$$\lambda_i^{k+1} = \max\left(0, \lambda_i^k + \mu c_i\right) \tag{5134}$$

The augmented Lagrangian function above can also be extended to include equality constraints.
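The penalty term of Eqn. (5133) and the multiplier update of Eqn. (5134) can be sketched as follows. The function names and the scalar-per-constraint interface are assumptions made for illustration.

    def augmented_penalty(c_i, lam_i, mu):
        """Penalty contribution psi_i, Eqn. (5133), for one inequality constraint c_i <= 0."""
        if c_i >= 0.0:                      # constraint violated: add quadratic penalty
            return lam_i * c_i + 0.5 * mu * c_i**2
        return lam_i * c_i                  # constraint satisfied: linear term only

    def update_multiplier(lam_i, c_i, mu):
        """Lagrange multiplier update, Eqn. (5134); multipliers stay non-negative."""
        return max(0.0, lam_i + mu * c_i)

    def augmented_lagrangian(f_x, constraints, lambdas, mu):
        """Augmented Lagrangian L = f(x) + sum_i psi_i, Eqn. (5132)."""
        return f_x + sum(augmented_penalty(c, lam, mu)
                         for c, lam in zip(constraints, lambdas))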

To ensure proper scaling of the Lagrangian, the objective and the constraints can each be normalized by their own maximum sensitivity values:

$$\hat{f}(x) = f(x)\,\min\!\left(1,\; \frac{100}{\left(\dfrac{df}{dx}\right)_{\max}}\right) \tag{5135}$$

$$\hat{c}(x) = c(x)\,\min\!\left(1,\; \frac{100}{\left(\dfrac{dc}{dx}\right)_{\max}}\right) \tag{5136}$$
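A minimal sketch of the scaling in Eqns. (5135) and (5136), assuming the sensitivities are available as a NumPy array. The cap of 100 follows the equations above; the function name and the use of the sensitivity magnitude are assumptions made for illustration.

    import numpy as np

    def normalize_by_max_sensitivity(value, sensitivity, cap=100.0):
        """Scale an objective or constraint value by min(1, cap / max sensitivity),
        Eqns. (5135) and (5136)."""
        max_sens = np.max(np.abs(sensitivity))   # assumption: magnitude of the largest sensitivity
        scale = min(1.0, cap / max_sens) if max_sens > 0.0 else 1.0
        return value * scale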