Designing a Heat Transfer Optimization Study
In mathematics, optimization is the selection of the best element (with respect to some criteria) from a set of available alternatives. In the simplest case, an optimization problem consists of maximizing or minimizing a real function (that is, the objective function) by systematically choosing input values from within an allowed set and computing the value of the function.
More generally, optimization includes finding the “best available” values of some objective function over a defined domain, for a variety of different types of objective functions and constraints. Multi-objective optimization (that is, an optimization problem that has more than one objective function) adds significant complexity. It is not always possible to satisfy all of the objective functions simultaneously, so a trade-off can be required. For example, you cannot minimize both the surface temperature and the surface heat flux at the same time.
Optimization problems are also often multi-modal: they possess multiple good solutions that can all be globally good (that is, they have the same objective function value), or the solutions can be a mix of globally and locally good solutions. Classical optimization techniques do not perform satisfactorily when multiple solutions are sought, because of their point-by-point iterative approach: obtaining different solutions is not guaranteed even when the algorithm is run several times from different starting points. Evolutionary algorithms, by contrast, are able to obtain multiple solutions and have become a popular technique for cases in which more than one solution is desired.
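As an illustration of how a population-based (evolutionary) algorithm explores a multi-modal objective, the following sketch uses SciPy's differential evolution on a Rastrigin-style test function. The function, bounds, and seed are illustrative assumptions, not part of the original study; a real application would replace the objective with a flow or heat transfer quantity of interest.

```python
# Minimal sketch: searching a multi-modal objective with an evolutionary
# algorithm (SciPy's differential evolution). The Rastrigin-style function
# below is a placeholder with many local minima and one global minimum.
import numpy as np
from scipy.optimize import differential_evolution

def objective(x):
    return np.sum(x**2 - 10.0 * np.cos(2.0 * np.pi * x)) + 10.0 * x.size

bounds = [(-5.12, 5.12), (-5.12, 5.12)]   # allowed set of input values

result = differential_evolution(objective, bounds, seed=1, tol=1e-8)
print("best parameters:", result.x)
print("objective value:", result.fun)

# Re-running with different seeds is one simple way to probe for the
# existence of multiple good solutions.
```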
As far as the engineer is concerned, there are two important objectives that optimization can help with:
- Code tuning and validation
For example, to reproduce previous computations or experiments of interest.
- Meeting design specifications and constraints by allowing you to choose the “best” set of models, parameters, and geometry for the current application.
For example, what shape maximizes the surface heat flux while being constrained to a fixed volume?
When you use optimization in flow and heat transfer applications, address the following points:
- Decide on the objective function (also known as a cost function or utility function) that you are interested in minimizing or maximizing.
For example, minimize the maximum surface temperature, minimize the downstream vorticity after a pipe bend or junction, or maximize the surface heat flux.
- Decide on the set of models and parameters that are important and need to be considered in the search for objective function extrema.
For example, material properties, length of a boundary, surface shape, domain size, and boundary conditions.
It is useful to perform a one-parameter-at-a-time sensitivity study over the input parameter space and identify any parameters that have a significant impact on the solution (these parameters are sometimes referred to as “big knobs”). You can measure sensitivity by monitoring changes in the objective function value, by looking at its partial derivatives, or by using linear regression. However, this approach does not fully explore the input space, since it does not take into account the simultaneous variation of input variables, and therefore cannot detect interactions between input variables. Modelers frequently prefer this technique because of its simplicity, and because, when the model fails, they know immediately which input parameter caused the failure.
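A minimal sketch of such a one-parameter-at-a-time study is shown below; the objective function, parameter names, and baseline values are illustrative placeholders (in practice the objective would wrap a flow or heat transfer simulation).

```python
# One-parameter-at-a-time sensitivity sketch: perturb each input in turn,
# record the change in the objective, and rank the "big knobs".
import numpy as np

def objective(params):
    # Placeholder for, e.g., the maximum surface temperature from a simulation.
    conductivity, wall_thickness, inlet_velocity = params
    return 300.0 + 50.0 * wall_thickness / conductivity - 5.0 * inlet_velocity

names    = ["conductivity", "wall_thickness", "inlet_velocity"]
baseline = np.array([15.0, 0.01, 2.0])
f0 = objective(baseline)

sensitivities = {}
for i, name in enumerate(names):
    perturbed = baseline.copy()
    perturbed[i] *= 1.05                                 # +5 % on one parameter only
    sensitivities[name] = (objective(perturbed) - f0) / (0.05 * baseline[i])

# The parameters with the largest-magnitude slopes are the "big knobs".
for name, slope in sorted(sensitivities.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:15s} d(objective)/d(parameter) ~ {slope:.3g}")
```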
- The mathematical techniques that are used to solve optimization problems depend on the form of the objective function and constraints, as well as the desired use of the results (for example, code tuning or design).
You must decide which optimization technique best fits the current problem: is finding a local solution good enough, must you find the global solution, or do you require multiple solutions?
The simplest situations are unconstrained problems, or problems in which all the constraints can be expressed as equality relationships. For these types of problems, the technique of Lagrange multipliers can be used efficiently to find the optimal solution.
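As a hedged illustration of the Lagrange multiplier technique, the following SymPy sketch minimizes the surface area of a box at fixed volume, a stand-in for the fixed-volume shape example above; the problem and variable names are assumptions for illustration only.

```python
# Lagrange multipliers for an equality-constrained problem, solved
# symbolically with SymPy: minimise box surface area at unit volume.
import sympy as sp

x, y, z, lam = sp.symbols("x y z lam", positive=True)

area       = 2 * (x*y + y*z + x*z)   # objective function
constraint = x*y*z - 1               # equality constraint: volume = 1

# Stationary points of the Lagrangian L = area - lam * constraint
L = area - lam * constraint
equations = [sp.diff(L, v) for v in (x, y, z)] + [constraint]

print(sp.solve(equations, [x, y, z, lam], dict=True))
# -> x = y = z = 1 (a cube), the minimum-area shape at fixed volume
```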
Another technique that is frequently used is response surface analysis. In this technique, you define a finite set of plausible parameter combinations (for example, based on uniform Latin hypercube sampling) over the truncated input parameter space (obtained from, for example, the one-parameter-at-a-time sensitivity analysis), and then fit a response surface (either local or global) to the objective function results. Once the response surface has been created, it is easy to use calculus to define a system whose solutions represent the minimum or maximum of the objective function over the truncated input parameter space; solving this system, however, is not always easy. This technique works well when the objective function is locally or globally smooth with respect to the input parameters, because the smoothness makes it possible to get a good representation of the solution hyper-surface with the fewest objective function evaluations.
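The sketch below illustrates one plausible workflow for this technique, using SciPy's Latin hypercube sampler and a quadratic surrogate fitted by least squares; the objective, bounds, and sample size are illustrative assumptions, and the qmc module requires SciPy 1.7 or later.

```python
# Response surface sketch: Latin hypercube sampling, a quadratic surrogate
# fitted by least squares, and a search for the surrogate's minimum.
import numpy as np
from scipy.stats import qmc
from scipy.optimize import minimize

def expensive_objective(x):
    # Placeholder for an expensive simulation result (e.g. peak temperature).
    return (x[0] - 0.3)**2 + 2.0 * (x[1] - 0.7)**2 + 0.1 * x[0] * x[1]

# 1. Sample the truncated input parameter space.
sampler = qmc.LatinHypercube(d=2, seed=0)
lower, upper = [0.0, 0.0], [1.0, 1.0]
samples = qmc.scale(sampler.random(n=30), lower, upper)
values  = np.array([expensive_objective(x) for x in samples])

# 2. Fit a quadratic response surface by least squares.
def features(x):
    x1, x2 = x
    return [1.0, x1, x2, x1*x1, x2*x2, x1*x2]

coeffs, *_ = np.linalg.lstsq(np.array([features(x) for x in samples]),
                             values, rcond=None)

# 3. Minimise the cheap surrogate instead of the expensive objective.
surrogate = lambda x: float(np.dot(features(x), coeffs))
result = minimize(surrogate, x0=[0.5, 0.5], bounds=list(zip(lower, upper)))
print("surrogate minimum at:", result.x)
```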
For problems that involve inequality relationships, the Lagrange multiplier method is not an efficient solution approach. Modern mathematical programming techniques, such as linear and nonlinear programming, in which both the objective and the constraints can be linear or nonlinear functions, must be used to find optimal solutions.
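For completeness, the following sketch shows one way to pose a small nonlinear programming problem with an inequality constraint using SciPy's SLSQP solver; the objective, constraint, and bounds are illustrative placeholders, not taken from the study above.

```python
# Nonlinear programming sketch: inequality-constrained minimisation with SLSQP.
from scipy.optimize import minimize

def objective(x):
    # Placeholder objective (e.g. a surrogate for peak surface temperature).
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2

# SciPy's "ineq" constraints require fun(x) >= 0; here: x0 + x1 <= 2.
constraints = [{"type": "ineq", "fun": lambda x: 2.0 - x[0] - x[1]}]
bounds = [(0.0, 5.0), (0.0, 5.0)]

result = minimize(objective, x0=[0.5, 0.5], method="SLSQP",
                  bounds=bounds, constraints=constraints)
print("optimum:", result.x, "objective value:", result.fun)
```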